
Empowering Insights to Fight Digital Discrimination with We, the Data by Wendy H. Wong

We, the Data: Human Rights in the Digital Age by Wendy H. Wong exposes the alarming reality of digital discrimination in our data-driven world. This groundbreaking book uncovers the hidden biases embedded in algorithms and the far-reaching consequences of data misuse for marginalized communities.

Discover the eye-opening revelations in We, the Data and learn how you can fight against digital discrimination. Read on for a comprehensive summary and review of this must-read book.

Genres

Non-fiction, Technology, Social Science, Politics, Law, Ethics, Sociology, Data Science, Human Rights, Digital Activism

Synopsis

We, the Data explores the pervasive issue of digital discrimination in our increasingly data-reliant society. Wendy H. Wong delves into how algorithms, powered by biased data, perpetuate and amplify existing inequalities. The book exposes the ways in which data is collected, analyzed, and used to make decisions that impact various aspects of our lives, from employment and housing to healthcare and criminal justice.

Wong argues that the lack of diversity and inclusion in the tech industry contributes to the development of biased algorithms. She highlights real-world examples of digital discrimination, such as facial recognition systems that misidentify people of color and predictive policing algorithms that disproportionately target low-income neighborhoods.

The book also examines the role of big tech companies in shaping our digital landscape and the need for greater transparency and accountability in their data practices. Wong advocates for algorithmic audits, diverse teams in tech, and stronger regulations to combat digital discrimination.

Throughout the book, Wong emphasizes the importance of data literacy and digital activism. She calls on individuals to become more aware of how their data is being used and to demand change from tech companies and policymakers.

Review

We, the Data is a timely and thought-provoking book that sheds light on the urgent issue of digital discrimination. Wendy H. Wong’s extensive research and engaging writing style make complex concepts accessible to a wide audience.

The book’s strength lies in its ability to bridge the gap between technology and social justice. Wong effectively demonstrates how data-driven decisions can have real-world consequences, particularly for marginalized communities. The examples and case studies provided are eye-opening and serve as a powerful call to action.

While the book thoroughly explores the problems associated with digital discrimination, it could have benefited from more in-depth discussions on potential solutions. Nevertheless, Wong’s recommendations for increased diversity in tech, algorithmic audits, and stronger regulations provide a solid foundation for further dialogue and action.

We, the Data is a must-read for anyone concerned about the impact of technology on society. It is an essential resource for policymakers, tech professionals, and digital activists seeking to create a more equitable and just digital world.

Recommendation

Human life is being increasingly distilled into data points, yet few people know who is collecting their data or how it is being used. Professor Wendy H. Wong offers critical insights into how “datafication” is transforming humanity and the social fabric, redefining the very nature of community. The socio-political choices people make regarding datafication could influence how Big Data, AI and other technologies affect life on an individual and collective level, Wong explains in this compelling read. Learn how to become a “data citizen” by using your voice to protect your interests and humanity’s common future.

Take-Aways

  • Individuals must shift from being “data subjects” to becoming “data stakeholders.”
  • A human rights approach to data can hold data collectors accountable.
  • Establishing individual data rights presents a complex challenge.
  • Buying and selling facial data may harm human dignity and autonomy.
  • Your data doesn’t die when you do.
  • Big Tech companies need checks and balances.
  • To ensure meaningful data stakeholdership, society must establish data literacy as a human right.
  • Standards won’t emerge without collective action.

Summary

Individuals must shift from being “data subjects” to becoming “data stakeholders.”

People generate approximately one million terabytes of data every day, according to research from IBM. “Smart” devices – including phones, thermostats, refrigerators and TVs – collect information about your preferences, health, location, and more. The Big Tech companies control much of the world’s data, acting as “data collectors” while treating the public as passive “data subjects.” Individuals must empower themselves to take ownership of their role as co-creators of data, asserting their rights as “data stakeholders” or “data citizens.”

“Datafication is fundamentally different from other kinds of technological changes because it changes humanity in a personal way.”

How people respond to the “datafication” of daily life – that is, the gathering and use of human-generated information – will shape humanity’s collective future. It’s time to hold the companies collecting your data accountable. As a data citizen, you should have the right to prevent firms from using your data in ways that don’t align with your best interests. It’s a mistake to view data-fueled technologies such as AI as “neutral” or to trust that businesses are using your data “for good.” People with profit-driven interests steer the creation of these technologies, and there’s no guarantee that their innovations will serve the public well. If, at present, you don’t feel empowered to claim your role as a data citizen, you’re not alone. Becoming a data stakeholder requires a perspective shift, collective action and global organization.

A human rights approach to data can hold data collectors accountable.

Your data is “sticky,” like gum on the sole of your shoe: It’s tough to scrub from the internet. Data is sticky for four reasons:

  1. Data collectors – and the human data sources who lack control over how their information spreads – both contribute to data creation.
  2. Data captures the mundane, seemingly irrelevant activities of everyday life, such as when you leave for work each day or how many days you order takeout from the same restaurant.
  3. Data is constantly sold, shared and merged with other data.
  4. Data exists forever, because there’s no way to track or control all the ways and places it travels.

Data’s stickiness has a human toll. It can undercut dignity, autonomy and equity, which are foundational human rights. Society must find ways to bring these values to bear on data collection practices.

“Human rights provide the key to shaping datafication into a more human-driven reality.”

The human rights issues that datafication raises are emerging and complex. What are the ethics of enabling algorithms to evaluate refugee-status eligibility? Who should manage the data of people who die in wars? Legal scholars Yuval Shany and Dafna Dror-Shpoliansky argue that there’s a need to define and respect the rights of “online persons.” It’s nearly impossible to ensure that your sensitive data – including deleted tweets and trashed emails – doesn’t end up somewhere you didn’t intend or anticipate.

At RightsCon, an annual global human rights summit, multistakeholder efforts are already underway to apply a human rights framework to the tech industry. However, challenges lie ahead: Public entities, such as states, often struggle to regulate corporate-created digital platforms, like TikTok, because users usually “agree” to sweeping data-collection and use practices when they sign up for the service. Sometimes nations are complicit in using private platforms to surveil citizens illegally, further highlighting the need for a shift toward data stakeholdership.

Establishing individual data rights presents a complex challenge.

Some argue in favor of establishing data rights akin to those that protect personal property, but the co-creation of data makes this exceedingly difficult to accomplish. While your data is an asset, it is different from property in that both you and the people who collect your data have some claim to ownership. Data co-creation is beyond your individual control, as it occurs in conjunction with data collectors’ activities and socially through your relationships with other people. As writer Shoshana Zuboff explains, data collectors treat human experience as “free raw material for translation into behavioral data.”

The companies collecting your data filter, aggregate and compress it, generating decontextualized predictions. As a person, you have considerably less power in this dynamic, as your data becomes individually insignificant when amassed in enormous data sets. This is troubling, given that data is like DNA: It’s a unique marker of different facets of your identity. Those who have control of your data essentially control parts of your selfhood.

“While the attitude in Big Data circles is that more data are better, human rights views prioritize the human behind the data.”

The fact that your data can exist forever also undercuts your ability to exercise rights similar to property rights: If you can’t verify whether your data exists or has been deleted, how can you secure an ownership claim? Moreover, not all data provides the same value or poses the same risks. Some of your data is metadata – such as when you sent an email – while other data is content – such as the words in the email itself. Additionally, some kinds of information, such as health data, are more sensitive than others because they can shape employment and medical decision making. Asserting your rights as a data citizen may involve establishing guidelines about which categories of data shouldn’t be processed.

Buying and selling facial data may harm human dignity and autonomy.

Facial recognition technology (FRT) enables collectors to create data from your face and use it in ways you can’t control. Given that your face is central to establishing your identity, it’s worth questioning whether buying and selling your facial data encroaches on your personhood. Should facial data be market-inalienable, like kidneys, or more akin to your photos, which you can sell? Does treating facial data like property that can be bought and sold threaten your human dignity and autonomy? One thing is certain: Companies will use and reuse your facial data indefinitely unless people take action and demand human rights-based protections for it.

“We need to draw the line between what is intrinsic to our humanity and can never be traded and things that are not as vital to our core selves.”

Some countries, like Canada, treat biometric data, including facial data, as private and thus subject to laws such as the Personal Information Protection and Electronic Documents Act (PIPEDA) and the Privacy Act. The idea of using consent to control how facial data is gathered is questionable, however. Proper consent requires comprehensive knowledge of how someone plans to amass, share and otherwise use your data and the possible harm you could suffer due to these practices – an all-but-impossible set of circumstances.

One of the most significant risks of using FRT is that it’s frequently inaccurate. In fact, there are cases of FRT misidentifying innocent people as suspects when police use it as a law enforcement tool. Bias is another problem – algorithms often fail to read darker-skinned faces accurately. Given the potential for accidental or deliberate misuse, it’s clear that people must establish ethical guidelines for the technology. Otherwise, FRT might contribute to social harms like discrimination and enable “surveillance states.”

Your data doesn’t die when you do.

When you die, what will happen to your data, such as your photos, messages and emails? Data does not die, as humans do. Indeed, a range of products and apps are emerging that are specifically designed to allow a digital version of yourself to live on after death, thus radically transforming the nature of human community. Microsoft now has a patent to create bots – powered by “social data” – with personalities based on living and deceased individuals. In the future, it might be possible to learn more about a family member through their bot after their death than you could during their lifetime, as the bot could reveal secrets the individual chose not to share while alive.

“Thanks to datafication and AI, we no longer die (digitally) as easily.”

Just as a human rights framework establishes standards for living with dignity, the dignified treatment of posthumous data also requires standards. The potential to create bots of specific individuals raises a host of ethical dilemmas related to autonomy, privacy and consent. Even if someone agrees to the creation of such a bot, it’s not at all certain that the bot could reasonably exercise the nuance, discretion and judgment required to reflect the wishes of the deceased accurately. It’s essential to establish norms surrounding digital selves – how they’re created and by whom. People must consider whether protections are needed to prevent “digital immortals” from ill-treatment and degradation, and how to prepare for hacks and malfunctions.

Big Tech companies need checks and balances.

Big Tech companies like Google, Amazon, Meta, Apple and Microsoft (GAMAM) offer services that people now largely view as indispensable. These companies can guide user behaviors and control access to the information that appears on their platforms. GAMAM provide services to the public, but unlike public institutions or governments, they do not look after people’s interests: Their goal is to further their own private interests.

“Datafication technologies are not just products; they are producing a mode of human existence that is running roughshod over the institutions we’ve established in previous decades and centuries.”

It’s time to start holding the corporations that are expanding and benefiting from datafication accountable “to the ethos and full spectrum of human rights.” Harvard law professor Noah Feldman proposed creating a “supreme court” to oversee Facebook, and in 2019 the company announced the creation of its own Oversight Board (OB). Today, Meta’s OB members vote impartially on whether to remove or reinstate content, basing their decisions on “human rights norms around freedom of expression” and Facebook’s Community Standards. As long as GAMAM – today’s “global tech governors” – continue to hold this power, data stakeholders must demand that more companies develop similar systems of checks and balances.

To ensure meaningful data stakeholdership, society must establish data literacy as a human right.

The public must have access to the resources and opportunities required to become data literate. Data literacy gives people the knowledge to make informed decisions in today’s data-centric world. The United Nations Educational, Scientific and Cultural Organization (UNESCO) defines literacy as a human right because the ability to read is essential for anyone hoping to “participate fully in their community and wider society.” Data literacy – the capacity to understand, “work with, analyze and argue with data” – should also be a human right. Making data literacy a human right would help people become data stakeholders.

“As data-literate individuals, we can make better choices.”

Libraries can play a crucial role in helping the public develop data literacy. Many libraries are already expanding their resources, offering technical training and providing access to new technologies like 3D printers. It’s not a stretch to imagine that libraries, an invaluable resource for developing linguistic literacy, could offer data literacy resources to the public. Becoming data literate would help ensure that individuals properly consent to sharing their data, as they’ll more fully understand the implications of doing so.

Standards won’t emerge without collective action.

Moving toward a human rights approach to datafication isn’t the only shift needed to protect the collective good. Society must establish frameworks to settle questions connected with data-driven tech developments, such as what constitutes personhood in virtual contexts. For example, should governments grant rights to lifelike digital entities?

“Data are sticky, but they are also human. We are in the data.”

The brunt of the responsibility for protecting your rights as a data citizen falls on you. If people don’t take collective action, it’s unlikely that corporations will change their practices. The public must organize, calling for global standards for all data collectors. Failing to work toward change may worsen inequality and result in a loss of autonomy and human dignity. New solutions must emerge, such as data trusts, in which trustees act as intermediaries who steward data in the interests of the people it comes from. Corporations and governments shouldn’t be the sole stewards of your data. Remember, as a co-creator of data, you’re entitled to have a voice in whether and how your data comes to life.

About the Author

Wendy H. Wong is the award-winning author of Internal Affairs: How the Structure of NGOs Transforms Human Rights and the co-author of The Authority Trap: Strategic Choices of International NGOs. She’s also a political science professor at the University of British Columbia, Okanagan.