Professor Mark Shepard offers smart insights about the societal shift into what he calls a “post-truth” era. He warns that for-profit entities target consumers in ways that create “micropublics,” each with its own socially constructed version of the truth. Shepard cites philosopher Bruno Latour, who described scientific facts as socially constructed, and asserts that people tend to value most the information they get from trusted social networks. Shepard reflects on how the challenge of establishing a shared truth might affect humanity and urges readers to value common sense and strive for collaboration.
- Society is moving into a “post-truth” world in which shared fictions unite clusters of people.
- “The real” is always subject to negotiation, and data always contains bias.
- Maps can simplify complexity to reflect existing power dynamics.
- “Quantified” urban spaces transform daily life into a product.
- Today’s data-driven culture fuels a “reputation economy” and new forms of social judgment.
- Smart technology in the home can influence users’ sense of identity and their behavior.
- For-profit companies create “micropublics” that come to believe different versions of the truth.
- Amid social division and polarization, finding common ground can be crucial to survival.
Society is moving into a “post-truth” world in which shared fictions unite clusters of people.
In 2016, Oxford Dictionaries’ “word of the year” was “post-truth,” defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” That year featured several “post-truth moments”: For example, then-US presidential candidate Donald Trump called climate change a “hoax.”
“The ideological totality that we simultaneously relate to and are alienated by is the statistical imaginary: a market segment, a demographic tranche, a body without organs distributed across a probabilistic feature space composed of data points registered in discrete time series.”
For-profit entities leverage machine learning and an abundance of behavioral data to create “micropublics”: clusters of people who relate to one another over a shared “ground fiction,” as opposed to a “ground truth.” Market segmenting and the data sets that link people into networks can affect their perception of what is true and, thus, which people they regard as part of their “in” group. The micropublics you belong to determine the degree to which you experience privilege or oppression.
“The real” is always subject to negotiation, and data always contains bias.
The society emerging in the age of “deepfakes” may come to hold multiple, alternate perceptions of reality at once, ushering in a period of epistemological crisis – that is, upheaval in ideas about knowledge. While many people innately trust facts backed by data, that information is not necessarily objective. In the early 18th century, for example, the word “data” referred to the principles someone might use as the basis for an argument.
Today, people tend to view data as neutral evidence. According to informatics expert Geoffrey Bowker, truly “raw data” doesn’t exist, since data itself contains biases: People must choose which data they trust, collect and mine, and how to interpret it.
“Humanistic inquiry acknowledges the situated, partial and constitutive character of knowledge production, the recognition that knowledge is constructed…not simply given as a natural representation of preexisting fact.”
Postmodern philosopher Jean Baudrillard coined the term “hyperreal” to describe people’s inability to distinguish consciously between reality and simulations of reality. People consume media that presents them with a “copy” of what they believe is real – material Baudrillard calls the “simulacrum.” They then embrace the simulacrum they believe in as if it were a truth in itself. Today, “truth” exists in a constant state of negotiation.
Maps can simplify complexity to reflect existing power dynamics.
Cartographers make “geospatial truth claims” and have the power to blur the boundary between fact and fiction. For example, in 1937, the mapmaking company General Drafting depicted a fictitious town, Agloe, in Delaware County, New York, as a “copyright trap” to determine whether other cartographers had copied its geographic data. The fictional town then began to manifest in physical reality: In the 1950s, someone built the Agloe General Store in the area the map identified as Agloe. When General Drafting’s competitor Rand McNally included Agloe on its own maps, Esso, a company licensed to use General Drafting’s data, tried to sue Rand McNally for copyright infringement. Rand McNally argued that the made-up hamlet had become real.
“Cartographers make truth claims with their maps. They abstract the world by projecting three-dimensional objects onto a planar surface, selecting specific attributes to emphasize over others and reducing the complexity of the whole through processes of generalization.”
Because maps reduce complex data to its simplest form, making them requires prioritizing which details to emphasize and which to leave out. Although cartographic processes are now automated, they are far from neutral. To understand a map’s biases, you must know who governs the data cartographers used to create it. For example, Google automatically generates maps that depict “areas of interest,” reflecting its assumption that everyone wants to find retail stores, cocktail bars or restaurants.
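The prioritization at work here can be made concrete with a minimal sketch. This is a hypothetical illustration, not Google’s actual logic: the feature names, categories and priority weights are invented. The point is that whichever priority table the mapmaker encodes decides what survives generalization.

```python
# Toy sketch of biased map generalization: which features survive
# depends entirely on the priorities the mapmaker encodes.
# All categories and weights below are hypothetical.

FEATURES = [
    {"name": "cocktail bar", "category": "retail"},
    {"name": "community center", "category": "civic"},
    {"name": "restaurant", "category": "retail"},
    {"name": "public library", "category": "civic"},
]

# A commercially biased priority table: retail outranks civic space.
PRIORITY = {"retail": 2, "civic": 1}

def areas_of_interest(features, min_priority=2):
    """Keep only features at or above the priority cutoff."""
    return [f["name"] for f in features
            if PRIORITY[f["category"]] >= min_priority]

print(areas_of_interest(FEATURES))  # civic spaces vanish from the map
```

With this cutoff, the library and community center simply never appear; a reader of the resulting map has no way to know they were filtered out.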
The University of Southern California Spatial Analysis Lab (SLAB) depicts underrepresented narratives with alternative cartographies. SLAB presents, for example, maps that use linguistic landscape data to better display the presence of non-English speaking communities.
“Quantified” urban spaces transform daily life into a product.
“Data blasé” refers to a sense of “indifference toward the realities of living in a post-truth world.” Today, the cloud and its material infrastructure – internet servers – enable organizations, such as social media companies, to maximize public engagement and capture attention. People overwhelmingly seem to accept the pervasiveness of cloud computing as part of modern life, despite the radical shifts it triggers in society. Cisco estimated that by 2023, the world would have three times as many networked devices as human beings, with many of those devices communicating with one another machine to machine.
“This cloudscape is as broadly pervasive as it is largely invisible, although it tends to render as beige across collective consciousness.”
Network infrastructure and data centers, which consume large amounts of electricity, enable today’s data-driven culture, much as railway networks and train stations supported new consumer mobility in the 19th century. Unlike railroad corporations building train stations, however, online companies design their networks and data centers to be as “invisible” as possible, situating them far from dense urban centers.
Those who research and design urban environments are leveraging machine learning and Big Data as governments and organizations push to quantify as many aspects of urban life as possible. While local institutions theoretically could use data to improve residents’ welfare, collecting and reusing human data transforms urban activities into “products to be bought and sold” to the highest bidder.
Today’s data-driven culture fuels a “reputation economy” and new forms of social judgment.
Your social value and access to employment opportunities and resources often hinge on your reputation on social media platforms. That’s a manifestation of the reputation economy. Your digital reputation has become a form of currency. Employers use it to help determine your desirability as a potential or current employee. Service providers – from Airbnb hosts to Uber drivers – depend on it for their livelihoods.
People now engage in “social cooling,” a form of self-censorship: They attempt to conform online to social norms that correlate with a good social reputation, as measured by algorithms. Failure to display a certain type of digital reputation can trigger negative consequences. For example, health insurance companies may charge you more per month if their algorithms determine you’re a higher risk.
“Statistical imaginaries are bound by intersectional constructions of identity, where attributes such as race, gender, class, sexuality, religion, age, ability, appearance, and the like intersect in not only positioning people’s identity but also determining how they are privileged and discriminated against within society.”
The prevalence of data is leading to “predictive policing” practices. For example, a police department may use an algorithm that determines whether the population of certain neighborhoods includes more minorities, classify such areas as “higher risk” and allocate more police cars to them. Former Federal Trade Commission chair Edith Ramirez warned that predictive policing could lead to “data determinism.” In a culture that embraces data determinism, people judge one another, and themselves, based on the correlations and inferences algorithms make about them.
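The feedback loop behind such systems can be sketched in a few lines. This is a deliberately naive toy model with invented numbers, not any department’s real algorithm: patrols are allocated in proportion to past arrest counts, which themselves reflect where patrols already were.

```python
# Toy sketch of the "predictive" policing feedback loop.
# Past arrest counts (shaped by where patrols already were) drive
# future patrol allocation, which generates more arrests there.
# All neighborhood names and numbers are hypothetical.

def allocate_patrols(arrest_history, total_cars):
    """Assign patrol cars proportionally to past arrest counts."""
    total = sum(arrest_history.values())
    return {hood: round(total_cars * count / total)
            for hood, count in arrest_history.items()}

history = {"Northside": 80, "Southside": 20}  # reflects past patrol bias
print(allocate_patrols(history, total_cars=10))
```

The “higher risk” label the model produces simply echoes where police looked before, which is the circularity Ramirez’s warning about data determinism points to.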
Smart technology in the home can influence users’ sense of identity and their behavior.
As smart home technology becomes more prevalent, networked devices affect and influence users in their most intimate, domestic spaces. The predictive capabilities of conversational AI devices shape user behavior there, for example by suggesting actions such as locking the front door.
“Our home is the first site of cultural education; it’s where we learn to be a person. By allowing these devices in, we outsource the formation of our identity to a virtual assistant whose values are programmed by a small, homogenous group of developers.” (artist Lauren McCarthy)
These “artificial cohabitants” are, effectively, “new domestic partners.” Admitting them into your daily life raises new, complex concerns, because the companies behind them exploit users’ vulnerabilities for financial gain.
When marketers and advertisers manipulate consumer behaviors to trigger a purchase, they exploit cognitive biases, such as engineering “loss aversion” by offering customers limited-time deals and provoking their fear of missing out on a bargain. They may also capitalize on consumers’ tendencies to make irrational, rapid decisions when nudged to do so.
Marketers and advertisers use behavioral economics techniques to shape domestic behavior, gleaning data from devices that offer domestic convenience, such as Alexa or Siri. Advertisers who can discern that a customer is likely pregnant, for example, might target messages to her before childbirth in hopes of building an attachment to their merchandise that lasts for years. McCarthy warns that targeted messages can nudge you into carrying out the actions that agents, marketers or salespeople desire.
For-profit companies create “micropublics” that come to believe different versions of the truth.
Cambridge Analytica – a subsidiary of the Strategic Communication Laboratories (SCL) Group, a British behavioral research and communications company – was instrumental in helping Donald Trump secure the US presidency in 2016 and in helping the Leave campaign win the UK’s Brexit referendum. SCL worked with the UK Ministry of Defence, boasting of its expertise in “psychological operations.”
“When public space is no longer the geography of the public sphere and the public sphere is reduced to a multitude of micropublics as small as one, each seeing and hearing something different, we find ourselves confronted with a spatial epistemology not of assurance…but of uncertainty, for which doubt is the primary affect.”
The company had a history of manipulating people’s thinking, behavior and emotions through tactics such as fake news and disinformation. Agents use social media to garner support for partisan political issues in three steps: First, they analyze an individual’s personality, perhaps through something as seemingly innocuous as a response to a social media quiz, and sort the user into a psychographic segment. Second, they engage in behavioral micro-targeting, predicting how an individual’s needs might change based on factors like race and emotional vulnerabilities. Third, they deliver tailored social media content designed to exploit those individual weaknesses and persuade the person to take a desired action.
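The three-step pipeline described above can be sketched as a toy program. Everything here is a hypothetical illustration invented for clarity: the quiz scoring, the segment labels and the messages are not any firm’s real model, only the shape of the profile-segment-deliver sequence.

```python
# Toy sketch of the three-step targeting pipeline:
# (1) infer a crude psychographic profile from quiz answers,
# (2) bucket the user into a segment,
# (3) deliver content crafted for that segment's weakness.
# All labels, thresholds and messages are hypothetical.

def profile_from_quiz(answers):
    """Step 1: crude psychographic score from quiz responses."""
    return {"anxiety": answers.count("worried") / len(answers)}

def segment(profile):
    """Step 2: bucket users by the inferred trait."""
    return "fear-receptive" if profile["anxiety"] > 0.5 else "neutral"

def tailored_message(seg):
    """Step 3: pick content tailored to the segment."""
    messages = {
        "fear-receptive": "They are coming for your way of life. Act now.",
        "neutral": "Here is our candidate's economic plan.",
    }
    return messages[seg]

answers = ["worried", "worried", "calm"]
print(tailored_message(segment(profile_from_quiz(answers))))
```

Two users answering the same quiz differently receive entirely different realities, which is how a single campaign fragments its audience into micropublics.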
Amid social division and polarization, finding common ground can be crucial to survival.
Media theorist Eric Kluitenberg uses the term “affect space” to describe three inter-layered aspects of reality: technological, affective and social. The technological components of social media enable communication. Their affective components, such as persuasive images and slogans, engage users. And their social components – physical urban spaces – are where people produce affective content.
When polarizing, violent events, such as incidents of police brutality, transpire in public spaces, they can trigger societal protest and divisiveness, creating affective dissent within the affected space. Such activity prompts people to use hashtags and create content aimed at differentiating their reactions from those of people with whom they do not align.
“The public realm today is constituted more by what we might describe as spaces of surveillance than spaces of appearance.”
As human beings, code, data, power and knowledge become increasingly entangled, people need to reflect on the question of their own individual agency and how best to act separately and collectively in this new context.
French philosopher Michel Foucault cites the ancient concept of truth-telling, or parrhesia. To encourage more truthful discourse, he urges collective bodies to avoid the coercive forms of control and surveillance prevalent in authoritarian societies. Political theorist Hannah Arendt called on people to find common ground, even amid disagreement, and to determine a common course of action. But is that possible in a post-truth world of micropublics with disparate realities?
Today, when facing complex global threats, such as pandemics, that cut across multiple micropublics, survival may hinge on engaging collaboratively to build a shared, common-sense understanding.
About the Author
Mark Shepard, associate professor of architecture and media study at the University at Buffalo, The State University of New York, edited the MIT Press book Sentient City: Ubiquitous Computing, Architecture, and the Future of Urban Space.
“There Are No Facts: Attentive Algorithms, Extractive Data Practices, and the Quantification of Everyday Life” by Mark Shepard is a thought-provoking book that delves into the intricacies of how data is collected, analyzed, and utilized in today’s world. Shepard, an accomplished scholar in the field of media studies, provides a comprehensive examination of the ways in which algorithms and data practices have seeped into every aspect of our lives, often without us even realizing it.
The book is divided into three main sections, each of which focuses on a different aspect of data collection and analysis. The first section, titled “Attentive Algorithms,” explores the concept of algorithmic attention and how it has become a dominant force in shaping our digital experiences. Shepard expertly dissects the ways in which algorithms are designed to capture our attention, keep us engaged, and ultimately, manipulate our behavior. He also delves into the implications of these practices, highlighting the potential for algorithms to create echo chambers, amplify existing biases, and perpetuate systemic inequalities.
In the second section, “Extractive Data Practices,” Shepard sheds light on the various methods by which data is collected, processed, and utilized to generate value for corporations and organizations. He examines the concept of “data extraction” and how it has become a central aspect of contemporary capitalism, with companies exploiting personal data to create targeted advertising, predict consumer behavior, and maintain their market dominance. Shepard also raises important questions about the ethics of data extraction, highlighting the power imbalances between companies and individuals, and the ways in which our personal information can be used against us.
The final section, “Quantifying Everyday Life,” focuses on the ways in which data and algorithms are increasingly integrated into our everyday experiences, from smart homes to wearable technology. Shepard explores the notion of “quantified selves,” where our daily activities, emotions, and behaviors are tracked, measured, and analyzed to create a digital representation of our lives. He also discusses the implications of this trend, including the potential for surveillance, control, and the erosion of privacy.
Throughout the book, Shepard draws on a wide range of examples and case studies to illustrate his points, from social media platforms to smart cities, and from data-driven healthcare to predictive policing. He also engages with other scholars and thinkers in the field, such as Bernard Stiegler, Judith Butler, and Jürgen Habermas, to provide a comprehensive and nuanced analysis of the issues at hand.
One of the strengths of “There Are No Facts” is Shepard’s ability to break down complex concepts and make them accessible to a wider audience. The book is written in an engaging and clear manner, making it suitable for readers with varying levels of familiarity with the topic. Additionally, Shepard’s use of examples and case studies helps to illustrate the abstract concepts he discusses, making the book feel more concrete and relevant to contemporary issues.
In conclusion, “There Are No Facts: Attentive Algorithms, Extractive Data Practices, and the Quantification of Everyday Life” by Mark Shepard is an essential read for anyone seeking to understand how data is collected, analyzed and deployed in today’s world. Shepard raises important questions about privacy, surveillance and control, and the book’s accessible writing style makes this timely contribution to media studies suitable for a broad audience.