Sophisticated technologies and deepening knowledge of unconscious human behavior drive people to change their behavior fundamentally without knowing how or why – and often for the profit of technology companies. Unconscious forces drive human behavioral patterns, such as racial bias and addiction. Contemporary technologies can harness those forces for political and economic purposes. Technology feeds humans’ worst predispositions – emotional, political, even aesthetic – back to them in ways they can’t resist. NBC technology correspondent Jacob Ward describes the three “loops” by which technologies compromise how people make decisions and live. Ward’s passionate insights will engage anyone concerned about how digital technologies and artificial intelligence (AI) affect contemporary society – and their own decision-making.
- “Unconscious habits” inform human experience.
- Three basic rules govern people’s decisions and prove crucial to how technology controls people’s lives.
- Human behavior seems free, but “guidance systems” control it.
- Businesses use technologies to manipulate human behavior, yet people trust them.
- People don’t understand AI’s nature, limits or manipulative power.
- Technologies may so dominate life that people may forget how to choose what they most like or how to communicate with one another.
- AI can’t improve everything.
“Unconscious habits” inform human experience.
Following the First World War, men with serious brain injuries streamed through Austrian medical clinics, and many perceived the world in disjointed ways. For some, ordinary peripheral perceptions that people usually don’t notice became accentuated and unbearable.
Austrian neurologist and psychiatrist Otto Pötzl treated and researched these patients. His 1917 essay about one patient described how the brain receives and processes information without all of it entering consciousness. Subsequent research showed that the reality humans experience is only the best, most efficient version the brain pieces together from an overwhelming excess of information.
“We believe the story our mind is telling us because we believe that’s the only story there is…As we build machines and systems that organize, simplify and mutate our stories for us, we are just as vulnerable to believing those new tales as well.”
The brain isn’t a “closed system”: It unconsciously reconstructs perceptions from all the senses and assimilates and communicates emotional states. Researchers are increasingly identifying the unconscious patterns and predispositions people develop over the course of their lives that guide their decisions and actions. The science behind this may be young and undeveloped, but people in politics and business already exploit people’s unconscious patterns in order to manipulate their behavior.
Three basic rules govern people’s decisions and prove crucial to how technology controls people’s lives.
Between 1971 and 1979, psychologists Daniel Kahneman and Amos Tversky published a series of papers that proved fundamental for the burgeoning field of “behavioral guidance” that seeks to influence and shape people’s decisions. In a groundbreaking 1974 essay, Kahneman and Tversky articulated unconscious biases people bring to decision-making and three “heuristics” that distort human decision-making: “representativeness, availability and anchoring.”
Representativeness leads people to judge whether something belongs to a category by how closely it resembles their mental image of that category – for instance, whether a person “looks like” a doctor. Availability leads people to treat whatever comes to mind quickly and easily – such as a crime they saw on the news – as more likely to occur, even though ease of recall says nothing about actual frequency. Anchoring means that a number people already have in mind pulls their subsequent estimates toward it: The figure you start with shapes the figure you arrive at, regardless of the evidence you find in the meantime.
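A toy “anchoring-and-adjustment” model – a standard illustration from the judgment literature, not code from Ward’s book – makes the anchoring effect concrete: people adjust from their starting number toward the evidence, but typically not all the way, so the anchor keeps pulling on the final estimate.

```python
def anchored_estimate(anchor, evidence, adjustment=0.5):
    """Anchoring-and-adjustment: the final estimate moves from the anchor
    toward the evidence, but only partway (adjustment < 1.0)."""
    return anchor + adjustment * (evidence - anchor)

# Two people see identical evidence (true value 50) but start from
# different anchors - their final estimates diverge:
print(anchored_estimate(anchor=10, evidence=50))  # 30.0
print(anchored_estimate(anchor=90, evidence=50))  # 70.0
```

The `adjustment` parameter is a hypothetical stand-in for how thoroughly someone revises an initial guess; the point is only that different anchors yield different conclusions from identical evidence.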
“Kahneman and Tversky had identified three handy ways we make short work of the world’s complications, but they also knew the three heuristics posed a real danger to our ability to make good choices in the modern world.”
Other heuristics followed. Baruch Fischhoff, a graduate student working under Kahneman and Tversky, discovered “hindsight bias”: Once something has happened, it seems as though it could not have happened otherwise, and people judge it more likely to recur than events that haven’t yet occurred. Fischhoff’s colleague Paul Slovic created a firm called Decision Research that explored, among other things, how irrationally people assess the risk of something bad happening. Slovic concluded that emotions, like representativeness and availability, are part of an unconscious process through which people make decisions.
This research helped uncover the systematic “patterns of human irrationality” that shape decision-making. Humans are vulnerable to irrational decision-making. And people are designing technologies that play on those vulnerabilities and that people don’t know how to resist. This is part of what Ward calls the first loop.
Human behavior seems free, but “guidance systems” control it.
Robots have difficulty performing simple human tasks. For example, robots at a competition sponsored by the Defense Department research agency DARPA (Defense Advanced Research Projects Agency) couldn’t climb a ladder, much less undertake elaborate missions. The problem lies in the massive amount of data required to perform such actions. People must help robots process all that data explicitly and, in a sense, consciously. Human beings don’t operate that way.
“We are geared to unconsciously offload difficult cognitive tasks to automatic systems, whether they be our emotions or a collection of shiny but unreliable robots.”
In 2000, psychologists Keith Stanovich and Richard West concluded that human minds operate in accordance with two systems, “System 1” and “System 2.” System 1 makes quick, unconscious, relatively straightforward decisions of the kind that impede robots from climbing ladders. System 2 is responsible for the more reflective, analytical decisions that require time and energy. Most judgments people make fall under System 1, but System 2 oversees System 1. Most failures of rational choice involve both systems: System 1 generates the error, and System 2 fails to detect it.
People’s brains deceive them into thinking they make free choices. In fact, their behavior is mostly driven by unconscious forces or “guidance systems.” Even though people’s choices are nowhere near as free as they imagine, they judge and deplore people whose behavior is influenced by forces they can’t control, such as people living in poverty or with addiction. Humans created technologies – and the for-profit businesses behind them – that exploit human cognitive vulnerabilities and delusions. These technologies promote activities, such as substance abuse, whose costs and benefits people have scant ability to assess.
Businesses use technologies to manipulate human behavior – yet people trust them.
People resist uncertainty, need reassurance and more often than not remain blind to lessons they might learn from the past. Their emotions drive their decision-making. To some extent, this mental partitioning provides evolutionary advantages.
“While I believe it’s clear that the mental and physical health of entire generations could be at stake, I also believe that capitalism, culture, and our conviction that we are in charge of our own destinies are blinding us to the threat.”
Entrepreneurs grasp that they can exploit people’s unconscious predispositions for business purposes. Companies are busy finding ever more precise ways to influence human behavior. The unconscious in general fuels human behavior, but for-profit businesses dependent on digital technologies sample those unconscious patterns and reproduce them for customers. These are manipulative strategies, but people believe and trust them.
Facebook and Google, for example, deploy such approaches with vast computing capacity and “algorithmic sophistication.” Systems that people don’t understand, or that confuse them, dull their critical acumen. System 2’s function of correcting the automatic System 1 can easily suffer disruption. For decades, people have placed unjustified levels of trust in what machines tell them. For Ward, this is the second loop’s core.
People don’t understand AI’s nature, limits or manipulative power.
Invented in a feverish burst of creativity in the mid-1950s, AI was an intellectual revolution that spurred a third and more advanced phase – the third loop – of how technologies shape human behavior and life.
“AI looks impossibly sophisticated and entirely inscrutable. We, the people whose behavior it will shape, just don’t understand what it is and what it isn’t.”
AI isn’t a “robotic intelligence” designed to replicate and ultimately replace the human mind’s complex and often subtle operations. An AI is any system that learns from data to perform a given task. AI systems learn to perform tasks in at least three ways. In “machine learning” generally, the system makes predictions based on – and limited to – patterns in existing data. In “supervised learning,” the system deciphers patterns and correlated outcomes within accurately “labeled data” and thereby predicts future patterns and correlations. In “reinforcement learning,” the system learns by trial and error: Outputs that earn rewards are reinforced, outputs that earn penalties are discarded, and the system gravitates toward whatever patterns score well.
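A minimal sketch of supervised learning as just described – a hypothetical toy, not an example from the book: the system stores accurately labeled examples and predicts a label for a new input by matching it to the closest stored pattern.

```python
# Supervised learning in miniature: a one-nearest-neighbor classifier.
# "Training" is just storing (feature, label) pairs; prediction returns
# the label of the stored example whose feature is closest to the input.
def predict(labeled_examples, x):
    _, label = min(labeled_examples, key=lambda pair: abs(pair[0] - x))
    return label

# Toy labeled data: (temperature, label) pairs.
training = [(1.0, "cold"), (2.0, "cold"), (8.0, "hot"), (9.0, "hot")]
print(predict(training, 1.5))  # cold
print(predict(training, 8.5))  # hot
```

The system never reveals a rule; it only emits answers that look authoritative, which is exactly the trust problem the next paragraphs describe.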
AI is worrisome precisely because the human mind is likely to trust whatever the AI system tells it. The critical, reflective System 2 mind is always ready to offload decision-making to the emotions, impulses and unconscious patterns inscribed in System 1.
AI systems need an “objective function” – a formal statement of the task humans want the system to perform, expressed as a quantity the system tries to maximize or minimize. Another important feature of such a system is how rigorously it pursues its objective function. To most people, an AI system, and the machine learning that powers it, will remain a mystery: The system provides answers – which people are naturally predisposed to believe and trust – but doesn’t reveal how it arrives at them.
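A generic optimization sketch (not from the book) shows what an objective function is and how single-mindedly a system pursues it: the system’s entire goal is nothing more or less than the number it is told to minimize.

```python
# The objective function IS the system's goal: minimize f(w) = (w - 3)^2.
def objective(w):
    return (w - 3.0) ** 2

def gradient(w):
    return 2.0 * (w - 3.0)  # slope of the objective at w

w = 0.0                      # arbitrary starting point
for _ in range(200):
    w -= 0.1 * gradient(w)   # step downhill on the objective

print(round(w, 6))  # 3.0 - the objective's minimizer
```

Nothing outside `objective()` constrains the process: if the objective is mis-specified, the system optimizes the wrong thing just as rigorously.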
Machine learning systems do their work without any transparency. This is inconsequential for trivial tasks, but it becomes ethically problematic as AI takes on increasingly sophisticated human aims, such as choosing which person will do a job optimally. Some people advocate for greater transparency or “explainability” in AI, but achieving that presents formidable technical challenges.
Technologies may so dominate life that people may forget how to choose what they most like or how to communicate with one another.
“Pattern-recognition technology” can shape human life at every level and all over the world. Algorithms, for example, can select what you eat, drink and wear, and which entertainment events you attend. These algorithms will eventually connect with one another and shape an entire human life. In so doing, this conjunction of algorithms will narrow human freedom and amplify people’s most problematic unconscious impulses.
“I worry that as we become caught in a cycle of sampled behavioral data and recommendation, we will be instead caught in a collapsing spiral of choice, at the bottom of which we no longer know what we like or how to make choices or how to speak to one another.”
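That collapsing spiral behaves like a simple reinforcement feedback loop. A hypothetical sketch (not Ward’s code): a recommender that serves items in proportion to past clicks makes early leaders ever more likely to be served again, so exposure tends to concentrate even when the user starts out indifferent.

```python
import random

random.seed(0)  # deterministic toy run

# Three categories the user initially likes equally.
counts = {"news": 1, "sports": 1, "music": 1}

for _ in range(500):
    items = list(counts)
    weights = [counts[i] for i in items]
    pick = random.choices(items, weights=weights)[0]  # recommend by past clicks
    counts[pick] += 1  # each click feeds back into the next recommendation

total = sum(counts.values())
print({item: round(counts[item] / total, 2) for item in counts})
```

Each run typically drifts toward an uneven split shaped largely by chance early clicks – the “taste” the system converges on is partly an artifact of its own feedback.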
When AI dominates one part of your life, it inevitably spreads to every other part. For example, early in the COVID-19 pandemic, public health officials discussed a COVID-19 “passport” for people with a negative test or a vaccination. Google and Apple planned a Bluetooth-driven system that would alert users who had opted into contact recording when they had been close to someone who contracted the infection.
Soon enough, a company used AI to develop drones that could identify people with COVID-19 symptoms from the air. Such surveillance systems could ultimately gather much other data, such as people’s heart rates and skin tone, and their operators could employ them for purposes other than health. For example, a police department that already monitored people at train stations with facial recognition software contacted the company and proposed collaborating. Police-operated AI surveilled and analyzed people in a commuter town near New York City, and residents knew nothing about it. Nothing in these systems limits what police departments or other institutions do with the data they accumulate.
AI can’t improve everything.
Pattern-recognition technologies such as AI and Facebook’s algorithms engage, by design, people’s biases and the shortcuts System 1 takes in decision-making. They also pander to for-profit incentives. But people may yet put these technologies, and the business incentives that accompany them, to good use. Ultimately, the crucial question is whether for-profit companies will infiltrate every aspect of people’s lives with AI, or whether people can preserve and protect traditional, slow, System 2 decision-making.
About the Author
Jacob Ward is a technology correspondent for NBC News, reporting on technology’s social implications.
“The Loop: How Technology Is Creating a World Without Choices and How to Fight Back” by Jacob Ward is a thought-provoking exploration of technology’s impact on our lives and the diminishing choices we face as a result. The book examines how technology influences our behavior and limits our options, and it provides strategies for reclaiming control.
Ward begins by examining the concept of “The Loop,” which refers to the feedback mechanisms used by technology platforms to shape our preferences and behavior. He explores how algorithms and data collection create personalized experiences, often leading to a narrowing of choices and a reinforcement of our existing beliefs and preferences.
The book takes a deep dive into various aspects of our lives that are influenced by technology, including social media, online shopping, dating apps, and smart devices. Ward highlights how these technologies are designed to capture our attention, manipulate our behavior, and limit our choices by presenting us with curated content and recommendations.
Ward also explores the psychological and societal implications of living in a world shaped by technology. He discusses the impact on our mental health, relationships, and privacy. The book raises important questions about the ethical implications of technology’s influence and the need for individuals and society to take a more active role in shaping its direction.
“The Loop” offers practical strategies for fighting back against the diminishing choices presented by technology. Ward provides insights on how to become more aware of the influence of technology on our lives, set boundaries, and make deliberate choices. He encourages readers to be critical consumers of technology and advocates for collective action to push for more transparent and accountable systems.
“The Loop: How Technology Is Creating a World Without Choices and How to Fight Back” by Jacob Ward offers a compelling and thought-provoking examination of the impact of technology on our lives. Ward’s exploration of “The Loop” and its influence on our choices provides valuable insights into the ways in which technology shapes our behavior and limits our options.
One of the strengths of the book is Ward’s ability to explain complex concepts in an accessible manner. He breaks down the mechanisms behind algorithms, data collection, and personalized experiences, making them understandable for readers without a technical background. This allows readers to grasp the extent of technology’s influence on their lives.
Ward’s exploration of various aspects of our lives affected by technology, such as social media and online shopping, provides a comprehensive overview of the issues at hand. He effectively highlights the ways in which these technologies manipulate our behavior and limit our choices, leading to a thought-provoking reflection on the consequences for individuals and society.
The book also offers practical strategies for reclaiming control and making deliberate choices in the face of technology’s influence. Ward’s suggestions for setting boundaries, becoming more aware, and advocating for change are actionable and empowering. The emphasis on individual and collective responsibility encourages readers to take an active role in shaping the future of technology.
While “The Loop” provides valuable insights and raises important questions, some readers may find that certain topics could benefit from more in-depth exploration. The book covers a wide range of issues, and as a result, some areas may feel less thoroughly examined than others. However, Ward’s intention seems to be to provide a broad overview and ignite further discussion rather than a deep dive into each topic.
Overall, “The Loop: How Technology Is Creating a World Without Choices and How to Fight Back” is a highly recommended read for anyone interested in understanding the impact of technology on our choices and behavior. Jacob Ward offers a compelling analysis, practical strategies, and thought-provoking insights that encourage readers to critically examine their relationship with technology and take steps toward reclaiming their autonomy in a digitally driven world.