Huge technology companies are amassing unprecedented levels of wealth and power while also distorting democracy, Jamie Susskind writes in this detailed study. A British barrister, tech expert and skilled author, Susskind argues for a massive effort to rein in big tech, an initiative that would require new laws, robust regulation and greater enforcement. He acknowledges that none of this will be easy, because Silicon Valley will resist any and all checks on its power. And even if a “digital republicanism” emerges, the public sector will still have to sprint to catch up with the ever-accelerating pace of innovation.
- Technology represents a new form of “unaccountable power.”
- Big data has reshaped notions of government surveillance.
- “The neutrality fallacy” has overrun big tech.
- Big tech stiff-arms accountability by arguing that it’s capable of self-regulation.
- America’s regulatory scheme is designed to enhance, not limit, the power of technology firms.
- Democracies must reshape the digital economy.
- To control the power of big tech, democracies need new regulatory institutions.
- Bringing tech companies to heel will also require a robust regulatory effort.
- US political leaders are calling for the regulation of tech giants.
Technology represents a new form of “unaccountable power.”
Republicanism is a form of governance premised on the resistance to unaccountable power. During the Roman Republic, people viewed a domineering state as the main threat to individual liberty. In America’s early days, the British Parliament imposed a tax on the colonies while also telling the colonists that they had no power to resist. That sort of unchecked dominion aroused the “indignant spirit” of the Americans and ultimately sparked the birth of a new nation. One of America’s founders, James Madison, cautioned not only against overreach by a country’s rulers but also against the unrestrained concentration of power in the hands of a segment of society.
“Not long ago, the tech industry was widely admired and the internet was regarded as a tonic for freedom and democracy. Not anymore.”
In the modern age, the ethos of market individualism has replaced that of republicanism: A mighty power is tolerable as long as that entity doesn’t interfere in the decisions of individuals. Republicanism chafes at unchecked power, even if those with that power wield it in a low-key way. Market individualism, meanwhile, espouses a selfish mode of existence, with democracy serving strictly as a path to financial prosperity. Republicanism, on the other hand, views individuals not just as consumers but as citizens participating in a democracy.
Big data has reshaped notions of government surveillance.
During the 20th century, the phrase “government surveillance” conjured images of a human spy lurking in the shadows. That quaint notion is long out of date; today’s reality is far more oppressive. Tech giants amass huge amounts of data about hundreds of millions of people, and practically every human action creates a data point that serves as grist for the mill. Facebook has proven more than willing to share this data with law enforcement. When police wanted information about Black Lives Matter supporters, for instance, they simply obtained it through Facebook, which maintains a portal where law enforcement agencies can request images, searches and even deleted content.
“The future is not Big Brother, in the sense of one government monolith watching us all at once.”
Today’s unaccountable power lies in the companies and technologies that collect sensitive personal information and then share it with private and public entities. This new surveillance is all-seeing and unblinking: Every payment is tracked, and smartphones constantly broadcast location data. Journalists have found, for example, that they can use phone-generated location data to pinpoint the whereabouts of the US president in moments. Facial recognition technology is already widespread, and coming innovations promise to identify – and analyze – individuals based on even subtler traits, such as their gait. This atmosphere of unceasing supervision changes the way people behave: Individuals become less likely to post candid photos if they worry it could cost them a job, for instance, or to search for health information if they know someone might be privy to their queries.
“The neutrality fallacy” has overrun big tech.
Silicon Valley has long operated under the assumption that algorithms are fair and just, as long as they treat everyone equally. So, for a time, if a user typed “Why do Jews” into Google’s search window, the search giant would autofill suggestions such as “love money so much?” To software engineers, this response was purely neutral; Google only suggested that phrase because other users had searched for it. While Google has since eliminated that bit of nonsensical neutrality, the overall mind-set remains. Reddit, for instance, emphasizes results that are popular with its users. Even if the users are white men upvoting content that’s racist or misogynistic, Reddit sees no need to intervene.
“The technologies of power are rarely neutral in their operation, and even if they were, neutrality is usually a poor guide to justice.”
Big tech’s love affair with neutrality plays out in harmful ways, over and over. Amazon, for instance, tested a recruiting system designed to find star performers by matching applicants’ résumés against those of the company’s top existing workers. No one considered that men dominated Amazon’s workforce and that the algorithm would therefore filter out applicants who had attended women’s colleges or used phrases such as “women’s soccer team” on their applications. Amazon never deployed the algorithm, but the episode illustrates that algorithmic neutrality is rarely neutral in practice.
Big tech stiff-arms accountability by arguing that it’s capable of self-regulation.
In Silicon Valley, the combination of a market economy and digital innovation has produced jaw-dropping advances and explosive growth. That is the upside of big tech. But a conundrum arises: Can the economic forces that created digital technology also bring it to heel when it overreaches? Clearly not. Big tech needs good governance. Consider the Jim Crow era in the United States: Restaurant owners knew their white customers didn’t want to be served by Black waiters, and law firms recognized that clients would feel uncomfortable with Black lawyers. So employers behaved in an economically logical way: They declined to hire Black employees. That choice also happened to be an immoral one. Because employers wouldn’t make the moral choice on their own, the Civil Rights Act of 1964 compelled them to do so.
“Self-regulation in the tech industry is a non-starter, both in the literal sense that it has not started, and in the broader sense that it is unlikely to work in the short or medium term.”
Much like the racist employers of yesteryear, the big tech companies of today want to be left alone to operate as they see fit. Silicon Valley argues for self-regulation – the power of the sector to police itself. While every industry in a market economy enjoys some degree of self-regulation, carte blanche is rare. In the legal profession and in medicine, for instance, regulatory schemes impose basic qualifications on professionals and punish scofflaws with fines and other sanctions. The tech industry has resisted any such guardrails. Silicon Valley wants the world to trust that it’ll behave ethically and responsibly, even if such decisions would erode its profitability.
America’s regulatory scheme is designed to enhance, not limit, the power of technology firms.
In the United States, there are major holes in the governance structure of big tech. The first is the “consent trap”: Internet regulation rests primarily on the misguided notion that users who click “I agree” to the fine print on a terms-and-conditions page are making an informed, meaningful choice. In truth, the language of such digital contracts is invariably vague and difficult to parse, and negotiating individual clauses is, practically speaking, impossible.
The second hole is the absence of regulation covering personal data: The United States is rare among advanced nations in never having enacted sweeping privacy legislation. A third failing lies in the far-reaching protections the tech industry enjoys regarding what people post on its platforms. Section 230 of the 1996 Communications Decency Act decrees that tech companies aren’t legally responsible for the words their users post, and Silicon Valley has clung to this chaos-inducing protection ever since.
“It is no accident that power has flowed into the hands of those who design and control digital technologies.”
The result is a legal framework that essentially allows big tech free rein to do what it wants. For Silicon Valley to complain about government intervention is absurd: Section 230 was just one massive gift to the tech industry from the federal government. Tech firms profit hugely because the regulatory scheme favors Silicon Valley at the expense of consumers. So it’s inaccurate to say that digital technology in the United States is not regulated; rather, it is governed in a way that allows tech firms to amass power and call their own shots.
Democracies must reshape the digital economy.
To save freedom and democracy from death by tech, the United States and other republican democracies must embrace four principles that make up the basis of “digital republicanism”:
- “The preservation principle” – Decision makers must make democracy’s survival their primary goal. While most tech platforms don’t threaten democracy today, it is wise to remember that some future technological development could indeed destroy it.
- “The domination principle” – The tech industry has built up vast reserves of unaccountable power, which is anathema to democracy. The United States and other nations need to find ways to demand accountability and restrain the industry’s power.
- “The democracy principle” – Technological innovation happens so swiftly that democratic lawmaking simply can’t keep up. Old statutes on stalking and harassment, for instance, were crafted when such offenses occurred face-to-face; harassment has since moved online, but the laws have been slow to reflect the new reality. Enforcing rules and norms that technology keeps rendering obsolete will remain a standing challenge.
- “The parsimony principle” – This paradoxical rule recognizes a messy reality: While the state has failed to check the growing power of big tech, governments also have used the tools of digital technology to enhance their own power. Citizens must limit the state’s power, giving it just enough muscle to regulate the tech industry, but no more.
To control the power of big tech, democracies need new regulatory institutions.
It’s one thing to write new standards and laws to protect citizens in the age of big tech; it’s another to enforce them. To establish an effective counter-power to big tech, democracies will need to develop new institutions, because the present legal system is dysfunctional: Litigation is so expensive and time-consuming that most individuals won’t pursue legal action when they’re harmed. Consumers need a faster, less costly way to bring grievances against tech firms.
“Many of the complaints processes presently offered by tech firms are Byzantine in design and Kafkaesque in operation.”
Facebook’s Oversight Board offers a promising way to address this issue. Its internal dispute resolution system has some problems – it can take months to resolve simple claims, and it can’t escape the perception that it lacks independence. However, the general notion of an independent body that can resolve disputes is a republican idea. What the tech industry needs is a system of tech tribunals that would arbitrate disputes between tech firms and their users. These bodies could operate online, with no need for the parties involved to set foot in a courtroom. Public servants would staff the tribunals, creating a sense of impartiality that’s lacking in Facebook’s in-house Oversight Board. To operate effectively, these tribunals would need to hit a sweet spot: inexpensive to use, quick to resolve disputes and staffed by public servants who understand the intricacies of algorithmic decision making.
Bringing tech companies to heel will also require a robust regulatory effort.
Even freewheeling market economies subject businesses to checks: Health inspectors pay surprise visits to restaurants. Bank regulators constantly scrutinize the books and operations of financial institutions. Nuclear plants, hospitals and other crucial infrastructure are subject to government oversight. The same is required in a digital republic. New legal standards will have teeth only if the tech industry faces the same routine scrutiny as other corners of the economy. Monitoring tech firms will require a new type of expertise, because the tech sector operates on mountains of code and fast-evolving systems of machine learning.
“Scrutiny is the soul of governance. Chefs keep their kitchens clean when they know there might be a surprise inspection.”
Of course, inspection is no silver bullet. Regulation raises all sorts of thorny issues. Tech firms will fret about giving up their trade secrets and proprietary code. Libertarians will wonder who will police the new police. If the inspectors behave in a high-handed manner, isn’t democracy simply trading one unaccountable power for another? These objections shouldn’t stop society from regulating the tech sector. No regulatory scheme will be perfect, but any oversight is preferable to none. After all, if restaurateurs, bankers, doctors, lawyers and nuclear engineers can survive with an inspector watching over their shoulders, tech companies can, too.
US political leaders are calling for the regulation of tech giants.
Calls to break up the tech sector are growing more common – from both the left and the right. When Senator Elizabeth Warren ran for president, she vowed to break up Amazon, Facebook and Google. In Congress, an Antitrust Caucus has emerged. Alas, the sudden interest in trust-busting follows decades in which no one in Washington, DC, particularly cared about the anticompetitive practices of the tech industry. In Europe, regulators have taken a harder line, imposing sanctions on Google, Facebook, Intel and Qualcomm.
“In the last decade or so, Facebook, Google and Amazon have happily hoovered up hundreds of smaller enterprises which could have bloomed into potential rivals.”
Regulating the tech giants poses an immense challenge. They benefit from the network effect and from economies of scale, two phenomena that help market leaders gain power and relevance while they snap up or shut out potential rivals. The result is a “wealth cyclone” in which the biggest players control more and more money and power. The republican ethos abhors such a concentration of economic might, and yet the antitrust apparatus in the United States has been able to do nothing to slow the growth of big tech. What’s more, it’s unclear that breaking up the biggest actors will achieve the desired result. It’s possible that even a successful antitrust initiative would simply unleash a pack of smaller predators.
“The most basic goal of digital republicanism is the survival of the democratic state itself.”
Still, America needs a new approach to antitrust, one that replaces the “consumer welfare” test – which concerns itself solely with prices – with a standard that considers the broader health and welfare of the citizenry.
About the Author
Jamie Susskind is the author of the best-selling Future Politics. Recognized as a leading authority on the digital age, he lives and practices law in London.
“The Digital Republic: On Freedom and Democracy in the 21st Century” by Jamie Susskind is a thought-provoking and timely book that examines the impact of technology on democracy and freedom in the digital age. Susskind, a political philosopher and lawyer, offers a balanced analysis of the challenges and opportunities digital technologies present, and argues that we must fundamentally rethink our political and social institutions to make them compatible with the realities of the 21st century.
The book is divided into four main parts. In the first part, Susskind explores the ways in which digital technologies are transforming the nature of political power and the role of the state. He argues that the digital revolution has created new forms of power and influence that are not adequately addressed by traditional political structures, and that we need to develop new forms of governance that are better suited to the digital age.
In the second part, Susskind examines the impact of digital technologies on individual freedom and autonomy. He argues that the digital revolution has created both new opportunities and new threats to individual liberty, and that we need to develop new frameworks for protecting privacy, free speech, and other fundamental rights in the digital era.
In the third part, Susskind turns to the question of democracy, arguing that digital technologies are fundamentally changing the way we engage with politics and make decisions. He suggests that we need new forms of democratic participation and decision-making fit for the digital age, and that we must rethink our assumptions about the role of representatives and the nature of political representation.
Finally, in the fourth part, Susskind offers a series of policy recommendations for building a digital republic that is compatible with the principles of freedom and democracy. He argues that we need to develop new regulations and institutions that are specifically designed to address the challenges of the digital age, and that we need to engage in a sustained and inclusive process of political reform to ensure that our political systems are fit for purpose in the 21st century.
Throughout the book, Susskind draws on a wide range of examples and case studies to illustrate his arguments, from the role of social media in shaping political discourse to the use of algorithms in criminal justice and the impact of surveillance capitalism on privacy and individual freedom. He also engages with a range of theoretical and philosophical traditions, from liberalism and libertarianism to Marxism and critical theory, and he weighs the political and ethical implications of digital technologies with care.
One of the book’s strengths is Susskind’s willingness to challenge conventional wisdom and offer fresh perspectives on the challenges of the digital age. He is critical both of the techno-utopianism that treats technology as a panacea for all social ills and of the techno-dystopianism that treats it as a threat to all that is good and noble in human society. Instead, he calls for a more sophisticated understanding of the role technology plays in shaping our political and social institutions.
Another strength is Susskind’s engaging writing style, which makes complex ideas accessible to a broad audience. The book is well researched and well written.
In summary, “The Digital Republic” offers a clear-eyed analysis of the digital age along with a range of policy recommendations for building a republic compatible with the principles of freedom and democracy. I would highly recommend it to anyone interested in the impact of technology on politics and society.