Can history’s biggest tech revolutions teach us how to govern AI before it’s too late?

Why must we stop Silicon Valley’s values from dictating the future of artificial intelligence?

Verity Harding’s AI Needs You argues that the future of AI isn’t inevitable. By drawing lessons from the space race, IVF, and the internet, she reveals how democratic values and public input can tame the algorithm and ensure technology serves humanity. Don’t let the future be decided for you—discover Harding’s blueprint for ethical AI governance below and learn how your voice can shape the technology that will define our generation.

Recommendation

Dire warnings about the potential impacts of AI are everywhere, but what humanity needs is solutions. AI policy expert Verity Harding finds them by exploring the stories of three technological revolutions — the space race, IVF, and the internet — to learn how society established ethical, inclusive governance frameworks for each. Lucid, absorbing, and hopeful, Harding’s book illustrates how social values and the public good can be balanced against the political and business interests currently shaping AI’s future. An important and compelling contribution to the debate over AI.

Take-Aways

  • An understanding of history can help society guide AI’s development and avoid the mistakes of past technologies.
  • The values and priorities of Silicon Valley are shaping the trajectory of AI for the worse.
  • The space race illustrates the need for — and potential impact of — leadership and cooperation in the race for AI.
  • The history of IVF demonstrates the need for interdisciplinary and public input into debates over AI.
  • The story of the early internet shows that efforts to keep new technology decentralized can succeed.
  • Democracies must lead in establishing ethical AI standards and open, globally inclusive frameworks for governance.
  • Decision-makers must integrate social and ethical values into AI development through careful regulation.
  • Democratic processes, including public input — and an inspiring vision — should guide the course of AI.

Summary

An understanding of history can help society guide AI’s development and avoid the mistakes of past technologies.

AI has become the focal point of Silicon Valley’s innovation because of its capacity to revolutionize industries. Its benefits, particularly in enhancing productivity, are clear, and its ability to process vast amounts of data and automate decision-making positions it as potentially the most disruptive technology of the modern era. However, history shows that rapid technological advancement often leads to unintended consequences. Past innovations, designed to improve the world, have frequently centralized power, deepened inequality, and been weaponized for control, surveillance, and manipulation.

“History is replete with examples of technologies capable of bringing great joy, or great harm, depending on how they are built, and how society and its leaders choose to react.”

The same risks exist with AI today. For example, the controversy surrounding Clearview AI’s invasive use of facial recognition technology — based on billions of facial images scraped from the internet — exposed how easily AI systems can cross ethical boundaries. To ensure AI’s development benefits everyone, it’s crucial that decision-makers learn from past mistakes. Interdisciplinary perspectives, including insights from history, ethics, and social sciences, are essential to guide AI responsibly. If policy-makers and tech leaders apply these lessons, AI can move beyond narrow commercial interests and become a force for equitable and ethical progress.

The values and priorities of Silicon Valley are shaping the trajectory of AI for the worse.

Technology has always been shaped by the values of its creators, and AI is no different. Silicon Valley’s ethos of dynamic disruption and its belief that technological breakthroughs can solve complex problems have driven AI’s development. The emphasis on STEM disciplines in Silicon Valley has further fueled AI’s rapid progress. But the tech industry’s focus on technical development and expertise often marginalizes important discussions around ethics, fairness, and inclusivity. History shows that the advance of technologies without interdisciplinary input can lead to negative societal impacts. AI, driven by a STEM-dominant culture, risks overlooking important human concerns, including privacy, accountability, and the long-term social effects of automation.

“The AI hype of recent years has contributed to a god complex that positions technology leaders as voices of authority on the societal problems their creations have often caused.”

AI development has benefited from an influx of highly skilled engineers into private industry: One in every eight computer science PhD graduates specializes in AI and moves directly into the private sector. This has created an imbalance: As researchers leave academia for corporate roles, AI development becomes increasingly driven by business interests. This shift narrows AI’s potential to address societal issues, steering development toward commercial applications that prioritize profit. The concentration of expertise within large tech companies mirrors historical patterns in which technological developments served the interests of a few, leaving the broader public out of the conversation. The tech industry’s role in shaping AI, while vital for innovation, must be tempered by ethical considerations and a broader societal focus.

The space race illustrates the need for — and potential impact of — leadership and cooperation in the race for AI.

Geopolitical competition has long driven technological progress. During the Cold War, the United States and the Soviet Union raced to dominate space, and their rivalry shaped the development of space technology. The Apollo 11 Moon landing of 1969 cemented a vision of space exploration as a noble pursuit and a testament to human ingenuity. The principle of “freedom of space,” first articulated by an adviser to President Dwight D. Eisenhower, aimed to maintain space as a peaceful domain for exploration, even as Eisenhower’s own technologists continued developing spy satellites. The 1967 United Nations Outer Space Treaty was a milestone, setting rules that prevented the weaponization of space; it built on President John F. Kennedy’s earlier leadership in framing space exploration as an arena for international cooperation.

“The purpose of AI cannot be to win, to shock, to harm. Yet the ease with which some AI experts today refer to it as nothing more than a tool of national security indicates a broken culture.”

Today, in the race for AI supremacy, similar leadership is needed. As AI becomes increasingly integrated into military and surveillance systems, the risk of misuse grows. Global leaders must work together to establish clear norms and agreements that ensure AI serves humanity rather than becoming a tool of conflict. Rather than simply striving for dominance, countries must cooperate to establish ethical frameworks.

The history of IVF demonstrates the need for interdisciplinary and public input into debates over AI.

In vitro fertilization (IVF) revolutionized reproductive technology in the late 1970s and 1980s by making parenthood possible for couples struggling with infertility. However, its early days were marked by controversy and ethical concerns. The UK’s Warnock Commission, led by the philosopher Baroness Mary Warnock, played a crucial role in winning public trust by setting clear ethical boundaries and recommending regulations that later became law. This careful approach transformed IVF from a feared, experimental procedure into an accepted medical practice.

The Warnock Commission’s success depended on its interdisciplinary membership and its engagement with the public. Including experts from law, science, and theology ensured that decision-makers considered more than scientific viability; moral and legal perspectives added depth to the discussions, making the resulting regulation both pragmatic and ethical. The commission’s outreach to society built trust and shaped policies that reflected public values.

“Great legislation is the product of compromise, patience, debate, and outreach to translate technical matters to legislators and the public.”

AI’s development parallels the controversies that surrounded early biotechnology. Biotechnology provoked fears about its potential to disrupt nature and life itself. Public concern centered on issues of control, ethics, and the long-term consequences of human intervention. Similarly, AI has sparked debates about privacy, job displacement, and the ethical use of data. Historical experience with biotechnology shows that these concerns must be addressed early and thoroughly. Ignoring them only deepens mistrust, while clear regulation fosters confidence and smoother technological integration.

Technologists alone cannot foresee the societal impacts of AI systems. Input from legal experts, ethicists, and sociologists can create a robust framework that ensures AI serves the common good while avoiding harm. Public concerns about AI’s role in shaping the future should not be dismissed. Instead, they highlight valuable insights that can guide responsible development.

The story of the early internet shows that efforts to keep new technology decentralized can succeed.

The protests and countercultural movements of the Vietnam War era reshaped the modern tech landscape. Opponents of military dominance and centralized power played a pivotal role in transforming the computing industry in the 1960s and 1970s. As tech emerged from the Cold War’s military projects, it absorbed the values of the counterculture: decentralization, user empowerment, and resistance to authoritarian control. This shift was crucial in creating today’s digital infrastructure, in which individual participation became a central goal. History thus built a foundation of antiauthoritarian principles that now underlie discussions of AI ethics and governance.

“So far there are few examples of principled stances and inspiring uses of AI that the wider population can believe in or trust.”

The Internet Corporation for Assigned Names and Numbers (ICANN), the body that administers the domain name system, offers a model for the regulation of AI. ICANN is a private, nonprofit corporation that manages domain names, IP addresses, and protocols under contract with the US Department of Commerce. It offers transparency and a decentralization of power, relying on the trust of those who participate.

Democracies must lead in establishing ethical AI standards and open, globally inclusive frameworks for governance.

The early 2000s saw a global struggle for control of the internet. When the United States expanded its control after 9/11, using the network to gather intelligence and conduct surveillance on an unprecedented scale, it created deep mistrust and resistance internationally. The same risk exists for AI: If one nation dominates a powerful technology, others will resist, undermining the potential for global progress. AI governance must avoid repeating this mistake. To build trust, leaders should create open, accountable, and globally inclusive governance frameworks in which countries cooperate to establish clear, fair rules that benefit everyone.

“Our project must be to direct the development of technology away from our faults and weave into it the fabric of our shared ideals.”

Democracies must take the lead in setting global ethical standards for AI. They have a responsibility to show how AI can be developed responsibly, with human rights protected. By developing and enforcing strong ethical guidelines, democracies can establish a global model for AI governance and influence other nations to follow suit. If they fail, corporations and authoritarian regimes may exploit AI in harmful ways, deepening inequality and eroding civil liberties. Democracies are uniquely positioned to champion ethical leadership in AI.

Decision-makers must integrate social and ethical values into AI development through careful regulation.

In 1964, civil rights leader Dr. Martin Luther King Jr. called for a balance between technological growth and spiritual progress. Throughout history, technological achievements have often outpaced human and social development. Industrialization, while creating wealth, widened social divides. AI could do the same if it remains focused solely on efficiency and economic gain. To prevent this, societal and ethical goals must be integrated into AI research and development.

“Technology is created by people, and it cannot be left purely up to those building AI to design its future alone.”

Challenging the idea that regulation stifles innovation is critical to fostering ethical AI growth. The example of the UK’s regulations on human embryology and IVF shows that well-designed regulations can protect society without slowing technological progress. In AI, careful regulation can establish boundaries that prevent harm while encouraging ethical use, fostering public trust, and promoting broad adoption.

Democratic processes, including public input — and an inspiring vision — should guide the course of AI.

The course of science depends on the choices people make. AI can benefit humanity, but its course must be guided by a larger community than Silicon Valley and its technical experts, and limits must be set. Techno-libertarians argue that government has no role in innovation, but the Warnock Commission shows how policy-makers, experts, and democratic processes can contribute to technological progress. Likewise, the Deep Learning Indaba, an organization working to strengthen Africa’s AI capacity, shows how communities can shape AI development that serves a social purpose.

“When scientific power outruns moral power, we end up with guided missiles and misguided men.” (US civil rights leader Dr. Martin Luther King Jr.)

But beyond that, the world needs an inspiring vision of how AI can serve a positive purpose in humanity’s struggle against global challenges. Such a vision would go a long way toward building trust among the general public. Trust will also come from increasing the diversity of the groups building and guiding new technology. And individuals must play a part, voicing their concerns and visions to shape the structures and guardrails that will determine AI’s future. Everyday people experience AI at work and in their personal lives, and they understand its impact better than tech experts and CEOs do. People should speak up about the uses of AI, from workplace policies to its role in schools, and insist that it serve peace, prosperity, and equity.

About the Author

Verity Harding is widely recognized for her influence in the field of artificial intelligence and technology policy. A leader at the intersection of politics and tech, she served as the first Global Head of Policy at Google DeepMind and, more recently, as director of the AI & Geopolitics Project at Cambridge University’s Bennett Institute for Public Policy. In 2023, Time magazine named her one of the 100 Most Influential People in AI.