How will AI impact our immediate and near future? Can the technology be controlled, and does it have agency? Watch DeepMind co-founder Mustafa Suleyman and Yuval Noah Harari debate these questions, with The Economist editor-in-chief Zanny Minton-Beddoes.
Recommendation
Table of Contents
Can artificial intelligence be controlled, and how will it change global economies, politics and daily lives? DeepMind co-founder Mustafa Suleyman and historian Yuval Noah Harari spoke with The Economist’s editor-in-chief Zanny Minton-Beddoes about what the “AI Revolution” means to humanity’s future, and whether inevitable changes caused by this burgeoning technology threaten liberal democracy. The conversation provides a balanced look at how AI reveals risks and rewards as it changes the world order, often in unexpected ways.
Take-Aways
- Within five years, people will interact with more human-like AIs.
- For the first time in history, a human-made machine can make its own decisions.
- Control and power are shifting from humans to non-human intelligence.
- Evidence doesn’t support significant job loss due to AI.
- It remains to be seen how global entities will distribute newfound AI wealth.
- Governments struggle to hold AI developers accountable, partly due to political polarization.
- Geopolitical tensions between world powers like China and the United States complicate international AI regulations.
- Society has to determine its toleration level regarding AI.
Summary
Within five years, people will interact with more human-like AIs.
Mustafa Suleyman says that over the past decade, people connected to the generative AI Revolution have become adept at sorting, labeling and classifying images, audio and text – and using these materials to train computers.
“I think that’s really the character of AI that we’ll see in the next five years. Artificial capable AIs – AIs that can’t just say things, but can also do things.” (Mustafa Suleyman, DeepMind co-founder)
The next phase is to produce new AI-enhanced forms of multimedia and languages. ChatGPT and other current LLMs (large language models) are quite accurate, but within the next five years the “frontier model companies” will use more computer power to train even larger models. Suleyman predicts that new capabilities will allow AI to plan actions over “multiple time horizons.”
For the first time in history, a human-made machine can make its own decisions.
Suleyman proposes a “modern Turing test” in which a machine takes $100,000 seed money and creates a new product in about three months. It would generate design blueprints, negotiate with manufacturers for cost, “drop-ship” the product and collect payment. He thinks an ACI (artificial capable intelligence) could complete the sequence autonomously, working with humans and other AIs. The model would apply APIs (application programming interfaces) to access and communicate with other websites and databases, and it could work in a variety of economic sectors.
Yuval Noah Harari says that as a historian he takes AI development very seriously, especially since Suleyman describes AI as not the end of history as such but “the end of human history.”
“History will continue with somebody else in control because what we just heard is basically Mustafa telling us that in five years there will be a technology that can make decisions independently and can create new ideas independently.” (historian and author, Yuval Noah Harari)
Harari points out that no human-made invention in history, such as a stone knife or a nuclear bomb, could make its own decisions. The printing press and media replicate and distribute human ideas, but AI technology can create new ideas at a speed and scale far beyond human capability.
Control and power are shifting from humans to non-human intelligence.
Suleyman says that everything humans have created is a product of their intelligence, and their ability to predict outcomes and act upon those predictions. For example, 100 years ago, farms required 50 times more labor to produce the same amount of grain as they do today. Today’s emerging AI tools will produce more value than ever before. Such capabilities don’t naturally arise from AI models. People must engineer the models’ capabilities with precision and deliberation.
Over the next couple of decades, governance must include reasonable restraints on AI capability development, while taking advantage of transformative positive effects. For example, people will carry a “personal intelligence” assistant to prioritize critical information, making people more capable and smarter.
“The positive potential is so enormous in everything from much better health care, higher living standards, solving things like climate change – this is why it’s so tempting, this is why we are willing to take the enormous risks involved.” (Harari)
Harari thinks intelligence is “overrated.” Humans may be the most intelligent beings on Earth, but they’re also incredibly stupid and destructive. For one thing, they’ve put Earth’s entire ecosystem at risk. A window of about 30 years exists during which people will remain in the “driver’s seat” in shaping AI technology, but humans must think carefully about what they’re creating.
Evidence doesn’t support significant job loss due to AI.
Beyond a 10- to 20-year time frame, Suleyman says, speculation about AI is difficult. A few years ago people thought AIs would never display creativity or valuable new ideas. But newer models do these very human things quite well.
“Two years ago we thought that these models could never do empathy. We said that we humans were always going to preserve kindness and understanding and care for one another as a special skill that humans have.” (Suleyman)
For a while, Suleyman says, AI will support human skills, making people more efficient and accurate. But over time it may be much more difficult to pinpoint which capabilities are specific to humans. This “containment challenge” means humans may be forced to decide what’s acceptable in terms of governance and jobs.
“You need a lot more computer engineers and yoga trainers and whatever in California, but you don’t need any textile workers at all in Guatemala or Pakistan because this has all been automated, so it’s not just about the total number of jobs on the planet, it’s the distribution between different countries.” (Harari)
Harari thinks the transition period could be difficult. Human work won’t simply go away, but persistent unemployment could cause social and political upheaval. Large numbers of jobs will be created or eliminated in various regions of the world as economies adjust.
Suleyman points out that people didn’t create civilization so that everyone could be employed. The goals were less human suffering and more abundance for everyone. Considering today’s explosive population growth and climate change, AI could help people produce more, using fewer resources and human work hours.
It remains to be seen how global entities will distribute newfound AI wealth.
America’s political environment is a product of the country’s economic situation, combined with the information technologies of the 19th and 20th centuries. New AI technology may challenge liberal democracy, especially if the economy is fundamentally affected.
“I just don’t see the US government raising taxes on corporations in California and sending the money to help unemployed textile workers in Pakistan or Guatemala.” (Harari)
No large-scale democracies existed in ancient history, because democracy is a “conversation,” and people couldn’t communicate as efficiently as they do today. But if the democratic conversation becomes tainted by disinformation and mistrust, Harari says, the entire system could break down. If the pervasive online environment is flooded with machine entities that pretend to be people, and users truly have no idea whether identities, audio and images are real or fake, the democratic conversation collapses.
Bot entities impact society like counterfeit money. Once people realized that paper money could be reproduced, the financial sectors had to find methods for verifying currency to avoid systemic economic collapse.
In the past three decades, the internet and social media have become democratized, allowing hundreds of millions of bloggers and podcasters to freely distribute their thoughts. Suleyman thinks this is overall a positive development, because more people have access to legitimate media institutions, and they have adapted remarkably well. Through wave after wave of technology, people have adjusted their abilities to differentiate between nonsense and truth.
Harari agrees, but thinks that even today’s sophisticated information consumers find it difficult to communicate with each other productively, and trust is disappearing. Millions of Americans still don’t agree on who won the last election.
Governments struggle to hold AI developers accountable, partly due to political polarization.
Powerful AI models are being developed by private investors. Suleyman says that AI technology proliferates through open-source code that can be freely downloaded. Smaller groups are about a year and a half behind the developers of the latest large language models. Some companies are banding together to proactively self-govern AI capabilities. Suleyman thinks that is an appropriate first step, considering that potential risks arising from AIs have not been concretely demonstrated.
“I think that we should be approaching a ‘do no harm’ principle and that may mean that we have to leave some of the benefits on the tree and some fruit may just not be picked for a while.” (Suleyman)
A trade-off is necessary as people cautiously observe this unfurling technology wave. Suleyman has laid out a 10-point plan to address the risk of “extinction-level events” relating to AI, and what societies and governments can do to take advantage of its potential benefits while minimizing its risks.
One practical idea is “red teaming” the system, placing it under extreme pressure to provide dangerous information and noting how it reacts. Users have already probed AI’s weaknesses in these areas, but in today’s open-source arena, developers find it difficult to coerce AI into producing those feared capabilities.
Researchers must share discovered weaknesses and vulnerabilities, and work together to resolve competitive tensions. Such cooperative “best practices” have been used to develop technology over many decades, from software to black box recorders for aviation. Suleyman also believes that at this point, AI tools aren’t reliable and stable enough to use in elections.
AI autonomy presents exciting but extremely risky possibilities. Dangers may arise when a model can update its own code, take charge of its own goals, and acquire new resources to achieve its objectives. Suleyman supports robust audits, independent oversight and licensing protocols.
A regulatory framework must encompass the division between leading-edge and open-source models, which are decentralized and available to many people. These smaller, more efficient systems will encourage “garage tinkerers” to experiment with AI over the next decades. Suleyman calls AI proliferation “the defining containment challenge of the next few decades.”
Geopolitical tensions between world powers like China and the United States complicate international AI regulations.
Harari points out that governments and human beings are dangerously divided, and in a global arms race, making it will harder to contain this “alien intelligence” people refer to as AI. But these aliens aren’t coming from the “Planet Zircon.” They’re being developed in today’s laboratories.
They’re extremely intelligent, possibly more intelligent than humans – and they may eventually have agency. This is a “frightening mix” that humanity hasn’t previously faced. New institutions must be developed that can comprehend fast technology developments and react to them in real time.
“At present, the problem is that the only institutions that really understand what is happening are the institutions who develop the technology. Governments seem quite clueless about what’s happening, also universities.” (Harari)
These new institutions will require public trust as well as economic and technological resources. Harari says he’s not sure whether human societies are prepared to confront these new entities.
Moving toward autonomy, generative AIs behave in ways scientists did not anticipate.
The unpredictability of the AI development equation raises concerns about how much control creators have over its abilities and directions. Suleyman says that developers can’t “pressure-test” AI systems for everything in advance, so technology leaders are assigned responsibility for negative outcomes.
The European Union (EU) AI Act, which has evolved over three years, takes a sensible, risk-based attitude toward all application domains, from self-driving cars to facial recognition and health care apps. The Act takes AI capabilities “off the table” when its thresholds are exceeded.
“What we cannot have is a race to the bottom that says just because they’re doing it we should take the same risk. If we adopt that approach and cut corners left, right and center, we’ll ultimately pay the price.” (Suleyman)
Another approach is the Intergovernmental Panel on Climate Change (IPCC) model, which involves an international agreement administered through an investigatory power, such as an independent expert panel. This establishes a scientific basis for evaluating AI capabilities.
In the financial sector, Harari says, AI understands economy and finance better than politicians and bankers. It could invent complex new classes of financial products that humans struggle to comprehend. In one scenario, AI makes billions upon billions of dollars, but then the system brings down the world economy because nobody understands how it operates.
People need to manage AI capabilities with global cooperation and established values. The UN has been trying to negotiate a consensus on managing “lethal autonomous weapons” for more than two decades.
Society has to determine its toleration level regarding AI.
Suleyman says that cautious regulatory agencies are looking at facial recognition, drones, surveillance, self-driving cars and other AI systems to seek a balance between what is harmful, and what is politically, culturally and socially acceptable. He thinks it is possible to retain AI upsides while minimizing the downsides, such as jailbreaks and adversarial “red teams.”
Harari suggests humans need a willing coalition of AI developers and users. Global regulations such as banning bots that impersonate people might be a good start. As investment and innovation accelerate at an astonishing pace, AI models are taking their first baby steps toward an impactful but uncertain evolution.
“This is like the amoeba of artificial intelligence, and it won’t take millions of years to get to T. rex. Maybe it will take 20 years to get to T. rex.” (Harari)
Humanity hasn’t reached its full potential. If people invest equal amounts of time and money into AI as they do into developing human consciousness and the human mind, humans may achieve an appropriate balance. However, Harari doesn’t see this kind of “investment in human beings” taking place at present.
About the Speakers
Mustafa Suleyman is the co-founder of DeepMind and Inflection AI. He is the author of The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma. Yuval Noah Harari is a historian and the author of Sapiens: A Brief History of Humankind; Homo Deus: A Brief History of Tomorrow; and 21 Lessons for the 21st Century. Zanny Minton-Beddoes is editor-in-chief of The Economist.