Podcast Summary: A Skeptical Take on the AI Revolution – The AI expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?

Recommendation

In this eye-opening episode of The Ezra Klein Show, cognitive scientist and leading AI expert Gary Marcus offers his informed perspective on the significant limitations of ChatGPT and similar large language models – and explains why more data won’t overcome them. Marcus also warns of ChatGPT’s serious social threats, including the spread of misinformation at scale. At the same time, he outlines how more sophisticated artificial general intelligence systems could play valuable roles in science, medicine, work and everyday life in the decades to come.

Take-Aways

  • ChatGPT and similar systems are producing content divorced from truth.
  • AI systems like ChatGPT pose significant threats to society because they facilitate misinformation and propaganda.
  • Large language models do offer important benefits, but people overestimate their power and applications.
  • Scaling data won’t overcome the limitations of ChatGPT and similar systems.
  • Artificial general intelligence is probably decades away, and creating it could require international collaboration.

Summary

ChatGPT and similar systems are producing content divorced from truth.

Large language models (LLMs) like ChatGPT excel at producing convincing-sounding text, but these systems don’t understand what they’re doing and saying. Their output consists of “bullsh*t” in the classic sense defined by philosopher Harry Frankfurt: content that seems plausible but has no relationship to truth.

“What we’re actually getting really good at is making content with no truth value, no embedded meaning, much more persuasive.” (Ezra Klein)

LLMs essentially cut and paste text to create pastiches – content that imitates a style. Humans do something similar when they think – they average things they’ve heard and read – but with a critical difference: Humans have internal mental models of the world. Because these systems have no such representations, they’re incapable of either telling the truth or lying.

AI systems like ChatGPT pose significant threats to society because they facilitate misinformation and propaganda.

LLMs threaten to create a situation where misinformation becomes so prolific and convincing that people can no longer discern truth from falsehood. Because these systems excel at producing convincing content – and can do so at scale – they can put out thousands of plausible-sounding articles promoting, for example, vaccine misinformation, making it very difficult to sort fact from fiction.

Another threat arises from LLMs’ ability to target individual users with personalized content. These systems could create massive quantities of persuasive, personalized propaganda – or simply spam – that people would have difficulty recognizing as divorced from reality.

Large language models do offer important benefits, but people overestimate their power and applications.

Truly intelligent AI systems could help humanity solve difficult problems in biology, materials science and environmental science. They could play a valuable role in medical diagnosis and treatment, and they could serve as virtual assistants to help people accomplish everyday tasks at work and in their personal lives.

“The reality is we’ve only made progress on a tiny bit of the things that go into intelligence.” (Gary Marcus)

But many researchers overestimate the power of current LLMs, as well as their potential applications. These systems apply deep learning proficiently, but deep learning represents just one aspect of intelligence. As an example of current systems’ limitations, even large models can’t read an entire short story, much less a novel, and discuss the characters with sophistication.

Scaling data won’t overcome the limitations of ChatGPT and similar systems.

Some AI developers hypothesize these systems could achieve general intelligence if only they had access to much more data – an idea referred to as “scale is all you need.” But ChatGPT and similar systems have significant limitations because they lack internal models and, in essence, are merely executing a complex version of autocomplete.

“There’s no law of the universe that says as you make a neural network larger, that you’re inherently going to make it more and more humanlike.” (Gary Marcus)

These systems can’t interpret human intentions, and they don’t reliably understand the concept of harm, so they can’t be programmed to avoid it. Only systems that achieve genuine comprehension will overcome these limitations.
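
To make the “complex version of autocomplete” characterization concrete, here is a minimal toy sketch – my own illustration, not code discussed in the episode. It assumes nothing more than a small word list; the names `corpus`, `following` and `autocomplete` are hypothetical. The point is that the program only tracks which word tends to follow which, with no representation of what any word refers to.

```python
# Toy illustration (not code from the episode): "autocomplete" that only tracks
# which word tends to follow which, with no model of what any word refers to.
from collections import Counter, defaultdict

corpus = "the vaccine is safe . the vaccine is dangerous . the vaccine is safe".split()

# Count word-to-next-word frequencies – a crude stand-in for an LLM's learned
# statistics over vastly more text and much longer contexts.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(prompt: str, steps: int = 4) -> str:
    words = prompt.split()
    for _ in range(steps):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Append the statistically most frequent continuation; truth never enters into it.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the vaccine"))  # fluent-looking output chosen purely by frequency
```

Real LLMs condition on much longer contexts with learned neural weights rather than raw counts, but the objective is the same – predict a plausible continuation – which is why fluency alone doesn’t guarantee truth.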

Artificial general intelligence is probably decades away, and creating it could require international collaboration.

To achieve artificial general intelligence (AGI), researchers will have to create hybrid systems that combine neural networks and symbol manipulation – two traditions in AI that have long been at odds. Symbolic systems excel at abstraction, one of the pillars of human cognition and a cardinal weakness of current AI systems. Researchers will probably need at least 10 years, and possibly much longer, to create a system with AGI, and the size of the undertaking means it would likely require an international collaboration on the scale of CERN’s Large Hadron Collider.
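
For a rough sense of what such a hybrid might look like, here is a minimal sketch under my own assumptions – it is not a system described in the episode, and the functions `neural_like_guess`, `symbolic_check` and `hybrid_answer` are hypothetical. A statistical “guesser” proposes candidate answers, and a symbolic layer enforces explicit rules before anything is returned.

```python
# Minimal neural-symbolic sketch (my illustration, not a system from the episode):
# a statistical "guesser" proposes answers; a symbolic layer enforces explicit rules.
import re

def neural_like_guess(question: str) -> list:
    """Stand-in for a learned model: propose plausible-looking numeric answers."""
    nums = [int(n) for n in re.findall(r"\d+", question)]
    return [sum(nums), sum(nums) + 1, max(nums, default=0)]

def symbolic_check(question: str, answer: int) -> bool:
    """Explicit rule: for 'what is a plus b', the answer must equal a + b exactly."""
    nums = [int(n) for n in re.findall(r"\d+", question)]
    return len(nums) == 2 and answer == nums[0] + nums[1]

def hybrid_answer(question: str):
    for candidate in neural_like_guess(question):
        if symbolic_check(question, candidate):  # abstraction enforced symbolically
            return candidate
    return None  # refuse rather than bluff when no candidate passes the rules

print(hybrid_answer("what is 17 plus 25"))  # -> 42
```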

About the Podcast

Gary Marcus is considered a leading authority on AI, widely known for his research in human language development and cognitive neuroscience. He is professor emeritus of psychology and neural science at New York University and the founder of the machine learning companies Geometric Intelligence and Robust.AI. Marcus is the author of five books, including Rebooting AI: Building Artificial Intelligence We Can Trust. Ezra Klein is a journalist, political analyst and host of The Ezra Klein Show podcast for The New York Times.