Table of Contents
- What is Mastering AI about, and what risks does it warn everyday users and businesses to prepare for?
- Recommendation
- Take-Aways
- Summary
- ChatGPT’s 2022 launch sparked an AI arms race.
- Artificial intelligence may seem akin to human intelligence, but it lacks empathy and any true understanding of human lived experience.
- AI could enhance or degrade human life, depending on how people develop and regulate it.
- AI could undermine human critical thinking abilities.
- Chatbots may foster increased social isolation.
- AI “co-pilots” can help workers enhance their professional skills.
- AI will reshape business models.
- AI can personalize education and serve as a resource for teachers.
- AI will alter how people produce and consume art — and wage war.
- About the Author
What is Mastering AI about, and what risks does it warn everyday users and businesses to prepare for?
Mastering AI by tech journalist Jeremy Kahn explains how generative AI is reshaping work, education, art, and politics — and outlines practical risks such as misinformation, bias, privacy erosion, and automation bias, along with why governance matters. Continue reading for a practical checklist: where AI helps most, where it should never run unattended, and the guardrails that keep human judgment, accountability, and data safety in the loop.
Recommendation
What lies ahead when it comes to AI? Tech journalist Jeremy Kahn extols AI’s potential contributions to humanity but offers warnings as well. He highlights AI’s ability to transform science, the arts, industry, and medicine. At the same time, Kahn notes that AI might erode “human capabilities” and negatively affect job markets, trust, privacy, and democracy. He describes the urgent need for ethical decision-making in automated systems development and regulations that govern AI output. Kahn further underscores that AI must complement, not replace, human talent.
Take-Aways
- ChatGPT’s 2022 launch sparked an AI arms race.
- Artificial intelligence may seem akin to human intelligence, but it lacks empathy and any true understanding of human lived experience.
- AI could enhance or degrade human life, depending on how people develop and regulate it.
- AI could undermine human critical thinking abilities.
- Chatbots may foster increased social isolation.
- AI “co-pilots” can help workers enhance their professional skills.
- AI will reshape business models.
- AI can personalize education and serve as a resource for teachers.
- AI will alter how people produce and consume art — and wage war.
Summary
ChatGPT’s 2022 launch sparked an AI arms race.
In 2019, Microsoft invested $1 billion in a then-largely unknown start-up: OpenAI. Together, the two companies developed a powerful supercomputer designed to train one of the most extensive AI programs ever created. The software, dubbed GPT-3, used 175 billion parameters to capture statistical relationships in vast amounts of text drawn from web pages, books, and Wikipedia. It used this information to “predict the next most likely word” in any given sequence, allowing it to code, answer factual questions, and translate languages in a human-like manner. More notably, it gave these outputs in response to “natural language” prompts: instructions framed conversationally rather than in code. This development put the AI within reach of non-technical users.
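To make “predict the next most likely word” concrete, here is a minimal Python sketch (not from the book; the toy probability table and word choices are invented purely for illustration). Real LLMs learn such distributions from vast corpora across billions of parameters rather than from a hand-written table:

```python
import random

# Toy next-word probability table. Invented for illustration; a real LLM
# learns these distributions from training text via its parameters.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "model": {"predicts": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_words: int = 6) -> str:
    """Repeatedly sample a statistically likely next word."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no known continuation; stop generating
            break
        # Pick the next word in proportion to its probability.
        next_word = random.choices(list(options),
                                   weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

Because each step samples from a probability distribution, the same prompt can yield different plausible continuations, which is one reason chatbot answers vary from run to run.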
Before the advent of GPT-3, AI achievements remained niche — for example, defeating human chess or Go champions. These feats demonstrated AI’s ability to learn from experience, but they didn’t translate into practical applications or a consumer-friendly experience. The commercial potential of GPT-3 convinced Microsoft to increase its investment in OpenAI, leading to the public debut of ChatGPT in November 2022. Public interest in ChatGPT — 100 million people were using it within two months of launch — kicked off a kind of Big Tech arms race, with companies like Google and Meta scrambling to launch their own user-friendly AI tools.
“ChatGPT was not the first AI software whose output could pass for human. But it was the first that could be easily accessed by hundreds of millions of people.”
The swift AI advancements seen since ChatGPT’s debut have spurred debate about an impending singularity: when AI might match or surpass human cognitive abilities. While that moment may be far in the future, the AI revolution will alter almost every aspect of society, paralleling historical game-changers such as the invention of electricity. There are many potential upsides to an AI-rich future — more personalized education and medical treatment, for example. But there are many risks, too. Without sufficient checks, AI could undermine democratic norms, increase inequality, and weaken human skills. Societies and governments have a limited period in which to consider and enact regulatory measures to mitigate potential adverse effects and otherwise define AI’s place in humanity’s collective future.
Artificial intelligence may seem akin to human intelligence, but it lacks empathy and any true understanding of human lived experience.
Large language model (LLM) AI tools mimic human interaction by piecing together statistically probable words, but that does not mean their output is reliable. AI is only as accurate as the data available to it — and with many LLMs scraping information from across the internet, that data tends to be a mix of genuine facts and misinformation.
“LLMs have no inherent understanding of what they’re saying — and no understanding of the difference between fiction and fact. They just generate a string of statistically likely words.”
Though the accuracy of models like ChatGPT continues to improve, these tools remain more akin to “digital brains” than “digital minds”: AI cannot understand context or judge situations in the same ways humans can. And though an AI can mimic human empathy, it cannot empathize with human experience. To ensure humanity does not end up worse off due to AI developments, people must learn to distinguish between genuine human traits and AI simulations. People must also bear in mind that the tech firms developing this technology are profit-driven — which may prompt them to make choices that serve their bottom line more than the public good. As advancements continue, stakeholders must address these issues collectively to ensure that AI complements — rather than replaces — human capabilities.
AI could enhance or degrade human life, depending on how people develop and regulate it.
AI offers humanity the opportunity to gain extraordinary abilities. However, people will only enjoy these gains if they make thoughtful design choices and implement policies that ensure AI tools support — rather than undermine — human flourishing. AI can free human workers from mundane tasks, allowing them to do their jobs more efficiently and boosting economic growth; but it could also render humans subservient to machine decision-making. It promises breakthroughs in science and medicine; but these same capabilities could enable the creation of more deadly bioweapons. AI can help scientists monitor the environment and improve the efficiency of renewable energy sources. However, its high energy consumption may actually speed up environmental degradation.
“We must remember that AI’s superpower is its ability to enhance, not replace, our own natural gifts.”
AI can help coders do their jobs more easily and can aid cybersecurity professionals in fighting malware. However, it also opens the door to new security vulnerabilities — such as “data poisoning” attacks that lead to incorrect outputs — and forms of fraud. For example, AI-driven voice cloning technology can allow bad actors to impersonate your loved ones, making it seem like your daughter or nephew is calling to ask for some cash. AI could help world leaders become more responsive to their constituents’ concerns. However, the proliferation of misinformation and AI-generated deepfakes may further erode the voting public’s trust in leaders, media, and democratic institutions. If humans use AI as a tool for terrorism or automated nuclear decisions, the consequences could be catastrophic.
AI could undermine human critical thinking abilities.
AI advancements risk making human cognitive skills less relevant. Like muscles that weaken without use, your brain can lose strength if you rely too heavily on AI for tasks such as critical thinking. By providing instant, frictionless answers to queries, AI can undermine human memory and people’s ability to concentrate on and work through complex problems. Overreliance on AI can also reduce diversity of thought. By collapsing the potentially wide range of viewpoints on a given subject into a single answer, AI obscures the fact that other perspectives are possible — and encourages groupthink. Worse, people often display “automation bias”: a tendency to defer to automated guidance even when there are strong indications that it is wrong. In one experiment, for example, people in an evacuation simulation complacently followed a robot’s instructions to walk away from clearly marked fire exits.
“As with all intellectual technologies, the question is about netting what is lost against what is gained.”
People must also be wary of AI’s role in decision-making in areas requiring moral reasoning, such as determining which prisoners should receive parole or how severe a sentence someone should receive after committing a crime. Though many people think of machines as objective decision-makers, AI outputs directly reflect training data — which is often rife with biases. If humanity allows AI too much power over, say, health care determinations or mortgage approvals, it risks codifying and thereby exacerbating existing inequalities and biases. Indeed, the evidence suggests that outsourcing decisions to AI allows people to sidestep responsibility for preserving a flawed status quo. To harness AI’s potential positively, people must maintain the primacy of their critical thinking, ethics, and problem-solving skills. AI must complement human intellect, not replace it.
Chatbots may foster increased social isolation.
Chatbots lack emotions, making interactions with them fundamentally one-sided. Some technology companies claim that chatbots can help combat loneliness and could, perhaps, help individuals with autism or social anxiety hone their conversational skills. But these arguments remain largely unsupported by conclusive research. On the contrary, like social media, chatbots may actually exacerbate feelings of loneliness and anxiety.
“What’s at stake is ultimately our personal autonomy, our mental health, and society’s cohesion.”
Designers must make thoughtful choices if chatbots are to contribute positively to society. Encouraging respectful interaction between human and machine may well promote better social skills. However, these technologies could lead to a “filter bubble” effect, in which personalized AI assistants tailor information to individual biases. Critics fear this tendency could worsen cultural divides, with AI becoming a tool for reinforcing narrow ideologies. Government regulations should address these concerns by promoting business models that prioritize user needs over advertisers’ needs, ensuring that AI systems enhance human decision-making rather than manipulate it.
AI “co-pilots” can help workers enhance their professional skills.
In fields from accounting to medicine, AI co-pilots will automate routine tasks and act as “digital colleagues.” These AI tools promise a productivity boom, enabling professionals to focus on more intellectually stimulating challenges. However, professionals must avoid mindlessly deferring to AI, which would breed mediocrity. Instead, people must use AI tools thoughtfully — for example, to explore the facets of an issue more deeply or as a virtual coach that helps them grow their skills.
“Designed well, AI co-pilots will unleash an unprecedented productivity boom, making us faster and more efficient. They will make us happier, too, relieving us of some of the tasks we find the least intellectually stimulating and enjoyable.”
AI cannot fully replace human expertise and intuition. Professionals must become adept at supervising and refining AI-generated suggestions, behaving as they would when mentoring interns. As AI evolves, industries must develop sector-specific standards to define appropriate applications that support human-AI interaction.
AI will reshape business models.
For years, companies like Disney and Pfizer have leveraged their vast intellectual property and customer data repositories to offer more personalized marketing and enhanced customer service. The rise of new AI tools will allow companies to comb their customer data to create more bespoke marketing. Intuit, for example, moved from three main customer categories for marketing its tax software to 450, thanks to AI. AI’s ability to capture knowledge — such as the content of a textbook or user manual — will help overcome traditional productivity barriers, paving the way for enhanced automation across sectors. In entertainment and publishing, for example, AI facilitates rapid content creation, enabling individuals to produce high-quality works with limited resources.
Employees’ tacit knowledge, the know-how that goes beyond the formal steps involved in doing a task, is also of paramount importance. Companies often rely on experienced employees to contribute this expertise to training AI systems. In one study, an AI co-pilot trained on the customer service practices of a call center’s best employees boosted the performance of new hires. Companies must recognize such critical human knowledge and compensate employees appropriately for it.
AI can personalize education and serve as a resource for teachers.
As with past concerns over CliffsNotes and calculators, skepticism regarding AI will gradually give way to adaptation, with educators finding ways to integrate AI tools into teaching. Tools like ChatGPT can aid with group discussions and assignments emphasizing experiential learning. For example, sixth-grade math teacher Heather Brantley recalls how a ChatGPT-generated lesson involving wrapping paper and boxes helped students understand the practical value of the formulas they’d been learning.
“The idea is to encourage students to learn how to use AI as a copilot, while trying to avoid the risk that they will lean on it so heavily that they never develop their own abilities.”
AI can personalize education by providing each student with a tailored learning experience akin to having a personal tutor. Khan Academy’s AI-tutoring program, Khanmigo, for example, uses the Socratic method to guide students rather than solve problems for them. In upper grades and college courses, AI can help students review lecture notes and draft and edit essays. Teachers must educate students about AI’s shortcomings, including its tendency to “hallucinate” — making up information and presenting it as factual — and stress the need to expand upon AI-generated ideas. Used properly, however, AI tools can help level the playing field, ensuring, for example, that students without the funds for college admissions coaching, or for whom English is a second language, can get extra support.
AI will alter how people produce and consume art — and wage war.
AI tools can generate artwork and music from simple text prompts. These technologies promise a dramatic shift in cultural production and entertainment, streamlining access to creative expression. At the same time, they will likely concentrate power among those already at the top of the content creation, curation, and distribution chain. The sheer volume of automated content may also further complicate the already challenging task of discovering quality work. AI can make certain artistic tasks easier, and people can prompt it to generate images that are hard to distinguish from human-made creations, but AI cannot create with “intention.” It can “blend” and “bend” what humans have made into new forms, yet it struggles to produce truly original art.
“One of this book’s chief arguments is that a little automation can be a good thing, but too much is usually a problem.”
AI integration into warfare is occurring in three areas: weapons, information gathering and analysis, and national security and international relations. AI could help save civilians’ and soldiers’ lives, but its use in battle raises ethical concerns about automated decision-making and moral responsibility. Human soldiers ordered to storm a building that military intelligence says is housing terrorists will fall back if they find it is, instead, full of sleeping children. On the other hand, AI drones may stick to the actions dictated by faulty intelligence and open fire. Even those who favor military use of AI argue for robust international regulation resembling nuclear weapons agreements to oversee safe AI development. AI presents varied existential risks — including the remote possibility that one day, a conscious AI might decide to end humanity. Only a global cooperative effort can ensure that AI’s benefits do not undermine society’s foundational values and security.
About the Author
Journalist Jeremy Kahn focuses on technology and AI’s implications for society.