Table of Contents
- Recommendation
- Take-Aways
- Summary
  - “Deepfake” attacks – which incorporate deception, misinformation and false intelligence in various types of data – are becoming more prevalent and complex.
  - Non-democratic leaders use deepfakes to create instability and confusion in democratic nations.
  - Democracies must more effectively defend against deepfakes as well as develop protocols for the potential use of the technology against adversaries.
- About the Author
Recommendation
Long used in military and geopolitical engagements, disinformation campaigns are today vastly more complex, thanks to artificial intelligence, machine learning and ever-cheaper computing. Writing for the Brookings Institution, four foreign policy and computer science scholars explore the architecture of “deepfakes” and assess how technology is transforming the practice of adversarial messaging and deceptive intelligence. Anyone interested in a robust examination of geopolitics in an AI world will discover important insights in this illuminating report.
Take-Aways
- “Deepfake” attacks – which incorporate deception, misinformation and false intelligence in various types of data – are becoming more prevalent and complex.
- Non-democratic leaders use deepfakes to create instability and confusion in democratic nations.
- Democracies must more effectively defend against deepfakes as well as develop protocols for the potential use of the technology against adversaries.
Summary
“Deepfake” attacks – which incorporate deception, misinformation and false intelligence in various types of data – are becoming more prevalent and complex.
The practice of disseminating false intelligence is nothing new: The ancient Romans spread nasty rumors about their leaders. But a 2022 episode at the start of the Ukraine conflict offers a new slant, thanks to technology. Russian actors fabricated a video message from Ukrainian President Volodymyr Zelenskyy instructing Ukrainians to surrender to Russian forces. The Ukrainian government quickly debunked the video, but the deepfake demonstrated the technology’s potential for damage.
“As artificial intelligence (AI) grows more sophisticated and the cost of computing continues to drop, the challenge deepfakes pose to online information environments during armed conflict will only grow.”
Deepfakes take their name from the deep neural networks of machine learning. With these tools, actors can use “multimodal data” – text, audio, video and the like – as part of a “generative adversarial network” (GAN), a deep-learning architecture. GANs, in use since 2014, consist of two critical pieces: “a Generator algorithm” that creates a false video, text or image, and “a Discriminator algorithm” that tries to distinguish the fabricated content from real data. The two algorithms train against each other: Each time the Discriminator catches a fake, the Generator adjusts, so the forgeries grow steadily harder to detect.
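To make the Generator/Discriminator interplay concrete, here is a minimal training-loop sketch in PyTorch. The framework choice, network sizes and data shapes are illustrative assumptions, not details from the report:

```python
# Minimal GAN sketch: a Generator learns to produce fakes, a Discriminator
# learns to tell them from real data. Sizes are hypothetical, for illustration.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed noise and sample dimensions

# Generator: maps random noise to a synthetic sample (the "fake").
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

# Discriminator: outputs a logit scoring how likely a sample is to be real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    ones = torch.ones(batch_size, 1)    # label for real samples
    zeros = torch.zeros(batch_size, 1)  # label for fakes

    # 1) Train the Discriminator to separate real from generated data.
    noise = torch.randn(batch_size, latent_dim)
    fakes = G(noise).detach()  # don't update G on this step
    d_loss = loss_fn(D(real_batch), ones) + loss_fn(D(fakes), zeros)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # 2) Train the Generator to fool the Discriminator:
    #    its fakes should be scored as "real".
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(D(G(noise)), ones)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

The key design point is the alternating objectives: the Discriminator’s loss rewards telling real from fake, while the Generator’s loss rewards fooling the Discriminator, which is why each side’s progress sharpens the other.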
Non-democratic leaders use deepfakes to create instability and confusion in democratic nations.
Autocratic states are ramping up their use of deepfakes. Russia’s highly coordinated campaign to interfere in the 2016 US presidential election exemplifies this kind of state-directed deception effort.
“For now, the greatest danger is from states that have considerable technological capacity (like Russia) or can hire it (like Saudi Arabia). However, as technology improves and becomes more accessible, smaller states, nonstate actors and even individuals could begin using deepfakes.”
Because disinformation is powerful and spreads rapidly, rogue state actors are deploying the technology in multiple capacities:
- “Legitimizing war and uprisings” – Deepfakes can raise “false flags” as pretexts for military actions and violence.
- “Falsifying orders” – The Zelenskyy episode is a prime example.
- “Sowing confusion” – When people can’t trust what they see and hear, corrupt leaders reap the “liar’s dividend” by putting into doubt even real evidence of their criminality.
- “Dividing the ranks” – Deepfakes can erode morale among troops.
- “Undermining popular support” – Deepfakes can cast leaders in a damaging or unfavorable light, leading citizens to pull their backing.
- “Polarizing societies” – Russia disseminated fake news about the Black Lives Matter movement in the United States, with the intent to roil an already tense situation.
- “Dividing allies” – Deepfakes ridiculing or misrepresenting sovereign or military leaders can weaken geopolitical alliances.
- “Discrediting leaders” – Deepfakes can defame and degrade politicians and lawmakers.
Yet just as the technology can deceive, scientists are using the same neural networks to detect and uncover deepfakes.
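As a rough illustration of that defensive use, here is a minimal sketch of a neural deepfake detector: a small convolutional network trained as a binary real-vs-fake classifier. The architecture, input size and threshold are hypothetical choices, not the detection method the report describes:

```python
# Sketch of a deepfake detector: a CNN that scores an image frame as
# real or fake. All sizes are assumptions for illustration only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 RGB input frames
)

def is_likely_fake(frame: torch.Tensor, threshold: float = 0.5) -> bool:
    """Score one 3x64x64 frame; a sigmoid above the threshold means 'likely fake'."""
    with torch.no_grad():
        logit = detector(frame.unsqueeze(0))  # add batch dimension
        return torch.sigmoid(logit).item() > threshold
```

In practice, such detectors are trained on labeled corpora of genuine and generated media and must be retrained continually, because the same adversarial dynamic that improves GANs also erodes detector accuracy.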
Democracies must more effectively defend against deepfakes as well as develop protocols for the potential use of the technology against adversaries.
With anti-democratic actors unloading a torrent of deepfakes, pro-democracy states and their allies must plan strategically and execute both defensive and offensive tactics. Governments must adopt new technologies that screen for and identify complex deepfakes more effectively, and they need to establish protocols governing when, and to what extent, they would conduct deepfake attacks of their own.
“The decision to generate and use deepfakes should not be taken lightly and not without careful consideration of the trade-offs.”
American leaders should establish a Deepfakes Equities Process, modeled on the existing Vulnerabilities Equities Process, which governs whether the US government discloses or exploits the software vulnerabilities it discovers. Such a protocol would put a deliberative process in place, bringing stakeholders together to decide on a case-by-case basis whether and how to use deepfakes.
About the Author
Daniel L. Byman is a professor at Georgetown University and a senior fellow at the Brookings Institution, where Chris Meserole is a foreign policy fellow. Chongyang Gao is a PhD student in computer science at Northwestern University, where V.S. Subrahmanian is a professor of computer science.