So-called “disinformation peddlers” can now produce cheap but realistic videos for misinformation campaigns designed to spread lies, disrupt international relations or create pornography. New York Times reporters Adam Satariano and Paul Mozur warn that media manipulators can now build a humanlike online avatar – a “puppet” news announcer – and program it to say and do anything desired, in credible tones and in any language or accent. While it’s still fairly easy to tell that the resulting broadcasts are fake, the technology is improving, and discerning who or what is real will only become more difficult.
- Using “deepfake” technology, bad faith actors can create realistic TV news segments to spread propaganda and false information.
- A deepfake broadcast that viewers mistake as factual can have catastrophic results, but nations have few laws to curtail the spread of made-up videos.
- Deepfake has penetrated into popular culture.
Using “deepfake” technology, bad faith actors can create realistic TV news segments to spread propaganda and false information.
For now, you can usually spot fake people in videos by their pixelated images and stilted voices, but day by day the details improve and the fakes become harder to detect.
Researchers at Graphika, a company that studies the spread of disinformation, traced propaganda videos from a sham outlet called Wolf News to a Chinese bot account. They found that most of Wolf’s fake broadcasts promoted Chinese Communist Party talking points and discredited the West.
“Disinformation experts have long warned that deepfake videos could further sever people’s ability to discern reality from forgeries online, potentially being misused to set off unrest or incept a political scandal. Those predictions have now become reality.”
Such deepfakes harness AI programming to manipulate real footage, making the avatar on screen do and say anything the designer wishes. One recent, widely disseminated deepfake purported to show Ukrainian president Volodymyr Zelenskyy surrendering to the Russians. Another presented fake evidence of American support for the government of Burkina Faso – something that could anger Russia and further divide it from the United States.
A deepfake broadcast that viewers mistake as factual can have catastrophic results, but nations have few laws to curtail the spread of made-up videos.
Both media manipulators and legitimate users can buy AI programs online and use them to create videos with fake spokespeople – “digital puppets” – for a fraction of the cost of real film production. Many companies, for example, use the flexibility and economy of deepfake software to create HR training videos without hiring a film crew. The films aren’t polished or well produced, but they do the job.
“The software, which costs as little as $30 a month, produces videos in minutes that could otherwise take several days and would require hiring a video production crew and human actors.”
The creators of the Wolf News segments used technology from Synthesia, a UK-based company. Given a script, Synthesia’s software can generate a video using avatars based on preloaded stock images or images that users supply. The company’s website calls the production process “as easy as writing an email.” Users can choose among 85 different characters – based on hired actors and varying in age, gender, ethnicity, voice and clothing – speaking up to 120 different languages.
The Synthesia stock character “Anna” appeared in both a Wolf News video and the fake report about Burkina Faso. However, Synthesia is quick to point out that it suspended the account that created the fake news report.
“The two broadcasters, purportedly anchors for a news outlet called Wolf News, are not real people. They are computer-generated avatars created by artificial intelligence.”
The company has a content moderation team made up of four people. Cofounder and chief executive Victor Riparbelli asserts that fake news videos violate Synthesia’s service agreement, which forbids users from making films with “political, sexual, personal, criminal and discriminatory content.” Riparbelli, who believes government should be responsible for setting the rules on the use of AI, acknowledges that misinformation is difficult to spot – and that it will only get harder to identify.
Deepfake has penetrated into popular culture.
Pornographic websites use deepfake AI technology to put celebrities’ faces on others’ bodies. The Chinese firm iFlytek even created a video that made it look like then-President Donald Trump was speaking Mandarin. Now iFlytek is on a US national security blacklist that restricts who can purchase American technology.
“Deepfake videos have proliferated for years. Kendrick Lamar used the technology in a music video last year to morph into Kanye West, Will Smith and Kobe Bryant.”
Meta, the parent company of Facebook, Instagram and WhatsApp, claims that it does not allow misleading deepfake videos on its platforms and that it bans accounts that post them. However, Graphika found several deepfake-dissemination accounts on social media. These accounts use “spamouflage”: uploading content to a dummy account, then using that account to spread misinformation around the world. Even when these videos don’t appear to attract many views, “disinformation peddlers” never stop trying.
About the Authors
Adam Satariano, in London, and Paul Mozur, in Seoul, are New York Times tech correspondents who regularly report on online disinformation. Mozur was on the Times team that won the Pulitzer Prize for public service for its coverage of the coronavirus pandemic.