Facebook Issues New Rules On Deepfake Videos, Targeting Misinformation
Updated at 11:40 a.m. ET
Facebook says it's banning many types of misleading videos from its site, in a push against deepfake content and online misinformation campaigns.
Facebook's new ban targets videos that are manipulated to make it appear someone said words they didn't actually say. The company won't allow videos on its site if they've been either edited or computer-generated in ways that the average person couldn't detect.
But Facebook isn't banning all deepfake videos. It will allow the technique to be used in parodies or satire, and it will also allow clips that were edited only to cut out or change the order of words. Those exceptions could open up a gray area in which fact-checkers must decide what content is allowed and what is taken down.
Deepfake videos can be created in several ways, from video editing software to sophisticated artificial intelligence tools.
"While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases," Facebook's vice president of global policy management, Monika Bickert, said in a blog post about the new policy Monday night.
Facebook is pushing back on deepfake videos as the 2020 presidential campaign ramps up — and the company clearly hopes to avoid a repeat of the fallout from the 2016 election, when it was accused of allowing voter manipulation from fake accounts and thousands of Russian-backed political ads.
In its announcement, Facebook laid out two main criteria for banning deepfake videos:
"It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say."
And:
"It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic."
In two high-profile cases from last summer, misleading videos featured House Speaker Nancy Pelosi and Facebook's own CEO, Mark Zuckerberg, in highly manipulated footage. In Pelosi's case, the video's audio was altered to make it seem as if she were slurring her words. The Zuckerberg deepfake was both more sophisticated and satirical, showing the billionaire gloating about using data to control the future.
Facebook says it will work with governments, academia and tech companies to go after manipulated media and the people who produce it. It also says it's working with experts to improve its ability to detect manipulated media.
Copyright 2021 NPR. To see more, visit https://www.npr.org.