A fake video manipulated to falsely show President Joe Biden inappropriately touching his granddaughter has exposed flaws in Facebook's “deepfake” policies, the Meta Oversight Board concluded Monday.
Last year, when the Biden video went viral, Facebook repeatedly ruled that it did not violate its policies on hate speech, manipulated media, or bullying and harassment. Because the video was not AI-generated and did not manipulate the president's speech to make him appear to say things he never said, it was deemed acceptable to remain on the platform. Meta also noted that the video was “unlikely to mislead” the “average viewer.”
“The video does not show President Biden saying anything that he did not say, and the video is not the product of artificial intelligence or machine learning in a way that merges, combines, replaces or overlays content on the video (the video was simply edited to remove certain parts),” Meta's blog said.
The Oversight Board, a group of independent experts, reviewed the case and ultimately upheld Meta's decision, despite being “skeptical” that the current policies help reduce harm.
“The board does not see sense in limiting the manipulated media policy to covering only people saying things they did not say, while excluding content showing people doing things they did not do,” the board said, noting that Meta claimed the distinction was made because “videos involving speech were considered the most misleading and easiest to reliably detect.”
The board called on Meta to review its “inconsistent” policies, which it said appear more concerned with regulating how content is created than with preventing harm. For example, the caption on the Biden video described the president as a “sick pedophile” and called anyone voting for him “mentally ill,” which could affect “electoral processes” that Meta might choose to protect, the board suggested.
“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Oversight Board said.
One problem, according to the Oversight Board, is that in its rush to combat AI technologies that make generating deepfakes quick, cheap, and easy, Meta's policies currently neglect cruder, less technical methods of manipulating content.
Instead of using AI, the Biden video relied on basic video-editing tools to remove footage of the president placing an “I Voted” sticker on his adult granddaughter's chest. The crude edit looped a seven-second clip altered to make it appear as if the president, as Meta described in its blog, was “inappropriately touching a young woman's breast and kissing her on the cheek.”
Making this distinction is confusing, the board said, in part because videos edited using technologies other than AI are not considered less misleading or less prevalent on Facebook.
The board recommended that Meta update its policies to cover not only AI-generated videos but also other forms of manipulated media, including all forms of manipulated video and audio. Audio fakes, which are not currently covered by the policy, the board warned, offer fewer cues to alert listeners to a recording's inauthenticity and may even be “more misleading than video content.”
Notably, earlier this year, a fake Biden robocall attempted to discourage Democratic voters in New Hampshire from voting. The Federal Communications Commission quickly responded by declaring AI-generated robocalls illegal, but the Federal Election Commission has not been able to act as quickly to regulate the misleading AI-generated ads spreading easily on social networks, the AP reported. In a statement, Oversight Board Co-Chair Michael McConnell said audio manipulation is “one of the most powerful forms of election disinformation.”
To better combat known harms, the board suggested that Meta revise its manipulated media policy to “clearly specify the harms it seeks to prevent.”
However, rather than pushing Meta to remove more content, the board urged Meta to use “less restrictive” methods to handle false content, such as relying on fact-checkers applying labels indicating that the content is “significantly altered”. In public comments, some Facebook users agreed that labels would be more effective. Others urged Meta to “start cracking down” and remove all fake videos, with one suggesting that removing Biden's video should have been a “profoundly easy call.” Another commenter suggested that Biden's video should be considered acceptable speech, as harmless as a funny meme.
Although the board wants Meta to also expand its policies to cover all forms of manipulated audio and video, it warned that including manipulated photos in the policy could “significantly expand” the scope of the policy and make it more difficult to apply.
“If Meta sought to label videos, audio files, and photographs but captured only a small portion of them, it could create a false impression that unlabeled content is inherently trustworthy,” the board warned.
Meta should therefore refrain from adding manipulated photos to the policy for now, the board said. Instead, Meta should research the effects of manipulated photos and consider updates once the company is prepared to enforce a ban on manipulated photos at scale, the board recommended. In the meantime, Meta should move quickly to update its policies ahead of a busy election year in which pundits and politicians around the world are bracing for waves of online misinformation.
“The volume of misleading content is increasing, and the quality of the tools to create it is rapidly improving,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”
Meta's spokesperson told Ars that Meta is “reviewing the Oversight Board's guidance and will respond publicly to its recommendations within 60 days.”