A fake video showing US President Joe Biden inappropriately touching his adult granddaughter’s chest sparked calls for Meta to change its policy on deepfakes and manipulated content.
The video clip, which is sometimes accompanied by a caption describing Biden as a “pedophile,” started to circulate in May 2023 on Facebook and other social media platforms.
The fake video is a maliciously edited version of genuine footage of President Biden voting early in the US midterm elections in October 2022.
Despite being fake, the shocking video was not removed from Facebook because it did not violate Meta’s Manipulated Media policy, Meta’s Oversight Board said in a February 2024 post.
Currently, Meta’s Manipulated Media policy applies only when two conditions are both met:
- The content was created through artificial intelligence (AI)
- The content shows people saying things they did not say
“Since the video in this post was not altered using AI and it shows President Biden doing something he did not do (not something he didn’t say), it does not violate the existing policy,” the Oversight Board explained.
Additionally, Meta will not restrict content when its alteration is “obvious” and therefore “unlikely to mislead the ‘average user’ of its authenticity,” a key characteristic of manipulated media.
Several users’ attempts to report the video failed because it did not meet all the conditions required for Meta to remove it as misleading content, the Oversight Board added.
Cheap Fakes Are as Harmful as Deepfakes
Meta’s Oversight Board found the Manipulated Media policy no longer sufficient to effectively fight misinformation and disinformation.
“The Board finds that Meta’s Manipulated Media policy is […] too narrow, lacking in persuasive justification, incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent,” the post reads.
First, the Board argued that the policy’s technical restrictions, both on the type of content covered and on the technologies used to create it, should be scrapped.
“Experts the Board consulted, and public comments, broadly agreed on the fact that non-AI-altered content is prevalent and not necessarily any less misleading; for example, most phones have features to edit content. Therefore, the policy should not treat ‘deep fakes’ differently to content altered in other ways (for example, ‘cheap fakes’),” the post reads.
Next, the Board suggested that removal should not necessarily be Meta’s only response to misleading or fake content.
Finally, the Board criticized Meta for publishing the policy in two separate places, which makes it confusing for users.
Board Calls on Meta to Revise Manipulated Media Policy
Drawing on these criticisms, the Oversight Board recommended that Meta take the following measures:
- Reconsider the scope of its Manipulated Media policy to cover audio and audiovisual content
- Extend its policy to cover content showing people doing things they did not do
- Apply this policy to content regardless of how it was created or altered
- Stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered and could mislead
The Board also suggested that Meta should clearly define the harms the company aims to prevent in a unified Manipulated Media policy.
The policy’s aims could include “preventing interference with the right to vote and to participate in the conduct of public affairs.”
“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Board concluded.
The Oversight Board is a global body of experts tasked with reviewing Meta's most difficult and significant content decisions on Facebook and Instagram.