Twitter has become the latest major social platform to articulate a deepfake policy, stating that it will remove “manipulated media” only if it causes harm.
In a blog post earlier this week, head of site integrity Yoel Roth and group product manager Ashita Achuthan explained that the site’s new policy was distilled from responses to its draft rule by academics, civil society groups and thousands of Twitter users.
Under the new rule, synthetic or manipulated content such as deepfakes will be clearly labelled if it is deliberately intended to deceive users. If it is also deemed likely to cause harm, it is “very likely” to be removed.
By harm, Twitter means threats to physical safety, the risk of mass violence or civil unrest, and threats to privacy or free expression of an individual or group. This includes voter suppression or intimidation, but it doesn’t mention attempts to influence voters in other ways.
Twitter said it “may” also remove manipulated content that causes harm even if it has not been shared in a manner intended to deceive.
The firm added that it might also show a warning to users before they retweet such content, reduce its visibility on Twitter and/or prevent it from being recommended, and provide a landing page with more context.
Twitter’s policy would, at first sight, appear more liberal than Facebook’s: earlier this year, Facebook effectively stated its intent to ban any deepfake content designed to mislead users, whether harmful or not.
YouTube also recently reminded users that any deepfakes related to the upcoming US Presidential elections would be banned from the site.
The challenge for such platforms is that their attempts to police this content remain largely reactive: harm can still be done to candidates if a deepfake goes viral, even if it is subsequently removed and confirmed as a hoax.