Twitter has updated its rules and detection and enforcement capabilities in a bid to boost confidence in the platform ahead of the US midterm elections.
In a post from VP of trust and safety, Del Harvey, and head of site integrity, Yoel Roth, the firm explained that the enhancements were part of an “election integrity” push.
They include updated and expanded rules to help determine whether an account is fake. Factors now considered include the use of stock or stolen avatar photos, stolen or copied profile bios, and intentionally misleading profile information, such as location.
Twitter claimed it is also getting tough on accounts that “deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules.”
It is lowering the bar for taking action on accounts claiming responsibility for hacking, “which includes threats and public incentives to hack specific people and accounts.”
These efforts will combine with unspecified improvements to how the firm bans policy violators, and the development of proprietary systems to “identify and remove ban evaders at speed and scale.”
The micro-blogging platform claimed that its automated detection tools challenged an average of 9.4 million accounts each week in the first half of September, while user-generated spam reports declined from 17,000 per day in May to around 16,000 per day in September.
The firm was at pains to point out its partnership with Republicans, Democrats and state election officials to take action on tweets about elections and political issues “with misleading or incorrect party affiliation information.”
It remains to be seen whether these efforts will be enough to counter a concerted campaign by the Russian state to disrupt, misinform and sow discord within the US electorate.
It’s a tactic that came to light during the 2016 race for the White House, and was also seen at work in the UK ahead of the Brexit vote.