Twitter has released details of the steps it’s taking to combat misinformation on COVID-19, but admitted that its increasing reliance on automated systems may lead to more mistakes.
The social network said it was broadening its definition of harm to tackle user-generated content that contradicts guidance from public health authorities and other trusted bodies.
“Rather than reports, we will enforce this in close coordination with trusted partners, including public health authorities and governments, and continue to use and consult with information from those sources when reviewing content,” it said.
The long list of newly prohibited content includes descriptions of harmful or ineffective treatments, denial of official recommendations and established scientific facts, calls to action that benefit third parties, incitement to social unrest, impersonation of health officials, and claims that specific groups are either more or less susceptible to the virus.
Twitter said the new COVID-19 rules will be kept under review and amended as appropriate.
Twitter said it’s also rolling out a global content severity triage system to ensure the most serious rule violations are handled first, as well as daily quality assurance checks on content enforcement processes.
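Twitter hasn't published how the triage system works internally, but the behaviour it describes, handling the most serious violations first, maps onto a standard priority queue. A minimal sketch of that idea, with hypothetical severity tiers and report IDs that are not Twitter's own:

```python
import heapq
from itertools import count

class TriageQueue:
    """Severity-first queue: the most serious reports are surfaced before
    lower-severity ones, with FIFO ordering within each severity tier."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # tiebreaker that preserves arrival order

    def add(self, report_id: str, severity: int) -> None:
        # heapq is a min-heap, so negate severity to pop the highest first.
        heapq.heappush(self._heap, (-severity, next(self._seq), report_id))

    def next_report(self):
        if not self._heap:
            return None
        _, _, report_id = heapq.heappop(self._heap)
        return report_id

# Hypothetical severity tiers (illustrative, not Twitter's taxonomy).
q = TriageQueue()
q.add("tweet-101", severity=1)   # misleading but low-risk claim
q.add("tweet-102", severity=3)   # harmful treatment advice
q.add("tweet-103", severity=2)   # denial of official guidance

assert q.next_report() == "tweet-102"  # most severe handled first
```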
However, question marks remain over how effectively harmful content will be removed. The social network explained that it would increase its use of machine learning and automation to spot “abusive and manipulative content,” but acknowledged that these systems may not be as accurate as human moderators.
“We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes,” it admitted.
“As a result, we will not permanently suspend any accounts based solely on our automated enforcement systems. Instead, we will continue to look for opportunities to build in human review checks where they will be most impactful.”
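The quoted policy amounts to a simple guard on the enforcement path: an automated verdict alone can never escalate to a permanent suspension. A minimal sketch of that rule, using hypothetical action names rather than Twitter's internal terms:

```python
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    TEMPORARY_LIMIT = auto()       # e.g. tweet removal or read-only mode
    PERMANENT_SUSPENSION = auto()

def enforce(automated_verdict: Action, human_reviewed: bool) -> Action:
    """Apply the stated policy: automated systems alone may impose
    temporary measures, but permanent suspension requires human review."""
    if automated_verdict is Action.PERMANENT_SUSPENSION and not human_reviewed:
        # Downgrade: hold at a temporary measure pending human review.
        return Action.TEMPORARY_LIMIT
    return automated_verdict

# An automated flag without human sign-off never becomes a suspension.
assert enforce(Action.PERMANENT_SUSPENSION, human_reviewed=False) is Action.TEMPORARY_LIMIT
assert enforce(Action.PERMANENT_SUSPENSION, human_reviewed=True) is Action.PERMANENT_SUSPENSION
```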