Twitter has bowed to pressure to prevent interference in the next US election, following months of criticism over altered multimedia.
The company has announced it will label and in some cases remove doctored or manipulated photos, audio and videos that it deems designed to mislead people.
Twitter announced on Tuesday (local time) that the new rules prohibit sharing “synthetic or manipulated material” likely to cause harm. Material that is manipulated but isn’t necessarily harmful may get a warning label.
Under the new guidelines, a slowed-down video of House Speaker Nancy Pelosi in which she appeared to slur her words could get the label if someone tweets it out after the rules go into effect March 5.
The company acknowledged that deciding what might cause harm is subjective and that some material may fall into a grey area.
“This will be a challenge and we will make errors along the way – we appreciate the patience,” Twitter said in a blog post. “However, we’re committed to doing this right.”
Google, Facebook, Twitter and other technology services are under intense pressure to prevent interference in the 2020 US elections after they were manipulated four years ago by Russia-connected actors. On Monday, Google’s YouTube clarified its policy around political manipulation, reiterating that it bans election-related “deepfake” videos. Facebook has also been ramping up its election security efforts.
Ms Pelosi recently criticised Facebook, saying the company had been “very irresponsible … I think their behaviour is shameful.”
In a tweet, CBS News quoted Pelosi criticising Facebook, which has an office in her district: “All they want are their tax cuts and no antitrust action against them and they schmooze this administration in that regard … They intend to be accomplices for misleading the American people.”

— CBS News (@CBSNews) January 16, 2020
As with many of Twitter’s policies, including those banning hate speech or abuse, success will be measured by how well the company can implement it. This is likely to be especially true for misinformation, which can spread quickly on social media even with safeguards in place.
Facebook, for instance, has been using third-party fact-checkers to debunk false stories on its site for three years. While the efforts are paying off, the battle against misinformation is far from over.