A Year Later, Twitter’s New Banned Speech Policy Adds Little to the Old One

“I like to think of this as us trying to be experimental, the way that our colleagues in product and engineering are very experimental and they’re trying new things,” Ms. Gadde said in an interview at the time.

The response from users was swift — and critical. Twitter received more than 8,000 pieces of feedback from people in more than 30 countries. Many said the draft made no sense, pointing out cases in which the policy would lead to takedowns of posts that lacked any negative intent.

In one example, fans of Lady Gaga, who call themselves “Little Monsters” as a term of endearment, worried that they would no longer be able to use the phrase. Some gamers complained that they would be unable to discuss killing a character in a video game. Others said the draft policy didn’t go far enough in addressing hate speech and sexist comments.

In October and November, Twitter employees began revising the policy based on the public input.

“We knew the policy was too broad,” Mr. Peterson said. The solution, he and others decided, was to narrow it down to groups that are protected under civil rights law, such as women, minorities and L.G.B.T.Q. people. Religious groups seemed particularly easy to identify in tweets, and there were clear cases of dehumanization on social media that led to harm in the real world, Twitter employees said. Those include the ethnic cleansing of Rohingya Muslims in Myanmar, which was preceded by hate campaigns on social networks like Facebook.

Early this year, Twitter further limited the scope of the policy by carving out an exception. The company prepared a feature to preserve tweets from world leaders, like Mr. Trump, even if they engaged in dehumanizing speech. Twitter reasoned that such posts were in the public interest. So if any world leaders tweeted something insulting and unacceptable, their posts would be kept online but hidden behind a warning label.

Twitter then trained its moderators to spot dehumanizing content, using a list of 42 religious groups as a guide and Mr. Trump’s tweet containing the uncomplimentary phrase about certain countries as an example of what to allow. It assigned 10 engineering teams to design the warning label and to make sure that any offending tweets would not appear in search or other Twitter products. It announced the exception for world leaders last month.

On Tuesday, Twitter also said it would require the deletion of old tweets that dehumanize religious groups but would not suspend accounts that had a history of such tweets, because the rule did not exist when they were posted. New offending tweets, however, will count toward a suspension.

“We constantly keep changing our rules, and we try to improve across the product,” said David Gasca, Twitter’s head of product health. “We’re never fully done.”
