Tech giants face questions on hate speech going into debates

Executives of Facebook, Google and Twitter faced questioning by a House panel Wednesday about their efforts to stanch the spread of terrorist content and viral misinformation on their social media platforms.

The scrutiny comes as the tech giants step up safety measures to prevent online disinformation targeting the Democratic presidential debates, which begin Wednesday night.

Lawmakers and tech industry executives are concerned that the debates could be targeted by Russian or other hostile parties to foment political conflict using social media, as happened in the 2016 election. U.S. intelligence officials have determined that Russia carried out a sweeping political disinformation campaign on social media to influence the election, and they have repeatedly warned about the threat of foreign meddling in American politics, especially ahead of elections.

“As the presidential debates begin, we are building on our efforts to protect the public conversation and enforce our policies against platform manipulation,” Twitter said in a statement Wednesday. “It’s always an election year on Twitter.”

Facebook said it will have “a dedicated team proactively monitoring for threats as well as investigating any reports of abuse in real time in the lead up to, during and following the debates.”

The hearing by the House Homeland Security Committee was prompted by the March shootings at two mosques in Christchurch, New Zealand, that killed 51 people, attributed to a self-professed white supremacist who livestreamed the attacks on Facebook.

Rep. Bennie Thompson, D-Miss., the panel’s chairman, noted that the livestreamed massacre occurred nearly two years after Facebook, Twitter, Google and other big tech companies established the Global Internet Forum to Counter Terrorism to fight the spread of online terrorist content.

“I want to know how you will prevent content like the New Zealand attack video from spreading on your platforms again,” Thompson told the information policy executives from the three companies.

Thompson said he also wanted to know how the companies are working to keep hate speech and misinformation off their platforms.

Controversy over white nationalism and hate speech has dogged online platforms such as Facebook and Google’s YouTube for years. In 2017, following the deadly violence in Charlottesville, Virginia, tech giants began banishing extremist groups and individuals espousing white supremacist views and support for violence. Facebook extended the ban to white nationalists.

But the big tech companies are now under closer scrutiny than ever in Congress, following a stream of scandals, including Facebook’s lapses in exposing the personal data of millions of users to Cambridge Analytica, a data-mining firm that worked for Donald Trump’s 2016 campaign. Google’s dominant search engine and extensive data collection have raised privacy concerns, and Republicans have accused the company of suppressing conservative viewpoints.

Trump on Wednesday renewed his criticism of the tech giants, insisting that their platforms censor conservative views. “They’re doing it to me on Twitter,” Trump said in an interview with Fox Business Network’s “Mornings with Maria.”

“You know, I have millions and millions of followers, but I will tell you they make it very hard for people to join me on Twitter, and they make it very much harder for me to get out the message,” Trump said. “These people are all Democrats. It’s totally biased toward Democrats.”

Monika Bickert, Facebook’s head of global policy management, said at the hearing that in response to the New Zealand attack, the company now prohibits livestreaming by users who have violated its rules covering dangerous organizations and individuals.

“We want to make sure we’re doing everything to make sure it doesn’t happen again,” Bickert said.

The social network has improved its technology and techniques and is now able to detect terrorist content more effectively, she said, with automated tools that work in 19 languages.

Twitter suspended more than 1.5 million accounts for terrorism-related violations between Aug. 1, 2015, and Dec. 31, 2018, said Nick Pickles, the company’s global senior strategist for public policy.

“We continue to invest in technology … to ensure we can respond as quickly as possible to a potential incident,” he said. “Twitter will take concrete steps to reduce the risk of livestreaming being abused by terrorists, while recognizing that during a crisis these tools are also used by news organizations, citizens and governments.”

Google’s policies for search, news and YouTube make clear the types of conduct that are prohibited, such as misrepresenting the ownership or primary purpose of a site or channel, said Derek Slater, the company’s director of information policy.

“We want to do everything we can to ensure users are not exposed to content that promotes or glorifies acts of terrorism,” Slater said.
