Facebook Expands Definition of Terrorist Organizations to Limit Extremism

Facebook on Tuesday announced a series of changes to limit hate speech and extremism on the social network, expanding its definition of terrorist organizations and planning to deploy artificial intelligence to better spot and block live videos of shooters.

The company is also expanding a program that redirects users searching for extremism to resources intended to help them leave hate groups behind.

The announcement came the day before a hearing on Capitol Hill on how Facebook, Google and Twitter handle violent content. Lawmakers are expected to press the companies' executives on how they handle posts from extremists.

Facebook, the world’s largest social network, has been under intense pressure to limit the spread of hate messages, pictures and videos on its site. It has also faced harsh criticism for not detecting and removing the live video of an Australian man who killed 51 people in Christchurch, New Zealand.

In at least three mass shootings this year, including the one in Christchurch, the violent plans were announced in advance on 8chan, an online message board. Federal lawmakers questioned the owner of 8chan this month.

In its announcement post, Facebook said the Christchurch tragedy “strongly” influenced its updates. And the company said it had recently developed an industry plan with Microsoft, Twitter, Google and Amazon to address how technology is used to spread terrorist content.

Facebook has long touted its ability to catch terrorism-related content on its platform. In the last two years, the company said, it has detected and deleted 99 percent of extremist posts, about 26 million pieces of content, before any user reported them.

But Facebook said that it had mostly focused on identifying terrorist organizations such as separatist groups, Islamist militants and white supremacists. The company said that it would now ban all people and organizations that proclaim a violent mission or are engaged in violence leading to real-world harm.

The team leading its efforts to counter extremism on its platform has grown to 350 people, Facebook said, and includes experts in law enforcement, national security and counterterrorism, as well as academics who study radicalization.

To detect more content relating to real-world harm, Facebook said it was updating its artificial intelligence to better catch first-person shooting videos. The company said it was working with American and British law enforcement officials to obtain camera footage from their firearms training programs to help its A.I. learn what real, first-person violent events look like.
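
Facebook has not published details of its detection model or training pipeline. As a rough illustration of the general technique the paragraph describes, fine-tuning a pretrained video classifier on labeled first-person footage, here is a minimal PyTorch sketch. The model choice (torchvision's r3d_18), the binary violent/benign labels and the synthetic stand-in clips are all assumptions for illustration, not Facebook's actual system.

```python
# Hypothetical sketch: fine-tune a 3D CNN video classifier to flag
# first-person shooting footage. Model, labels and the synthetic
# stand-in data are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# In practice one would load Kinetics-pretrained weights
# (weights="DEFAULT"); None keeps this sketch offline-runnable.
model = r3d_18(weights=None)
# Replace the classification head with 2 classes: violent / benign.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a real dataset of labeled clips, e.g. law-enforcement
# firearms-training footage (label 1) vs. benign first-person video
# (label 0). Shape: (batch, channels, frames, height, width).
clips = torch.randn(4, 3, 16, 112, 112)
labels = torch.tensor([1, 0, 1, 0])

# One training step shown for brevity; a real pipeline would loop
# over a DataLoader for many epochs.
model.train()
optimizer.zero_grad()
logits = model(clips)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```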

Since March, Facebook has also been redirecting users who search for terms associated with white supremacy to resources like Life After Hate, an organization founded by former violent extremists that provides crisis intervention and outreach. In the wake of the Christchurch tragedy, Facebook is expanding that capability to Australia and Indonesia, where people will be redirected to the organizations EXIT Australia and ruangobrol.id.
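
The mechanics of such a redirect are simple in outline: flagged search terms map to a region-appropriate outreach resource instead of ordinary results. The sketch below is a hypothetical illustration of that rule; the term list, country codes, mapping and function name are invented, and Facebook has not disclosed its implementation.

```python
# Hypothetical sketch of a search-term redirect rule. Term list,
# country codes and function name are invented for illustration.
from typing import Optional

FLAGGED_TERMS = {"example flagged phrase"}  # placeholder; real lists are curated

OUTREACH_RESOURCES = {
    "US": "Life After Hate",
    "AU": "EXIT Australia",
    "ID": "ruangobrol.id",
}

def redirect_target(query: str, country: str) -> Optional[str]:
    """Return an outreach resource if the query matches a flagged term."""
    if query.strip().lower() in FLAGGED_TERMS:
        return OUTREACH_RESOURCES.get(country)
    return None  # no match: show normal search results

# Example: a flagged search from Indonesia surfaces ruangobrol.id.
print(redirect_target("example flagged phrase", "ID"))
```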

“We know that bad actors will continue to attempt to skirt our detection with more sophisticated efforts,” the company said, “and we are committed to advancing our work and sharing more progress.”