Facebook Restricts Live Streaming After New Zealand Shooting

SAN FRANCISCO — When 51 people were killed in Christchurch, New Zealand, in March, the suspect, an Australian man, broadcast the attack live on Facebook. The video spread across the internet.

On Tuesday night, in its strongest response yet to the violent scenes that were live-streamed over its social network, Facebook announced that it would place more restrictions on the use of its live video service.

The company said that, starting Tuesday, anyone who broke certain rules while broadcasting on Facebook Live would be temporarily barred from using the service, with the possibility of a 30-day ban on a first offense. Previously, the company typically did not bar users until they had broken those rules multiple times.

Repeat offenders, or people who post particularly egregious content, could be barred from Facebook altogether.
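In code, a "one strike" rule amounts to simple escalation logic. The sketch below is a minimal, hypothetical Python illustration: Facebook has not published its enforcement system, so the offense record, the 30-day duration and the escalation conditions here are assumptions drawn only from the announcement.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical sketch: Facebook has not published its enforcement logic.
# The offense tracking, durations and escalation rules are assumptions.

FIRST_OFFENSE_LIVE_BAN = timedelta(days=30)  # "possibility of a 30-day ban"

@dataclass
class UserRecord:
    user_id: str
    offense_times: list[datetime] = field(default_factory=list)
    live_banned_until: datetime | None = None
    platform_banned: bool = False

def record_offense(user: UserRecord, egregious: bool, now: datetime) -> None:
    """Apply a one-strike policy to a rule-breaking broadcast."""
    user.offense_times.append(now)
    if egregious or len(user.offense_times) > 1:
        # Repeat offenders, or particularly egregious content,
        # can lead to a bar from the platform outright.
        user.platform_banned = True
    else:
        # First offense: a temporary bar from the live service.
        user.live_banned_until = now + FIRST_OFFENSE_LIVE_BAN

def may_go_live(user: UserRecord, now: datetime) -> bool:
    """Gate checked before a user starts a Live broadcast."""
    if user.platform_banned:
        return False
    return user.live_banned_until is None or now >= user.live_banned_until
```

The point of the change is visible in the first branch: under the old approach, the escalation would require several recorded offenses; under the new policy, a single qualifying offense is enough to trigger a ban.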

“Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” Guy Rosen, vice president of integrity at Facebook, wrote in a blog post. “We will now apply a ‘one strike’ policy to Live in connection with a broader range of offenses.”

The new restrictions may not go far enough for critics who have called on the company to simply shut down Facebook Live. And they may not do much to satisfy some governments. Australian lawmakers, for example, have considered fines and even jail time for social media executives who fail to quickly remove violent content.

The announcement was timed to coincide with a meeting in Paris between Prime Minister Jacinda Ardern of New Zealand and President Emmanuel Macron of France.

On Wednesday, the two government leaders are expected to sign the “Christchurch Call,” a push for new limits against the spread of violent and extremist content online. They are expected to urge Facebook and other internet companies to make commitments that include re-examining their algorithms that steer people to content across the web.

The agreement is nonbinding, but it adds political pressure on Facebook to keep its platform from becoming an online broadcast network for violent behavior.

The attack in Christchurch inspired Ms. Ardern to push for international cooperation against online extremism. She has argued that a country-by-country approach will not work in an interconnected digital world. In addition to France, the signatories are expected to include Britain, Canada, Jordan, Senegal, Indonesia, Australia, Norway, Ireland and the European Commission.

Facebook, Google and Microsoft have also said they will sign. Twitter declined to comment.

In announcing the new restrictions on its live video service, Facebook said it was partnering with three universities — the University of Maryland, Cornell University and the University of California, Berkeley — in an effort to develop new technologies for detecting and removing troublesome images and videos from the internet.

Facebook and other companies were slow to identify and remove the Christchurch video — in part because the original had been edited in small ways as it passed across various services.

Through its new university partnerships — backed by $7.5 million in funding — Facebook said it would work on building technology that can detect images and videos that have been manipulated in subtle ways.
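One reason small edits are so effective at evading detection is that exact fingerprinting breaks: a cryptographic hash of a re-encoded or lightly cropped video shares nothing with the hash of the original. Near-duplicate detection instead typically relies on perceptual hashing, in which visually similar frames produce similar fingerprints. The sketch below is a generic illustration of that idea, using a simple average hash built on the Pillow imaging library; it is not the technology Facebook or its university partners are developing, and the matching threshold is an arbitrary assumption.

```python
from PIL import Image  # Pillow; a stand-in for production fingerprinting

def average_hash(path: str, size: int = 8) -> int:
    """Compute a 64-bit perceptual (average) hash of an image.

    Unlike a cryptographic hash, a light edit (re-encoding, a small
    crop, a watermark) flips only a few bits, so near-duplicates
    remain close in Hamming distance."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes disagree."""
    return bin(a ^ b).count("1")

# An assumed threshold: frames within this distance of a known bad
# frame's hash would be flagged for review.
MATCH_THRESHOLD = 10

def looks_like_known_frame(known_hash: int, candidate_path: str) -> bool:
    return hamming_distance(known_hash, average_hash(candidate_path)) <= MATCH_THRESHOLD
```

Because a light edit flips only a few of the 64 bits, the distance between an original frame and a manipulated copy stays small, which is what makes this family of techniques robust where exact matching fails; the harder research problem is handling deliberate, subtle manipulations designed to push that distance past any threshold.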

Over the past three years, Facebook and other social media giants have come under increasing pressure to identify and remove a wide range of problematic content, including hate speech, false news and violence.

The company has said that it is now using artificial intelligence to pinpoint many types of problematic content and that this technology is rapidly improving.

But A.I. still misses much of this material, most notably hate speech and false news. And the attack in Christchurch showed that the technology has a long way to go in detecting violent images. Facebook also pays thousands of contract employees to scrutinize and remove problematic content.

The Christchurch video spread despite those safeguards.

One solution to rid Facebook Live of violent material would be to simply shut it down. But that is not yet a step the company wants to take. In an echo of previous statements from company executives, Mr. Rosen said the company was trying to find a balance between opposing views.

“We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook,” he wrote. “Our goal is to minimize risk of abuse on Live while enabling people to use Live in a positive way every day.”

Brendan O’Connor, a computer science professor at the University of Massachusetts Amherst who once interned at Facebook and now specializes in technologies that can catch troublesome content on social media, said analyzing video as it is being broadcast was a particularly difficult problem.

“This is sensible — I guess,” he said of Facebook’s new rules. “It seems like one step among hopefully many others.”