SAN FRANCISCO — Facebook, facing withering criticism from governments around the world, said Thursday that it had been more aggressive in recent months about scrubbing its platform of hate speech.
In a report the company releases twice a year, Facebook also said its automated software for detecting illicit content was improving: It now finds and removes more than half of the hate speech on the platform before users report it.
Regulators have expressed renewed interest in cracking down on Facebook after a gunman in Christchurch, New Zealand, live-streamed his mass killings on his Facebook account. The video was viewed just 4,000 times before Facebook removed it, but it spread rapidly across the internet and was reposted millions of times.
The quick distribution of the video — and the apparent inability of Facebook and other tech companies to stop it from spreading — prompted regulators to demand that the company do a better job of policing the content posted on its platform.
The video prompted government leaders from around the world to sign on to the “Christchurch Call,” an agreement to limit violent and extremist content online. In response, Facebook said it would introduce stricter policies for live-streamed videos.
Mark Zuckerberg, Facebook’s chief executive, said in a call with reporters that he had recently discussed regulation with President Emmanuel Macron of France and that governments around the world should take a more proactive role in the regulation of online speech.
“If the rules for the internet were being written from scratch today, I don’t think people would want private companies to be making so many decisions about speech themselves,” Mr. Zuckerberg said.
Facebook said it had removed four million hate-speech posts during the first three months of the year and had detected 65 percent of them with artificial intelligence, up from 24 percent the year before. Its automated systems for detecting violence also improved, Facebook said; they caught 98 percent of the violent content posted on the platform before users reported it.
“We estimated for every 10,000 times people viewed content on Facebook, 25 views contained content that violated our violence and graphic content policy,” Guy Rosen, Facebook’s vice president of integrity, wrote in a blog post. That works out to about 0.25 percent of all content views.
Facebook is also beginning to use artificial intelligence to detect and remove posts selling guns and drugs. Gun sales have thrived on the platform for years, and the company has struggled to prevent them.
In the first quarter of 2019, Facebook removed 670,000 posts about firearm sales from its platform and detected almost 70 percent of them without relying on user reports, the company said.
“By catching more violating posts proactively, this technology lets our team focus on spotting the next trends in how bad actors try to skirt our detection,” Mr. Rosen said.
But Facebook’s automated detection systems are not foolproof. Mr. Rosen said the company’s numbers for removing child exploitation posts were lower this quarter, in part because of a bug that prevented new videos from being added to a database of known content that Facebook automatically blocks from being posted.
And Facebook sometimes mistakenly removes content that does not violate its policies. Mr. Zuckerberg said Facebook would establish an independent review board that would double-check its removal decisions.
“This independent oversight board will look at some of our hardest cases, and the decisions it makes will be binding,” Mr. Zuckerberg said, adding that he and other Facebook executives would not have the power to overrule the oversight board.
The social media company also reported a spike in the number of fake accounts, which it said had been caused by large groups of malicious users trying to register for accounts. The company disabled 2.19 billion fake accounts in the first quarter of 2019, up from 1.2 billion in the final quarter of 2018.