Navigating the various approaches to speech will require different solutions, said Kevin Martin, Facebook’s head of lobbying in the United States.
“Mark and Facebook recognize, and support, and are strong defenders of the First Amendment,” Mr. Martin said. That nuance was lost because the opinion piece, which ran in The Washington Post, The Independent in Britain and elsewhere, was written to speak to a global audience, he said.
Tech companies, as private businesses, have the right to choose what speech exists on their sites, much as a newspaper can pick which letters to the editor to publish.
The companies already remove some content that breaks their rules. Facebook and Google, for example, employ tens of thousands of content moderators to root out hate speech and false information. They also use artificial intelligence and machine-learning technology to identify posts that violate their terms of service.
But recent events, like the mosque shootings in New Zealand, have shown the limits of those resources and tools and led to more demands for regulation. A live video streamed by the gunman in the New Zealand massacre was viewed 4,000 times before Facebook was notified. By then, copies of the video had been uploaded to other sites, including 8Chan, and Facebook struggled to take down slightly altered versions.
“For the first time, I’m seeing the left and right agree that something has gotten out of control, and there is a lot of consensus on the harms created by fake news, terrorist content and election interference,” said Nicole Wong, a former deputy chief technology officer in the Obama administration.
Getting consensus on basic definitions of what constitutes harmful content, though, has been difficult. And American lawmakers have offered little help.