Legal Shield for Websites Rattles Under Onslaught of Hate Speech

When the most consequential law governing speech on the internet was created in 1996, Google.com didn’t exist and Mark Zuckerberg was 11 years old.

The federal law, Section 230 of the Communications Decency Act, has helped Facebook, YouTube, Twitter and countless other internet companies flourish.

But Section 230’s legal protection has also extended to fringe sites that host hate speech, anti-Semitic content and racist tropes, like 8chan, the internet message board where the suspect in the El Paso shooting massacre posted his manifesto.

The law shields websites from liability for content created by their users, while allowing internet companies to moderate their sites without being legally on the hook for everything they host. The protection is not absolute, however: it does not cover certain criminal acts, like posting child pornography, or violations of intellectual property.

Now, as scrutiny of big technology companies has intensified in Washington over a wide variety of issues, including how they handle the spread of disinformation or police hate speech, lawmakers are questioning whether Section 230 should be changed.

Last month, Senator Ted Cruz, Republican of Texas, said in a hearing about Google and censorship that the law was “a subsidy, a perk” for big tech that may need to be reconsidered. In an April interview, Speaker Nancy Pelosi of California called Section 230 a “gift” to tech companies “that could be removed.”

“There is definitely more attention being paid to Section 230 than at any time in its history,” said Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy and the author of a book about the law, “The Twenty-Six Words That Created the Internet.”

“There is an inclination to look at Section 230 as one lever to influence the tech companies,” he said.

Here is an explanation of the law’s history, why it has been so consequential and whether it is really in jeopardy.

We can thank “The Wolf of Wall Street.”

Stratton Oakmont, a brokerage firm, sued Prodigy Services, an internet service provider, for defamation in the 1990s. Stratton was founded by Jordan Belfort, who was convicted of securities fraud and was portrayed by Leonardo DiCaprio in the Martin Scorsese film about financial excess. An anonymous user wrote on Prodigy’s online message board that the brokerage had engaged in criminal and fraudulent acts.

The New York Supreme Court ruled that Prodigy was “a publisher” and therefore liable because it had exercised editorial control by moderating some posts and establishing guidelines for impermissible content. If Prodigy had not done any moderation, it might have been granted free speech protections afforded to some distributors of content, like bookstores and newsstands.

The ruling caught the attention of a pair of congressmen, Ron Wyden, a Democrat from Oregon, and Christopher Cox, a Republican from California. They were worried the decision would act as a disincentive for websites to take steps to block pornography and other obscene content.

The Section 230 amendment was folded into the Communications Decency Act, an attempt to regulate indecent material on the internet, without much opposition or debate. A year after it was passed, the Supreme Court declared that the indecency provisions were a violation of First Amendment rights. But it left Section 230 in place.

Since it became law, the courts have repeatedly sided with internet companies, invoking a broad interpretation of immunity.

On Wednesday, the United States Court of Appeals for the Second Circuit affirmed a lower court’s ruling that Facebook was not liable for violent attacks coordinated and encouraged by Facebook accounts linked to Hamas, the militant Islamist group. In the majority opinion, the court said Section 230 “should be construed broadly in favor of immunity.”

Section 230 has allowed the modern internet to flourish. Sites can moderate content — set their own rules for what is and what is not allowed — without being liable for everything posted by visitors.

Whenever there is discussion of repealing or modifying the statute, its defenders, including many technology companies, argue that any alteration could cripple online discussion.

The internet industry has a financial incentive to keep Section 230 intact. The law has helped build companies worth hundreds of billions of dollars with a lucrative business model of placing ads next to largely free content from visitors.

That applies to more than social networks like Facebook, Twitter and Snapchat. Wikipedia and Reddit depend on their visitors to sustain the sites, while Yelp and Amazon count on user reviews of businesses and products.

More recently, Section 230 has also provided legal cover for complicated decisions about content moderation. Facebook and Twitter have cited it to defend themselves in court when users have sued after being barred from the platforms.

Many of those cases are quickly dismissed because the companies can assert that, under the law, they have the right to make content moderation decisions as they see fit.

The criticisms of Section 230 vary. While both Republicans and Democrats are threatening to make changes, they disagree on why.

Some Republicans have argued that tech companies should no longer enjoy the protections because they have censored conservatives and thereby violated the spirit of the law, which states that the internet should be “a forum for a true diversity of political discourse.”

Facebook, Twitter and Google, which runs YouTube, are the main targets of the bias claims; all three have said the accusations are baseless.

On the flip side, some Democrats have argued that small and large internet sites aren’t serious about taking down problematic content or tackling harassment because they are shielded by Section 230.

Mr. Wyden, now a senator, said the law had been written to provide “a sword and a shield” for internet companies. The shield is the liability protection for user content, but the sword was meant to allow companies to keep out “offensive materials.”

However, he said firms had not done enough to keep “slime” off their sites. In an interview with The New York Times, Mr. Wyden said he had recently told tech workers at a conference on content moderation that if “you don’t use the sword, there are going to be people coming for your shield.”

There is also a concern that the law’s immunity is too sweeping. Websites trading in revenge pornography, hate speech or personal information to harass people online receive the same immunity as sites like Wikipedia.

“It gives immunity to people who do not earn it and are not worthy of it,” said Danielle Keats Citron, a law professor at Boston University who has written extensively about the statute.

The first blow came last year with the signing of a law that created an exception in Section 230 for websites that knowingly assist, facilitate or support sex trafficking. Critics said the new law opened the door to other exceptions and would ultimately render Section 230 meaningless.

Ms. Citron, who is also vice president of the Cyber Civil Rights Initiative, a nonprofit devoted to combating online abuse, said this was “a moment of re-examination.” After years of pressing for changes, she said there was more political will to modify Section 230.

Senator Josh Hawley, a Republican from Missouri and a frequent critic of technology companies, introduced a bill in June that would eliminate the law’s immunity unless tech companies submitted to an external audit certifying that their content moderation practices are politically neutral.

While there is growing political will to do something about Section 230, finding a middle ground on potential changes is a challenge.

“When I got here just a few months ago, everybody said 230 was totally off the table, but now there are folks coming forward saying this isn’t working the way it was supposed to work,” said Mr. Hawley, who took office in January. “The world in 2019 is very different from the world of the 1990s, especially in this space, and we need to keep pace.”