Supreme Court Report

Tech giants fear SCOTUS cases could wreak havoc on the internet



The New Yorker magazine calls them the “cases that could break the internet.” Reddit Inc., the moderated platform of online communities of shared interests, says in an amicus brief that “a sweeping ruling narrowing” the protections of Section 230 of the Communications Decency Act “would risk devastating the Internet.”

And Google, the giant search engine and parent of YouTube, the video-sharing website at the center of one of the cases, uses only slightly less dire language: A U.S. Supreme Court decision “jettisoning” that key liability protection of the 1996 federal law “would threaten the internet’s core functions,” Google says in a brief.

The cases coming before the justices on Tuesday and Wednesday arise from a series of terrorist acts overseas and in the United States. The Islamic State group took credit for a series of attacks in Paris in 2015 that killed, among others, Nohemi Gonzalez, a 23-year-old U.S. citizen studying abroad, whose parents and estate are seeking redress against Google. A 2017 Islamic State group attack on a nightclub in Istanbul killed 39 people, including Jordanian citizen Nawras Alassaf, whose American relatives are pursuing a case against Google, Facebook Inc., and Twitter Inc.

The plaintiffs sued under the federal Anti-Terrorism Act, which as amended in 2016 allows broad theories of secondary liability for any party that “aids and abets” international terrorism. The lawsuits alleged that the internet companies allowed the Islamic State group to post videos inciting violence and other content to radicalize new recruits and spread its message of terror.

“These are companies that do not act reasonably and responsibly,” says Keith L. Altman, a Farmington Hills, Michigan-based lawyer who helps represent the Gonzalez family and is behind several ATA suits against internet companies. “They should not receive a get-out-of-jail-free card on these claims.”

The internet companies counter that they work diligently to remove terrorism-related content, and they do not believe they are liable under the law for these terrorist attacks.

“YouTube undisputedly played no role whatsoever in ‘creating or developing’ alleged ISIS videos,” Google says in its brief in Gonzalez v. Google, which the high court will hear Tuesday. “YouTube’s systems are designed to identify and remove prohibited content, and they automatically detected approximately 95% of videos that were removed for violating YouTube’s Violent Extremism policy in the second quarter of 2022.”

Twitter says that since August 2015, it has terminated more than 1.7 million accounts for violating its anti-terrorism policies.

Meta Platforms Inc., the parent of Facebook and Instagram, says in a brief that it “has invested billions of dollars to develop sophisticated safety and security systems that work to identify, block, and remove terrorist content quickly—typically before it is ever seen by any users.”

A split appeals court ruling for internet companies

The San Francisco-based 9th U.S. Circuit Court of Appeals ruled in 2021 on the Paris and Istanbul cases together with a third involving a 2015 attack by supporters of the Islamic State group in San Bernardino, California, that killed 14 people.

The Google case centers on YouTube video recommendations that the plaintiffs argue were critical to the growth of the Islamic State group. In that case, the 9th Circuit ruled that the ATA claims against Google were generally barred because Section 230 of the Communications Decency Act protects internet platforms even when their algorithms recommend specific content.

The 9th Circuit held that the plaintiffs’ claims fell within Section 230’s protections because the Islamic State group, not YouTube, created or developed the relevant content. YouTube selects the particular content provided to a user “based on that user’s inputs.” The display of recommended content results from algorithms that are merely “’tools meant to facilitate the communication and content of others,’ and ‘not content in and of themselves,’” the court said.

When it came to the Istanbul attack, a lower court did not rule on the Section 230 question, but it had dismissed the plaintiffs’ claims that Twitter, Google, and Facebook had aided and abetted the Islamic State group. But the 9th Circuit revived the aiding-and-abetting claim against the internet companies, ruling that the plaintiffs had plausibly alleged that the social media platforms’ assistance to the Islamic State group was “substantial.”

Twitter alone has appealed that ATA ruling, and the Supreme Court will consider the appeal Wednesday in Twitter Inc. v. Taamneh.

Vehicles for a host of concerns, including political favoritism

The Google case has attracted much more attention, and many more amicus briefs, because of the Section 230 question.

“We think these cases are potentially very important,” says Caitlin Vogus, the deputy director of the Free Expression Project at the Center for Democracy and Technology in Washington, D.C., which has filed amicus briefs supporting Google and Twitter in the respective cases. “We’re concerned that depending on how the court rules on Section 230, there could be a lot of unforeseen consequences.”

Thus, Reddit frets in its brief about what a reexamination of Section 230 might mean for its content moderators.

“A plaintiff might claim emotional distress from a truthful but hurtful post that gained prominence when a moderator highlighted it as a trending topic,” the company says.

Along the same lines, the Wikimedia Foundation, which operates the user-generated Wikipedia reference website, argues in an amicus brief in Gonzalez v. Google that fundamental changes to Section 230’s liability protections “would pose existential threats to the organization.”

As the Google and Twitter cases come to the high court, they also serve as vehicles for multiple groups to express their grievances about internet platforms, content moderation, political favoritism and whether Section 230 needs modification.

“Platforms have not been shy about restricting access and removing content based on the politics of the speaker, an issue that has persistently arisen as Big Tech companies censor and remove content espousing conservative political views, despite the lack of immunity for such actions” in the text of Section 230, Sen. Ted Cruz, R-Texas, says in an amicus brief supporting neither party in the Google case.

Texas, one of two states that have passed laws prohibiting social media platforms from censoring speech based on viewpoint, argues in a brief in the Google case that an “overbroad interpretation” of Section 230 by the Supreme Court will make it difficult to enforce its law.

University of Texas law professor Steve Vladeck noted Monday on his One First site on Substack that some of the concern around Section 230 has been prompted by Justice Clarence Thomas. In two relatively little-noticed opinions in 2020 and 2021, Thomas argued that lower courts have interpreted the provision of the Communications Decency Act too broadly.

“Today’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors,” Thomas wrote in his concurrence to a 2021 opinion denying review in Biden v. Knight First Amendment Institute. “Also unprecedented, however, is the concentrated control of so much speech in the hands of a few private parties. We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”

Vladeck observed that the two cases before the Supreme Court next week “in important respects … don’t fully raise the debate that Justice Thomas has said that he wants the court to have.” Those issues will more squarely come up in cases arising from the content-moderation laws passed by Texas and Florida, he says.

One scholar who says the cases will not ‘break the internet’

Hany Farid, a professor of electrical engineering and computer sciences and a faculty member of the School of Information at the University of California at Berkeley, says he is anxious to see how the justices approach the cases, since in general he is not impressed with Washington policymakers’ understanding of sophisticated internet regulatory issues.

Although he has advised tech companies on content moderation and battling terrorism content, he joined an amicus brief in support of the plaintiffs in the Google case that argues that Google- and YouTube-generated recommendations are not content-neutral and should not fall under the protection of Section 230.

“Platforms like YouTube have learned that outrageous, tortious and divisive content drives users and profits,” Farid says. “The way the internet works today is very different from the chat rooms of the 1990s. The platforms are not neutral hosts. They vacuum up all your personal data and make recommendations that will maximize your viewing, and maximize ads, and maximize their profits.”

A ruling by the court that Section 230 does not shield internet companies from liability for algorithms that recommend content would not “break the internet,” Farid says. Google and other companies are “far from powerless” to do more to screen and remove violent and illegal content from their sites. But he agrees the Google case could be “quite consequential.”

“If the justices get it right, they could make things better,” Farid says. “If they get it wrong, they could make things worse.”
