The justices of the Supreme Court struggled during oral arguments on Feb. 21 over the extent to which social media platforms should be held liable when terrorist groups use them to promote their causes.
Conservative and liberal members of the high court alike expressed confusion during a hearing that spanned 2 hours and 41 minutes as the lawyer for a terrorism victim’s family urged them to curtail federal protections enacted decades ago to spur the growth of the internet. The justices seemed concerned that going too far could undermine those protections and open the door to widespread litigation over internet content.
Big Tech and its supporters are deeply concerned that the court could eviscerate Section 230 of the federal Communications Decency Act of 1996, which generally prevents internet platforms and internet service providers from being held liable for what users say on them. They say the legal provision has fostered a climate online in which free speech has flourished.
Although social media platforms say they shouldn’t be held responsible if terrorists use their websites, critics say that shielding them has led to real-world harm. Congress has been under pressure for years to change Section 230, as conservatives have complained about social media censorship and liberals have protested misinformation online.
Except for Justice Clarence Thomas, who has suggested that the Supreme Court should revisit the reach of Section 230, the justices’ views on the legal provision have been largely unknown.
The case, Gonzalez v. Google LLC, court file 21-1333, goes back to 2015, when student Nohemi Gonzalez, a 23-year-old U.S. citizen, was killed in an ISIS attack in Paris. The killing was part of a coordinated series of attacks the terrorist group carried out across the city, leaving 129 people dead.
Her family sued, claiming that Google, owner of YouTube, was liable under the federal Anti-Terrorism Act for aiding ISIS recruitment efforts by allegedly using algorithms to steer users to ISIS videos.
The “plaintiffs asserted that Google had knowingly permitted ISIS to post on YouTube hundreds of radicalizing videos inciting violence and recruiting potential supporters to join the ISIS forces then terrorizing a large area of the Middle East, and to conduct terrorist attacks in their home countries,” according to the family’s petition.
Because of the algorithm-based recommendations, users “were able to locate other videos and accounts related to ISIS even if they did not know the correct identifier or if the original YouTube account had been replaced.”
The family asserted that Google’s services “played a uniquely essential role in the development of ISIS’s image, its success in recruiting members from around the world, and its ability to carry out attacks.” The original complaint filed in the case added that “Google officials were well aware that the company’s services were assisting ISIS.”
A divided U.S. Court of Appeals for the 9th Circuit found that the viewing recommendations were protected by Section 230 even if the section “shelters more activity than Congress envisioned it would.” Google denied liability in a court filing, saying it would be impossible to review every video posted to YouTube, which receives more than 500 hours of new content every minute.
During oral arguments on Feb. 21, the family’s attorney, Eric Schnapper, suggested that viewing recommendations fall outside the Section 230 shield.
“A number of the briefs in this case urge the court to adopt a general rule that things that might be referred to as a recommendation are inherently protected by the statute, a decision which would require the courts to then fashion some judicial definition of ‘recommendation,’” he said.
“We think the court should decline that invitation and should instead focus on interpreting the specific language of the statute.”
Chief Justice John Roberts was skeptical, telling Schnapper that despite any algorithm YouTube may use to push users to view videos, the company is “still not responsible for the content of the videos ... or text that is transmitted.”
Justice Elena Kagan told Schnapper he was correct to say the statute doesn’t distinguish between content and content recommendations, but said the law came about before the rise of online algorithms.
“Everybody is trying their best to figure out how this statute applies, [how] the statute, which was a pre-algorithm statute, applies in a post-algorithm world,” she said.
Kagan noted that, as Justice Thomas had suggested earlier, online “everything involves ways of organizing and prioritizing material.” She asked, “Does your position send us down the road such that 230 really can’t mean anything at all?”
Schnapper responded in the negative, saying “algorithms are ubiquitous, but the question is, what does the defendant do with the algorithm? If it uses the algorithm ... to encourage people to look at ISIS videos, that’s within the scope of JASTA.”
“JASTA” refers to the federal Justice Against Sponsors of Terrorism Act of 2016, which allows lawsuits to be filed in federal courts against a foreign state for supporting international terrorism regardless of whether the state is officially designated as a state sponsor of terrorism.
“I can imagine a world where you’re right that none of this stuff gets protection,” Kagan said. “And, you know, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? A little bit unclear.
“On the other hand, I mean, we’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet,” she said to laughter, as she poked fun at the court.
Justice Samuel Alito seemed befuddled by Schnapper’s comments.
“I’m afraid I’m completely confused by whatever argument you’re making at the present time,” Alito said.
Justice Brett Kavanaugh suggested that if the court broke with the prevailing interpretation of Section 230 and found for the family, it could spur many more lawsuits.
“Isn’t it better ... to keep it the way it is?” Kavanaugh asked Deputy U.S. Solicitor General Malcolm Stewart, suggesting that the court should “put the burden on Congress to change that and they can consider the implications and make these predictive judgments.”
Stewart said there could be a wave of lawsuits but few “would have much likelihood of prevailing.”
Google attorney Lisa Blatt said Congress spoke clearly on the matter when it created Section 230, which “forbids treating websites as ‘the publisher or speaker of any information provided by another.’”
So when plaintiffs are harmed by website content, the section “bars the claim,” Blatt said.
Justice Ketanji Brown Jackson suggested to the lawyer that while Section 230 shields tech companies, it also pushes them to remove offensive content.
“Isn’t it true that that statute had a more narrow scope of immunity ... than courts have ultimately interpreted it to have and that what YouTube is arguing here today ... [is] really ... just about making sure that your platform and other platforms weren’t disincentivized to block and screen and remove offensive ... content?”
On Feb. 22, the high court will hear a related case about social media platforms’ liability for a separate terrorist attack.
In Twitter Inc. v. Taamneh, court file 21-1496, Twitter, Google, and Facebook are expected to argue that they should not be held liable under the Anti-Terrorism Act for aiding and abetting international terrorism merely because they provided services to billions of users, some of whom allegedly were supporters of ISIS.