The future of online speech is in the hands of the US Supreme Court.
On Tuesday, the Supreme Court heard oral arguments in one of two high-profile cases involving Google-owned YouTube that could reshape how people use the internet and what they can post online. The Supreme Court is scheduled to hear another case on Wednesday against Twitter, Google and Facebook. Both cases stem from lawsuits brought by relatives of people killed in separate terrorist attacks, alleging that the social media companies are liable for the harmful content that appears on their platforms.
At stake are questions about whether these online platforms should be held legally responsible for content created by their users but promoted by the companies’ algorithms. Tech companies have successfully fought back against these types of lawsuits because of protections they receive under a 27-year-old federal law.
But lawmakers on both sides of the aisle, including US President Joe Biden, have called for changes to what’s known as Section 230 because of growing concerns that tech companies aren’t doing enough to safeguard user safety. Tech companies say that removing this legal shield could hurt free expression because they could be subject to more lawsuits.
Eric Goldman, a professor at Santa Clara University School of Law, said tech platforms are what give ordinary people the ability to talk to others online, an ability that could go away depending on what the Supreme Court decides.
“If the Supreme Court says that’s a risky option, then the Supreme Court isn’t sticking it to Big Tech,” said Goldman, who wrote a brief supporting Section 230 protections. “It’s sticking it to all of us.” Companies could limit who can post on their platforms or scrap user-generated content, he added.
Here’s what you need to know about this high-stakes battle over online speech:
What is Section 230?
Section 230 is part of the 1996 Communications Decency Act, which shields platforms, including Google, Twitter and Meta-owned Facebook, from certain lawsuits over posts created by users. It also allows these platforms to take action against offensive content.
The provision states that no provider or user of an “interactive computer service” should be treated as the publisher or speaker of third-party content.
The co-authors of Section 230 — US Sen. Ron Wyden, an Oregon Democrat, and former Rep. Chris Cox, a California Republican — told the Supreme Court in a brief that Congress created it “to protect Internet platforms’ ability to publish and present user-generated content in real time, and to encourage them to screen and remove illegal or offensive content.” Even back then, online services were facing lawsuits over user content. In 1995, for example, the New York Supreme Court ruled that Internet message-board platform Prodigy Services could be liable for publishing alleged defamatory content.
Section 230 doesn’t apply to content that violates criminal, intellectual property, state, communications privacy and sex trafficking laws.
Why should I care?
Section 230 was designed to encourage free speech online. But a Supreme Court ruling on the matter could alter how you use the internet and what you can post online. If an online platform is worried about more lawsuits, it could change how it moderates content and potentially increase the scrutiny over what you say.
“Without Section 230’s protections, many online intermediaries would intensively filter and censor user speech, while others may simply not host user content at all,” the Electronic Frontier Foundation said in a blog post about the topic.
What cases are the Supreme Court hearing?
The Supreme Court is examining two cases involving online speech: Gonzalez v. Google and Twitter v. Taamneh.
Gonzalez v. Google, which was heard on Tuesday, centers on whether Section 230 protects online platforms, including social networks, from lawsuits when they recommend third-party content. The case stems from a lawsuit filed by the family of Nohemi Gonzalez, a 23-year-old American student who was killed in the 2015 terrorist attacks in Paris. The family alleged that Google-owned YouTube aided the ISIS terrorists because the video-sharing platform allowed them to post videos that incited violence and recruited supporters. The lawsuit also accuses YouTube of recommending ISIS videos to users.
A district court and the US Court of Appeals for the Ninth Circuit ruled in Google’s favor, dismissing Gonzalez’s claims.
In Wednesday’s Twitter v. Taamneh case, the Supreme Court is examining whether people can sue online platforms for aiding and abetting an act of terrorism. The case involves the 2017 death of Nawras Alassaf, a Jordanian citizen who was fatally shot in a nightclub in Istanbul during a mass shooting. ISIS claimed responsibility for the attack. Relatives of Alassaf sued Twitter, Google and Facebook, alleging that the platforms were liable under the Anti-Terrorism Act for aiding and abetting terrorism because the companies didn’t do enough to combat this harmful content.
A district court dismissed the claims in the lawsuit, but the US Court of Appeals for the Ninth Circuit reversed the decision.
What happened during the Gonzalez v. Google hearing?
For more than two-and-a-half hours Tuesday, the Supreme Court justices asked lawyers representing Google and the Gonzalez family a variety of questions about Google’s algorithm, YouTube’s thumbnails, artificial intelligence and actions that users take, such as liking or sharing a post.
Justice Elena Kagan said everyone is trying their best to figure out how a “pre-algorithm statute” applies in a “post-algorithm world.”
“Every time anybody looks at anything on the internet, there is an algorithm involved,” she said.
Eric Schnapper, the lawyer representing the Gonzalez family, said they’re trying to make a distinction in their arguments between “liability for what’s in the content that’s on their websites” and actions companies take to encourage users to look at certain content.
At one point, Justice Samuel Alito told Schnapper that he was “confused” by the arguments the lawyer was making. He asked whether, if a user uploads an ISIS video and YouTube displays a preview image of it, known as a thumbnail, the platform could be sued because it would be considered a publisher of that thumbnail.
“It is acting as a publisher but of something that they helped to create because the thumbnail is a joint creation that involves materials from a third party and a URL from them and some other things,” Schnapper replied.
Justice Amy Coney Barrett asked if a user could be liable for retweeting or liking a tweet.
After the two went back and forth over how Section 230 defines a user, Barrett asked: “On your theory, I’m not protected by Section 230?” “That’s content you’ve created,” Schnapper replied.
How have tech companies responded?
Google’s lawyer Lisa Blatt told the Supreme Court on Tuesday that if websites could be liable for recommending third-party content it “threatens today’s internet.”
“The internet would have never gotten off the ground if anybody could sue every time,” she said about Section 230 protections.
In a post about the case before the hearing, Google said that users would be “left with a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content.”
If platforms could get sued for content they recommend, consumers could have a tougher time finding content they want to view. The tech giant also says that removing Section 230 protections would make the internet less safe, hurt both big and small online platforms, and cause websites to restrict more content or shut down some services because of the legal risks.
Other tech companies, including Reddit, Yelp, Microsoft and Meta, have also defended Section 230 protections in briefs filed to the court.
“Exposing companies to liability for decisions to organize and filter content from among the vast array of content posted online would incentivize them to simply remove more content in ways Congress never intended,” Jennifer Newstead, Meta’s chief legal officer, said in a January blog post about the topic.
Reddit said in its brief that users could become more wary about volunteering to moderate content on its platform or recommending content through actions such as “upvoting” because of legal risks.
In Twitter v. Taamneh, Twitter said it didn’t aid and abet an act of terrorism because the company didn’t intend to help terrorists, had rules against posting terrorist content and wasn’t connected to the terrorist attack in Turkey. Facebook and Google-owned YouTube backed Twitter in a brief, stating that the appeals court’s ruling on the Anti-Terrorism Act is “incorrect” and could invite lawsuits against any provider of goods or services whose offerings terrorists abuse, such as an airline, a financial services firm or a pharmaceutical company.
Twitter, which no longer has a communications department, didn’t respond to a request for comment.
What do US lawmakers think about this?
Democrats and Republicans, surprisingly, agree that Section 230 needs reform. But their reasons for wanting change differ sharply.
Republicans accuse Big Tech of suppressing conservative voices, a charge the companies have repeatedly denied. Last week, US House Judiciary Committee Chairman Jim Jordan issued subpoenas to the CEOs of Google’s parent company Alphabet, Amazon, Apple, Meta and Microsoft.
Democrats argue that Section 230 prevents social media companies from being held accountable for failing to moderate hate speech, misinformation and other offensive content.
“We need Big Tech companies to take responsibility for the content they spread and the algorithms they use,” Biden wrote in an op-ed published in The Wall Street Journal in January.
Justice Brett Kavanaugh asked whether it would be better to keep Section 230 as it is and leave it to Congress to change the law. The Supreme Court is being asked to make a “predictive judgment” when the justices don’t know how “bad” the consequences could be, he added.
“I don’t know how we can assess that in any meaningful way,” he said.
What happens next?
The Supreme Court is expected to rule on the cases this year. The court is also being asked to review other cases involving online speech. In January, it postponed a decision on whether to hear challenges to controversial laws passed in Texas and Florida that restrict how social media companies can moderate content.