On the docket for the United States Supreme Court this term are Gonzalez v. Google and Twitter v. Taamneh, high-profile cases that could redefine Section 230 of the Communications Decency Act, the law that shields platforms from liability for what their users say on their websites, as well as platform liability for hosting terrorist material.

Evelyn Douek (Image credit: Courtesy Stanford Law School)

Through the free speech protections that Section 230 provides, movements like #MeToo have flourished, said Stanford Law School Professor Evelyn Douek, whose scholarship examines the private and public regulation of online speech. Without Section 230, platforms might become more risk-averse about the content they allow on their sites, she said in an interview with Stanford News.

Here, Douek talks about what Section 230 is and why it is important for freedom of speech.


What is Section 230 of the Communications Decency Act, and what makes it so controversial?

Section 230 has two important parts. First, it immunizes internet platforms from most forms of liability for the content that other users post on their sites. So if I defame you in a tweet, you can sue me, but not Twitter. This is perhaps the best-known part of Section 230. But the law has a second part, which immunizes platforms from liability for the content moderation decisions they make – that is, platforms can take down content and you can’t sue them for that either. The law is controversial because, in a time of growing anger and discontent about platforms’ content moderation – from both sides of politics – Section 230 has become a general stand-in for “something that shields platforms from liability, which is bad because I want platforms to face more liability for all the bad stuff they do in the world.”


What do people – including pundits and politicians – misunderstand about Section 230?

The main goal of Section 230 is not to protect platforms, but to protect speech. In a world without Section 230, platforms facing potential liability for content on their sites would be far more risk-averse and take a lot more content down to avoid the possibility of a lawsuit. The #MeToo movement, for example, might have played out very differently in a world where platforms took down any posts that even remotely looked defamatory. It’s also important to understand that much of the speech people are worried about – hate speech, or political and medical misinformation, for example – is protected speech under the First Amendment. That’s why platforms don’t face liability, not because of Section 230. Get rid of Section 230 tomorrow, and you still won’t be able to sue YouTube in the U.S. for hosting hate speech.


How could SCOTUS transform social media moderation and online publishing?

The main question SCOTUS is now going to consider is whether platforms are still immunized by Section 230 when they recommend certain content to users. Specifically, the plaintiffs want to sue YouTube for allegedly recommending ISIS content to users, and they argue that YouTube shouldn’t be immunized by Section 230 for its own recommendations. The thing is, recommendation algorithms are a central part of the internet as we know it. When you type a search query into Google, the responses it serves up are those algorithmically recommended to you. Your Facebook newsfeed and Twitter feed are the result of algorithmic recommendations. If the court decides that recommendation algorithms fall outside the scope of Section 230, and it takes a broad view of what constitutes a recommendation algorithm, many of these platforms may change the way they operate and become far more risk-averse in the content they host. Before you celebrate, remember that it is often marginalized users and communities, or important forms of controversial speech, that such crackdowns will hit the hardest.


Anything else you would like to add?

This is going to be a blockbuster year for decisions about platform liability and regulation. There is also ongoing litigation over two social media laws from Texas and Florida that is widely expected to end up at the Supreme Court soon as well (for a good primer on those, here’s a handy podcast I prepared earlier). Depending on what happens – and it’s really anyone’s guess – your internet could look pretty different in a few years.