
What Is Section 230? Why Ending It Would Create Problems

May 22, 2024

Calls to end Section 230 of the Communications Decency Act have picked up steam in the past few years, as advocates and lawmakers search desperately for ways to hold tech companies accountable for their conduct and their algorithms.

Policymakers see harmful content online — focusing particularly on what they see as its damaging effects on children — and claim that companies like Meta, Google and Twitter use the law to avoid taking responsibility for what happens on their websites. And who can blame them for flocking to the allure of a simple, singular solution to the dramatic ways that tech platforms have impacted our society and our democracy?

But like most things in life (and policymaking), it’s just not that simple.

Targeted reforms could help address the proliferation of harmful content online, but the latest remedy — new legislation that would sunset Section 230 altogether by the end of 2025 — is the wrong call. According to the sunset bill’s sponsors, House Energy and Commerce Committee Chairwoman Cathy McMorris Rodgers and Ranking Member Frank Pallone, this would put the burden on “Big Tech … to work with Congress over 18 months to evaluate and enact a new legal framework.” Simply put, this approach lacks the nuance required to prevent this move from backfiring and destroying the internet as we know it.

What is Section 230? 

In the 1990s, litigants started asking courts to decide whether early online platforms should be held liable for content users posted on their services. To understand this question, it’s helpful to think of analogies to the offline world. For instance, there is a different legal standard for publishers and distributors of content, so a newspaper is held to a different standard than a newsstand selling that newspaper.

Publishers are essentially liable for the content they choose to publish, but distributors are liable only when they know that the content they distribute is harmful and actionable in court in some way. There’s a common-sense appeal to this distinction: Distributors, like bookstores, shouldn’t have to confirm that every single book they sell does not defame someone before they put it on their shelves.

Section 230 was created to settle differing outcomes in the courts over whether and how platforms are liable for their users’ speech when the platform engages in any content moderation. The central piece of the law says:

No provider … of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This protection is what allows websites and apps that host third-party content to moderate that content without immediately being sued for everything they leave up. The law doesn’t just benefit powerful tech platforms. It’s vital for big and smaller businesses alike. Most importantly, it’s essential for the hundreds of millions of people who use these services and share ideas online. For nearly three decades, this has allowed the internet to transform into the online universe we know today — the good, the bad and everything in between.

Shielded from getting sued for everything their users post, online platforms can let their users post what they want (according to company terms) and let online communities thrive. And without the financial burden of monitoring costs and possible litigation, smaller companies, startups and noncommercial ventures are able to get off the ground and prosper.

But we know all too well that these online platforms are not just beacons of transformative hope and positive community change. Racism, misogyny, homophobia, transphobia and other forms of hate run rampant online, with all manner of amplified harassment and targeted disinformation. Platforms can and do monetize such content, and many even design their products to encourage engagement over healthy interactions. 

This occurs on a personal level and at a national and global scale as well. There’s obvious reason for concern about violent assaults on democratic institutions and the spread of disinformation about elections, civil and human rights, and public health. And while we disagree with the approach that Reps. McMorris Rodgers and Pallone have taken, they rightfully point out that a lot of the concerning harms — such as cyberbullying and predators — affect children in particular.

Free Press Action believes that platforms, like other distributors, should be held accountable for harms that they know they are causing and exacerbating. This doesn’t mean they’ll be liable in court every single time someone says something harmful online. There’s simply no legal relief for all instances of lies and awful content, and it’s really hard to prove in court that a defendant’s conduct is actually a cause of the harm in question.

Yet with targeted reforms to Section 230, more plaintiffs would get a chance to make the case for platform liability. And courts have already started to recognize when platforms are designing harmful products and are responsible for creating unlawful content themselves rather than merely hosting users’ content. Surgical changes to 230 are far better than the sledgehammer proposed in Congress.

Lawmakers should focus on reforming 230 in ways that allow better enforcement and application of existing civil-rights law. If we want to hold platforms accountable for how they collect and use people’s personal data in harmful and discriminatory ways, Congress should also pass a comprehensive data-privacy law. And Congress and the Federal Trade Commission should continue to seek greater transparency about platforms’ content-moderation policies and terms of service, and mandate consistent and equitable enforcement of those policies across all languages in which the platform operates.

The risks of gutting Section 230

So much of the motivation to strike Section 230 is based on a desire to hold big tech platforms accountable. But a repeal wouldn’t just affect Facebook, Google and TikTok. Based on what Reps. McMorris Rodgers and Pallone say in their op-ed, this bill would harm small companies, including ones that don’t even exist yet. That’s another misstep of the new legislation: It would take Section 230 away from everyone, yet McMorris Rodgers and Pallone designate “Big Tech” to come up with its replacement. The interests of smaller and newer companies, as well as consumers, are nowhere to be found.

Without Section 230, online platforms would have to either screen and approve every single piece of content users post online or abandon content moderation altogether. Screening everything is not just impossible to do well at internet scale; it’s also a giant barrier to entry for smaller and newer services. The tradeoff for sunsetting Section 230 is lose-lose: harming competition and removing platforms’ incentive to moderate content in good faith using their own discretion.

This could also have the effect of chilling expression for communities that media outlets often ignore, malign or underserve, including Black and Brown people, LGBTQIA+ individuals, immigrant populations, religious or language minorities, activists and political dissidents. We know platforms most often challenge and take down the speech of these individuals, and we can’t expect these companies to stand up against the wave of hate and the calls to remove posts from people who are already marginalized and underrepresented.

And gutting Section 230 — harming competition, discouraging good-faith content moderation and chilling expression — wouldn’t even solve the problems these legislators hope to address. The First Amendment largely protects the harmful content they seek to curb, which means these lawsuits are unlikely to end in monetary damages or an injunction.

How to make Section 230 better

Section 230 reform should return to the history of the law’s drafting and restore the original understanding of the text. When the statute says that platforms shouldn’t be treated as the publisher or speaker of any information that a user provides, platforms that knowingly amplify harmful content could still be treated as the distributor of third-party content. 

That’s not always going to be an easy test. The distinction between publishing content and distributing it is something we need to study and understand. But tech platforms could and should be held liable under the knowledge-based standard for distributors when they know that the content they continue to amplify is harmful. This standard would ensure that providers could be sued for hosting third-party content they know (or have reason to know) violates a legal duty, but would not require them to screen all third-party content before allowing users to post. Congress could clarify the original meaning of Section 230 explicitly in an amendment.

Congress should also clarify that Section 230 (even as it stands now) does not protect “interactive computer services” (i.e., tech platforms) from claims about their own actions or content they create themselves in whole or in part. Many courts have recognized this already.

So what’s next? As Free Press Action has testified in Congress, any Section 230 reforms should preserve low barriers to the distribution of benign and beneficial content with changes that let injured parties seek to hold platforms accountable for their own bad acts.

Any Section 230 reforms should apply across the board, not just to Big Tech companies. These conversations must involve smaller entrepreneurs too. And we should also recognize that much harmful and abusive activity happens on smaller platforms, so there’s no sense in having separate rules for platforms big and small.

Last, changes to Section 230 shouldn’t let Congress off the hook for passing comprehensive data-privacy laws. We need the current efforts from this same House committee to land in a good place: legislation that limits the data platforms collect about us and prohibits discriminatory and harmful uses of that information, all in ways that have nothing to do with Section 230 or lawsuits over user-generated content.

A full repeal (or sunset) of Section 230 would be disastrous to online life. It would raise barriers to speech and chill expression by promoting excessive takedowns — possibly shuttering entire sites and services — and disproportionately shut out people of color and other marginalized speakers.

Help Free Press Action keep fighting to protect the open internet and combat online hate and disinformation: Donate today.