When a book or newspaper contains illegal content, the booksellers and newsstands that distribute it are not legally liable; the publishers of that content, however, are. The emergence of online services, which include everything from early blogs and forums to search engines, social media, and online marketplaces, complicated this distinction by blurring the lines between distributors and publishers. Section 230 has become one of the most debated topics of recent decades, a debate that has only intensified with the rise of the Internet and of social media platforms such as Twitter and Facebook. There are strong arguments on both sides.
On one hand, the law allows online platforms to function as intermediaries without being drawn into lawsuits over defamation, hate speech, and similar claims arising from user content. Furthermore, because platforms are not held liable for their users’ speech, they can moderate their services in the best interests of their communities, creating a place for free speech and open debate. On the other hand, some lawmakers consider Section 230 too broad: it shields bad actors as well as good ones, protects online platforms from legal liability, and gives them wide latitude to moderate content and selectively silence voices they may disagree with.
What is Section 230?
Section 230 is a provision of the Communications Decency Act of 1996 and is regarded as one of the most valuable legal tools for protecting the flourishing of innovation and the freedom of expression and speech online.
Section 230 contains two main provisions: one that shields online services and their users from liability when they fail to remove objectionable third-party content, and another that protects them from liability when they do remove such content.
Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In practicality, this would mean that online services would not be liable for defamatory or otherwise unlawful content posted by their users.
Section 230(c)(2) further elaborates, stating that online services are not liable for “any action voluntarily taken in good faith to restrict access to or availability of [objectionable content.]”
The second provision protects online platforms from legal liability for enforcing their community standards by engaging in content moderation.
The distinction between the two provisions is fairly clear: the first protects platforms such as Facebook, Twitter, and Craigslist from civil liability when they fail to, or refrain from, moderating or censoring the information on their websites, while the second protects them from liability when they do filter or remove objectionable content.
In legal interpretation, Section 230 allows platforms to moderate both the services they provide and the content of their users, so long as they act in “good faith.”
What are the exceptions to Section 230?
Section 230 is not absolute: it does not apply to matters involving copyright, federal criminal law, or federal and state sex-trafficking law. The sex-trafficking exception was added in 2018, when Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act (FOSTA-SESTA), removing Section 230’s protection for content that “promoted or facilitated prostitution” and for violations of federal and state sex-trafficking law.
What is the importance of a provision such as Section 230?
There is no doubt that Section 230 has allowed a wide variety of business models to flourish and is in large part responsible for shaping the modern internet as we know it today. Indeed, Section 230 is widely regarded as a core pillar of free speech protection on the internet.
A wide variety of modern online platforms rely on user-generated content. By protecting online services from liability for content posted by their users, Section 230 has paved the way for a range of business models and common features, such as comments and reviews on websites, that might not exist without this legal protection. It is also why bloggers are not considered liable for comments left by readers.
Some also view Section 230 as a kind of safe haven for websites that want to provide a legal environment favorable to free expression, including political or controversial speech, as the protection it provides is unique to the United States.
Courts apply a three-pronged test to determine whether Section 230 applies to a particular case.
First, the defendant must be a “provider or user of an interactive computer service.” For example, if an Instagram user posts a defamatory statement, Instagram, as the provider, would not be liable for that user’s defamation, nor would other users who shared the original post or story in any other form; only the original poster would be liable. This illustrates how Section 230 can apply to both providers and users, though in different ways.
Secondly, the defendant must not be an “information content provider,” or a “person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet.” In other words, if an online service played a part in the creation of illegal content, it can be held liable for that content.
Finally, the plaintiff’s claim must treat the defendant as the “publisher or speaker” of the content in question. If a defendant can show that a case meets all three requirements, Section 230 applies and the defendant will not be held liable for the violation in question. If not, the courts can hold the defendant liable if the claims are proved at trial.
In a world without the legal protection that Section 230 provides, online services would have to review every post on their websites and make the correct content moderation decision every time to avoid liability for their users’ speech. The U.S. Supreme Court ruled in Smith v. California that it is unreasonable to expect a bookseller to know the contents of a few hundred books; it is even more unreasonable to expect the operators of a social media platform to know the contents of thousands, millions, or even billions of posts, even with thousands of human moderators and advanced algorithms.
Section 230 enables and protects a wide variety of businesses and business models, creating enormous economic impact by allowing online businesses to flourish around the world. It allows online services not only to host user-generated content but also to remove it without fear of reprisal. It is also important to appreciate the many ways Section 230 has benefited online services, from large tech companies to small, independent websites, and their billions of users, and to ensure that any changes to these provisions do not, intentionally or otherwise, eliminate or reduce the benefits they currently provide.
This article is written by Sparsh Jain, a 3rd-year law student at Symbiosis Law School, Noida.