
The U.S. Supreme Court is considering a lawsuit that could effectively repeal a section of the Communications Decency Act (CDA), the law that governs online speech.

Specifically, the CDA’s Section 230 protects online platforms such as YouTube, Facebook and Twitter from liability for harm stemming from content on their sites. Rather than overturning Section 230, we should add limited liability to incentivize platforms to do a better job recognizing and removing truly harmful content.

The case under review is Gonzalez v. Google, in which the plaintiffs allege that because Google's YouTube failed to take down ISIS terrorist videos, and even recommended them via its content algorithm, Google is liable for damages under the Anti-Terrorism Act; the videos, they argue, may have played a role in inspiring the 2015 Paris attacks.

Although the plaintiffs have not demonstrated that the perpetrators viewed the ISIS videos, or that removing the videos sooner would have prevented the attacks, the case remains underway. The precedent this lawsuit would set if successful, namely that Section 230 does not cover algorithmic recommendations of harmful content, is dangerous. The result would be a low bar for liability that would chill online speech.

When the CDA was enacted in 1996, major social media platforms had yet to emerge. Since then, the CDA has changed little, but our relationship with the web has changed dramatically. While many of these changes have been beneficial — allowing people to connect during the pandemic, for example — platform abuse has risen, and in some cases is facilitating real-world harm.

Section 230’s liability shield has allowed social media platforms to prosper. But there are many instances of harm slipping through the cracks, leaving victims with few avenues of recourse.

Calls for reform, or even the outright repeal of Section 230, have increased. But the consequences of such a repeal would be immense, changing how most sites operate and likely decreasing user participation.

Regulating online speech requires balancing two competing concerns: addressing negative impacts caused by platform abuse while avoiding actions that would prompt platforms to drop non-harmful content. The central question is whether it is worthwhile to change the operating procedure for the entire web to spare a few victims from inordinate harm. I believe the answer is no.

We should not, however, give up on addressing the negative consequences of unfettered content. One solution is a form of limited legal liability, under which harm must stem directly from the platform's negligence, including failure to address known issues, and must be sufficiently severe to warrant action (e.g., a threat to security, or loss of life or livelihood).

Most major content platforms have a process for considering content removal appeals, but they are slow to respond to such appeals or ignore them altogether. A standard based on harm would incentivize companies to address abuse in days or weeks, instead of months or years. The key would be assigning a level of financial liability that would make responsiveness worthwhile, but not so high that platforms would shy away from hosting user content, which drives revenue.

With respect to Gonzalez v. Google, the standard would require demonstrating not only that the attacks followed directly from their perpetrators’ viewing of the ISIS videos, but also that Google was asked to take the videos down and refused or otherwise failed to do so until after the attacks. Without this additional degree of scrutiny, every platform could be held liable for any perceived harm stemming from user content.

Our focus should be on addressing the most extreme and egregious cases of negligence rather than trying to clamp down more broadly.

To avoid trivial or opportunistic lawsuits and limit potential damage to smaller platforms, the new standard would need to be comprehensive. Ideally, liability would align with the degree of harm caused.

Since smaller platforms are presumably less likely to cause significant or widespread harm than large ones, they would rarely meet the minimum threshold. To prevent wealthy litigants from bankrupting smaller platforms through legal fees, certain requirements would need to be met before a lawsuit could even be filed.

Additionally, restrictions on the number and frequency of such lawsuits would reduce the incidence of serial and vexatious litigants. An approach akin to California's rule requiring a losing plaintiff to pay the other side's legal fees could be employed to deter bad-faith filings.

Jamie Forte is a graduate of Phillips Exeter Academy and the College of William and Mary. This opinion is based on his 2022 Honors Thesis, “Free Speech and its Limits: An Exploration of Tolerance in the Digital Age.”