Google has warned that a ruling against it in a pending Supreme Court case could jeopardize the entire internet by stripping away a key legal protection for content moderation decisions that involve artificial intelligence (AI).
Section 230 of the Communications Decency Act of 1996 currently offers a general “liability shield” covering how companies moderate content on their platforms.
However, as CNN reports, Google wrote in a legal filing that if the Supreme Court rules in favor of the plaintiff in Gonzalez v. Google (a case concerning YouTube’s algorithms recommending pro-ISIS content to users), the internet could be overrun with dangerous, offensive, and extremist content.
Automation in moderation
Part of a nearly 27-year-old law that US President Joe Biden has already targeted for reform, Section 230 is not equipped to address modern developments such as artificial intelligence algorithms, and this is where the problems begin.
Google’s main argument is that the internet has grown so much since 1996 that integrating artificial intelligence into content moderation has become a necessity. “Virtually no modern website would work if users had to sort content themselves,” the company says in its filing.
This abundance of content, Google argues, means that technology companies must use algorithms to present it to users in a manageable way, whether that’s search engine results, flight offers, or job recommendations on employment sites.
Google also said that, under current law, technology companies could simply refuse to moderate their platforms as a perfectly legal way to avoid liability, but that this would risk turning the internet into a “virtual cesspool”.
The tech giant further pointed out that YouTube’s community guidelines expressly prohibit terrorism, adult content, violence, and “any other dangerous or offensive content,” and that it continually tweaks its algorithms to preemptively block prohibited content.
It further claimed that around 95% of videos violating YouTube’s “violent extremism policy” were automatically detected in the second quarter of 2022.
Nevertheless, the plaintiffs in the case argue that YouTube failed to remove all Islamic State-related content and that, by failing to do so, it contributed to the “rise of the Islamic State”.
In an attempt to distance itself from any responsibility on this point, Google responded that YouTube’s algorithms recommend content to users based on similarities between a piece of content and content the user has already shown interest in.
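To make that argument concrete (Google has not published its actual ranking code, so this is purely a hypothetical sketch of similarity-based recommendation in general), a system of this kind might score candidate videos by comparing their feature vectors against those of videos a user has already watched:

```python
# Hypothetical sketch of similarity-based recommendation.
# This is NOT YouTube's algorithm; it only illustrates the general idea
# of ranking candidates by their similarity to previously watched items.
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(watched: list[list[float]],
              candidates: dict[str, list[float]],
              top_n: int = 3) -> list[str]:
    """Score each candidate by its best similarity to any watched item."""
    scores = {
        title: max(cosine_similarity(vec, w) for w in watched)
        for title, vec in candidates.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

The relevance to the case is visible even in this toy version: a purely similarity-driven loop keeps surfacing more of whatever a user has already engaged with, which is precisely why the plaintiffs argue recommendations are a distinct act by the platform rather than mere hosting of third-party content.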
This is a complex case, and while it’s easy to buy into the idea that the internet has grown too big for manual moderation, it’s equally compelling to suggest that companies should be held accountable when their automated solutions fail.
After all, if even the tech giants can’t guarantee what appears on their platforms, then users who rely on filters and parental controls can’t be sure those measures are effectively blocking objectionable content.