The Supreme Court could be about to decide the legal fate of AI search

The Supreme Court is about to reconsider Section 230, a law that’s been foundational to the internet for decades. But whatever the court decides might end up changing the rules for a technology that’s just getting started: artificial intelligence-powered search engines like Google Bard and Microsoft’s new Bing.
Next week, the Supreme Court will hear arguments in Gonzalez v. Google, one of two companion cases. Gonzalez is nominally about whether YouTube can be sued for hosting accounts from foreign terrorists. But because YouTube recommended those accounts to other users, its much bigger underlying question is whether algorithmic recommendations should receive the full legal protections of Section 230. While everyone from tech giants to Wikipedia editors has warned of potential fallout if the court cuts back these protections, the case poses particularly interesting questions for AI search, a field with almost no direct legal precedent to draw from.
Companies are pitching large language models like OpenAI’s ChatGPT as the future of search, arguing they can replace increasingly cluttered conventional search engines. (I’m ambivalent about calling them “artificial intelligence” — they’re basically very sophisticated autopredict tools — but the term has stuck.) They typically replace a list of links with a footnote-laden summary of text from across the web, producing conversational answers to questions.
Old-school search engines can rely on Section 230, but AI-powered ones are uncharted territory
These summaries often equivocate or point out that they’re relying on other people’s viewpoints. But they can still introduce inaccuracies: Bard got an astronomy fact wrong in its very first demo, and Bing made up entirely fake financial results for a publicly traded company (among other errors) in its own debut. And even if they’re simply summarizing other content from across the web, the web itself is full of false information, so there’s a good chance they’ll pass some of it on, just like regular search engines do. If those mistakes cross the line into spreading defamatory information or other unlawful speech, they could put the search providers at risk of lawsuits.
Familiar search engine interfaces can rely on a measure of protection from Section 230 if they link to inaccurate information, arguing that they’re simply posting links to content from other sources. The situation for AI-powered chatbot search interfaces is much more complicated. “This would be a very new question for the courts to address,” says Jeff Kosseff, a US Naval Academy law professor and author of The Twenty-Six Words That Created the Internet, a history of Section 230. “And I think part of it is going to depend on what the Supreme Court does in the Gonzalez case.”
If Section 230 remains mostly unchanged, many hypothetical future cases will hinge on whether an AI search engine was repeating somebody else’s unlawful speech or producing its own. Web services can claim Section 230 protections even if they lightly change the language of a user’s original content. (In an example Kosseff offers, a news site could edit the grammar of a defamatory comment without taking responsibility for its message.) So an AI tool that simply tweaks some words might not be responsible for what it says. Microsoft CEO Satya Nadella has suggested that AI-powered Bing faces basically the same legal issues as vanilla Bing, and right now, the biggest legal questions for AI-generated content center on copyright infringement, which falls outside Section 230’s purview.
There are still limits here. Language models can “hallucinate” incorrect facts, as Google’s and Bing’s demo errors above show, and if these engines originate an error, they’re on shaky legal ground under any version of Section 230. How shaky? Until it comes up in court, we won’t know.
“There’s a real danger in making a rule that’s very specific to 2023 technology”
But Gonzalez could make AI search risky even if an engine accurately summarizes somebody else’s statement. The heart of the case is whether a web service can lose Section 230 protections by organizing user-generated content in a way that promotes or highlights it. Courts might not be eager to go back and apply such a ruling to ubiquitous services like old-school search engines, and Gonzalez’s plaintiffs have tried to establish that this won’t happen. Even so, cautious courts could be less willing to cut newer services any slack, since those services will come into common usage under the new precedent, particularly AI search engines, which dress up search results as direct speech from a digital persona.
“This case involves a fairly specific type of algorithm, but it’s also the first time in 27 years that the Supreme Court has interpreted Section 230,” says Kosseff. “There’s a danger that whatever the court does is going to have to endure for [another] 27 years. And I think there’s a real danger in making a rule that’s very specific to 2023 technology — when in five or ten years, it’s going to look completely antiquated.” If Gonzalez leads to harder limits on Section 230, courts could decide that simply summarizing a statement makes AI search engines responsible for it, even if they’re repeating it from somewhere else.
Precedents around people lightly editing posts by hand will offer only a limited guide for complicated, large-scale AI-generated writing. Courts could end up having to decide how much summarizing is too much for Section 230, and their decisions could be colored by the political and cultural climate, not just the letter of the law. Judges have interpreted Section 230’s protections expansively in the past, but amid an anti-tech backlash and a Supreme Court reevaluation of the law, they may not afford a new technology the kind of latitude earlier platforms got. The current Supreme Court has also proven willing to throw out legal precedent, overturning the landmark Roe v. Wade decision, and some individual justices have waged a culture war around online speech. Clarence Thomas, for example, has specifically argued for putting Section 230 on the chopping block.
The line between AI search and conventional search isn’t always clear-cut
None of which means that all AI search is legally doomed. Section 230 is an incredibly important law, but removing it wouldn’t let people automatically win a lawsuit over every incorrectly summarized fact. Defamation, for instance, requires demonstrating that a false statement was made and that you were harmed by it, among other conditions. “Even if 230 didn’t apply, it’s not like there would be automatic liability,” Kosseff notes.
This question gets even muddier because the language people use in queries already affects their conventional search results, and you can intentionally nudge language models into delivering false info with leading questions. If you’re entering dozens of queries trying to make Bard falsely tell you that some celebrity committed murder, is that legally equivalent to Bard delivering the accusation in a simple search for the person’s name? So far, no judge has ruled on this question, and it’s not clear it’s even been asked in court.
And the line between AI summaries and conventional search isn’t always clear-cut. The regular Google search results page already features answer boxes that editorialize around its search results. These have delivered potentially dangerous misinformation in the past: in one snippet, Google inadvertently turned a series of “don’ts” for dealing with a seizure into a list of recommendations. So far, this hasn’t produced a deluge of lawsuits.
But as courts reconsider the fundamentals of internet law, they’re doing so at the dawn of a new technology that could transform the internet while taking on a lot of legal risk along the way.