How judges, not politicians, could dictate America’s AI rules
If these cases prove successful, they could force OpenAI, Meta, Microsoft, and others to change the way AI is built, trained, and deployed so that it is more fair and equitable.
They could also create new ways for artists, authors, and others to be compensated for having their work used as training data for AI models, through a system of licensing and royalties.
The generative AI boom has revived American politicians’ enthusiasm for passing AI-specific laws. However, we’re unlikely to see any such legislation pass in the next year, given the split Congress and intense lobbying from tech companies, says Ben Winters, senior counsel at the Electronic Privacy Information Center. Even the most prominent attempt to create new AI rules, Senator Chuck Schumer’s SAFE Innovation framework, does not include any specific policy proposals.
“It seems like the more straightforward path [toward an AI rulebook is] to start with the existing laws on the books,” says Sarah Myers West, the managing director of the AI Now Institute, a research group.
And that means lawsuits.
Lawsuits left, right, and center
Existing laws have provided plenty of ammunition for those who say their rights have been harmed by AI companies.
In the past year, those companies have been hit by a wave of lawsuits, most recently from the comedian and author Sarah Silverman, who claims that OpenAI and Meta illegally scraped her copyrighted material from the internet to train their models. Her claims are similar to those of artists in another class action alleging that popular image-generation AI software used their copyrighted images without consent. Microsoft, OpenAI, and GitHub are also facing a class action over their AI-assisted programming tool Copilot, which the suit claims relies on "software piracy on an unprecedented scale" because it is trained on existing programming code scraped from websites.
Meanwhile, the FTC is investigating whether OpenAI's data security and privacy practices are unfair and deceptive, and whether the company caused harm, including reputational harm, to consumers when it trained its AI models. It has real evidence to back up its concerns: OpenAI suffered a security breach earlier this year when a bug in the system leaked some users' chat histories and payment information. And AI language models often spew inaccurate and made-up content, sometimes about real people.