Courts and Not Politicians Will Decide The Limits On How AI Is Developed and Used In The US
Category: Artificial Intelligence
Thursday, July 20, 2023, 02:18 UTC
Courts, not politicians, are likely to determine the limits on how AI is developed and used in the US, as a growing number of lawsuits allege that AI companies are breaking existing laws. The FTC is investigating OpenAI to ensure that consumer protection laws are being followed, and new systems of licensing and royalties could be set up to compensate artists and authors whose works are used to train AI models.
It’s becoming increasingly clear that courts, not politicians, will be the first to determine the limits on how AI is developed and used in the US. Last week, the Federal Trade Commission opened an investigation into whether OpenAI violated consumer protection laws by scraping people’s online data to train its popular AI chatbot ChatGPT. Meanwhile, artists, authors, and the image company Getty are suing AI companies such as OpenAI, Stability AI, and Meta, alleging that they broke copyright laws by training their models on their work without providing any recognition or payment.
If these cases prove successful, they could force OpenAI, Meta, Microsoft, and others to change the way AI is built, trained, and deployed so that it is more fair and equitable. They could also create new ways for artists, authors, and others to be compensated for having their work used as training data for AI models, through a system of licensing and royalties.

The generative AI boom has revived American politicians’ enthusiasm for passing AI-specific laws.
However, we’re unlikely to see any such legislation pass in the next year, given the split Congress and intense lobbying from tech companies, says Ben Winters, senior counsel at the Electronic Privacy Information Center. Even the most prominent attempt to create new AI rules, Senator Chuck Schumer’s SAFE Innovation framework, does not include any specific policy proposals. "It seems like the more straightforward path [toward an AI rulebook] is to start with the existing laws on the books," says Sarah Myers West, the managing director of the AI Now Institute, a research group.
And that means lawsuits. Existing laws have provided plenty of ammunition for those who say their rights have been harmed by AI companies. In the past year, those companies have been hit by a wave of lawsuits, most recently from the comedian and author Sarah Silverman, who claims that OpenAI and Meta scraped her copyrighted material illegally off the internet to train their models. Her claims are similar to those of artists in another class action alleging that popular image-generation AI software used their copyrighted images without consent.
Microsoft, OpenAI, and GitHub are also facing a class action over GitHub’s AI-assisted programming tool Copilot, which allegedly relies on "software piracy on an unprecedented scale" because it is trained on existing programming code scraped from websites. Meanwhile, the FTC is investigating whether OpenAI’s data security and privacy practices are unfair and deceptive, and whether the company caused harm, including reputational harm, to consumers when it trained its AI models.
It has real evidence to back up its concerns: OpenAI had a security breach earlier this year after a bug in the system caused users’ chat history and payment information to be leaked. And AI language models often spew inaccurate and made-up content, sometimes about people. OpenAI is bullish about the FTC investigation, at least in public. When contacted for comment, the company shared a Twitter thread from CEO Sam Altman in which he said the company is "confident we follow the law."

An agency like the FTC cautions companies regarding unclear rules and emerging trends, says Karl Manheim, a professor of law at Loyola Law School. He says such warnings usually come with instructions on how to improve practices.