Regulating Generative AI - When Enough is Enough
Category: Artificial Intelligence
Wednesday, May 17 2023, 23:11 UTC

Google has announced plans to embed generative AI tools into its products, giving billions of people access to powerful AI models. Calls for regulation of live AI tools are growing, and debates within the EU over how to regulate them are intensifying. AI has been documented to produce bias, discrimination, and numerous scams and pitfalls. Regulators are looking to apply existing laws to AI, but it is unclear when this will happen.
Last week Google revealed it is going all in on generative AI. At its annual I/O conference, the company announced plans to embed AI tools into virtually all of its products, from Google Docs to coding and online search. (Read my story here.)
Google’s announcement is a huge deal. Billions of people will now get access to powerful, cutting-edge AI models to help them do all sorts of tasks, from generating text to answering queries to writing and debugging code. As MIT Technology Review’s editor in chief, Mat Honan, writes in his analysis of I/O, it is clear AI is now Google’s core product.
Because these sorts of AI tools are relatively new, they still operate in a largely regulation-free zone. But that doesn't feel sustainable. Calls for regulation are growing louder as the post-ChatGPT euphoria wears off, and regulators are starting to ask tough questions about the technology.
In a statement, Vice President Kamala Harris said the companies have an "ethical, moral, and legal responsibility" to ensure that their products are safe. Senator Chuck Schumer of New York, the majority leader, has proposed legislation to regulate AI, which could include a new agency to enforce the rules.
"Everybody wants to be seen to be doing something. There’s a lot of social anxiety about where all this is going," says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
Getting bipartisan support for a new AI bill will be difficult, King says: "It will depend on to what extent [generative AI] is being seen as a real, societal-level threat." But the chair of the Federal Trade Commission, Lina Khan, has come out "guns blazing," she adds. Earlier this month, Khan wrote an op-ed calling for AI regulation now to prevent the errors that arose from being too lax with the tech sector in the past. She signaled that in the US, regulators are more likely to use existing laws already in their tool kit to regulate AI, such as antitrust and commercial practices laws.
The EU is set to create more rules to constrain generative AI too, and the parliament wants companies building large AI models to be more transparent. The proposed measures include labeling AI-generated content, publishing summaries of the copyrighted data used to train a model, and setting up safeguards to prevent models from generating illegal content.
But here's the catch: the EU is still a long way from implementing rules on generative AI, and many of the proposed elements of the AI Act will not make it into the final version. Tough negotiations remain between the parliament, the European Commission, and the EU member countries. It will be years before the AI Act is in force.
That delay should alarm us, because the harm caused by AI has been well documented for years. There has been bias and discrimination, AI-generated fake news, and scams. Other AI systems have led to innocent people being arrested, people being trapped in poverty, and tens of thousands of people being wrongfully accused of fraud. Those harms are likely to grow exponentially as generative AI is integrated deeper into our society, thanks to announcements like Google's.
The question we should be asking ourselves is: How much harm are we willing to see? I’d say we’ve seen enough.