The EU is Drawing Guardrails for Artificial Intelligence


tldr #

A European Parliament committee voted to pass the AI Act, a long-awaited proposal to draw up guardrails for artificial intelligence. The act will classify AI systems according to risk, with the highest-risk applications needing to be more transparent and use accurate data. The proposal bans 'social scoring' systems, predictive policing and remote facial recognition, save for certain exceptions. It also gives content creators the right to know if their works are used to train algorithms.


content #

Authorities around the world are racing to draw up rules for artificial intelligence, including in the European Union, where draft legislation faced a pivotal moment on Thursday. A European Parliament committee voted to strengthen the flagship legislative proposal as it heads toward passage, part of a yearslong effort by Brussels to draw up guardrails for artificial intelligence. Those efforts have taken on more urgency as the rapid advances of chatbots like ChatGPT highlight both the benefits the emerging technology can bring and the new perils it poses.

The EU's AI Act will be the world's first integrated set of regulations for AI

Here's a look at the EU's Artificial Intelligence Act:

--- HOW DO THE RULES WORK? --- The AI Act, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications will face tougher requirements, including being more transparent and using accurate data. Think about it as a "risk management system for AI," said Johann Laux, an expert at the Oxford Internet Institute.

Deployment of 'social scoring' systems is banned under the Act

--- WHAT ARE THE RISKS? --- One of the EU's main goals is to guard against any AI threats to health and safety and protect fundamental rights and values. That means some AI uses are an absolute no-no, such as "social scoring" systems that judge people based on their behavior. AI that exploits vulnerable people including children or that uses subliminal manipulation that can result in harm, such as an interactive talking toy that encourages dangerous behavior, is also forbidden.

The Act gives content creators the right to know if their works are used to train algorithms

Lawmakers beefed up the proposal by voting to ban predictive policing tools, which crunch data to forecast where crimes will happen and who will commit them. They also approved a widened ban on remote facial recognition, save for a few law enforcement exceptions like preventing a specific terrorist threat. The technology scans passers-by and uses AI to match their faces to a database.

The aim is "to avoid a controlled society based on AI," Brando Benifei, the Italian lawmaker helping lead the European Parliament's AI efforts, told reporters Wednesday. "We think that these technologies could be used instead of the good also for the bad, and we consider the risks to be too high." .

AI systems that exploit vulnerable people or encourage dangerous behaviour are not permissible under the Act

AI systems used in high-risk categories like employment and education, which would affect the course of a person's life, face tough requirements such as being transparent with users and putting in place risk assessment and mitigation measures. The EU's executive arm says most AI systems, such as video games or spam filters, fall into the low- or no-risk category.

--- WHAT ABOUT CHATGPT? --- The original 108-page proposal barely mentioned chatbots, merely requiring them to be labeled so users know they're interacting with a machine. Negotiators later added provisions to cover general purpose AI like ChatGPT, subjecting them to some of the same requirements as high-risk systems. One key addition is a requirement to thoroughly document any copyright material used to teach AI systems how to generate text, images, video or music that resembles human work. That would let content creators know if their blog posts, digital books, scientific articles or pop songs have been used to train algorithms that power systems like ChatGPT. Then they could decide whether their works are being used in ways they don't approve of.

The European Parliament committee voted to ban predictive policing tools under the Act
