The Potential Dangers of Artificial Intelligence: A Global Commitment to Cooperation
Category: Machine Learning | Sunday, May 26, 2024, 08:56 UTC

Countries and tech companies have pledged to work together against threats posed by advanced AI, including its potential for misuse and its ability to evade human control. The pledge follows concerns about AI's impact on society, such as the development of chemical or biological weapons by non-state actors and the use of deceptive deepfake content in elections. The summit emphasized the need for international cooperation and standards to govern the responsible use of AI, with experts highlighting the challenge of regulating a rapidly developing technology.
More than a dozen countries and some of the world's biggest tech firms pledged on Wednesday to cooperate against the potential dangers of artificial intelligence, including its ability to dodge human control, as they wrapped up a global summit in Seoul. The commitment reflects the rapid development and growing reach of advanced AI technologies, and the need to use such capabilities responsibly and ethically.
AI safety was front and center on the agenda at the two-day gathering. In the latest declaration, more than two dozen countries including the United States and France agreed to work together against threats from cutting-edge AI, including "severe risks". Such risks could include an AI system helping "non-state actors in advancing the development, production, acquisition or use of chemical or biological weapons", said a joint statement from the nations. These dangers also include an AI model that could potentially "evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation", they added.
The ministers' statement followed a commitment on Tuesday by some of the biggest AI companies, including ChatGPT maker OpenAI and Google DeepMind, to share how they assess the risks of their technology, including what they consider "intolerable". The 16 tech firms also committed not to deploy a system at all if its risks cannot be kept below those thresholds.
The Seoul summit, co-hosted by South Korea and Britain, was organized to build on the consensus reached at the inaugural AI safety summit last year. "As the pace of AI development accelerates, we must match that speed... if we are to grip the risks," UK technology secretary Michelle Donelan said. "Simultaneously, we must turn our attention to risk mitigation outside these models, ensuring that society as a whole becomes resilient to the risks posed by AI."
The summit also saw a separate commitment, the so-called Seoul AI Business Pledge, from a group of tech companies including South Korea's Samsung Electronics and US titan IBM to develop AI responsibly. AI is "a tool in the hands of humans. And now is our moment to decide how we're going to use it as a society, as companies, as governments," Christina Montgomery, IBM's Chief Privacy and Trust Officer, told AFP on the sidelines of the summit. "Anything can be misused, including AI technology," she added. "We need to put guardrails in place, we need to put protections in place, we need to think about how we're going to use it in the future."
Seeking consensus

AI's proponents have heralded the technology as a breakthrough that will improve lives and businesses around the world, especially after the stratospheric success of ChatGPT. Critics, rights activists and governments, however, have warned that it can be misused in a wide variety of ways, including to manipulate elections through AI-generated disinformation such as "deepfake" pictures and videos of politicians.

Many have called for international standards to govern the development and use of AI. But experts at the Seoul summit warned that the technology's rapid evolution poses a huge challenge to regulators. "Dealing with AI, I expect to be one of the biggest challenges that governments all across the world will have over the next couple of decades," said Mady Delvaux, a former Member of the European Parliament and now a key AI policy adviser in the European Commission.

AI strains traditional regulatory frameworks, she added, and governments need to cooperate on new policies and regulations that can adapt to the technology and prevent its misuse. According to Delvaux, AI's ability to dodge human control is a major concern for regulators. "Some people say: if we put ethics in the robot they will do ethical things. But there is a very fundamental difference between human ethics and robot ethics," she explained. "Robots are not human - they do not have feelings. We have to make sure that we have safeguards in place, that ethics decisions are made by humans and not by autonomous machines," she added.