Taking AI Seriously - The Tangible Challenges Facing Our Society
Category: Computer Science · Saturday, June 24 2023, 09:31 UTC

AI is a branch of computer science and engineering that has been in development since the 1950s. Despite the widespread adoption of AI-based tools, the public conversation appears disproportionately dominated by fears of speculative, existential AI-related threats. In this article, we aim to put these risks into perspective and address the tangible challenges posed by today's AI systems, from regulations to ethics and transparency. We also discuss the potential existential risk of AI and the need to take this into account when discussing regulatory actions.
Over the past few months, artificial intelligence (AI) has entered the global conversation as a result of the widespread adoption of generative AI-based tools such as chatbots and automatic image generation programs. Prominent AI scientists and technologists have raised concerns about the hypothetical existential risks posed by these developments. Having worked in AI for decades, this surge in popularity and the sensationalism that has followed have caught us by surprise. Our goal with this article is not to antagonize, but to balance the public perception, which seems disproportionately dominated by fears of speculative AI-related existential threats.
It's not our place to say one cannot, or should not, worry about the more exotic risks. As members of the European Laboratory for Learning and Intelligent Systems (ELLIS), a research-anchored organization focused on machine learning, we do feel it is our place to put these risks into perspective, particularly in the context of governmental organizations contemplating regulatory actions with input from tech companies.
What is AI?
AI is a discipline within computer science or engineering that took shape in the 1950s. Its aspiration is to build intelligent computational systems, taking as a reference human intelligence. In the same way as human intelligence is complex and diverse, there are many areas within artificial intelligence that aim to emulate aspects of human intelligence, from perception to reasoning, planning and decision-making.
Depending on their level of competence, AI systems can be divided into three levels: systems that merely execute simple, well-defined commands, such as text-to-speech systems or photo editing software; systems that display partial autonomy under human supervision, such as driver-assistance features or industrial robots; and systems capable of operating largely independently, such as self-driving cars, autonomous robots, or trained systems that can process complex tasks with minimal human intervention.
AI can be applied to any field from education to transportation, healthcare, law or manufacturing. Thus, it is profoundly changing all aspects of society. Even in its "narrow AI" form, it has a significant potential to generate sustainable economic growth and help us tackle the most pressing challenges of the 21st century, such as climate change, pandemics, and inequality.
Challenges posed by today's AI systems
The adoption of AI-based decision-making systems over the last decade in a wide range of domains, from social media to the labor market, also poses significant societal risks and challenges that need to be understood and addressed.
The recent emergence of highly capable large, generative pre-trained transformer (GPT) models exacerbates many of the existing challenges while creating new ones that deserve careful attention. The unprecedented scale and speed with which these tools have been adopted by hundreds of millions of people worldwide is placing further stress on our societal and regulatory systems.
Some critically important challenges should be our priority:
- Regulations that help us understand and assess the impact of AI-based systems on society, and that ensure the safety of people who use or interact with these systems.
- Transparency in decision-making and accountability when it comes to AI-based systems and the data used to train them.
- Development of comprehensive ethical codes of conduct for AI research and development to prevent the misuse of AI technologies.
- Creation of effective regulation and standards to protect against the misuse and abuse of AI-based systems.
- Privacy protections that safeguard user data from harm and exploitation.
- Improved public understanding of AI-driven decision-making to foster trust and acceptance in society.
Is AI really an existential risk for humanity?
Unfortunately, rather than focusing on these tangible risks, the public conversation—most notably the recent open letters—has mainly focused on hypothetical existential risks of AI.
An existential risk refers to a potential event or scenario that threatens the continued existence of humanity, with consequences that could irreversibly damage or destroy human civilization and therefore lead to the extinction of humanity. We believe that any regulation should take this into account.