OpenAI's Turmoil Highlights AI Safety and AGI Catastrophic Risks

Category: Technology

tldr #

The turmoil at OpenAI highlights the need to consider AI safety and the catastrophic risks posed by the development of artificial general intelligence (AGI). Task-specific AI is already widely used and can cause harm through bias and proxy discrimination. The risk of AI going rogue remains speculative for now, but it is a key topic of discussion.


content #

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.

The OpenAI board stated that Altman’s termination was for lack of candor, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI’s remarkable growth – products such as ChatGPT and DALL-E have acquired hundreds of millions of users worldwide – has hindered the company’s ability to focus on the catastrophic risks posed by AGI. OpenAI’s goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.

AI affects every aspect of our daily lives, including, but not limited to, job searches, online shopping, banking services and medical diagnosis.

As a researcher of information systems and responsible AI, I study how these everyday algorithms work – and how they can harm people. AI is pervasive and plays a visible part in many people’s lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also performs roles that are easily forgotten, such as shaping your social media feeds and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service. And AI affects our lives in ways many of us are unaware of: employers use AI in the hiring process, banks use AI to decide loan applications, health care providers use AI to analyze medical images, and AI is used in the criminal justice system.

Machine learning-based tools can acquire bias from human-generated training data that reflects past practices.

Because many AI systems operate in the background, they can contain biases that cause harm to people. Machine learning methods generalize patterns from their training data, so they can absorb bias from past practices: one machine learning-based resume screening tool, for example, was found to be biased against women because its training data reflected an era when most resumes were submitted by men. Similarly, predictive methods in areas ranging from health care to child welfare can exhibit cohort bias, leading to unequal risk assessments across different groups in society. And even where the law prohibits discrimination based on attributes such as race and gender, as in consumer lending, proxy discrimination can still occur: AI algorithms avoid the legally protected characteristics themselves but rely on features, such as ZIP codes or credit scores, that are highly correlated with them.

Proxy discrimination can occur when algorithmic decision-making models use characteristics that are highly correlated with race and gender.
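To make the mechanism concrete, here is a minimal sketch in Python using entirely synthetic data and illustrative numbers, not anything from a real lending system. It trains a logistic regression that never sees the protected attribute, yet still scores the two groups differently, because a correlated "zip code" feature stands in for it.

```python
# A minimal, self-contained sketch of proxy discrimination on synthetic data.
# All feature names and coefficients below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (0 or 1); the model is never given this column.
group = rng.integers(0, 2, size=n)

# A proxy feature highly correlated with the protected attribute,
# e.g. residential segregation making zip code predictive of group.
zip_region = np.where(rng.random(n) < 0.9, group, 1 - group)

# A legitimate feature, independent of group (standardized "income").
income = rng.normal(0.0, 1.0, size=n)

# Historical labels encode past discrimination: group 1 applicants were
# approved less often at the same income level.
logit = 1.0 * income - 1.5 * group
approved = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Train WITHOUT the protected attribute -- only income and the proxy.
X = np.column_stack([income, zip_region])
model = LogisticRegression().fit(X, approved)

# The model still treats the two groups differently, because the proxy
# lets it reconstruct the biased historical pattern.
scores = model.predict_proba(X)[:, 1]
print("mean approval score, group 0:", scores[group == 0].mean())
print("mean approval score, group 1:", scores[group == 1].mean())
print("learned coefficient on the proxy feature:", model.coef_[0][1])
```

Running this shows a clearly negative coefficient on the proxy and a gap in average approval scores between the groups, which illustrates why simply dropping a protected column is not enough: auditing outcomes across groups is what reveals the proxy effect.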

OpenAI’s board terminating and then rehiring Altman has sparked widespread conversation about the risks posed by AGI and how to safeguard against a rogue AI in the future. OpenAI’s original mission statement was to “develop friendly AGI,” but it has become just as important to consider the safety of the AI systems already in use as it is to weigh the speculative risks of future AGI.

