AI: The Black Box and the Rabbit Hole
Machine Learning | January 4, 2024, 20:45 UTC

In 2023, rapid progress in AI prompted warnings of a risk of extinction, a risk compounded by the complexity of the models, our limited understanding of how they work, and the influence of people's own beliefs about AI. In response, governments have pledged to cooperate on AI safety, while researchers work on ensuring AI models are aligned with human values. AI traces back to a mathematical model of the neuron proposed by neurophysiologist Warren McCulloch and logician Walter Pitts, and remains intertwined with the cognitive sciences, neuroscience and computer science.
Future historians may well regard 2023 as a landmark in the advent of artificial intelligence (AI). But whether that future will prove utopian, apocalyptic or somewhere in between is anyone's guess.
In February, ChatGPT set the record as the fastest app to reach 100 million users. It was followed by similar "large language" AI models from Google, Amazon, Meta and other big tech firms, which collectively look poised to transform education, health care and many other knowledge-intensive fields.
However, AI's potential for harm was underscored in May by an ominous statement signed by leading researchers:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
In November, responding to the growing concern about AI risk, 27 nations (among them the UK, US, India and China) and the European Union pledged cooperation at an inaugural AI Safety Summit at Bletchley Park in England, to ensure the safe development of AI for the benefit of all.
To achieve this, researchers focus on AI alignment—that is, how to make sure AI models are consistent with human values, preferences and goals. But there's a problem—AI's so-called "dark secret": large-scale models are so complex they are like a black box, impossible for anyone to fully understand.
AI's black box problem
Although the transparency and explainability of AI systems are important research goals, such efforts seem unlikely to keep up with the frenetic pace of innovation.
The black box metaphor explains why people's beliefs about AI are all over the map. Predictions range from utopia to extinction, and many even believe an artificial general intelligence (AGI) will soon achieve sentience.
But this uncertainty compounds the problem. AI alignment should be a two-way street: we must not only ensure AI models are consistent with human intentions, but also that our beliefs about AI are accurate.
This is because we are remarkably adept at creating futures that accord with those beliefs, even if we are unaware of them. So-called "expectancy effects," or self-fulfilling prophecies, are well known in psychology. And research has shown that manipulating users' beliefs influences not just how they interact with AI, but how AI adapts to the user.
In other words, our beliefs about AI (conscious or unconscious) can shape its development in ways that make any outcome more likely, including catastrophic ones.
AI, computation, logic and arithmetic
We need to probe more deeply to understand the basis of AI—like Alice in Wonderland, head down the rabbit hole and see where it takes us.
Firstly, what is AI? It runs on computers, and so is automated computation. From its origin in the artificial neuron defined mathematically in 1943 by neurophysiologist Warren McCulloch and logician Walter Pitts (a model that later inspired the perceptron), AI has been intertwined with the cognitive sciences, neuroscience and computer science.
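The 1943 model reduces a neuron to a simple threshold rule: it "fires" when the combined stimulation of its inputs crosses a threshold. Here is a minimal sketch in Python, using a modern weighted-sum reading of that idea; the function name and parameters are illustrative, not taken from the original paper:

```python
# A minimal sketch of a McCulloch-Pitts-style threshold neuron.
# Illustration only: the original 1943 model used binary inputs,
# unit weights and an absolute-inhibition rule rather than the
# general weighted sum shown here.

def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Example: two binary inputs with equal weights
print(threshold_neuron([1, 1], [1, 1], threshold=2))  # prints 1 (fires)
print(threshold_neuron([1, 0], [1, 1], threshold=2))  # prints 0 (silent)
```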
This convergence of minds, brains and machines has led to the widely-held belief that, because AI is computation by machine, then natural intelligence (the mind) must be computation by the brain.
But what is computable, and thus within the reach of engineering? Computation itself is grounded in logic and arithmetic: everything a computer does is ultimately built from logical and arithmetic operations. So, if AI is computational, and the mind is computational, then there must be some shared underlying logic between AI and natural intelligence.
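To make that link concrete: threshold neurons of the McCulloch-Pitts kind can realise elementary Boolean logic, and Boolean logic in turn suffices to build binary arithmetic. A small illustrative sketch follows; the gate weights and thresholds are my own choices, not values from the 1943 paper:

```python
# Sketch: threshold neurons realising Boolean logic, the sense in which
# McCulloch and Pitts tied neural activity to computation.

def threshold_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

def AND(a, b):
    return threshold_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return threshold_neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    return threshold_neuron([a], [-1], threshold=0)

# AND, OR and NOT are enough to build any Boolean circuit, and Boolean
# circuits are enough to build binary arithmetic (e.g. adders).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```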