Exploring the Limit of AI's Intelligence

Category Artificial Intelligence

tldr #

This article explores the philosophical implications of AI and how one might build a machine that is aware of itself and others. It introduces a new concept, Bennett's Razor, which states that the explanation of a process should be no more specific than necessary. Additionally, Gricean pragmatics is explored for its relevance to understanding meaning and intent.

content #

To build a machine, one must know what its parts are and how they fit together. To understand the machine, one needs to know what each part does and how it contributes to its function. In other words, one should be able to explain the "mechanics" of how it works.

According to a philosophical approach called mechanism, humans are arguably a type of machine—and our ability to think, speak, and understand the world is the result of a mechanical process we don’t understand.


To understand ourselves better, we can try to build machines that mimic our abilities. In doing so, we would have a mechanistic understanding of those machines. And the more of our behavior the machine exhibits, the closer we might be to having a mechanistic explanation of our own minds.

This is what makes AI interesting from a philosophical point of view. Advanced models such as GPT-4 and Midjourney can now mimic human conversation, pass professional exams, and generate beautiful pictures with only a few words.


Yet, for all the progress, questions remain unanswered. How can we make something self-aware, or aware that others are aware? What is identity? What is meaning?

Although there are many competing philosophical descriptions of these things, they have all resisted mechanistic explanation.

In a sequence of papers accepted for the 16th Annual Conference on Artificial General Intelligence in Stockholm, I propose a mechanistic explanation for these phenomena. The papers explain how we might build a machine that’s aware of itself, of others, of itself as perceived by others, and so on.


--- Intelligence and Intent ---

A lot of what we call intelligence boils down to making predictions about the world with incomplete information. The less information a machine needs to make accurate predictions, the more "intelligent" it is.
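As a rough illustration of this framing (my own toy sketch, not an example from the papers): two learners face the same prediction task, and the one that can exploit more structure needs far fewer observations to master it. All class names, the mastery criterion, and the thresholds below are invented for the sketch.

```python
import itertools
import random

def subsets(bits):
    """All subsets of an index set, smallest first."""
    for r in range(len(bits) + 1):
        yield from itertools.combinations(bits, r)

class Memoriser:
    """Baseline: memorises exact inputs, guesses 0 on unseen ones."""
    def __init__(self):
        self.table = {}
    def predict(self, x):
        return self.table.get(x, 0)
    def update(self, x, y):
        self.table[x] = y

class ParityLearner:
    """Assumes the target is the parity of SOME subset of bits and
    eliminates every subset the data contradicts."""
    def __init__(self, n_bits):
        self.candidates = list(subsets(range(n_bits)))
    def predict(self, x):
        s = self.candidates[0]  # any surviving candidate
        return sum(x[i] for i in s) % 2
    def update(self, x, y):
        self.candidates = [s for s in self.candidates
                           if sum(x[i] for i in s) % 2 == y]

def samples_to_master(learner, target, n_bits, cap=500, seed=0):
    """Observations consumed before the learner predicts 25 fresh
    random inputs correctly in a row (a crude mastery criterion)."""
    rng = random.Random(seed)
    streak = n = 0
    while streak < 25 and n < cap:
        x = tuple(rng.randint(0, 1) for _ in range(n_bits))
        y = target(x)
        streak = streak + 1 if learner.predict(x) == y else 0
        learner.update(x, y)
        n += 1
    return n

target = lambda x: (x[0] + x[2]) % 2  # parity of bits 0 and 2
print(samples_to_master(Memoriser(), target, 4))
print(samples_to_master(ParityLearner(4), target, 4))
```

On this toy task the parity learner converges after a handful of informative examples, while the memoriser must effectively see the whole input space. Needing fewer observations for the same accuracy is the sense in which it is "more intelligent" here.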

For any given task, there’s a limit to how much intelligence is actually useful. For example, most adults are smart enough to learn to drive a car, but more intelligence probably won’t make them better drivers.


My papers describe the upper limit of intelligence for a given task, and what is required to build a machine that attains it.

I named the idea Bennett’s Razor, which in non-technical terms says that "explanations should be no more specific than necessary." This is distinct from the popular interpretation of Ockham’s Razor (and mathematical descriptions thereof), which is a preference for simpler explanations.


The difference is subtle, but significant. In an experiment comparing how much data AI systems need to learn simple maths, the AI that preferred less specific explanations outperformed one preferring simpler explanations by as much as 500 percent.
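The papers formalise this contrast precisely; what follows is only a loose toy of the same flavour, with invented names and a hand-picked training sample. It equates "most specific" with "smallest consistent hypothesis" and "least specific" with "largest", which is a deliberate simplification of the actual weakness measure used in the experiments.

```python
import itertools

UNIVERSE = list(itertools.product([0, 1], repeat=4))
target = lambda x: sum(x) >= 2  # toy concept: at least two 1-bits

# Hand-picked labelled sample (4 positives, 2 negatives).
train = [(0, 0, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0),
         (1, 1, 1, 1), (0, 1, 0, 0), (0, 1, 1, 0)]
pos = {x for x in train if target(x)}
neg = {x for x in train if not target(x)}

# Least specific consistent hypothesis: everything not ruled out by a negative.
weakest = set(UNIVERSE) - neg
# Most specific consistent hypothesis: exactly the observed positives.
narrowest = set(pos)

def accuracy(hypothesis):
    """Fraction of the whole universe the hypothesis classifies correctly."""
    return sum((x in hypothesis) == target(x) for x in UNIVERSE) / len(UNIVERSE)

print(accuracy(weakest))    # 0.8125
print(accuracy(narrowest))  # 0.5625
```

Both hypotheses fit the training data perfectly; they differ only in what they say about unseen inputs, and on this (deliberately broad) concept the less specific hypothesis generalises better.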

Exploring the implications of this discovery led me to a mechanistic explanation of meaning, through something called "Gricean pragmatics." This is a concept in the philosophy of language that looks at how meaning is related to intent.


To survive, an animal needs to predict how its environment, including other animals, will act and react. You wouldn’t hesitate to leave a car unattended near a dog, but the same can’t be said of your rump steak lunch.

Being intelligent in a community means being able to infer the intent of others, which stems from their feelings and preferences. If a machine were to attain the upper limit of intelligence for a task, then it should be able to do this too. But how?
