AI Uncertainty Principle: Why Consumers Are Going Too Easy on ChatGPT

Category: Machine Learning

tldr #

Johns Hopkins University cybersecurity and artificial intelligence expert Anton Dahbura is warning consumers, industry players, and governments about the dangers of large language models and their tendency to produce fabricated and biased content. Better data, corporate accountability, and educating users about what AI is and what its risks are can help mitigate the danger. The AI Uncertainty Principle expresses the inherent difficulty of predicting the outcome of an AI system that has not been explicitly trained on every possible situation.


content #

The popular press and consumers are going too easy on ChatGPT, says Johns Hopkins University cybersecurity and artificial intelligence expert Anton Dahbura. According to him, the unreliability of such large language models, or LLMs, and their production of fabricated and biased content pose a real and growing threat to society. "Industry is making lots of money off products that they know are being used in wrong ways, on a huge scale," says Dahbura, director of Johns Hopkins' Information Security Institute and co-director of the Institute for Assured Autonomy. "ChatGPT's 'hallucinations' [the system's tendency to sometimes generate senseless content] would be called 'failures' if they occurred in other consumer products—for example, if my car's accelerator had a 'hallucination' and ran me into a brick wall."

Large language models (LLMs) are AI systems trained on vast amounts of text to generate and interpret language, handling tasks too complex to capture with conventional, rule-based programming.

Better data, corporate accountability, and educating users about what AI is and what its limits are can help mitigate risk, Dahbura says, "but they will never make the problem go away completely unless the problem is so simple that AI shouldn't have been used to solve it in the first place."

The Hub sat down with Dahbura to discuss the reasons for uncertainty in large language models, the role he believes industry and government should play in educating consumers about AI and its risks, and the threats these new technologies might pose to society.

AI and LLM technologies drive popular applications such as autonomous vehicles, medical diagnostics, and natural language processing systems that produce structured output in question-and-answer formats.

You've referred to ChatGPT and other large language models as the 'modern-day version of the Magic 8 Ball.' Explain.

Artificial intelligence is a broad class of approaches to solving difficult problems that don't have easy or "rule-based" solutions. A thermostat is an answer to a simple problem: When the temperature rises above a certain threshold, it turns on the air conditioning, and when it goes below that threshold, it turns on the heat.
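To make the contrast concrete, here is a minimal sketch of the kind of rule-based logic a thermostat uses; the setpoint value and function name are illustrative choices, not taken from Dahbura's remarks.

```python
# Minimal sketch of rule-based control: every input maps to a
# predictable output, so behavior can be fully specified in advance.
# The setpoint and function name are illustrative choices.

def thermostat(temperature_f: float, setpoint_f: float = 70.0) -> str:
    """Return the action a simple rule-based thermostat takes."""
    if temperature_f > setpoint_f:
        return "cool"   # above the threshold: turn on the air conditioning
    if temperature_f < setpoint_f:
        return "heat"   # below the threshold: turn on the heat
    return "off"        # exactly at the setpoint: do nothing

# Every possible temperature falls under one of three explicit rules,
# so the outcome is certain for any input -- no training data needed.
print(thermostat(78.0))  # -> "cool"
print(thermostat(61.5))  # -> "heat"
```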

The AI Uncertainty Principle is an inextricable property of AI systems that expresses the inherent difficulty of accurately predicting the outcome of an AI system that has not been explicitly trained on every possible situation.

But sometimes questions don't have clear answers that simple rules alone can solve. For instance, when training AI to differentiate between images of dogs and cats, the factors that the AI system uses for its classification are extremely complex and rarely well understood. Therefore, it is difficult to place guarantees on how the system will respond to an image of a dog or cat that it hasn't been trained on, much less an image of an orange. It may not even respond predictably to an image that it has been trained on!
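As a toy illustration of that point (the feature vectors below are synthetic and invented here, not an example from the interview), a classifier trained only on "cat" and "dog" data still has to force any input, even one far outside its training distribution, into one of those two classes:

```python
# Toy illustration with invented, synthetic features (not from the article):
# a two-class classifier trained only on "cat" and "dog" examples has no
# way to say "neither" when it sees something completely different.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cats = rng.normal(loc=[1.0, 0.0], scale=0.3, size=(50, 2))  # synthetic "cat" features
dogs = rng.normal(loc=[0.0, 1.0], scale=0.3, size=(50, 2))  # synthetic "dog" features
X = np.vstack([cats, dogs])
y = np.array([0] * 50 + [1] * 50)  # 0 = cat, 1 = dog

clf = LogisticRegression().fit(X, y)

orange = np.array([[3.0, -3.0]])   # far outside anything seen in training
p_cat, p_dog = clf.predict_proba(orange)[0]
print(f"P(cat) = {p_cat:.2f}, P(dog) = {p_dog:.2f}")
# The model confidently labels the out-of-distribution input as a cat or a
# dog; nothing in its training lets it flag the input as "none of the above".
```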

ChatGPT and other LLMs deployed by industry players can create, reinforce, and amplify bias, leading to false accusations and fabricated stories and facts.

I've coined the term "AI uncertainty principle" for this inherent and inextricable property of AI systems: the complexity of AI problems means that certainty and AI cannot coexist, unless the solution is so simple that it doesn't require AI, or unless rule-based guardrails are built to temper the unpredictable nature of the AI system.
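One way to picture the rule-based guardrails mentioned above is a hard, explicit check layered on top of the model's output. This is a hypothetical sketch, not a design Dahbura describes, and the confidence threshold is an invented example value:

```python
# Hypothetical sketch of a rule-based guardrail over an unpredictable model:
# a simple, explicit rule decides whether the model's answer is allowed
# through at all. The confidence floor is an invented example value.

def guarded_prediction(model_probs: dict, confidence_floor: float = 0.9) -> str:
    """Return the model's top label only if a hard rule deems it safe."""
    label, confidence = max(model_probs.items(), key=lambda kv: kv[1])
    if confidence < confidence_floor:
        # Rule-based fallback: refuse to answer rather than guess.
        return "uncertain -- escalate to a human"
    return label

print(guarded_prediction({"cat": 0.55, "dog": 0.45}))  # -> escalated to a human
print(guarded_prediction({"cat": 0.97, "dog": 0.03}))  # -> "cat"
```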

What I am saying is that it is not possible to train these technologies on every single scenario, so you cannot accurately predict the outcome of using them every single time. It's the same with the Magic 8 Ball: The answer might not be what you expect to get.

Johns Hopkins Information Security Institute director Professor Anton Dahbura is calling for more corporate accountability and better data practices to mitigate the risks of LLM technologies.

You call companies irresponsible for failing to warn people about LLMs' potential to 'hallucinate.' Could you share an example of what you mean by a hallucination?

Hallucinations refer to responses containing information that LLMs were not explicitly trained to produce. For example, an LLM like ChatGPT might output a sentence about a person's actions using language it has never seen before. It could even generate a story about activities that never actually happened. This can lead to wrongly accusing innocent people of things they never did, which is immensely dangerous in terms of fairness and justice.

Educating consumers about AI and its potential risks is considered paramount to reducing the negative effects of AI applications.

Large language models (LLMs) are widely used for a variety of applications in both consumer and industry sectors. While these advancements have enabled innovative products and new AI applications, the potential for bias and error should not be overlooked. LLM technologies can produce inaccurate results that lead to false convictions, unreliable data, and fabricated stories and facts. Johns Hopkins Information Security Institute director Professor Anton Dahbura is warning consumers, industry players, and governments about the dangers of relying on such unreliable technology and its potential to cause unintended damage.

