Why Is Google's AI Search Feature Producing Unreliable Results?

Category: Artificial Intelligence

tldr #
45 seconds

Google's new AI-powered search feature, AI Overviews, has been producing unreliable and potentially harmful information, a consequence of the inherent unreliability of the large language models behind it. Google has responded with technical improvements to reduce incorrect responses and to limit the inclusion of satirical, humorous, and user-generated content. The feature uses a new generative AI model integrated with Google's core web ranking systems, and it likely relies on a technique called retrieval-augmented generation (RAG) to curb hallucinations. RAG lets the system check specific sources for information, which should lead to more up-to-date and accurate responses. But RAG is not infallible: it has to both retrieve and generate information correctly, so misinformation can still appear in responses to user queries.


content #
3 minutes, 10 seconds

When Google announced it was rolling out its artificial-intelligence-powered search feature earlier this month, the company promised that "Google will do the googling for you." The new feature, called AI Overviews, provides brief, AI-generated summaries highlighting key information and links on top of search results.

Unfortunately, AI systems are inherently unreliable. Within days of AI Overviews’ release in the US, users were sharing examples of responses that were strange at best. The feature suggested that users add glue to pizza and eat at least one small rock a day, and it claimed that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite having died in 1875.

On Thursday, Liz Reid, head of Google Search, announced that the company has been making technical improvements to the system to make it less likely to generate incorrect answers, including better detection mechanisms for nonsensical queries. It is also limiting the inclusion of satirical, humorous, and user-generated content in responses, since such material could result in misleading advice.
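
Google has not detailed how those safeguards are built. Conceptually, though, they amount to screening queries and retrieved pages before anything reaches the model that writes the summary. The sketch below is purely hypothetical; the labels, policy, and function names are invented for illustration and are not Google's code.

```python
# Hypothetical sketch: screen queries and retrieved pages before they reach
# the model that writes the AI summary. Not Google's implementation.

from dataclasses import dataclass


@dataclass
class RetrievedPage:
    url: str
    text: str
    source_type: str  # assumed label, e.g. "news", "reference", "satire", "forum"


# Assumed policy, mirroring the kinds of content Google says it is limiting.
EXCLUDED_SOURCE_TYPES = {"satire", "humor", "forum"}


def filter_sources(pages: list[RetrievedPage]) -> list[RetrievedPage]:
    """Keep only pages whose (assumed) source label is allowed in AI summaries."""
    return [p for p in pages if p.source_type not in EXCLUDED_SOURCE_TYPES]


def looks_nonsensical(query: str) -> bool:
    """Crude stand-in for a nonsense-query detector: reject empty or non-text input."""
    return not query.strip() or not any(c.isalpha() for c in query)
```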

But why is AI Overviews returning unreliable, potentially dangerous information? And what, if anything, can be done to fix it?

In order to understand why AI-powered search engines get things wrong, we need to look at how they’ve been optimized to work. We know that AI Overviews uses a new generative AI model in Gemini, Google’s family of large language models (LLMs), that’s been customized for Google Search. That model has been integrated with Google’s core web ranking systems and designed to pull out relevant results from its index of websites.

Most LLMs simply predict the next word (or token) in a sequence, which makes them appear fluent but also leaves them prone to making things up. They have no ground truth to rely on, but instead choose each word purely on the basis of a statistical calculation. That leads to hallucinations. It’s likely that the Gemini model in AI Overviews gets around this by using an AI technique called retrieval-augmented generation (RAG), which allows an LLM to check specific sources outside of the data it’s been trained on, such as certain web pages, says Chirag Shah, a professor at the University of Washington who specializes in online search.
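
Gemini's internals are not public, but the underlying idea can be shown with a toy sketch: a next-word "model" here is just a hand-written probability table, and sampling from it involves no check on whether the result is true.

```python
# Toy illustration, not Gemini: next-word prediction is a weighted random
# draw from a probability distribution, with no notion of ground truth.

import random

# Hand-written conditional probabilities over the next word (illustrative only).
NEXT_WORD_PROBS = {
    ("rocks", "are"): {"minerals": 0.6, "hard": 0.3, "edible": 0.1},
}


def sample_next_word(context: tuple[str, str]) -> str:
    """Choose the next word by weighted random draw; nothing here checks facts."""
    probs = NEXT_WORD_PROBS.get(context, {"<unknown>": 1.0})
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]


print(sample_next_word(("rocks", "are")))  # prints "edible" about 10% of the time
```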

Once a user enters a query, it’s checked against the documents that make up the system’s information sources, and a response is generated. Because the system is able to match the original query to specific parts of web pages, it’s able to cite where it drew its answer from—something normal LLMs cannot do.
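
Google has not published how this pipeline is built, but in outline a retrieval-augmented loop looks something like the sketch below. The naive keyword retriever and the generate() stub are hypothetical stand-ins for Google's ranking systems and the Gemini model.

```python
# Hedged sketch of a retrieval-augmented generation loop. The keyword
# retriever and generate() stub are stand-ins, not Google's systems.

def retrieve(query: str, index: dict[str, str], k: int = 3) -> list[tuple[str, str]]:
    """Rank indexed pages by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        ((len(q_words & set(text.lower().split())), url, text)
         for url, text in index.items()),
        reverse=True,
    )
    return [(url, text) for score, url, text in scored[:k] if score > 0]


def generate(prompt: str) -> str:
    """Placeholder for the LLM call a real system would make here."""
    return f"[model answer grounded in]\n{prompt}"


def answer_with_citations(query: str, index: dict[str, str]) -> str:
    sources = retrieve(query, index)
    context = "\n".join(f"[{url}] {text}" for url, text in sources)
    # The prompt asks the model to answer only from the retrieved context and
    # to cite the matching URLs, which is what lets the system show sources.
    prompt = f"Answer only from these sources, citing them:\n{context}\n\nQ: {query}"
    return generate(prompt)
```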

One major upside of RAG is that the responses it generates to a user’s query should be more up to date, more factually accurate, and more relevant than those from a typical model that just generates an answer based on its training data. The technique is often used to try to prevent LLMs from hallucinating. (A Google spokesperson would not confirm whether AI Overviews uses RAG.)

But RAG is far from foolproof. For an LLM using RAG to come up with a good answer, it has to both retrieve the information correctly and generate the response correctly. If either step goes wrong, the result is a bad answer, and that misinformation can be harmful for users.
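
To make the first failure mode concrete, here is a toy example with made-up data: if the retriever ranks a joke post first, even a generator that sticks faithfully to its sources will repeat the joke, much like the glue-on-pizza answer described above.

```python
# Made-up data illustrating a retrieval failure: the joke post ranks first,
# so even a generator that faithfully restates its sources repeats it.

retrieved = [
    ("reddit.example/r/jokes/glue", "Pro tip: mix glue into the sauce so the cheese sticks."),
    ("cooking.example/pizza-guide", "Let the pizza rest so the cheese sets before slicing."),
]


def summarize_top_source(sources: list[tuple[str, str]]) -> str:
    """A perfectly 'faithful' generator that just restates its top-ranked source."""
    url, text = sources[0]
    return f"{text} [{url}]"


print(summarize_top_source(retrieved))
# -> the glue advice, verbatim: retrieval failed, so the answer is wrong even
#    though the generation step did exactly what it was told.
```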

