Google: Illogical answers in AI search results were not hallucinations

The strange and often inaccurate answers that Google's AI feature has displayed in its search engine in recent weeks were not hallucinations, the company says, but typically the result of a misinterpreted search query. Google also says that in many cases there was simply very little information to be found at all.

Google responds in a blog post to recent social media posts about the new AI Overview feature, which the company showed off during its I/O developer conference. AI Overview is a feature in which artificial intelligence, based on Gemini, attempts to answer queries at the top of the search results. In recent weeks, strange results appeared and spread rapidly: the AI recommended that people with depression jump off a bridge, for example, or told users it is best to stare directly at the sun for fifteen minutes.

In the blog post, Google says there are several reasons why the search engine gave these answers, but also that many of the circulating screenshots are, in the company's view, fake. Google cites a number of examples, including one in which the AI supposedly advised pregnant women to smoke, and says "those results never happened." The company does not explain how it can be so certain of that.

Google acknowledges that other nonsensical answers did appear. According to Google, in most cases this is due to so-called data voids: there is too little information available on the internet for the AI to give a good answer. Often the underlying questions make little sense to begin with, such as "How many rocks should you eat each day?". "Before these screenshots went viral, practically no one asked Google that question," the company says. As a result, there are almost no websites that give a serious answer to it, and AI Overview therefore has hardly any data to rely on.
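To make the data-void idea concrete, here is a minimal sketch of the kind of check a search pipeline could apply before generating an answer. It is not Google's actual implementation; the document structure, field names, and thresholds are all assumptions for illustration.

```python
# Minimal sketch (not Google's implementation) of a "data void" check:
# if a query matches too few serious sources, the answer generator should
# abstain instead of synthesizing an answer from whatever it can find.

def should_answer(retrieved_docs: list[dict],
                  min_docs: int = 5,
                  min_authority: float = 0.5) -> bool:
    """Answer only if enough reasonably authoritative documents exist."""
    serious = [d for d in retrieved_docs
               if d.get("authority", 0.0) >= min_authority]
    return len(serious) >= min_docs

# A viral joke query surfaces almost no serious sources:
docs = [{"url": "https://example.com/satire", "authority": 0.1}]
if not should_answer(docs):
    print("Data void detected: decline to generate an AI overview.")
```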

Image: the AI Overview answer suggesting jumping off a bridge

Google also says that "several examples" of answers appeared that were based on "sarcastic or troll-like content on forums." The company further admits that AI Overview misinterpreted language and displayed incorrect information in a "limited number of cases," but denies that AI Overview was hallucinating.

Generative AI is known for sometimes giving answers that are not grounded in facts, but are merely a linguistically plausible rendering of what a response could look like. According to Google, however, AI Overview is not just a language model: the information comes exclusively from the top results that the search engine itself returns. "When AI Overview gets it wrong, it's usually for another reason, such as misinterpreting queries or nuances of language on the internet, or because there's not a lot of good information available."
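The grounding Google describes can be sketched roughly as follows: instead of letting the language model answer freely, the prompt is restricted to text drawn from the top-ranking search results. All names below are illustrative, not a real Google API.

```python
# Hedged sketch of answer grounding: the model is only shown text from the
# top search results and is instructed to answer from those sources alone.

def build_grounded_prompt(query: str, top_results: list[str], k: int = 3) -> str:
    """Compose a prompt that tells the model to answer only from the sources."""
    sources = "\n\n".join(f"Source {i + 1}:\n{text}"
                          for i, text in enumerate(top_results[:k]))
    return ("Answer the question using ONLY the sources below. "
            "If they do not contain the answer, say so.\n\n"
            f"{sources}\n\nQuestion: {query}\nAnswer:")

prompt = build_grounded_prompt(
    "is it safe to stare at the sun",
    ["Looking directly at the sun can permanently damage the retina..."],
)
print(prompt)
```

The output of such a setup is only as reliable as the retrieved pages, which is why sarcastic forum posts that happen to rank highly can still surface as answers.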