
TapTechNews May 26 - Google Search's newly launched AI Summaries (AI Overviews) feature has recently come under fire for frequently returning seriously incorrect information in search results. For example, the feature once suggested that users add glue to pizza to keep the cheese from sliding off.


Earlier this week, according to The Verge, Google CEO Sundar Pichai admitted in an interview that the hallucinations produced by the AI Summaries feature are an inherent flaw of large language models (LLMs), the core technology behind the feature. Pichai said this "is still an unsolved problem."

This means that although Google engineers have been working continuously to fix the strange and seriously wrong answers that surface in AI Summaries, such problems will continue to occur.

However, Pichai seemed to downplay the severity of these errors. "The AI Summaries feature sometimes makes mistakes, but that doesn't mean it's not useful," he said. "I don't think that's the right way to look at this feature. Have we made progress? Yes, definitely. Compared to last year, we have made great progress on factual accuracy metrics. The whole industry is improving, but the problem is not completely solved yet."


TapTechNews noted that AI consultant and SEO expert Britney Muller wrote on social media: "People expect the accuracy of AI to be far greater than that of traditional methods, but this is not always the case! Google is taking a risky gamble in search to try to outdo competitors Perplexity and OpenAI, when it could have applied AI to much larger and more valuable use cases."
