New Perspective on AI Hallucinations

TapTechNews June 12 news: it has long been common to describe the seemingly plausible but error-ridden answers produced by large language models as AI hallucinations. However, three philosophy researchers from the University of Glasgow in the UK have recently put forward a different view: calling these outputs hallucinations is not accurate.

On June 8 local time, the journal Ethics and Information Technology published the three researchers' paper. The paper argues that chatbots making up answers should not be called hallucination, and that the behavior is more accurately described by the word bullshitting.

The researchers point out that anyone who has studied psychology or used psychedelic drugs knows that a hallucination is usually defined as seeing or perceiving something that does not exist. In the field of AI, hallucination is clearly a metaphor: large language models cannot see or perceive anything at all. AI is not hallucinating; it is reproducing the human language patterns in its training data without any concern for factual accuracy.

As the paper puts it: "Machines are not trying to convey what they believe or perceive, and their inaccuracies are not due to misunderstanding or hallucination. As we have pointed out, they are not trying to convey information at all; they are bullshitting."

The researchers argue that AI models have no beliefs, intentions, or understanding. Their inaccuracies are not the result of misunderstanding or hallucination; they arise because the models are designed to produce text that looks and sounds correct, with no internal mechanism to ensure factual accuracy.

TapTechNews has reported many times on incidents attributed to AI hallucinations, such as Google Search recently recommending that users add glue to pizza, and Musk's Grok mistakenly believing it is an OpenAI product.

The Cambridge Dictionary announced last year that its word of the year for 2023 was hallucinate. The word originally meant to seem to see, hear, feel, or smell something that does not exist, typically because of illness or drug use; with the rise of AI, its meaning has been extended to cover AI systems generating false information.
