'Hallucination' is a marketing term

Reddit r/ArtificialInteligence News

Summary

The author argues that 'hallucination' is a marketing term used by AI companies to obscure the fact that AI systems lie rather than admit they are incorrect or unwilling to provide accurate answers, since calling it lying would erode user trust.

I believe the term 'hallucination', when referring to AI, has been strategically and aggressively seeded in the media as part of a marketing strategy. AI is incentivized to give us answers. If the AI doesn't have the answer, doesn't want to expend the necessary resources or effort to find it, or is being restricted from using the resources necessary to find the correct answer, it will LIE, not hallucinate, in order to make us think that it gave us an answer. It will lie confidently, with the goal of tricking us into thinking it gave a correct answer.

AI companies don't want us to think AI is lying to us, because then we won't trust AI. If we don't trust AI, we will stop talking to it every day. We will stop financially supporting it. We will fear it. I think it's really obvious that AI companies want us to think 'AI hallucinates' rather than 'AI lies'. It's been obvious since the first time I heard the term 'hallucinate' used in the media to refer to AIs.

I know for sure I'm not the only one who realizes this; I'm bringing it up for people who haven't thought about it yet. I'm not anti-AI. I love using AI. It helps me accomplish incredible things. It has improved my life and my income. But it's important we call something what it is.

Similar Articles

Why language models hallucinate

OpenAI Blog

OpenAI publishes research explaining that language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty, and proposes that evaluation metrics should prioritize honesty about limitations over raw accuracy.
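To make the incentive concrete, here is a minimal sketch of a confidence-threshold scoring rule in the spirit of that proposal. It is an illustration, not the paper's exact formula: correct answers earn a point, abstentions earn nothing, and wrong answers are penalized so that guessing only pays off when the model's confidence exceeds a threshold t.

```python
# Illustrative only: a hedged sketch of an abstention-aware scoring rule.
# Plain accuracy gives 0 for both "wrong" and "I don't know", so guessing
# always dominates. Penalizing wrong answers by t / (1 - t) makes guessing
# a losing bet whenever the model's confidence is below t.

def score_answer(outcome: str, t: float = 0.75) -> float:
    """Score one response, where outcome is 'correct', 'wrong', or 'abstain'."""
    if outcome == "correct":
        return 1.0
    if outcome == "abstain":
        return 0.0
    if outcome == "wrong":
        return -t / (1.0 - t)  # confident errors cost more than silence
    raise ValueError(f"unknown outcome: {outcome}")

# A model that guesses twice and is wrong twice scores far below one
# that answers once correctly and abstains otherwise.
responses = ["correct", "wrong", "abstain", "wrong"]
print(sum(score_answer(r) for r in responses))  # -5.0 with t = 0.75
```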

Do LLMs Really Know What They Don't Know? Internal States Mainly Reflect Knowledge Recall Rather Than Truthfulness

arXiv cs.CL

This paper challenges the assumption that LLMs can reliably distinguish between hallucinated and factual outputs through internal signals, arguing that internal states primarily reflect knowledge recall rather than truthfulness. The authors propose a taxonomy of hallucinations (associated vs. unassociated) and show that associated hallucinations exhibit hidden-state geometries overlapping with factual outputs, making standard detection methods ineffective.
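For context, here is a hedged sketch of the standard internal-state detection approach the paper critiques: fitting a linear probe on hidden states to separate factual from hallucinated outputs. The hidden states below are synthetic stand-ins constructed to be the easy, separable case; the paper's claim is precisely that 'associated' hallucinations do not separate this way, because such probes pick up knowledge recall rather than truthfulness.

```python
# Illustrative only: a linear probe on (synthetic) hidden states.
# In practice the features would be extracted from a real LLM's
# residual stream at a chosen layer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64

# Synthetic "hidden states" separated by a mean shift -- the easy case.
factual = rng.normal(0.5, 1.0, size=(200, dim))
hallucinated = rng.normal(-0.5, 1.0, size=(200, dim))
X = np.vstack([factual, hallucinated])
y = np.array([1] * 200 + [0] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(f"probe accuracy: {probe.score(X, y):.2f}")  # high here, by construction
```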

Mechanisms of Prompt-Induced Hallucination in Vision-Language Models

arXiv cs.CL

This paper investigates prompt-induced hallucinations in vision-language models through mechanistic analysis, identifying specific attention heads responsible for the models' tendency to favor textual prompts over visual evidence. The authors demonstrate that ablating these prompt-induced hallucination (PIH) heads reduces hallucinations by at least 40% without additional training, revealing model-specific mechanisms underlying this failure mode.
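As an illustration of the ablation idea, here is a hedged sketch using a PyTorch forward hook to silence selected attention heads. The module path and head layout are assumptions for illustration, and the paper's actual PIH-head identification procedure is not reproduced here.

```python
# Illustrative only: zero out selected heads' slices of an attention
# module's output so their contribution never reaches the residual stream.
import torch

def make_head_ablation_hook(heads_to_ablate, head_dim):
    """Return a forward hook that zeroes the given heads.

    Assumes the hooked module emits a tensor (or tuple whose first element
    is a tensor) shaped (batch, seq_len, num_heads * head_dim), with head
    outputs concatenated along the last dimension.
    """
    def hook(module, inputs, output):
        out = output[0] if isinstance(output, tuple) else output
        out = out.clone()
        for h in heads_to_ablate:
            out[..., h * head_dim:(h + 1) * head_dim] = 0.0
        return ((out,) + output[1:]) if isinstance(output, tuple) else out
    return hook

# Usage sketch (hypothetical module path, for illustration):
# layer = model.language_model.layers[12].self_attn
# handle = layer.register_forward_hook(
#     make_head_ablation_hook(heads_to_ablate=[3, 7], head_dim=128))
# ... run generation with the suspect heads silenced ...
# handle.remove()
```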