#llm-hallucination

Tag · Cards List

Marc Andreessen Mocked for Accidentally Revealing That He Seems to Have a Deep Misunderstanding of How AI Actually Works

Reddit r/artificial · 2d ago

Marc Andreessen faced online mockery after sharing a custom AI prompt that demonstrated a fundamental misunderstanding of how large language models work, particularly regarding hallucinations and knowledge limits.

To Know is to Construct: Schema-Constrained Generation for Agent Memory

arXiv cs.CL · 2026-04-23

UnionPay researchers propose SCG-MEM, a schema-constrained generative memory architecture that eliminates structural hallucinations by forcing LLMs to decode only valid memory keys within a dynamic cognitive schema, outperforming dense-retrieval baselines on the LoCoMo benchmark.
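
A minimal sketch of the constrained-decoding idea in Python (an illustration, not SCG-MEM's code: the memory keys, the character-level prefix map, and `toy_logits` are invented stand-ins). The point is that when generation may only extend prefixes of schema-valid keys, a structurally hallucinated key cannot be emitted at all:

```python
# Sketch of schema-constrained key decoding (illustration only, not the
# paper's implementation). Every emitted prefix must extend some valid
# memory key, so a structurally invalid key can never be produced.

VALID_KEYS = {"user.name", "user.birthday", "trip.destination"}  # invented schema

def build_prefix_map(keys):
    """Map each proper prefix of a valid key to its allowed next characters."""
    allowed = {}
    for key in keys:
        for i in range(len(key)):
            allowed.setdefault(key[:i], set()).add(key[i])
    return allowed

def toy_logits(prefix, candidates):
    """Stand-in for the LLM's next-token scores; a real system calls the model."""
    return {ch: float(ord(ch) % 7) for ch in candidates}

def constrained_decode(allowed, max_len=64):
    prefix = ""
    while prefix not in VALID_KEYS and len(prefix) < max_len:
        legal = allowed.get(prefix)
        if not legal:                       # dead end: no valid continuation
            break
        scores = toy_logits(prefix, legal)  # score ONLY schema-legal characters
        prefix += max(legal, key=scores.get)
    return prefix

print(constrained_decode(build_prefix_map(VALID_KEYS)))  # always a valid key
```

A real implementation would apply the same mask to token-level logits inside the LLM's decoding loop rather than stepping character by character, but the guarantee is identical: invalid continuations get zero probability.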

Understanding New-Knowledge-Induced Factual Hallucinations in LLMs: Analysis and Interpretation

arXiv cs.CL · 2026-04-20

This paper investigates how fine-tuning LLMs on new knowledge induces factual hallucinations, showing that unfamiliarity with specific knowledge types weakens the model's attention to key entities and thereby drives hallucination. The authors propose mitigating this by reintroducing already-known knowledge during later training stages.
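
One plausible shape for that mitigation, sketched in Python (my reconstruction; the mixing curve, stage count, and helper names are assumptions, not the authors' recipe):

```python
import random

# Hedged sketch of a staged fine-tuning schedule: early stages train on new
# knowledge only; later stages reintroduce examples the model already knows.

def staged_mixture(new_examples, known_examples, num_stages=4):
    """Yield (stage, training_set); the known-knowledge share grows late."""
    for stage in range(num_stages):
        # Assumed mixing curve: no known data in the first half of training,
        # then a linearly growing share. The paper may use a different curve.
        known_ratio = max(0.0, (stage - num_stages // 2 + 1) / num_stages)
        n_known = min(int(len(new_examples) * known_ratio * 2), len(known_examples))
        mix = list(new_examples) + random.sample(known_examples, n_known)
        random.shuffle(mix)
        yield stage, mix

new = [f"new_fact_{i}" for i in range(8)]       # toy placeholder data
known = [f"known_fact_{i}" for i in range(20)]
for stage, data in staged_mixture(new, known):
    n_known = sum(x.startswith("known") for x in data)
    print(f"stage {stage}: {len(data)} examples ({n_known} known)")
```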

Do LLMs Really Know What They Don't Know? Internal States Mainly Reflect Knowledge Recall Rather Than Truthfulness

arXiv cs.CL · 2026-04-20

This paper challenges the assumption that LLMs can reliably distinguish between hallucinated and factual outputs through internal signals, arguing that internal states primarily reflect knowledge recall rather than truthfulness. The authors propose a taxonomy of hallucinations (associated vs. unassociated) and show that associated hallucinations exhibit hidden-state geometries overlapping with factual outputs, making standard detection methods ineffective.
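
A toy, fully synthetic illustration of the argument (all vectors below are fabricated, and `recall_axis` is an assumed direction standing in for whatever internal signal probes actually pick up). A linear probe trained on facts versus unassociated hallucinations detects the latter near-perfectly, yet labels associated hallucinations factual, because they share the factual cluster's geometry:

```python
import numpy as np

# Synthetic demo: the probe really reads a knowledge-recall signal, so
# UNASSOCIATED hallucinations (no recall) are flagged, while ASSOCIATED
# hallucinations (recalled but wrong) carry the signal and slip past it.

rng = np.random.default_rng(0)
dim = 16
base = rng.normal(size=dim)          # shared hidden-state offset
recall_axis = rng.normal(size=dim)   # assumed "recall" direction

facts   = base + 0.3 * rng.normal(size=(200, dim))
unassoc = base - 2.0 * recall_axis + 0.3 * rng.normal(size=(200, dim))
assoc   = base + 0.3 * rng.normal(size=(200, dim))  # overlaps the facts

def fit_probe(pos, neg, lr=0.5, steps=2000):
    """Logistic-regression probe (bias included) via plain gradient descent."""
    X = np.hstack([np.vstack([pos, neg]), np.ones((len(pos) + len(neg), 1))])
    y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def flagged(X, w):
    """Fraction of rows the probe labels 'not factual'."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1 / (1 + np.exp(-np.clip(Xb @ w, -30, 30))) < 0.5).mean()

w = fit_probe(facts, unassoc)
print("unassociated flagged:", flagged(unassoc, w))  # close to 1.0
print("associated flagged:  ", flagged(assoc, w))    # close to 0.0
```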
