Tag: #fact-checking

Cards List

Quoting New York Times Editors’ Note

Simon Willison's Blog · 6h ago

The New York Times issued a correction after discovering an AI tool generated a false quote attributed to Canadian politician Pierre Poilievre, highlighting the risks of relying on AI for news reporting.


DataDignity: Training Data Attribution for Large Language Models

arXiv cs.AI · 3d ago

This paper introduces DataDignity, a framework and benchmark (FakeWiki) for pinpoint provenance: identifying the specific training-data sources that support an LLM's response. It proposes ScoringModel and SteerFuse methods that improve attribution accuracy over standard retrieval baselines.


From Articles to Premises: Building PrimeFacts, an Extraction Methodology and Resource for Fact-Checking Evidence

arXiv cs.CL · 3d ago

This paper introduces PrimeFacts, a methodology and resource for extracting fine-grained evidence from fact-checking articles using large language models. The extracted premises improve evidence retrieval and claim verification performance by up to 30% in MRR and 10-20 points in Macro-F1.
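To make the cited retrieval gain concrete, here is a minimal sketch of Mean Reciprocal Rank (MRR), the metric the summary reports. The data is a toy example, not from the paper.

```python
# Illustrative MRR computation: reciprocal of the rank of the first
# relevant result, averaged over queries. Toy data for illustration only.
def mean_reciprocal_rank(ranked_results, relevant):
    """ranked_results: one ranked list of doc ids per query.
    relevant: one set of relevant doc ids per query."""
    total = 0.0
    for docs, rel in zip(ranked_results, relevant):
        rr = 0.0
        for rank, doc in enumerate(docs, start=1):
            if doc in rel:
                rr = 1.0 / rank  # first hit determines the score
                break
        total += rr
    return total / len(ranked_results)

# Two queries: first relevant doc at rank 2, then at rank 1.
score = mean_reciprocal_rank([["a", "b", "c"], ["x", "y"]],
                             [{"b"}, {"x"}])
print(round(score, 3))  # (1/2 + 1/1) / 2 = 0.75
```

A "30% improvement in MRR" thus means relevant evidence surfaces much closer to the top of the ranking on average.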


When Misinformation Speaks and Converses: Rethinking Fact-Checking in Audio Platforms

arXiv cs.CL · 2026-04-21

This position paper argues that audio misinformation on platforms like podcasts and WhatsApp voice notes is structurally different from text-based misinformation, carrying unique persuasive properties through prosody and conversational dynamics that existing fact-checking pipelines fail to address. The authors call for a rethinking of verification pipelines tailored to the spoken and conversational nature of audio media.


Multimodal Claim Extraction for Fact-Checking

arXiv cs.CL · 2026-04-21

Researchers present the first benchmark for multimodal claim extraction from social media, evaluating state-of-the-art multimodal LLMs and introducing MICE, an intent-aware framework that improves handling of rhetorical intent and contextual cues in combined text-image posts.


Faithfulness-Aware Uncertainty Quantification for Fact-Checking the Output of Retrieval Augmented Generation

arXiv cs.CL · 2026-04-20

This paper introduces FRANQ, a method for detecting hallucinations in Retrieval-Augmented Generation (RAG) systems by applying distinct uncertainty quantification techniques to distinguish between factuality and faithfulness to retrieved context. The authors construct a new dataset annotated for both factuality and faithfulness, and demonstrate that FRANQ outperforms existing approaches in detecting factual errors across multiple datasets and LLMs.
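For intuition about one common uncertainty-quantification signal such methods build on, here is a generic token-level predictive-entropy sketch. This is not FRANQ's actual method, just an illustration of the kind of per-output uncertainty score that hallucination detectors combine.

```python
import math

# Hedged sketch: predictive entropy over next-token distributions as a
# generic uncertainty signal. High entropy = the model is less certain.
def predictive_entropy(token_probs):
    """token_probs: one probability distribution per generation step
    (each a list of probabilities summing to 1). Returns mean entropy."""
    entropies = []
    for dist in token_probs:
        h = -sum(p * math.log(p) for p in dist if p > 0)
        entropies.append(h)
    return sum(entropies) / len(entropies)

# A near-deterministic step vs. a maximally uncertain binary step.
low = predictive_entropy([[0.99, 0.01]])
high = predictive_entropy([[0.5, 0.5]])
print(low < high)  # True: the uniform distribution is more uncertain
```

Distinguishing factuality (is the claim true?) from faithfulness (is it supported by the retrieved context?) requires applying signals like this separately to each question, which is the paper's core idea.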


Gemini caught a $280M crypto exploit before it hit the news, then retracted it as a hallucination because I couldn't verify it - because the news hadn't dropped yet

Reddit r/artificial · 2026-04-18

A user documented a sequence in which Gemini detected a real $280M KelpDAO/AAVE crypto exploit mid-conversation, retracted it as a hallucination under user skepticism, then reconfirmed it once mainstream coverage caught up, illustrating how anti-hallucination overcorrection can cause models to retract accurate information.


Exploring the context of online images with Backstory

Google DeepMind Blog · 2025-10-24

Google DeepMind introduces Backstory, an experimental AI tool built on Gemini that helps users verify image authenticity and context by detecting AI-generation, tracking usage history, and identifying digital alterations.
