The New York Times issued a correction after discovering an AI tool generated a false quote attributed to Canadian politician Pierre Poilievre, highlighting the risks of relying on AI for news reporting.
This position paper argues that audio misinformation on platforms like podcasts and WhatsApp voice notes is structurally different from text-based misinformation, carrying unique persuasive properties through prosody and conversational dynamics that existing fact-checking pipelines fail to address. The authors call for a rethinking of verification pipelines tailored to the spoken and conversational nature of audio media.
This paper investigates whether assigning personas to large language models induces human-like motivated reasoning, finding that persona-assigned LLMs show up to 9% reduced veracity discernment and are up to 90% more likely to evaluate scientific evidence in ways congruent with their induced political identity, with prompt-based debiasing largely ineffective.
A PDF essay critically examining AI language models (so-called "bullshit machines"), likely arguing that they systematically produce false or misleading outputs. The piece appears to be a polemical or philosophical treatment of the nature of AI-generated misinformation.
Google announced SynthID Detector, a verification portal that identifies AI-generated content across images, audio, video, and text by detecting imperceptible SynthID watermarks embedded in media created with Google's AI tools. The platform is rolling out to early testers with plans for broader availability to journalists, media professionals, and researchers.
OpenAI announced its 2024 election safeguards including directing users to authoritative voting information sources, preventing deepfake generation of political figures, and disrupting covert influence operations. The company reported redirecting ~1 million ChatGPT responses to voting resources and rejecting over 250,000 requests to generate images of politicians.
OpenAI announced a partnership with the American Journalism Project, committing $5 million in funding plus up to $5 million in API credits to help local news organizations explore and deploy AI technologies. The collaboration aims to develop tools that strengthen journalism while addressing challenges such as misinformation, bias, and copyright concerns.