Tag: #audit-study
Polarization by Default: Auditing Recommendation Bias in LLM-Based Content Curation

arXiv cs.CL · 2026-04-20

This paper presents a large-scale audit of recommendation bias in LLM-based content curation, evaluating models from OpenAI, Anthropic, and Google on 540,000 simulated content selections drawn from Twitter/X, Bluesky, and Reddit data. The study finds that these LLMs systematically amplify polarization, exhibit distinct trade-offs in how they handle toxic content, and show a significant political-leaning bias favoring left-leaning authors even though right-leaning authors form a plurality in the datasets.
