LLM Agents Predict Social Media Reactions but Do Not Outperform Text Classifiers: Benchmarking Simulation Accuracy Using 120K+ Personas of 1511 Humans
Summary
Large-scale study finds LLM agents can predict individual social-media reactions with 70.7% accuracy but still lag behind simple TF-IDF classifiers, highlighting both manipulation risks and policy-simulation utility.
# LLM Agents Predict Social Media Reactions but Do Not Outperform Text Classifiers: Benchmarking Simulation Accuracy Using 120K+ Personas of 1511 Humans

Source: [https://arxiv.org/abs/2604.19787](https://arxiv.org/abs/2604.19787) · [View PDF](https://arxiv.org/pdf/2604.19787)

> Abstract: Social media platforms mediate how billions of people form opinions and engage with public discourse. As autonomous AI agents increasingly participate in these spaces, understanding their behavioral fidelity becomes critical for platform governance and democratic resilience. Previous work demonstrates that LLM-powered agents can replicate aggregate survey responses, yet few studies test whether agents can predict specific individuals' reactions to specific content. This study benchmarks LLM-based agents' accuracy in predicting human social media reactions (like, dislike, comment, share, no reaction) across 120,000+ unique agent-persona combinations derived from 1,511 Serbian participants and 27 large language models. In Study 1, agents achieved 70.7% overall accuracy, with the choice of LLM producing a 13 percentage-point performance spread. Study 2 employed a binary forced-choice (like/dislike) evaluation with chance-corrected metrics. Agents achieved a Matthews Correlation Coefficient (MCC) of 0.29, indicating genuine predictive signal beyond chance. However, conventional text-based supervised classifiers using TF-IDF representations outperformed LLM agents (MCC of 0.36), suggesting the predictive gains reflect semantic access rather than uniquely agentic reasoning. The genuine predictive validity of zero-shot persona-prompted agents warns of potential manipulation via easily deployed swarms of behaviorally distinct AI agents on social media, while simultaneously offering opportunities to use such agents in simulations for predicting polarization dynamics and informing AI policy. The advantage of zero-shot agents is that they require no task-specific training, making large-scale deployment across diverse contexts easy. Limitations include single-country sampling. Future research should explore multilingual testing and fine-tuning approaches.

## Submission history

From: Ljubisa Bojic [[view email](https://arxiv.org/show-email/6ba6425b/2604.19787)]

**[v1]** Tue, 31 Mar 2026 19:27:59 UTC (1,491 KB)
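To make the abstract's baseline comparison concrete, below is a minimal, hypothetical sketch (Python with scikit-learn) of the kind of TF-IDF supervised classifier and chance-corrected MCC scoring the paper describes; the toy data, the choice of logistic regression, and all parameters are illustrative assumptions, not the authors' actual setup. MCC is defined as (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)), so 0 corresponds to chance-level prediction and 1 to perfect prediction.

```python
# Illustrative sketch only -- not the authors' pipeline. It shows a TF-IDF
# baseline scored with MCC, of the kind the abstract compares LLM agents to.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef
from sklearn.pipeline import make_pipeline

# Hypothetical post texts with observed binary reactions (1 = like, 0 = dislike).
train_posts = [
    "new city park opens this weekend",
    "council approves another tax increase",
    "local team wins the championship",
    "commute delayed by roadworks again",
    "volunteers clean up the riverbank",
    "power outage hits the suburbs",
]
train_reactions = [1, 0, 1, 0, 1, 0]

test_posts = ["free concerts announced for summer", "water prices set to rise"]
test_reactions = [1, 0]

# TF-IDF features feeding a linear classifier: a "conventional text-based
# supervised classifier" in the abstract's sense.
baseline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
baseline.fit(train_posts, train_reactions)

# Chance-corrected evaluation: MCC is 0 at chance and 1 for perfect prediction
# (the paper reports roughly 0.36 for TF-IDF classifiers vs 0.29 for agents).
print("MCC:", matthews_corrcoef(test_reactions, baseline.predict(test_posts)))
```

On this toy corpus the score itself is meaningless; the point is only the shape of the comparison: a cheap lexical baseline evaluated with the same chance-corrected metric applied to the persona-prompted agents.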
Similar Articles
Evaluating LLMs as Human Surrogates in Controlled Experiments
This paper evaluates whether off-the-shelf LLMs can reliably simulate human responses in controlled behavioral experiments by comparing LLM-generated data with human survey responses on accuracy perception. The findings show that while LLMs capture directional effects and aggregate belief-updating patterns, they do not consistently match the effect magnitudes observed in humans, clarifying when synthetic LLM data can serve as a behavioral proxy.
LLM-guided Semi-Supervised Approaches for Social Media Crisis Data Classification
This paper presents an empirical evaluation of LLM-guided semi-supervised learning for classifying social media crisis data. It demonstrates that LG-CoTrain outperforms classical baselines in low-resource settings and highlights the potential of transferring knowledge from LLMs to smaller, deployable models for disaster response.
Beyond Static Benchmarks: Synthesizing Harmful Content via Persona-based Simulation for Robust Evaluation
Researchers from KAIST propose a framework that uses persona-guided LLM agents to synthesize diverse harmful content for stress-testing detection systems, addressing limitations of static benchmarks such as scalability, diversity, and data contamination. Both human and LLM evaluations confirm the synthetic scenarios are harder to detect than existing benchmarks while maintaining linguistic and topical diversity.
Persona-Assigned Large Language Models Exhibit Human-Like Motivated Reasoning
This paper investigates whether assigning personas to large language models induces human-like motivated reasoning, finding that persona-assigned LLMs show up to 9% reduced veracity discernment and are up to 90% more likely to evaluate scientific evidence in ways congruent with their induced political identity, with prompt-based debiasing largely ineffective.
Expressing Social Emotions: Misalignment Between LLMs and Human Cultural Emotion Norms
Research paper examining how large language models express social emotions relative to human cultural norms, finding systematic misalignment: across cultural personas (European American and Latin American), LLMs show patterns of engaging vs. disengaging emotion expressivity that are inconsistent with human responses.