Benchmarked Yet Not Measured -- Generative AI Should be Evaluated Against Real-World Utility

arXiv cs.LG Papers

Summary

This paper argues that Generative AI evaluation should shift from static benchmarks to measuring real-world utility and human outcomes. It introduces the SCU-GenEval framework and supporting instruments to address the disconnect between benchmark performance and deployment success.


# Benchmarked Yet Not Measured -- Generative AI Should be Evaluated Against Real-World Utility
Source: [https://arxiv.org/abs/2605.06856](https://arxiv.org/abs/2605.06856)
[View PDF](https://arxiv.org/pdf/2605.06856)

> Abstract: Generative AI systems achieve impressive performance on standard benchmarks yet fail to deliver real-world utility, a disconnect we identify across 28 deployment cases spanning education, healthcare, software engineering, and law. We argue that this benchmark utility gap arises from three recurring failures in evaluation practice: proxy displacement, temporal collapse, and distributional concealment. Motivated by these observations, we argue that generative AI evaluation requires a paradigm shift from static benchmark-centered transparency toward stakeholder, goal, and context-conditioned utility transparency grounded in human outcome trajectories. Existing evaluations primarily characterize properties of model outputs, while deployment success depends on whether interaction with AI improves stakeholders' ability to achieve their goals over time. The missing construct is therefore utility: the change in a stakeholder's capability induced through sustained interaction with an AI system within a deployment context. To operationalize this perspective, we propose SCU-GenEval, a four-stage evaluation framework consisting of stakeholder-goal mapping, construct-indicator specification, mechanism modeling, and longitudinal utility measurement. To make these stages practically deployable, we introduce three supporting instruments: structured deployment protocols, context-conditioned user simulators, and persona- and goal-conditioned proxy metrics. We conclude with domain-specific calls to action, arguing that progress in generative AI must be evaluated through measurable improvements in human outcomes rather than benchmark performance alone.
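Read literally, the abstract defines utility as a capability change induced by sustained interaction, which suggests a simple difference form. The notation below (capability $C$, stakeholder $s$, goal $g$, context $c$, horizon $T$) is ours rather than the paper's; it is a minimal sketch of one way to formalize the stated construct:

$$
U_{s,g,c}(T) = C_{s,g,c}(t_0 + T) - C_{s,g,c}(t_0),
$$

where $C_{s,g,c}(t)$ is stakeholder $s$'s measured capability toward goal $g$ in deployment context $c$ at time $t$, and $t_0$ marks the start of deployment. On this reading, the "temporal collapse" failure is precisely reporting a property of model outputs at a single point instead of a difference along an outcome trajectory.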

## Submission history

From: Ishani Mondal [[view email](https://arxiv.org/show-email/04925bc4/2605.06856)] **[v1]** Thu, 7 May 2026 18:56:07 UTC (1,923 KB)
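As a reading aid, here is a minimal sketch of how the four SCU-GenEval stages described in the abstract could be wired together as a pipeline. Every name, type, and signature below is an illustrative assumption; the paper specifies the stages, not this code.

```python
from dataclasses import dataclass, field


@dataclass
class Stakeholder:
    """A deployment stakeholder and the goal they pursue (illustrative)."""
    name: str
    goal: str
    context: str  # e.g. "education", "healthcare", "software", "law"


@dataclass
class UtilityRecord:
    """Longitudinal capability measurements for one stakeholder."""
    stakeholder: Stakeholder
    timestamps: list[float] = field(default_factory=list)
    capability: list[float] = field(default_factory=list)

    def utility(self) -> float:
        """Capability change over the deployment window (last minus first)."""
        if len(self.capability) < 2:
            return 0.0
        return self.capability[-1] - self.capability[0]


def evaluate(stakeholders, specify_indicators, model_mechanism, measure):
    """Run the four stages in order for each stakeholder:
    1) stakeholder-goal mapping (the input list itself),
    2) construct-indicator specification,
    3) mechanism modeling,
    4) longitudinal utility measurement."""
    records: list[UtilityRecord] = []
    for s in stakeholders:
        indicators = specify_indicators(s)          # stage 2
        mechanism = model_mechanism(s, indicators)  # stage 3
        records.append(measure(s, mechanism))       # stage 4
    return records
```

The three callables stand in for the paper's supporting instruments (structured deployment protocols, user simulators, and proxy metrics), which would supply concrete implementations in practice.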

Similar Articles

Creating and Evaluating Personas Using Generative AI: A Scoping Review of 81 Articles

arXiv cs.CL

This scoping review analyzes 81 articles (2022-2025) on using generative AI to create and evaluate user personas. It identifies strengths in reproducibility but also critical issues: 45% of studies lacked any evaluation, 86% relied on GPT models, and circularity risks arise when the same model both generates and evaluates personas.

Measuring the performance of our models on real-world tasks

OpenAI Blog

OpenAI introduces GDPval, a new evaluation framework measuring AI model performance on economically valuable, real-world tasks across 44 occupations in the top 9 US GDP-contributing industries. The benchmark includes 1,320 specialized tasks based on actual professional work products, representing a progression from academic benchmarks to more realistic occupational assessments.