calibration

Sampling More, Getting Less: Calibration is the Diversity Bottleneck in LLMs

arXiv cs.CL · 21h ago

This paper introduces a validity-diversity framework that attributes diversity collapse in LLMs to order and shape miscalibration during decoding, validated across 14 language models.

A Semantic-Sampling Framework for Evaluating Calibration in Open-Ended Question Answering

arXiv cs.CL · yesterday

This paper introduces Sem-ECE, a semantic-sampling framework for evaluating calibration in open-ended question answering by grouping model answers into semantic classes to estimate confidence.
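The idea can be sketched in a few lines (a rough illustration, not the paper's Sem-ECE implementation: the equivalence check, majority-class confidence, and equal-width binning here are all stand-in assumptions): sample several answers per question, group them into semantic classes, take the largest class's frequency as the model's confidence, then compute a standard expected calibration error (ECE) across questions.

```python
def semantic_confidence(samples, same_class):
    """Group sampled answers into semantic classes and return
    (majority answer, empirical confidence = majority-class frequency).
    `same_class(a, b)` stands in for a semantic equivalence judge
    (e.g. an NLI or embedding-based check)."""
    classes = []  # each entry: a list of mutually equivalent answers
    for ans in samples:
        for cls in classes:
            if same_class(ans, cls[0]):
                cls.append(ans)
                break
        else:
            classes.append([ans])
    best = max(classes, key=len)
    return best[0], len(best) / len(samples)

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence and average the
    |accuracy - confidence| gap, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += len(b) / n * abs(acc - avg_conf)
    return ece
```

For example, with samples `["Paris", "paris", "Lyon"]` and case-insensitive matching as the equivalence check, the majority answer is "Paris" with confidence 2/3.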

Distributional Process Reward Models: Calibrated Prediction of Future Rewards via Conditional Optimal Transport

arXiv cs.LG · 2d ago

This paper introduces Distributional Process Reward Models, using conditional optimal transport to calibrate PRMs for more accurate success probability estimates in inference-time scaling. It demonstrates improved calibration and downstream performance on mathematical reasoning benchmarks like MATH-500 and AIME.

Log-Likelihood, Simpson's Paradox, and the Detection of Machine-Generated Text

arXiv cs.CL · 5d ago

This paper addresses the degradation of likelihood-based machine-generated text detectors by identifying a Simpson's paradox in token-score aggregation. It proposes a learned local calibration step that significantly improves detection performance across various models and datasets.
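The aggregation failure can be shown with a toy numeric example (all numbers invented for illustration, and the per-context baseline subtraction is a simple stand-in for the paper's learned local calibration): within each token context, machine text scores a higher log-likelihood than human text, yet the pooled means reverse because the two sources mix the contexts in different proportions — a Simpson's paradox.

```python
contexts = ["A", "B"]                 # two token contexts (e.g. common vs rare)
machine_ll = {"A": -1.0, "B": -3.0}   # per-token log-likelihood, machine text
human_ll   = {"A": -1.5, "B": -3.5}   # per-token log-likelihood, human text
machine_mix = {"A": 0.1, "B": 0.9}    # machine text is mostly context B
human_mix   = {"A": 0.9, "B": 0.1}    # human text is mostly context A

def pooled(ll, mix):
    # naive detector score: mixture-weighted mean log-likelihood
    return sum(mix[c] * ll[c] for c in contexts)

# Within every context, machine tokens score higher than human tokens...
assert all(machine_ll[c] > human_ll[c] for c in contexts)
# ...yet the pooled means put human text ABOVE machine text (Simpson's paradox).
assert pooled(machine_ll, machine_mix) < pooled(human_ll, human_mix)

# Local calibration (stand-in for the learned step): remove a per-context
# baseline before aggregating, so context frequency no longer dominates.
baseline = {c: human_ll[c] for c in contexts}

def calibrated(ll, mix):
    return sum(mix[c] * (ll[c] - baseline[c]) for c in contexts)

assert calibrated(machine_ll, machine_mix) > calibrated(human_ll, human_mix)
```

Here the naive pooled score is -2.8 for machine text versus -1.7 for human text (misranked), while the per-context calibrated score restores the correct ordering regardless of the context mixture.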

Teaching AI models to say “I’m not sure”

MIT News — Artificial Intelligence · 2026-04-22

MIT CSAIL researchers introduce RLCR, a method using Brier scores in reinforcement learning to train AI models to output calibrated confidence estimates, significantly reducing overconfidence without sacrificing accuracy.
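A Brier-score reward of this kind can be sketched in a few lines (an illustrative reward shape in the spirit of RLCR; the exact weighting in the paper may differ): the correctness reward is penalized by the squared gap between the model's stated confidence and the actual outcome.

```python
def rlcr_style_reward(correct: bool, confidence: float) -> float:
    """Illustrative calibrated-confidence reward: binary correctness
    minus the Brier score of the verbalized confidence.

    Because the Brier score is a proper scoring rule, the expected
    reward is maximized by reporting the true success probability:
    high confidence when likely right, low confidence when likely wrong.
    """
    outcome = 1.0 if correct else 0.0
    brier = (confidence - outcome) ** 2
    return outcome - brier
```

For instance, a correct answer with confidence 1.0 earns the full reward of 1.0, a correct answer hedged at 0.5 earns only 0.75, and a wrong answer asserted at 0.9 is penalized to -0.81 — so overconfidence is directly punished rather than ignored.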

The Illusion of Certainty: Decoupling Capability and Calibration in On-Policy Distillation

Hugging Face Daily Papers · 2026-04-18

This paper identifies that on-policy distillation (OPD) in language models leads to severe overconfidence due to information mismatch between training and deployment, and proposes CaOPD, a calibration-aware framework that improves both performance and confidence reliability.

TwinTrack: Post-hoc Multi-Rater Calibration for Medical Image Segmentation

Hugging Face Daily Papers · 2026-04-17

TwinTrack is a post-hoc calibration framework for pancreatic cancer segmentation that aligns ensemble model probabilities with the empirical mean human response across multiple annotators, improving interpretability and calibration metrics on multi-rater benchmarks.

Teaching models to express their uncertainty in words

OpenAI Blog · 2022-05-28

OpenAI researchers demonstrate that GPT-3 can learn to express calibrated uncertainty about its answers in natural language without using model logits, introducing the CalibratedMath benchmark suite to evaluate this capability. The approach shows robust generalization under distribution shift and represents the first evidence of models expressing well-calibrated verbal uncertainty about their own predictions.
