uncertainty-quantification

#uncertainty-quantification

BaLoRA: Bayesian Low-Rank Adaptation of Large Scale Models

arXiv cs.LG · yesterday

BaLoRA introduces a Bayesian extension to Low-Rank Adaptation (LoRA) that provides calibrated uncertainty estimates while narrowing the accuracy gap with full fine-tuning.

Conformal Agent Error Attribution

arXiv cs.LG · 2d ago

This paper presents a framework for error attribution in multi-agent systems using conformal prediction, providing statistical guarantees for identifying decisive errors in agent trajectories. The approach enables automated recovery and debugging by isolating errors within contiguous prediction sets.
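The conformal prediction machinery underlying approaches like this can be sketched in its simplest split-conformal form. This is the textbook recipe for regression intervals, not the paper's trajectory-level construction, and all names below are illustrative:

```python
import numpy as np

def split_conformal_interval(cal_residuals, alpha=0.1):
    """Split conformal prediction for regression: given absolute
    residuals |y - yhat| on a held-out calibration set, return the
    half-width q so that [yhat - q, yhat + q] covers a fresh point
    with probability >= 1 - alpha (under exchangeability)."""
    n = len(cal_residuals)
    # finite-sample correction: ceil((n + 1)(1 - alpha)) / n, capped at 1
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(np.asarray(cal_residuals), level, method="higher")

# toy usage: residuals drawn as |N(0, 1)| noise; the 90% half-width
# should land near the 1.645 quantile of the half-normal
rng = np.random.default_rng(0)
q = split_conformal_interval(np.abs(rng.normal(size=999)), alpha=0.1)
```

The key point conformal methods trade on is that the coverage guarantee holds for any underlying model, since it depends only on exchangeability of calibration and test points.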

Online Localized Conformal Prediction

arXiv cs.LG · 5d ago

This paper proposes Online Localized Conformal Prediction (OLCP) to address covariate heterogeneity in online learning and time-series settings. It introduces OLCP-Hedge for bandwidth selection and demonstrates valid long-run coverage with narrower prediction sets compared to existing baselines.

Estimating the Black-box LLM Uncertainty with Distribution-Aligned Adversarial Distillation

arXiv cs.CL · 5d ago

This paper proposes Distribution-Aligned Adversarial Distillation (DisAAD), which trains a lightweight proxy model, roughly 1% the size of the original, to estimate uncertainty in black-box LLMs, achieving reliable quantification without access to internal parameters or repeated sampling.

Teaching AI models to say “I’m not sure”

MIT News — Artificial Intelligence · 2026-04-22

MIT CSAIL researchers introduce RLCR, a method using Brier scores in reinforcement learning to train AI models to output calibrated confidence estimates, significantly reducing overconfidence without sacrificing accuracy.
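The Brier score at the heart of this reward is just the squared error between stated confidence and actual correctness. A minimal sketch of how such a score can shape a calibration-aware reward (the exact reward shaping in RLCR may differ; this is only the general idea):

```python
def brier_score(confidence, correct):
    """Brier score for a binary outcome: squared error between the
    model's stated confidence (probability its answer is correct)
    and the actual 0/1 correctness. Lower is better; 0 is perfect."""
    return (confidence - float(correct)) ** 2

def calibration_reward(confidence, correct):
    """Hedged sketch of an RLCR-style reward: a correctness bonus
    minus the Brier penalty, so the policy is pushed both to answer
    correctly and to report calibrated confidence."""
    return float(correct) - brier_score(confidence, correct)

# an overconfident wrong answer is penalized more than a hedged one
assert calibration_reward(0.95, False) < calibration_reward(0.55, False)
```

Because the Brier score is a proper scoring rule, the reward is maximized in expectation by reporting one's true probability of being correct, which is exactly the incentive that counters overconfidence.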

Mind the Unseen Mass: Unmasking LLM Hallucinations via Soft-Hybrid Alphabet Estimation

arXiv cs.CL · 2026-04-22

Researchers introduce SHADE, a hybrid estimator that combines Good-Turing coverage with graph-spectral cues to quantify semantic uncertainty and detect LLM hallucinations when only a few black-box samples are available.
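The Good-Turing piece of this estimator is the classic missing-mass formula: the probability of unseen outcomes is estimated by the fraction of observations seen exactly once. A minimal sketch of that coverage component alone (SHADE itself combines it with graph-spectral cues over answer clusters; the clustering and spectral parts are not shown here):

```python
from collections import Counter

def good_turing_missing_mass(samples):
    """Good-Turing estimate of the unseen probability mass: N1 / N,
    where N1 is the number of items observed exactly once.  Applied
    to cluster labels of sampled answers, a large missing mass means
    the model's semantic distribution is poorly covered by the few
    samples drawn — a plausible hallucination signal."""
    counts = Counter(samples)
    n = len(samples)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n if n else 1.0

# toy usage: cluster labels of four sampled answers
print(good_turing_missing_mass(["a", "a", "b", "c"]))  # → 0.5
```

With no samples the function returns 1.0, reflecting total ignorance; as repeated answers accumulate, the estimated unseen mass shrinks toward zero.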

Faithfulness-Aware Uncertainty Quantification for Fact-Checking the Output of Retrieval Augmented Generation

arXiv cs.CL · 2026-04-20

This paper introduces FRANQ, a method for detecting hallucinations in Retrieval-Augmented Generation (RAG) systems by applying distinct uncertainty quantification techniques to distinguish between factuality and faithfulness to retrieved context. The authors construct a new dataset annotated for both factuality and faithfulness, and demonstrate that FRANQ outperforms existing approaches in detecting factual errors across multiple datasets and LLMs.

Beyond Surface Statistics: Robust Conformal Prediction for LLMs via Internal Representations

arXiv cs.CL · 2026-04-20

This paper proposes a conformal prediction framework for LLMs that leverages internal representations rather than output-level statistics, introducing Layer-Wise Information (LI) scores as nonconformity measures to improve validity-efficiency trade-offs under distribution shift. The method demonstrates stronger robustness to calibration-deployment mismatch compared to text-level baselines across QA benchmarks.

A better method for identifying overconfident large language models

MIT News — Artificial Intelligence · 2026-03-19

MIT researchers developed a new method for identifying overconfident LLMs by measuring cross-model disagreement across similar models, rather than relying solely on self-consistency metrics. This approach better captures epistemic uncertainty and more accurately identifies unreliable predictions in high-stakes applications.
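The intuition of cross-model disagreement can be sketched with a simple pairwise disagreement rate. This is an illustrative metric only, not the MIT method's actual measure, and the model names are hypothetical:

```python
def cross_model_disagreement(answers_by_model):
    """Fraction of model pairs that give different answers to the
    same question.  High disagreement across similar models signals
    epistemic uncertainty that a single model's self-consistency
    check can miss entirely."""
    models = list(answers_by_model)
    pairs = [(a, b) for i, a in enumerate(models) for b in models[i + 1:]]
    if not pairs:
        return 0.0
    disagree = sum(answers_by_model[a] != answers_by_model[b] for a, b in pairs)
    return disagree / len(pairs)

# two of three pairs disagree → rate 2/3
rate = cross_model_disagreement({"m1": "Paris", "m2": "Paris", "m3": "Lyon"})
```

The contrast with self-consistency is the point: one overconfident model can be perfectly self-consistent while still being wrong, but it is harder for several independently trained models to agree on the same wrong answer.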

Teaching models to express their uncertainty in words

OpenAI Blog · 2022-05-28

OpenAI researchers demonstrate that GPT-3 can learn to express calibrated uncertainty about its answers in natural language without using model logits, introducing the CalibratedMath benchmark suite to evaluate this capability. The approach shows robust generalization under distribution shift and represents the first evidence of models expressing well-calibrated verbal uncertainty about their own predictions.
