Tag: #benchmarking · Cards List

Measuring Five-Nines Reliability: Sample-Efficient LLM Evaluation in Saturated Benchmarks

arXiv cs.LG · 14h ago Cached

This paper proposes a sample-efficient framework based on the cross-entropy method for estimating extreme ('five-nines', i.e. 99.999%) reliability in LLMs, addressing standard benchmarks' inability to detect rare failures.
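
The summary does not spell out the paper's exact estimator, but the cross-entropy (CE) method it names is a standard adaptive importance-sampling scheme for rare events. A toy sketch on a hypothetical one-dimensional Gaussian "failure" model (all numbers illustrative, not from the paper): repeatedly tilt the sampling mean toward the failure region using the elite fraction of samples, then form an importance-weighted estimate of the tail probability.

```python
import math
import random

def ce_rare_event_prob(gamma=4.0, n=2000, rho=0.1, seed=0):
    """Estimate the rare-event probability P(X > gamma) for X ~ N(0, 1)
    with the cross-entropy method (illustrative toy failure model)."""
    rng = random.Random(seed)
    mu = 0.0  # mean of the tilted proposal N(mu, 1)
    for _ in range(50):
        xs = sorted(rng.gauss(mu, 1.0) for _ in range(n))
        cut = xs[int((1 - rho) * n)]  # (1 - rho)-quantile of this batch
        if cut >= gamma:
            break  # proposal now reaches the failure region
        # Tilt the sampler toward failure: new mean = mean of elite samples.
        elites = [x for x in xs if x >= cut]
        mu = sum(elites) / len(elites)
    # Importance-sampling estimate: weight = N(0,1) density / N(mu,1) density.
    xs = [rng.gauss(mu, 1.0) for _ in range(n)]
    return sum(math.exp(mu * mu / 2 - mu * x) for x in xs if x > gamma) / n
```

For gamma = 4 the true tail probability is about 3.2e-5, an event that naive Monte Carlo at benchmark-typical sample sizes would almost never observe even once.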


The Scaling Law of Evaluation Failure: Why Simple Averaging Collapses Under Data Sparsity and Item Difficulty Gaps, and How Item Response Theory Recovers Ground Truth Across Domains

arXiv cs.LG · 14h ago Cached

This paper argues that simple averaging in AI benchmarks fails under data sparsity and difficulty heterogeneity, proposing Item Response Theory (IRT) as a robust alternative to recover ground truth rankings.
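
As a sketch of why IRT can reverse an accuracy ranking: under a Rasch (1PL) model, P(correct) = sigmoid(theta - b_i), so a model graded only on hard items can have lower raw accuracy yet higher estimated ability. The data below are invented for illustration, with item difficulties treated as already calibrated; this is not the paper's pipeline.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def fit_ability(responses, difficulties, lr=0.5, steps=300):
    """MLE of a Rasch ability theta from (correct, item_index) pairs,
    given known item difficulties b_i, via gradient ascent."""
    theta = 0.0
    for _ in range(steps):
        grad = sum(y - sigmoid(theta - difficulties[i]) for y, i in responses)
        theta += lr * grad / len(responses)
    return theta

# Hypothetical sparse benchmark: 5 easy items (b = -1.5), 4 hard (b = +1.5).
b = [-1.5] * 5 + [1.5] * 4

A = [(1, 5), (1, 6), (1, 7), (0, 8)]          # 3/4 on hard items  -> 75%
B = [(1, 0), (1, 1), (1, 2), (1, 3), (0, 4)]  # 4/5 on easy items  -> 80%

def accuracy(r):
    return sum(y for y, _ in r) / len(r)

# Simple averaging ranks B above A, but difficulty-adjusted
# abilities rank A well above B.
theta_A = fit_ability(A, b)
theta_B = fit_ability(B, b)
```

The closed-form check: theta_A solves sigmoid(theta - 1.5) = 0.75, i.e. theta_A = 1.5 + ln 3 ≈ 2.6, while theta_B = -1.5 + ln 4 ≈ -0.11.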


An Empirical Study of Automating Agent Evaluation

arXiv cs.CL · 14h ago Cached

This paper introduces EvalAgent, a system that automates the evaluation of AI agents by encoding domain-specific expertise, addressing the limitations of standard coding assistants in this task. It also presents AgentEvalBench, a benchmark for testing evaluation pipelines, and demonstrates significant improvements in evaluation reliability.


Is using vLLM actually worth it if you aren't serving the model to other people?

Reddit r/LocalLLaMA · 20h ago

A user discusses the trade-offs between using vLLM and llama.cpp for local, single-user inference on AMD hardware, questioning if vLLM's performance benefits justify the complexity in non-enterprise settings.


Been picking frontier models on benchmarks that don't match our deployment conditions

Reddit r/AI_Agents · yesterday

The post describes a rank-order flip between Claude Opus and Gemini Pro on a forecasting benchmark, depending on whether models perform their own web research or are given fixed evidence. This suggests that Opus excels at the research phase while Gemini is stronger at judgment over fixed evidence, exposing a mismatch between standard benchmarks and actual deployment conditions.


@AmelieTabatta: ColBERT models continue to embarrass models 54× their sizes, this is why we trust late interaction @LightOnIO . A 1-ye…

X AI KOLs Following · yesterday Cached

The post highlights how ColBERT models, despite being smaller and older, outperform much larger embedding models such as Qwen3-embed-8B when coupled with late interaction and minimal fine-tuning.


MagicQuant (v2.0) - Hybrid Mixed GGUF Models + Unsloth Dynamic Learned Quant Configurations + Benchmark table with collapsed winners and more

Reddit r/LocalLLaMA · yesterday

MagicQuant v2.0 is a pipeline for creating hybrid mixed GGUF quant models, learning from Unsloth and other methods to find optimal quant configurations based on KLD benchmarks, with a focus on nonlinear wins and anomaly detection.
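
KLD benchmarks of this kind score a quantized model by the KL divergence between its next-token distribution and the full-precision model's, averaged over tokens: lower is closer to the reference. A minimal sketch of the metric itself (not MagicQuant's actual code), taking per-token logit vectors from both models as input:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mean_token_kld(ref_logits, quant_logits):
    """Mean per-token KL(P_ref || P_quant) over a sequence, where each
    element is the vocabulary logit vector at that position."""
    total = 0.0
    for ref, q in zip(ref_logits, quant_logits):
        p, pq = softmax(ref), softmax(q)
        total += sum(pi * math.log(pi / qi) for pi, qi in zip(p, pq))
    return total / len(ref_logits)
```

Identical logits give a KLD of exactly zero; any divergence introduced by quantization pushes the score above zero.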


MTP+GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 - llama.cpp

Reddit r/LocalLLaMA · yesterday

A user benchmarks token generation speed in llama.cpp with the GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 flag, comparing performance with and without MTP (Multi-Token Prediction). Results show a significant speedup from 49 tok/s to 64 tok/s when MTP is enabled on an RTX 5090 with a Qwen3.6-27B model.
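
The reported numbers correspond to roughly a 31% throughput gain; a one-line check of that arithmetic:

```python
def speedup(base_tps, new_tps):
    """Relative throughput gain between two tokens-per-second readings."""
    return new_tps / base_tps - 1

# The post's MTP-off vs MTP-on readings: 64/49 - 1 ≈ 0.306, i.e. ~31% faster.
gain = speedup(49, 64)
```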


Why is every "context layer" tool lying about token savings?

Reddit r/AI_Agents · yesterday

The author critiques the lack of transparent benchmarking in emerging context layer and MCP optimizer tools that promise drastic token savings, noting that real-world tests fail to replicate claimed efficiencies. They urge developers to demand open, reproducible benchmarks and ask for recommendations of tools that actually deliver measurable results.


Models and Quants quality test results - the chessboard svg (Qwen3.6 27B/35B-A3B/Zaya1)

Reddit r/LocalLLaMA · yesterday

Community testers evaluate quantized versions of Qwen3.6, ZAYA1, and other models for SVG chessboard generation accuracy using local inference frameworks like MLX.
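
A scorer for this kind of test only needs a structural check on the generated SVG. A deliberately crude sketch (not the testers' actual harness): count the square elements and confirm at least two distinct fill colours, leaving geometry checks aside.

```python
import re

def looks_like_chessboard(svg_text):
    """Crude structural check for a generated chessboard SVG: exactly 64
    <rect> elements with at least two distinct fill colours. A real scorer
    would also verify positions, sizes, and the alternating pattern."""
    rects = re.findall(r"<rect\b[^>]*>", svg_text)
    fills = re.findall(r'fill="([^"]+)"', svg_text)
    return len(rects) == 64 and len(set(fills)) >= 2
```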


Log analysis is necessary for credible evaluation of AI agents

arXiv cs.AI · yesterday Cached

This paper argues that log analysis is essential for credible AI agent evaluation, as outcome-only benchmarks often fail to reveal underlying capabilities, safety risks, or failure modes.


Measuring What Matters: Benchmarking Generative, Multimodal, and Agentic AI in Healthcare

arXiv cs.AI · yesterday Cached

This paper presents a structured framework for benchmarking generative, multimodal, and agentic AI in healthcare, addressing the gap between high benchmark scores and real-world clinical reliability, safety, and relevance.


Blackwell LLM Toolkit - NVFP4 Config +Wheels + Benchmarks for Blackwell GPUs via TensorRT-LLM - 270 tk/s Nemotron 3 Omni

Reddit r/LocalLLaMA · yesterday

A developer toolkit providing configurations, wheels, and benchmarks for running large language models with NVFP4 precision on Nvidia Blackwell GPUs using TensorRT-LLM.


Your harness is failing your agent but there's no benchmark to prove it

Reddit r/AI_Agents · yesterday

The post highlights the lack of benchmarks for evaluating the reliability of agent harnesses, focusing on how MCP implementations handle tool calls and errors compared to the models themselves.


MemoryOS – AI agent memory with temporal knowledge graph and 9ms ingest and 78ms retrieval

Reddit r/AI_Agents · yesterday

MemoryOS is an open-source, self-hosted AI agent memory tool built on a temporal knowledge graph, reporting 86.2% accuracy on LongMemEval-s with 9 ms ingest and 78 ms retrieval latencies.


@_EldarKurtic: TurboQuant has drawn a lot of attention recently, but the accompanying evals didn't tell the full story. So we ran what…

X AI KOLs Following · 2d ago Cached

Eldar Kurtic presents a comprehensive study on TurboQuant, revealing its real-world effects on accuracy, latency, and throughput beyond initial evaluations.


Benchmarked Yet Not Measured -- Generative AI Should be Evaluated Against Real-World Utility

arXiv cs.LG · 2d ago Cached

This paper argues that Generative AI evaluation should shift from static benchmarks to measuring real-world utility and human outcomes. It introduces the SCU-GenEval framework and supporting instruments to address the disconnect between benchmark performance and deployment success.


Domain-level metacognitive monitoring in frontier LLMs: A 33-model atlas

arXiv cs.CL · 2d ago Cached

This study presents a 33-model atlas analyzing domain-level metacognitive monitoring in frontier LLMs using MMLU benchmarks, revealing significant variations in confidence calibration across different knowledge domains that are obscured by aggregate metrics.
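
Confidence calibration of this kind is usually summarized with expected calibration error (ECE): bin predictions by stated confidence and take the weighted mean gap between average confidence and accuracy in each bin, which is exactly the per-domain quantity an aggregate score hides. A minimal sketch (the paper's exact metric may differ):

```python
def expected_calibration_error(confidences, corrects, n_bins=10):
    """Binned ECE: sum over bins of (bin weight) * |mean confidence - accuracy|.
    `confidences` are in [0, 1]; `corrects` are 0/1 outcomes."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, corrects):
        idx = min(int(c * n_bins), n_bins - 1)  # clamp c == 1.0 into last bin
        bins[idx].append((c, y))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        ece += len(b) / n * abs(avg_conf - acc)
    return ece
```

Computing this per MMLU domain rather than over the pooled set is what exposes the domain-level variation the atlas reports.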


Getting a feel for how fast X tokens/second really is.

Reddit r/LocalLLaMA · 3d ago

The author introduces a web-based script designed to help users intuitively understand token-per-second speeds in local LLM setups by simulating text, code, and reasoning generation rates.
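
The intuition the script builds can be approximated with the common ~4-characters-per-token heuristic for English text; a sketch (heuristic only, real tokenizers vary by model and content type):

```python
def time_to_generate(text, tokens_per_second, chars_per_token=4):
    """Rough wall-clock seconds to stream `text` at a given decode speed,
    using a crude chars-per-token heuristic (illustrative, not a tokenizer)."""
    n_tokens = max(1, round(len(text) / chars_per_token))
    return n_tokens / tokens_per_second

# A 2,000-character answer (~500 tokens) takes ~50 s at 10 tok/s,
# but well under 4 s at 150 tok/s.
```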


LLM rankings are not a ladder: experimental results from a transitive benchmark graph [D]

Reddit r/MachineLearning · 3d ago

The author introduces LLM Win, a tool that visualizes LLM benchmark results as a directed graph to analyze transitive relationships and ranking reversals. Experimental findings suggest that LLM rankings function more like a capability graph with high weak-to-strong reachability rather than a linear ladder.
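
The "capability graph" framing can be made concrete by treating pairwise benchmark wins as directed edges and asking which models are reachable from which; in a strict ladder, reachability is one-directional, while cycles make every model reachable from every other. A small sketch with invented results (not the tool's data):

```python
from collections import defaultdict

def reachable(edges, start):
    """Set of models reachable from `start` by following 'beats' edges."""
    graph = defaultdict(list)
    for winner, loser in edges:
        graph[winner].append(loser)
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Hypothetical pairwise wins: A beats B, B beats C, but C beats A,
# so the 'ranking' is a cycle rather than a ladder.
cyclic = [("A", "B"), ("B", "C"), ("C", "A")]
ladder = [("A", "B"), ("B", "C")]
```

In the cyclic case every model reaches every other (including itself), which is exactly the kind of ranking reversal a scalar leaderboard cannot represent.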
