Tag: model-evaluation

Cards List
#model-evaluation

@no_stp_on_snek: mrcr v2 8-needle at 1m, open weights stack, single rented mi300x. longctx directional 0.688 (n=30, mass-val rerun pendi…

X AI KOLs Following · yesterday Cached

Shares early benchmark scores and evaluation metrics for an open-weight model stack run on a single rented AMD MI300X, noting competitive performance against closed-source alternatives. A sketch of the multi-needle style of long-context eval behind these numbers follows this card.

0 favorites 0 likes
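The tweet's "mrcr v2 8-needle at 1m" refers to a long-context retrieval benchmark: eight "needles" hidden in a roughly one-million-token context, with the 0.688 figure a mean over n=30 trials. The real MRCR harness (OpenAI's multi-round co-reference benchmark) is more involved; purely as a hedged illustration of the simpler multi-needle idea, the Python sketch below hides key/value needles in filler text and scores the fraction recovered. `query_model` is a hypothetical stub for whatever endpoint is under test.

```python
import random
import string

def make_needle(i: int) -> tuple[str, str]:
    """A hypothetical key/value needle to hide in the haystack."""
    key = f"needle-{i}"
    value = "".join(random.choices(string.ascii_lowercase, k=8))
    return key, f"The secret value for {key} is {value}."

def build_haystack(needle_sentences: list[str], target_words: int = 10_000) -> str:
    """Splice needle sentences into filler text at random depths."""
    words = ["lorem"] * target_words  # stand-in for real distractor prose
    positions = sorted(random.sample(range(target_words), len(needle_sentences)))
    for pos, sentence in zip(positions, needle_sentences):
        words[pos] = sentence
    return " ".join(words)

def query_model(context: str, question: str) -> str:
    """Hypothetical stub: ask the model the question given the long context."""
    raise NotImplementedError

def run_eval(n_needles: int = 8, n_trials: int = 30) -> float:
    """Mean fraction of needles recovered across trials."""
    trial_scores = []
    for _ in range(n_trials):
        pairs = [make_needle(i) for i in range(n_needles)]
        haystack = build_haystack([sentence for _, sentence in pairs])
        hits = 0
        for key, sentence in pairs:
            expected = sentence.rsplit(" ", 1)[-1].rstrip(".")
            answer = query_model(haystack, f"What is the secret value for {key}?")
            hits += expected in answer
        trial_scores.append(hits / n_needles)
    return sum(trial_scores) / len(trial_scores)
```

A real run would pad the haystack to the model's full context window and use diverse distractor prose rather than repeated filler, since repetition makes needles artificially easy to spot.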
#model-evaluation

Does anyone else feel like AI benchmarks are becoming less useful for predicting real-world performance?

Reddit r/ArtificialInteligence · 2d ago

The thread discusses the growing disconnect between high AI benchmark scores and actual real-world performance, highlighting issues like consistency, latency, and context handling.

0 favorites 0 likes
#model-evaluation

Our evaluation of OpenAI's GPT-5.5 cyber capabilities

Simon Willison's Blog · 2026-04-30 Cached

Simon Willison covers an evaluation of OpenAI's GPT-5.5's cyber capabilities, examining the model's performance on cybersecurity tasks.

0 favorites 0 likes
#model-evaluation

Opus 4.7 scores lower than 4.6 and 4.5 on SimpleBench

Reddit r/singularity · 2026-04-22

Claude Opus 4.7 shows decreased performance compared to versions 4.6 and 4.5 on the SimpleBench evaluation.

0 favorites 0 likes
#model-evaluation

Large Language Models for Math Education in Low-Resource Languages: A Study in Sinhala and Tamil

arXiv cs.CL · 2026-04-20 Cached

This paper evaluates the mathematical reasoning capabilities of large language models in Sinhala and Tamil, two low-resource South Asian languages, using a parallel dataset of independently authored problems. The study demonstrates that while basic arithmetic transfers well across languages, complex reasoning tasks show significant performance degradation in non-English languages, with implications for deploying AI tutoring tools in multilingual educational contexts. A sketch of the per-language comparison follows this card.

0 favorites 0 likes
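The headline finding, that arithmetic transfers but complex reasoning degrades outside English, reduces to a per-language accuracy comparison over the parallel problem set. A minimal sketch, assuming hypothetical graded records with `language`, `task_type`, and `correct` fields (not the paper's actual schema):

```python
from collections import defaultdict

# Hypothetical graded results on a parallel problem set:
# each problem is answered once per language.
results = [
    {"language": "en", "task_type": "arithmetic", "correct": True},
    {"language": "si", "task_type": "arithmetic", "correct": True},
    {"language": "ta", "task_type": "multi_step", "correct": False},
    # ... one record per (problem, language) pair
]

def accuracy_by_language(records: list[dict], task_type: str) -> dict[str, float]:
    """Per-language accuracy for one task type."""
    tally = defaultdict(lambda: [0, 0])  # language -> [correct, total]
    for r in records:
        if r["task_type"] == task_type:
            tally[r["language"]][0] += r["correct"]
            tally[r["language"]][1] += 1
    return {lang: correct / total for lang, (correct, total) in tally.items()}

# Degradation relative to English on complex reasoning:
acc = accuracy_by_language(results, "multi_step")
for lang, a in sorted(acc.items()):
    delta = a - acc.get("en", float("nan"))
    print(f"{lang}: {a:.1%} ({delta:+.1%} vs en)")
```

Because the problems are parallel (the same items in every language), the per-language deltas isolate the language effect rather than differences in problem difficulty.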
#model-evaluation

A better method for identifying overconfident large language models

MIT News — Artificial Intelligence · 2026-03-19 Cached

MIT researchers developed a new method for identifying overconfident LLMs by measuring disagreement across similar models rather than relying solely on self-consistency metrics. The approach better captures epistemic uncertainty and more accurately flags unreliable predictions in high-stakes applications. A sketch of the distinction follows this card.

0 favorites 0 likes
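The summary names the ingredients, self-consistency versus cross-model disagreement, but not the paper's exact metric. Purely as a hedged sketch of the distinction: resampling one model many times measures self-consistency, which stays high even when the model is confidently wrong, while sampling once each from several similar models exposes epistemic uncertainty as disagreement. `sample_answer` is a hypothetical stub, and the agreement-rate proxy and the 0.4 gap threshold are illustrative choices, not the paper's.

```python
from collections import Counter

def sample_answer(model: str, prompt: str) -> str:
    """Hypothetical stub: one sampled answer from the named model."""
    raise NotImplementedError

def agreement(answers: list[str]) -> float:
    """Fraction of answers matching the modal answer."""
    return Counter(answers).most_common(1)[0][1] / len(answers)

def self_consistency(model: str, prompt: str, k: int = 10) -> float:
    """Within-model agreement over k resamples; high even for confidently wrong models."""
    return agreement([sample_answer(model, prompt) for _ in range(k)])

def cross_model_agreement(models: list[str], prompt: str) -> float:
    """Agreement across similar models; disagreement here signals epistemic uncertainty."""
    return agreement([sample_answer(m, prompt) for m in models])

def flag_overconfident(model: str, siblings: list[str], prompt: str,
                       gap: float = 0.4) -> bool:
    """Flag a prediction when the model is far more self-consistent than its peers agree."""
    return self_consistency(model, prompt) - cross_model_agreement(siblings, prompt) > gap
```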
#model-evaluation

Our First Proof submissions

OpenAI Blog · 2026-02-20 Cached

OpenAI submitted proof attempts for the First Proof challenge, a research-level math competition testing whether AI can produce correct, checkable proofs. The company's internal model successfully solved at least five of the ten problems, demonstrating significant progress in sustained reasoning and rigorous mathematical thinking. A toy example of a machine-checkable proof follows this card.

0 favorites 0 likes
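The card doesn't say what format First Proof submissions take, but "correct, checkable proofs" points at machine-verifiable artifacts. Purely as a toy illustration of what machine-checkable means (nothing to do with the actual competition problems), here is a small Lean 4 proof, assuming a recent toolchain with the built-in `omega` tactic:

```lean
-- A machine-checkable statement: the sum of two even naturals is even.
-- Evenness is spelled out as an explicit existential to stay dependency-free.
theorem even_add_even (m n : Nat)
    (hm : ∃ r, m = r + r) (hn : ∃ s, n = s + s) :
    ∃ t, m + n = t + t := by
  cases hm with
  | intro r hr =>
    cases hn with
    | intro s hs =>
      -- Witness t := r + s; linear arithmetic closes the equation.
      exact ⟨r + s, by omega⟩
```

If it compiles, it is correct; that is the sense in which a proof can be "checkable" without a human referee.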
#model-evaluation

Deepening our partnership with the UK AI Security Institute

Google DeepMind Blog · 2025-12-11 Cached

Google DeepMind announced an expanded partnership with the UK AI Security Institute (AISI) via a new Memorandum of Understanding to deepen collaborative research on AI safety, security, and risk mitigation.

0 favorites 0 likes
#model-evaluation

How evals drive the next chapter in AI for businesses

OpenAI Blog · 2025-11-19 Cached

OpenAI publishes a framework for business leaders on using AI evaluations (evals) to measure and improve AI system performance in organizational contexts, distinguishing between frontier evals for model development and contextual evals tailored to specific business workflows. A minimal contextual-eval sketch follows this card.

0 favorites 0 likes
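The frontier-vs-contextual distinction is concrete enough to sketch: a contextual eval is essentially a set of workflow-specific test cases plus a grading rule and a pass rate. A minimal sketch, with a hypothetical `run_workflow` standing in for the business's AI system and toy grading lambdas; OpenAI's post doesn't prescribe this exact shape.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One workflow-specific test case: an input plus a grading rule."""
    user_input: str
    grade: Callable[[str], bool]

def run_workflow(user_input: str) -> str:
    """Hypothetical stub: the AI-powered business workflow under test."""
    raise NotImplementedError

# Contextual eval: cases drawn from the real workflow, not a public benchmark.
cases = [
    EvalCase("Refund request for an order outside the 30-day window",
             grade=lambda out: "cannot refund" in out.lower()),
    EvalCase("Customer asks to receive their invoice as a PDF",
             grade=lambda out: "pdf" in out.lower()),
]

def pass_rate(eval_cases: list[EvalCase]) -> float:
    """Fraction of cases the workflow handles acceptably."""
    return sum(c.grade(run_workflow(c.user_input)) for c in eval_cases) / len(eval_cases)
```

Rerunning the same cases whenever a new model ships is what turns evals into the adoption gate the post describes.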
#model-evaluation

Three lessons for creating a sustainable AI advantage

OpenAI Blog · 2025-07-30 Cached

Intercom shares three lessons from rapidly adopting AI to transform their customer service platform: testing models early and deeply, building AI-first from the ground up rather than bolting it on, and using rigorous evaluation processes to quickly adopt new models like GPT-4.1.

0 favorites 0 likes
#model-evaluation

OpenAI safety practices

OpenAI Blog · 2024-05-21 Cached

OpenAI outlines 10 safety practices it actively uses and improves upon, including empirical red-teaming, alignment research, abuse monitoring, and voluntary commitments shared at the AI Seoul Summit. The company emphasizes a balanced, scientific approach to safety integrated into development from the outset.

0 favorites 0 likes
#model-evaluation

Frontier Model Forum updates

OpenAI Blog · 2023-10-25 Cached

The Frontier Model Forum announces the creation of a new AI Safety Fund with over $10 million in initial funding from major AI companies (Anthropic, Google, Microsoft, OpenAI) and philanthropic partners to support independent AI safety research. The fund will focus on developing model evaluations and red-teaming techniques to assess frontier AI systems' dangerous capabilities.

0 favorites 0 likes