Solving (some) formal math olympiad problems
Summary
OpenAI achieved a new state-of-the-art 41.2% on the miniF2F formal math olympiad benchmark using a technique called 'statement curriculum learning,' which iteratively trains a neural prover on proofs of increasing difficulty. The approach alternates proof search with retraining over eight iterations, substantially outperforming the previous best of 29.3%.
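The alternating search-and-retrain loop can be illustrated with a toy model. This is a minimal sketch, not the paper's method: `skill` stands in for the trained prover, statement "difficulties" are arbitrary numbers, and the retraining step is modeled as a skill bump per newly found proof. All names here are hypothetical.

```python
def statement_curriculum(difficulties, iterations=8):
    """Toy sketch of statement curriculum learning: each iteration,
    run proof search on every unsolved statement (here: a statement is
    'proved' when the prover's skill meets its difficulty), then retrain
    on the newly found proofs, improving the prover for the next round."""
    skill = 1.0                      # stand-in for initial prover ability
    solved = set()
    for _ in range(iterations):
        newly = {name for name, d in difficulties.items()
                 if name not in solved and d <= skill}
        solved |= newly
        skill += len(newly) * 1.0    # 'retraining' bump per new proof found
    return solved

# Ten statements of increasing difficulty: easy proofs unlock harder ones,
# one per iteration, so 8 iterations prove the 8 easiest statements.
stmts = {f"thm{i}": float(i) for i in range(1, 11)}
print(sorted(statement_curriculum(stmts)))
```

The point of the curriculum is visible in the toy: statements just beyond current ability become provable only after retraining on the proofs found in earlier rounds.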
Similar Articles
Solving math word problems
OpenAI trained a system that uses verifiers to solve grade school math word problems at about 90% of child-level accuracy, nearly doubling fine-tuned GPT-3's performance. The approach addresses language models' weakness in multistep reasoning by training verifiers to evaluate candidate solutions and select the best one.
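The verifier idea is best-of-n selection: sample several candidate solutions, score each with a verifier, and keep the top-scoring one. A minimal sketch under toy assumptions (the generator, verifier, and their interfaces below are hypothetical stand-ins, not the paper's models):

```python
def solve_with_verifier(problem, generate, verify, n=5):
    """Sample n candidate solutions and return the one the verifier
    scores highest (best-of-n selection)."""
    candidates = [generate(problem, seed=i) for i in range(n)]
    return max(candidates, key=lambda c: verify(problem, c))

# Toy stand-ins: the 'generator' proposes answers near the truth, and the
# 'verifier' scores a candidate by closeness to a recomputed check value.
def toy_generate(problem, seed):
    a, b = problem
    return a + b + (seed - 2)         # some candidates are off by a little

def toy_verify(problem, candidate):
    a, b = problem
    return -abs(candidate - (a + b))  # higher score = closer to correct

print(solve_with_verifier((3, 4), toy_generate, toy_verify))  # → 7
```

Even with an imperfect generator, a reliable verifier recovers the correct answer as long as one of the n samples is right.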
Our First Proof submissions
OpenAI submitted proof attempts for the First Proof challenge, a research-level math competition testing whether AI can produce correct, checkable proofs. The company's internal model successfully solved at least five of the ten problems, demonstrating significant progress in sustained reasoning and rigorous mathematical thinking.
[Google DeepMind] AI co-mathematician achieves state-of-the-art results on hard problem-solving benchmarks
Google DeepMind's AI co-mathematician achieves state-of-the-art results on hard problem-solving benchmarks, scoring 48% on FrontierMath Tier 4, the highest among all AI systems evaluated.
Discover and Prove: An Open-source Agentic Framework for Hard Mode Automated Theorem Proving in Lean 4
This paper introduces Discover and Prove (DAP), an open-source agentic framework for automated theorem proving in Lean 4 that tackles 'Hard Mode' problems where the answer must be discovered independently before formal proof construction. The work releases new Hard Mode benchmark variants and achieves state-of-the-art results while revealing a significant gap between LLM answer accuracy (>80%) and formal prover success (<10%).
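The 'Hard Mode' pipeline splits into a discovery phase and a proving phase. A minimal sketch of that two-stage loop, with hypothetical interfaces (this is not the DAP API; the stand-in functions below are illustrative only):

```python
def discover_and_prove(problem, propose_answer, formalize, check_proof):
    """Toy discover-then-prove loop: first guess the answer informally
    (e.g. with an LLM), plug it into the formal statement, then try to
    produce a machine-checkable proof (e.g. with a Lean prover)."""
    answer = propose_answer(problem)        # "discover" phase
    statement = formalize(problem, answer)  # complete the formal statement
    proof_ok = check_proof(statement)       # "prove" phase
    return answer, proof_ok

# Toy stand-ins for a trivial arithmetic 'problem' (find a + b):
def toy_propose(problem):
    return sum(problem)

def toy_formalize(problem, answer):
    return (problem, answer)

def toy_check(statement):
    problem, answer = statement
    return answer == sum(problem)           # proof checker as an oracle

print(discover_and_prove((2, 2), toy_propose, toy_formalize, toy_check))
```

The reported gap between answer accuracy (>80%) and formal proof success (<10%) lives entirely in the second stage: guessing the right answer is far easier than constructing a checkable proof of it.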
Introducing OpenAI o1
OpenAI released o1, a new series of reasoning-focused AI models that outperform previous models on complex tasks in science, coding, and mathematics. The preview model solved 83% of IMO problems compared to GPT-4o's 13%, and reached the 89th percentile in competitive coding.