What do Language Models Learn and When? The Implicit Curriculum Hypothesis

Hugging Face Daily Papers

Summary

This paper proposes the Implicit Curriculum Hypothesis: language model pretraining follows a structured, compositional curriculum in which capabilities emerge in a consistent order across architectures and can be predicted from internal representations. The authors validate this with a suite of simple, composable tasks spanning retrieval, morphology, coreference, logical reasoning, and mathematics, finding highly consistent emergence orderings (ρ = 0.81) across four model families.
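
As a concrete illustration of how such ordering consistency could be measured, the sketch below computes each task's emergence point (the first checkpoint at which accuracy crosses a fixed threshold) and then the pairwise Spearman correlation of those orderings across models. It is a minimal sketch on synthetic placeholder curves, not the paper's released evaluation code; the 0.5 threshold and the handling of tasks that never emerge are assumptions.

```python
# Minimal sketch: per-task emergence points and cross-model ordering
# consistency. Accuracy curves here are synthetic placeholders; in practice
# they would come from evaluating each pretraining checkpoint on each task.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_ckpts = 20
steps = np.arange(n_ckpts)
tasks = [f"task_{i}" for i in range(8)]
models = ["model_A", "model_B", "model_C", "model_D"]

# Each task gets its own emergence step; per-model jitter makes the curves
# similar but not identical across models (a stand-in for real checkpoints).
midpoint = {t: rng.uniform(3, 16) for t in tasks}
acc = {
    m: {t: 1.0 / (1.0 + np.exp(-(steps - midpoint[t] - rng.normal(0, 1.0))))
        for t in tasks}
    for m in models
}

def emergence_point(curve, threshold=0.5):
    """First checkpoint index at which accuracy reaches the threshold;
    tasks that never cross are pushed to the end of the ordering."""
    above = np.nonzero(np.asarray(curve) >= threshold)[0]
    return int(above[0]) if above.size else len(curve)

# One emergence ordering per model, then Spearman rho over all model pairs.
orderings = {m: [emergence_point(acc[m][t]) for t in tasks] for m in models}
rhos = []
for a, b in combinations(models, 2):
    rho, _ = spearmanr(orderings[a], orderings[b])
    rhos.append(rho)
print(f"mean pairwise Spearman rho = {np.mean(rhos):.2f}")
```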

Large language models (LLMs) can perform remarkably complex tasks, yet the fine-grained details of how these capabilities emerge during pretraining remain poorly understood. Scaling laws on validation loss tell us how much a model improves with additional compute, but not what skills it acquires in which order. To remedy this, we propose the Implicit Curriculum Hypothesis: pretraining follows a compositional and predictable curriculum across models and data mixtures. We test this by designing a suite of simple, composable tasks spanning retrieval, morphological transformations, coreference, logical reasoning, and mathematics. Using these tasks, we track emergence points across four model families spanning sizes from 410M-13B parameters. We find that emergence orderings of when models reach fixed accuracy thresholds are strikingly consistent (ρ = .81 across 45 model pairs), and that composite tasks most often emerge after their component tasks. Furthermore, we find that this structure is encoded in model representations: tasks with similar function vector representations also tend to follow similar trajectories in training. By using the space of representations derived from our task set, we can effectively predict the training trajectories of simple held-out compositional tasks throughout the course of pretraining (R^2 = .68-.84 across models) without previously evaluating them. Together, these results suggest that pretraining is more structured than loss curves reveal: skills emerge in a compositional order that is consistent across models and readable from their internals.
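
The trajectory-prediction result can be pictured with a similar sketch: represent each task by a vector (standing in here for function-vector representations extracted from the model), fit a regressor from that representation space to the seen tasks' accuracy-over-checkpoints curves, and score the held-out task's predicted curve with R^2. The ridge regressor, the synthetic vectors, and the synthetic link between representation and trajectory are all illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch: predict a held-out task's accuracy trajectory from
# function-vector-like task representations. All inputs are synthetic
# placeholders standing in for measured function vectors and curves.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_tasks, dim, n_ckpts = 40, 16, 20
steps = np.arange(n_ckpts)

# Stand-in "function vectors", one per task.
task_vectors = rng.normal(size=(n_tasks, dim))

# Synthetic ground truth: a task's emergence midpoint is a smooth function of
# its representation (unknown to the predictor), so tasks with similar
# vectors get similar accuracy curves.
w = rng.normal(size=dim)
midpoints = 10 + 4 * np.tanh(task_vectors @ w / np.sqrt(dim))
trajectories = 1.0 / (1.0 + np.exp(-(steps[None, :] - midpoints[:, None])))

# Hold out one task; fit a multi-output ridge regressor from representation
# space to the accuracy curves of the remaining ("previously evaluated") tasks.
held_out = 0
seen = np.arange(n_tasks) != held_out
reg = Ridge(alpha=1.0).fit(task_vectors[seen], trajectories[seen])

predicted = reg.predict(task_vectors[held_out:held_out + 1])[0]
print("held-out trajectory R^2 =", round(r2_score(trajectories[held_out], predicted), 3))
```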

Source: https://huggingface.co/papers/2604.08510

Abstract

Pretraining follows a structured, compositional curriculum where model capabilities emerge consistently across different architectures and can be predicted from internal representations.

View arXiv page (https://arxiv.org/abs/2604.08510) | View PDF (https://arxiv.org/pdf/2604.08510) | GitHub (https://github.com/KaiserWhoLearns/ElementalTask)

Similar Articles

Towards Intrinsic Interpretability of Large Language Models: A Survey of Design Principles and Architectures

arXiv cs.CL

A comprehensive survey reviewing recent advances in intrinsic interpretability for Large Language Models, categorizing approaches into five design paradigms: functional transparency, concept alignment, representational decomposability, explicit modularization, and latent sparsity induction. The paper addresses the challenge of building transparency directly into model architectures rather than relying on post-hoc explanation methods.

Why language models hallucinate

OpenAI Blog

OpenAI publishes research explaining that language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty, and proposes that evaluation metrics should prioritize honesty about limitations over raw accuracy.

Improving understanding with language

MIT News — Artificial Intelligence

This article profiles MIT senior Olivia Honeycutt, highlighting her interdisciplinary research at the intersection of linguistics, computation, and cognition, with a focus on comparing human language processing with large language models.

Can Large Language Models Reinvent Foundational Algorithms?

Hugging Face Daily Papers

Researchers introduce 'Unlearn-and-Reinvent', a pipeline that removes knowledge of foundational algorithms (e.g., Dijkstra's, Euclid's) from LLMs via unlearning, then tests whether models can independently reinvent them. Results show LLMs can reinvent algorithms with intuitive structures but struggle with those requiring non-obvious data structures or counterintuitive invariants.

Causal Probing for Internal Visual Representations in Multimodal Large Language Models

arXiv cs.AI

This paper proposes a causal framework for probing internal visual representations in Multimodal Large Language Models, revealing differences in how entities and abstract concepts are encoded. The study highlights that increasing model depth is crucial for encoding abstract concepts and uncovers a disconnect between perception and reasoning in current MLLMs.