@aakashgupta: Karpathy told Dwarkesh that a 1 billion parameter model, trained on clean data, could hit the intelligence of today's 1…


Summary

Andrej Karpathy told Dwarkesh Patel that a 1B-parameter model trained on ultra-clean data could match today's 1.8T-parameter frontier models, implying roughly 1,800× effective compression.
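The 1,800× figure is simple arithmetic on the two parameter counts quoted in the claim (a back-of-the-envelope check, assuming both numbers refer to total parameters):

$$\frac{1.8 \times 10^{12}\ \text{parameters}}{1 \times 10^{9}\ \text{parameters}} = 1{,}800$$

The ratio compares parameter counts only; as framed in the claim, the plausibility rests on the "clean data" condition rather than on compute.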


Karpathy told Dwarkesh that a 1 billion parameter model, trained on clean data, could hit the intelligence of today’s 1.8 trillion parameter frontier. That is a 1,800x compression claim. The math behind it is more defensible than it sounds. When researchers at frontier labs…

Similar Articles

@runes_leo: At Sequoia Ascent on 4/30, Karpathy compressed this year’s most valuable explanation of AI into three core arguments. You’ll see AI differently after reading this. 1. AI Isn’t Just “Faster,” It’s a New Paradigm. For the past two years, the narrative has been that AI speeds things up. Karpathy says this is a misunderstanding...


This article summarizes Karpathy’s core points at the Sequoia Ascent conference, highlighting that AI is a paradigm shift restructuring workflows rather than merely an acceleration tool. It introduces the concept of a "jagged edge" for model capabilities based on verifiability and economic viability, and predicts that future software will evolve into an agent-native architecture where LLMs serve as the logic layer and traditional code functions as sensors and actuators.