@rohanpaul_ai: Columbia CS Prof Vishal Misra explains why LLMs can't generate new science ideas

X AI KOLs Following News

Summary

Columbia CS Prof Vishal Misra argues LLMs can’t generate truly novel science because they only interpolate within learned Bayesian manifolds rather than create new conceptual maps.

Columbia CS Prof Vishal Misra explains why LLMs can't generate new science ideas. Because LLMs learn a structured map, a Bayesian manifold of known data, they work well within it but fail outside it. True discovery requires creating new maps, which LLMs can't do.
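Misra's point is essentially about interpolation versus extrapolation. A minimal numeric sketch of that distinction (the toy function, polynomial degree, and ranges here are illustrative assumptions, not Misra's formalism): a flexible model fitted on a "known" region tracks the truth inside it but breaks down the moment it is queried outside it.

import numpy as np

# Toy illustration: fit a flexible curve on data from a "known" region,
# then compare its error inside vs. outside that region.

rng = np.random.default_rng(0)

# "Known data": x in [0, 1], underlying truth y = sin(2*pi*x)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train)

# Degree-9 polynomial as a stand-in for a flexible learned map
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

def mean_error(lo, hi, n=1000):
    """Mean absolute error of the fitted map against the truth on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    return np.mean(np.abs(model(x) - np.sin(2 * np.pi * x)))

print("inside the training region [0, 1]:", mean_error(0.0, 1.0))   # small
print("outside the training region [2, 3]:", mean_error(2.0, 3.0))  # enormous

The fitted polynomial interpolates well on [0, 1] and diverges wildly on [2, 3]; in Misra's framing, nothing in the learned map tells the model how to build a new map for the region it has never seen.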

Similar Articles

LLM Neuroanatomy III - LLMs seem to think in geometry, not language

Reddit r/LocalLLaMA

A researcher analyzes LLM internal representations across 8 languages and multiple models, finding that conceptual processing occurs in a shared geometric space in the middle transformer layers, independent of the input language. This supports a universal deep-structure hypothesis closer to Chomsky's theory than to Sapir-Whorf linguistic relativism.
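A hedged sketch of the kind of probe the post describes (the model choice, mean pooling, and sentence pair below are assumptions, not the researcher's actual setup): embed a sentence and its translation with a multilingual transformer and compare their per-layer representations, expecting similarity to peak around the middle layers.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed setup: a multilingual encoder; the post's actual models differ.
name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

def layer_vectors(text):
    """Mean-pooled hidden state per layer for one sentence."""
    batch = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch, output_hidden_states=True)
    # out.hidden_states: tuple of (1, seq_len, dim) tensors, one per layer
    return [h.mean(dim=1).squeeze(0) for h in out.hidden_states]

en = layer_vectors("The cat sleeps on the warm windowsill.")
de = layer_vectors("Die Katze schläft auf der warmen Fensterbank.")

for i, (a, b) in enumerate(zip(en, de)):
    sim = torch.cosine_similarity(a, b, dim=0).item()
    print(f"layer {i:2d}: cosine similarity {sim:.3f}")

If the universal deep-structure reading is right, the cosine similarity between the two languages should rise in the middle layers, where the post claims concept-level geometry lives, rather than at the input or output layers.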

Quoting Bryan Cantrill

Simon Willison's Blog

Bryan Cantrill critiques LLMs for lacking the optimization constraint of human laziness, arguing that LLMs will unnecessarily complicate systems rather than improve them, and highlighting how human time limitations drive the development of efficient abstractions.