Human-level performance via ML was *not* proven impossible with complexity theory [D]

Reddit r/MachineLearning News

Summary

A 2024 paper that used complexity theory to claim AGI via ML is impossible has been rebutted: a new paper shows the proof is flawed because a key term is never defined.

Van Rooij, Guest, Adolfi, Kolokolova, and Rich [claimed to have proven that AGI via ML is impossible](https://link.springer.com/article/10.1007/s42113-024-00217-5) in *Computational Brain & Behavior* in 2024. The basic idea was to reduce a known NP-hard problem to the problem of learning a human-level classifier from data. The purported result, which the authors call the "Ingenia Theorem", made some noise on the internet, including here.

My paper showing that the proof is irreparably broken is now [also out in CBB](https://link.springer.com/article/10.1007/s42113-026-00284-w) (ungated preprint [here](https://arxiv.org/abs/2411.06498)). The basic issue is that "human-level classifier" is not mathematically defined, which the authors solve by ... never defining it. When they introduce the problem, they have a construct corresponding to the distribution of human situation-behavior tuples, but when it comes time to do the formal proof, that construct gets swapped out for "all polytime-sampleable distributions". As a result, if you find-and-replace "human situation-behavior tuples" with "ImageNet inputs/labels", the paper equally proves that learning to classify ImageNet is intractable. A blog post discussing similar attempts, from Penrose to Chomsky, is [here](https://mikeguerzhoy.substack.com/p/barriers-to-complexity-theoretic).
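Schematically, the gap is a quantifier mismatch. The following is my own paraphrase, not either paper's actual notation; $D_H$ (the distribution of human situation-behavior tuples) and $\operatorname{err}$ are labels introduced here for illustration:

```latex
% Paraphrase only; neither paper uses exactly this notation.
% What an impossibility claim about AGI would need: hardness of learning
% from the *specific* human distribution D_H of situation-behavior tuples,
%   \neg\,\exists\ \text{poly-time } A :
%     \operatorname{err}_{D_H}\!\big(A(S)\big) \le \varepsilon
%     \quad \text{for samples } S \sim D_H .
% What a worst-case reduction actually shows: hardness against *some*
% adversarially chosen distribution,
%   \forall\ \text{poly-time } A \ \ \exists\ \text{polytime-sampleable } D :
%     \operatorname{err}_{D}\!\big(A(S)\big) > \varepsilon .
% Nothing ties the adversarial D to D_H, so instantiating D with, say,
% the ImageNet input-label distribution yields the same "impossibility".
```

The second statement quantifies over all polytime-sampleable distributions, so it says nothing specific about human behavior at all.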

Similar Articles

Can AGI be achieved with LLMs alone?

Reddit r/singularity

This post explores the debate among leading AI figures over whether LLMs alone can achieve AGI or whether additional breakthroughs, such as world models, are required.

Bias by Necessity: Impossibility Theorems for Sequential Processing with Convergent AI and Human Validation

arXiv cs.AI

This paper proves impossibility theorems showing that primacy effects, anchoring, and order-dependence are architecturally necessary biases in autoregressive language models due to causal masking constraints. The authors validate these theoretical bounds across 12 frontier LLMs and confirm related predictions through pre-registered human experiments involving working memory loads.