Human-level performance via ML was *not* proven impossible with complexity theory [D]
Summary
A paper that used complexity theory to argue AGI via ML is impossible has been rebutted by a new paper showing the proof is flawed because a key term is left undefined.
Similar Articles
What fundamental research exists answering whether or not AGI can be achieved through LLMs?
A question seeking fundamental research or papers on whether AGI can or cannot be achieved through large language models, looking to move beyond opinion-based discussion.
Can AGI be achieved with LLMs alone?
This post explores the debate among top AI figures regarding whether LLMs alone can achieve AGI or if additional breakthroughs like world models are required.
When Can Human-AI Teams Outperform Individuals? Tight Bounds with Impossibility Guarantees
This paper derives tight theoretical bounds for human-AI teams, proving when confidence-based aggregation leads to complementarity and establishing impossibility results under specific error correlations.
Bias by Necessity: Impossibility Theorems for Sequential Processing with Convergent AI and Human Validation
This paper proves impossibility theorems showing that primacy effects, anchoring, and order-dependence are architecturally necessary biases in autoregressive language models due to causal masking constraints. The authors validate these theoretical bounds across 12 frontier LLMs and confirm related predictions through pre-registered human experiments involving working memory loads.
Google DeepMind senior scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness (not even in 100 years), calling it the 'Abstraction Fallacy.'
Google DeepMind senior scientist Alexander Lerchner argues that large language models cannot achieve consciousness, dubbing the assumption the 'Abstraction Fallacy' and suggesting this limitation persists even over a century-long timeframe.