What fundamental research exists answering whether or not AGI can be achieved through LLMs?
Summary
A question seeking fundamental research or peer-reviewed papers on whether AGI can or cannot be achieved through large language models, aiming to move the discussion beyond opinion.
Similar Articles
Learning to reason with LLMs
OpenAI publishes an article exploring reasoning techniques with LLMs through cipher-decoding examples, demonstrating step-by-step problem-solving approaches and pattern recognition in language models.
Google DeepMind's Senior Scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness (not even in 100 years), calling it the 'Abstraction Fallacy.'
Google DeepMind senior scientist Alexander Lerchner argues that large language models cannot achieve consciousness, dubbing the assumption the 'Abstraction Fallacy' and maintaining that this limitation would persist even over a century-long timeframe.
Can Large Language Models Reinvent Foundational Algorithms?
Researchers introduce 'Unlearn-and-Reinvent', a pipeline that removes knowledge of foundational algorithms (e.g., Dijkstra's, Euclid's) from LLMs via unlearning, then tests whether models can independently reinvent them. Results show LLMs can reinvent algorithms with intuitive structures but struggle with those requiring non-obvious data structures or counterintuitive invariants.
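As an illustrative sketch (not drawn from the paper itself), Euclid's algorithm is an example of the kind of "intuitive structure" the summary describes: the entire procedure is a single loop whose correctness follows directly from the fact that gcd(a, b) = gcd(b, a mod b).

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero; the last nonzero value is the GCD."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

By contrast, the algorithms the paper reports models struggling with are those whose correctness rests on non-obvious data structures or counterintuitive invariants, which cannot be rediscovered from the problem statement alone as directly as this loop can.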
Experiments or Outcomes? Probing Scientific Feasibility in Large Language Models
UMBC researchers show LLMs judge scientific claim feasibility better when given outcome data than experiment descriptions, and that incomplete experimental context can hurt accuracy.
Disentangling Mathematical Reasoning in LLMs: A Methodological Investigation of Internal Mechanisms
This paper investigates how large language models perform arithmetic operations by analyzing internal mechanisms through early decoding, revealing that proficient models exhibit a clear division of labor between attention and MLP modules in reasoning tasks.