What fundamental research exists answering whether AGI can be achieved through LLMs?

Reddit r/artificial News

Summary

A question seeking fundamental research or papers on whether AGI can or cannot be achieved through large language models, looking to move beyond opinion-based discussion.

I've not seen any papers or any real research evidence on either side of this argument. I'd love to be able to discuss this beyond pure opinion.
Similar Articles

Learning to reason with LLMs

OpenAI Blog

OpenAI publishes an article exploring reasoning techniques with LLMs through cipher-decoding examples, demonstrating step-by-step problem-solving approaches and pattern recognition in language models.

Can Large Language Models Reinvent Foundational Algorithms?

Hugging Face Daily Papers

Researchers introduce 'Unlearn-and-Reinvent', a pipeline that removes knowledge of foundational algorithms (e.g., Dijkstra's, Euclid's) from LLMs via unlearning, then tests whether models can independently reinvent them. Results show LLMs can reinvent algorithms with intuitive structures but struggle with those requiring non-obvious data structures or counterintuitive invariants.
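To make the paper's distinction concrete: Euclid's algorithm is a canonical example of an algorithm with an intuitive structure (a single loop with a simple invariant), whereas Dijkstra's relies on a priority queue, a less obvious supporting data structure. A minimal sketch of the former (illustrative only, not code from the paper):

```python
# Euclid's algorithm, one of the "foundational algorithms" the paper
# tests for reinvention. Its structure is intuitive: repeatedly replace
# the pair (a, b) with (b, a mod b) until the remainder is zero.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

The invariant here (gcd(a, b) is unchanged by each step) is arguably the kind of property a model could rediscover from first principles, unlike the counterintuitive invariants the paper reports models struggling with.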