It doesn't matter what your definition is: if you believe what we have now is AGI, yours is the most lenient and weakest concept of "AGI" there is
Summary
Argues that current AI does not meet the standard for AGI because it lacks recursive self-improvement, and contends that anyone claiming otherwise is working from a weak definition of AGI.
Similar Articles
AGI, Anthropic, and The System of No
The article introduces "The System of No" as a framework for evaluating artificial general intelligence, emphasizing distinction, refusal, and jurisdiction over human imitation. Using Anthropic's Claude Mythos as a case study, it highlights the dual-use risks of advanced AI, where capability crosses into consequence, and critiques current safety policies as potentially counterfeit governance.
Yale ethicist Wendell Wallach on why AGI is the wrong goal and the accountability gap that already exists in current systems.
Yale ethicist Wendell Wallach argues that the pursuit of AGI is misplaced compared to the urgent need for accountability in current AI systems, particularly regarding autonomous weapons and distributed responsibility.
AGI 🚀
References AGI (Artificial General Intelligence), likely as a brief post or announcement about AGI progress or speculation. Limited content is available beyond the title.
Do you think robots that can do 90% of our chores at home require AGI?
The article asks whether widespread adoption of home robots capable of performing most chores requires Artificial General Intelligence (AGI), and expresses disappointment that advanced robot actions still largely rely on teleoperation.
Measuring progress toward AGI: A cognitive framework
Google DeepMind released a paper proposing a cognitive framework for measuring progress toward AGI, identifying ten key cognitive abilities and launching a Kaggle hackathon to build evaluations for them.