The author expresses disappointment in AI progress, arguing that despite years of development and massive spending, large language models still struggle with basic reasoning, referencing an Apple paper that exposes fundamental flaws. They question whether the hype around superintelligence is misguided.
For starters, I was an early adopter of AI. Like, I was running DeepBach on my Mac to write Bach chorales in 2018, and tried to train a model before I even knew how one worked in 2019 or so. When ChatGPT came out I thought it was awesome and used it all the time. I became immediately and intimately familiar with what it could and could not do. For instance, it was great at writing a first draft in a tone I was bad at, but it couldn't be used for anything that required much reasoning or intuition about sound. Everything kept getting better, and a lot of things kept getting fixed, but I noticed that my core problems, like rhyme, never really got fixed. Better, certainly, but never fixed.

Then I read the Apple paper on AI reasoning and realized that the lack of reasoning is a fairly fundamental flaw in large language models, and now I haven't been able to unsee it. All of these models are just very sophisticated text-prediction machines. Of course they can't reason about Tower of Hanoi beyond the scope of their training data (even though it's a children's game...). That's all fine and dandy, and I definitely don't think it undermines the usefulness of the models for some things, but what baffles me is the hype. People keep talking about superintelligent AI, or a coming permanent underclass, or whatever, but nobody has figured out how to get these models to reason soundly about simple algorithms we learned in elementary school. It's been a while now, we've spent more on this than we did on the railroad and dot-com bubbles combined, and nobody seems to have fixed the reasoning problem. Are these people ignorant of their own machines? Are they being deliberately misleading for profit? Have they succumbed to AI psychosis of some kind? Or am I completely wrong and have missed some major AI milestones? Let me know!

Ed: Apple paper: [https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf](https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf)
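For context, the Tower of Hanoi puzzle that the Apple paper uses as a benchmark has a textbook recursive solution just a few lines long — which is what makes model failures on larger instances so striking. A minimal sketch (this is the standard algorithm, not code from the paper):

```python
def hanoi(n, source, target, spare):
    """Return the list of (from_peg, to_peg) moves that transfers
    n disks from source to target, using spare as scratch space."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then
    # move the n-1 disks back on top of it.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # an n-disk solution always takes 2**n - 1 moves
```

The move count grows exponentially, but the *procedure* is uniform at every size — so a system that genuinely executed the algorithm wouldn't degrade past some disk count the way the paper reports.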
A former AI advocate details disillusionment with large language models, citing reliability issues, regression between versions, broken enterprise workflows, and lack of accountability in AI systems deployed across critical industries.
A newcomer observes that AI discussion is polarized between doom and hype, and questions whether enough effort is going into user experience and smaller-model system design versus pure scaling.
This article critiques the persistent 'Singularity' narrative in AI discourse, arguing that current Large Language Models should be analyzed as social technologies rather than mythical paths to superintelligence.
Marc Andreessen faced online mockery after sharing a custom AI prompt that demonstrated a fundamental misunderstanding of how large language models work, particularly regarding hallucinations and knowledge limits.
The article argues that the primary AI risk may not be superintelligence but rather systems that optimize flawed, incomplete representations of reality, leading to institutional drift, automated misclassification, and invisible governance failures.