I think most companies are building AI backwards

Reddit r/artificial

Summary

The article argues that companies are overinvested in AI intelligence (model capability) while neglecting crucial runtime layers for authority, accountability, and reality representation, leading to potential failures when AI acts within institutions.

Everyone keeps talking about smarter AI. Bigger models. Longer context windows. More autonomous agents. Better reasoning. Better coding. Better memory.

But I think we’re missing the real problem. An AI system can sound intelligent… and still operate on a completely broken picture of reality.

Imagine an AI agent:

* approving refunds
* escalating incidents
* updating records
* contacting customers
* changing prices
* triggering workflows

Now ask a simple question: how does the AI know the reality it sees is actually correct? Not “technically accessible.” Actually correct.

Because enterprise reality is messy:

* stale systems
* conflicting databases
* outdated approvals
* missing context
* silent exceptions
* contradictory records
* unclear ownership
* shifting policies

And then there’s an even bigger question: even if the AI *knows* something, is it actually allowed to act on it? Under whose authority? With what limits? Who is accountable? Can the action be reversed? What happens if the AI is wrong?

That’s why I’m starting to think the future AI stack is not just:

data → model → agent → action

There are missing runtime layers in between. The mental model I’ve been exploring is:

* **SENSE** → reality representation
* **CORE** → reasoning
* **DRIVER** → governed action

(toy sketches of what I mean by SENSE and DRIVER are at the bottom of this post)

And honestly, it feels like the industry is massively overinvested in CORE. We obsess over intelligence. But the real bottlenecks may turn out to be:

* representation quality
* legitimacy
* authority boundaries
* reversibility
* accountability
* runtime governance

In other words: the biggest AI failures may not come from “bad intelligence.” They may come from machines acting on incomplete reality with unclear authority.

And I think this becomes a huge issue once AI moves from “helping humans” to “acting inside institutions.”

Curious what others here are seeing. Are companies actually solving these layers internally? Or are most organizations still mainly focused on model capability and agent demos right now?
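---

Since the layer names are abstract, here’s roughly what I mean, in toy Python. This is a sketch, not a real implementation, and every name in it (`Fact`, `resolve`, `MAX_AGE`) is invented for illustration. The SENSE idea: a fact is never a bare value, it carries where it came from and when, and stale or contradictory reality blocks action instead of getting papered over:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Toy "SENSE" layer: every value the agent consumes is a Fact that
# carries provenance and freshness, not a bare value pulled from an API.
@dataclass
class Fact:
    key: str              # e.g. "order_991.refund_eligible"
    value: object
    source: str           # which system of record produced it
    observed_at: datetime

MAX_AGE = timedelta(hours=1)  # made-up staleness budget

def resolve(facts: list[Fact]) -> Fact:
    """Return a usable fact, or refuse if reality is stale or contradictory."""
    now = datetime.now(timezone.utc)
    fresh = [f for f in facts if now - f.observed_at <= MAX_AGE]
    if not fresh:
        raise LookupError("every source is stale; do not act on this fact")
    if len({repr(f.value) for f in fresh}) > 1:
        # Conflicting systems of record: surface the disagreement
        # instead of silently picking a winner.
        conflict = {f.source: f.value for f in fresh}
        raise ValueError(f"sources disagree: {conflict}")
    return fresh[0]
```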
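And the DRIVER idea: every side effect goes through a policy gate that checks authority, limits, and reversibility, and leaves an accountability trail. Again all invented names (`POLICY`, `ActionRequest`, `execute`), just to make the shape concrete:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Toy "DRIVER" layer: a policy gate in front of every side effect.
# The policy says who may act, up to what limit, and whether the
# action must be reversible.
POLICY = {
    "approve_refund": {
        "allowed_roles": {"support_agent_bot"},
        "max_amount": 200.0,
        "requires_undo": True,
    },
}

audit_log: list[dict] = []  # who did what, under whose authority

@dataclass
class ActionRequest:
    action: str
    actor_role: str                  # whose authority we act under
    amount: float
    undo: Optional[Callable] = None  # how to reverse it, if we can

def execute(req: ActionRequest, do: Callable):
    rule = POLICY.get(req.action)
    if rule is None:
        raise PermissionError(f"no policy covers {req.action!r}; refusing")
    if req.actor_role not in rule["allowed_roles"]:
        raise PermissionError(f"{req.actor_role!r} has no authority here")
    if req.amount > rule["max_amount"]:
        raise PermissionError("over limit; escalate to a human owner")
    if rule["requires_undo"] and req.undo is None:
        raise PermissionError("irreversible action blocked by policy")
    audit_log.append({"action": req.action, "actor": req.actor_role,
                      "amount": req.amount})
    return do()

# e.g. execute(ActionRequest("approve_refund", "support_agent_bot", 80.0,
#              undo=lambda: None), do=lambda: "refund issued")
```

The point of both sketches: the hard part isn’t the model call in the middle (CORE). It’s everything wrapped around it.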

Similar Articles

Most AI agent failures are organizational design failures, not model failures

Reddit r/AI_Agents

The article argues that AI agent failures in production are often due to poor organizational design and undefined responsibility boundaries rather than model limitations. It proposes a maturity model distinguishing between AI assistants, automation, and AI employees to guide task ownership.

We are in the gaslighting phase of AI adoption

Reddit r/ArtificialInteligence

The article argues that companies are exaggerating AI maturity, offloading risks to workers, and gaslighting employees into ignoring real problems like hallucinations and fragile workflows.

Are Enterprises Using AI in the Wrong Places?

Reddit r/artificial

This analysis challenges the reflexive insertion of AI into all enterprise workflows, suggesting that deterministic workflows are often better served by traditional software than by probabilistic models. It argues for a strategic approach that distinguishes where AI creates leverage from where established architectures remain superior.