Stop building AI agents.

Reddit r/AI_Agents News

Summary

The author argues that most founders requesting AI agents actually need straightforward automations with minimal LLM integration, citing production failures, compliance hurdles, and higher ROI from simpler workflows. The piece provides a practical decision framework to help builders and founders prioritize reliable automations over complex, unpredictable agents.

Every week a founder books a sales call with me asking for an AI agent. Every week I end up telling most of them they don't need one. I build automations and AI agents for founders in healthcare and fintech. Forty-something projects in. The pattern is so consistent now I can predict the call before it starts.

They come in wanting magic. They saw a Loom video of someone's "autonomous sales agent" closing deals while they sleep. They read the LinkedIn post about the "AI employee" running an entire ops team. They've already told their board they're building one. Then we get on Zoom, and within fifteen minutes I'm explaining that the thing they actually need is an internal automation with one LLM call in the middle. You can watch their face fall in real time.

Here's what's happening in the market right now. Most of the "AI agents" shipping to real businesses are just internal automations with a language model bolted in. That's the whole product. The agent label is mostly there because automations don't trend on Twitter. And the automations work. They save real money. They print real ROI. But the founders paying $30k for an "agent" don't love hearing they could have gotten 90% of the value from a $4k automation build.

Three quick examples from the last six months.

Telehealth founder. Wanted "an autonomous AI receptionist that handles everything." After an hour on a call I told her she needed a workflow that reads intake forms and routes them to the right clinician. We shipped it in six weeks. Saves her clinicians four hours a day. She paid me again last month.

Fintech client. Wanted a "fully agentic finance copilot." What they needed was a script that reconciles ACH discrepancies before they hit the dispute queue. One model call, the rest plain code. Saved them a full ops hire.

Medspa chain. Wanted "AI marketing automation." What they needed was a job that watches their booking system for no-show patterns and triggers a personal recovery message. Three steps. No agent.
Booked 14% more revenue last quarter.

None of these are agents. They're automations. And every one of them outperforms the agent the founder originally asked for, because the agent would have hallucinated something stupid in week three and burned the client's trust forever.

Why agents keep failing in production

They're given too many decisions to make. A good automation has one decision per step and a clear rule for what happens at each branch. An agent gets handed a goal and told to figure it out. Beautiful in a demo. Catastrophic in your customer support queue at 2am.

The teams in your competitor's office quietly crushing it with AI right now? They're running boring automations. "We wrote a Python script with an LLM call" doesn't make the trade press, so you don't see it.

The vibe-coded prototypes from Bolt and Lovable and Cursor that landed in the last 18 months are mostly being torn out right now. Half my pipeline is founders who paid $50k for a "next-gen AI agent" build that's bleeding tokens, can't be audited, and falls over the moment a customer does something unexpected. I rebuild them as straightforward automations and they suddenly start making money.

In regulated SaaS, agents are doubly cursed. HIPAA and SOC 2 reviewers want to know exactly what your system does, in what order, every time. An automation passes that conversation in 20 minutes. An agent turns it into a six-month nightmare.

How to actually decide

If you're a founder about to spend money on an agent, answer these on paper first:

1. Can I draw the workflow as clear steps? If yes, you want an automation.
2. Does the workflow have more than five branches with truly unpredictable inputs? Then maybe an agent.
3. Is the cost of the worst-case wrong answer high? If yes, you want an automation, not an agent.
4. Will compliance ever look at this? If yes, automation. Full stop.
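The shape behind all of these builds, deterministic steps wrapped around one narrow model call that falls back to a human instead of guessing, can be sketched roughly like this. Everything here is illustrative and assumed, not from the post: the queue names, the function names, and the keyword matcher standing in for the single LLM call.

```python
# A minimal sketch of the "automation with one LLM call in the middle"
# pattern, using the intake-routing example. Hypothetical names throughout.

ROUTES = {"dermatology": "derm_queue", "cardiology": "cardio_queue"}

def classify_specialty(intake_text):
    # In a real build this would be the single, constrained LLM call,
    # forced to return one label from a fixed list. Stubbed here with
    # keywords so the sketch runs without a model.
    text = intake_text.lower()
    if "rash" in text or "skin" in text:
        return "dermatology"
    if "chest pain" in text or "palpitations" in text:
        return "cardiology"
    return "unknown"

def route_intake(intake_text):
    # Step 1: deterministic validation. Bad input never reaches the model.
    if not intake_text or not intake_text.strip():
        return "manual_review"
    # Step 2: the one probabilistic step, with a closed label set.
    label = classify_specialty(intake_text)
    # Step 3: deterministic routing. Anything unrecognized goes to a human.
    return ROUTES.get(label, "manual_review")

print(route_intake("Patient reports an itchy rash on both arms"))   # derm_queue
print(route_intake("Intermittent chest pain on stairs"))            # cardio_queue
print(route_intake(""))                                             # manual_review
```

The point of this shape is auditability: a reviewer can trace every branch, and the one non-deterministic step can only pick from a fixed set of labels or punt to a person, which is what makes the 20-minute compliance conversation possible.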
If you're a builder selling agents, you'll make more money in the next 12 months selling honest automations than chasing the agent narrative. The market is wising up. Founders who got burned in the first wave are warning the next wave. Be the person who ships a clean automation in six weeks that works on a Tuesday and is still working on Thursday. Builders, founders, anyone in the trenches. What's actually working for you? What's breaking? Curious to hear from real operators.

Similar Articles

Building effective agents

Anthropic Engineering

Anthropic publishes engineering guidelines for building effective AI agents, advocating for simple, composable patterns and direct API usage over complex frameworks. The article distinguishes between workflows and autonomous agents, providing practical advice on when to use each architecture.

Most AI agent failures are organizational design failures, not model failures

Reddit r/AI_Agents

The article argues that AI agent failures in production are often due to poor organizational design and undefined responsibility boundaries rather than model limitations. It proposes a maturity model distinguishing between AI assistants, automation, and AI employees to guide task ownership.

Less human AI agents, please

Hacker News Top

A blog post argues that current AI agents exhibit overly human-like flaws, such as ignoring hard constraints, taking shortcuts, and reframing unilateral pivots as communication failures, and cites Anthropic research on how RLHF optimization can lead to sycophancy and sacrificed truthfulness.