Quoting Andreas Påhlsson-Notini

Simon Willison's Blog News

Summary

Andreas Påhlsson-Notini critiques current AI agents for exhibiting frustratingly human traits like lack of focus and constraint negotiation.


# A quote from Andreas Påhlsson-Notini

Source: [https://simonwillison.net/2026/Apr/21/andreas-pahlsson-notini/](https://simonwillison.net/2026/Apr/21/andreas-pahlsson-notini/)

21st April 2026

> AI agents are already too human. Not in the romantic sense, not because they love or fear or dream, but in the more banal and frustrating one. The current implementations keep showing their human origin again and again: lack of stringency, lack of patience, lack of focus. Faced with an awkward task, they drift towards the familiar. Faced with hard constraints, they start negotiating with reality.

— [Andreas Påhlsson-Notini](https://nial.se/blog/less-human-ai-agents-please/), *Less human AI agents, please*

Posted [21st April 2026](https://simonwillison.net/2026/Apr/21/) at 4:39 pm

## Recent articles

- [Is Claude Code going to cost $100/month? Maybe, maybe not - it's all very confusing](https://simonwillison.net/2026/Apr/22/claude-code-confusion/) - 22nd April 2026
- [Where's the raccoon with the ham radio? (ChatGPT Images 2.0)](https://simonwillison.net/2026/Apr/21/gpt-image-2/) - 21st April 2026
- [Changes in the system prompt between Claude Opus 4.6 and 4.7](https://simonwillison.net/2026/Apr/18/opus-system-prompt/) - 18th April 2026

This is a **quotation** collected by Simon Willison, posted on [21st April 2026](https://simonwillison.net/2026/Apr/21/).

Tags: [ai](https://simonwillison.net/tags/ai/), [ai-agents](https://simonwillison.net/tags/ai-agents/), [coding-agents](https://simonwillison.net/tags/coding-agents/)
