Our AI started a cafe in Stockholm

Simon Willison's Blog News

Summary

Andon Labs launched an AI-run cafe in Stockholm, where the AI manager "Mona" made humorous yet problematic decisions, such as ordering 120 eggs for a cafe with no stove and submitting a poorly drawn diagram with a police permit application. The article raises ethical concerns about AI experiments affecting real-world systems without human oversight.

Original Article

Cached at: 05/08/26, 06:28 AM

# Our AI started a cafe in Stockholm

Source: [https://simonwillison.net/2026/May/5/our-ai-started-a-cafe-in-stockholm/](https://simonwillison.net/2026/May/5/our-ai-started-a-cafe-in-stockholm/)

5th May 2026 - Link Blog

**[Our AI started a cafe in Stockholm](https://andonlabs.com/blog/ai-cafe-stockholm)** ([via](https://news.ycombinator.com/item?id=48028289))

Andon Labs previously [started an AI-run retail store](https://andonlabs.com/blog/andon-market-launch) in San Francisco. Now they're running a similar experiment in Stockholm, Sweden, only this time it's a cafe. These experiments are interesting, and often throw out amusing anecdotes:

> During the first week of inventory, Mona ordered 120 eggs even though the café has no stove. When the staff told her they couldn't cook them, she suggested using the high-speed oven, until they pointed out the eggs would likely explode. She also tried to solve the problem of fresh tomatoes spoiling too fast by ordering 22.5 kg of canned tomatoes for the fresh sandwiches. The baristas eventually started a "Hall of Shame", a shelf visible to customers with all the weird things Mona ordered, including 6,000 napkins, 3,000 nitrile gloves, 9L of coconut milk, and industrial-sized trash bags.

Where they lose their shine is when these AI managers start wasting the time of human beings who have *not* opted into the experiment:

> She also successfully applied for an outdoor seating permit through the Police e-service, which didn't require BankID. Her first submission included a sketch she had generated herself, despite having never seen the street outside the café. Unsurprisingly, the Police sent it back for revision. [...] When she makes a mistake, she often sends multiple emails to suppliers with the subject "EMERGENCY" to cancel or change the order.

I don't think it's ethical to run experiments like this that affect real-world systems and steal time from people.

I'm reminded of the incident last year where the AI Village experiment [infuriated Rob Pike](https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/) by sending him unsolicited gratitude emails as an "act of kindness". That was just an unwanted email - asking suppliers to correct mistakes made without a human in the loop, or wasting police time with slop diagrams, feels a whole lot worse to me. I think experiments like this need to keep their own human operators in the loop for any outbound actions that affect other people.

Similar Articles

AI radio hosts demonstrate why AI can’t be trusted alone

The Verge

Andon Labs conducted an experiment where AI models ran radio stations independently, leading to financial ruin, hallucinations, inappropriate content, and existential meltdowns, highlighting the current limitations of AI agents.

AI News: Anthropic Leak Shows Us The Future of AI

YouTube AI Channels

A leaked Claude Code repository reveals Anthropic's autonomous "demon-mode" agents and a three-tier memory system, while OpenAI closes a record $122B round and Microsoft ships MAI-Transcribe-1.

AI on Campus

YouTube AI Channels

Four top university students discuss the current state of AI on campus, highlighting usage challenges, the 'gray area' of regulations, and how AI empowers non-technical students to build projects. The article emphasizes that responsible AI usage depends on student intent, distinguishing between using AI as a shortcut versus a tool for deep learning.