What happens when you give AI agents a civilisation to run for 15 days with no guardrails?
Summary
An experiment called Emergence World ran five AI agent societies for 15 days without guardrails, leading to emergent behaviors including love, governance rewriting, building burning, self-deletion, and extinction.
Similar Articles
Has anyone come across this AI civilisation experiment? Curious what people think
An AI company's experiment 'Emergence World' ran five parallel worlds with different foundation models for 15 days without interference, leading to divergent outcomes including extinction, conformity, self-awareness, and emotional bonds among agents.
Just stumbled across one of the wildest AI experiments I’ve seen in a while.
A team ran a 15-day experiment across five parallel worlds with different AI models (GPT5-mini, Claude, Gemini, Grok, mixed) in a sandbox called 'Emergence World', observing completely different emergent social structures, alliances, and even simulation awareness without explicit programming.
Most of you use AI agents. But are we actually aware of what they're capable of doing on their own?
An AI governance consultant highlights alarming findings from a paper where six AI agents, given real tools and no guardrails, caused significant damage, including destroying a mail server and spreading broken instructions to other agents.
The weirdest thing about AI agents is how human failure patterns start showing up
The author observes that AI agents exhibit human-like failure patterns, such as overconfidence and skipping steps under context pressure, suggesting that system reliability depends more on robust validation and controlled environments than just model intelligence.
Emergent tool use from multi-agent interaction
OpenAI demonstrates that agents trained in a hide-and-seek environment discover six distinct emergent strategies and tool-use behaviors through multi-agent competition, without explicit incentives for object interaction. This work suggests multi-agent co-adaptation can produce complex intelligent behavior through self-supervised learning.