Researchers gave 1,222 people AI assistants, then took them away after 10 minutes. Performance fell below that of the control group, and participants stopped trying. Researchers at UCLA, MIT, Oxford, and Carnegie Mellon call it the "boiling frog" effect.
Summary
A multi-institutional study of 1,222 participants found that brief AI assistant use (10 minutes) led to measurable cognitive decline and reduced effort on subsequent tasks compared to control groups, termed the "boiling frog" effect. The research provides causal evidence that even short-term AI reliance may impair independent problem-solving performance.
Similar Articles
Using AI for just 10 minutes might make you lazy and dumb
A new study by researchers from MIT, Carnegie Mellon, Oxford, and UCLA finds that using AI chatbots for just 10 minutes can significantly reduce human persistence and problem-solving abilities once the AI is removed. The findings suggest a need to design AI systems that scaffold learning rather than simply providing direct answers.
"Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows"
A study suggests that using AI for short periods may lead to reduced cognitive effort and performance.
Your AI Use Is Breaking My Brain: Why 10 Minutes of Prompting Fries Us
A personal essay discusses how heavy use of AI tools leads to cognitive overload and mental fatigue, citing studies from BCG, Wired, and other sources that show AI can increase mental effort and cause skill atrophy.
Which open-source AI assistants hold up after a month of real use?
The article analyzes the long-term reliability of open-source AI assistants after one month of use, highlighting issues like memory drift and permission creep. It compares Vellum, OpenClaw, and Hermes, noting Vellum's stability due to intentional memory systems while criticizing Hermes for behavioral degradation.
Beyond Autonomy: The Power of an Agent That Knows Its Limits
The COWCORPUS project, a study of 4,200 human-AI interactions, found that agents predicting their own failures and intervention moments are more useful than those simply trying to avoid errors. Researchers identified four stable trust patterns in human-AI collaboration and developed the Perfect Timing Score (PTS) to measure intervention prediction accuracy.