This paper proposes a fine-grained concept bottleneck model framework that grounds each concept in localized visual evidence, enabling direct verification of concept correctness and improving transparency in medical imaging tasks.
Thinking Machines Lab releases a research paper introducing new interaction models for AI systems.
Thinky identifies human-to-AI bandwidth as a growing bottleneck, analogous to the memory-bandwidth limits of ML accelerators, and proposes ways to mitigate it.
A new study by researchers from MIT, Carnegie Mellon, Oxford, and UCLA finds that using AI chatbots for just 10 minutes can significantly reduce human persistence and problem-solving abilities once the AI is removed. The findings suggest a need to design AI systems that scaffold learning rather than simply providing direct answers.
The article analyzes the psychological phenomenon of users forming emotional attachments to AI agents, discussing concepts like social surrogacy and expectancy violation theory, and how this impacts user experience in professional settings.
This paper critiques the dominance of chatbot interfaces in AI, arguing they have structural downsides and societal harms, and proposes alternative pluralistic system designs.
A multi-institution survey proposes a three-layer trust framework to align technical, clinical, and human-centered requirements for trustworthy AI in mental-health support.
GIST is a multimodal knowledge extraction pipeline that transforms mobile point cloud data into semantically annotated navigation topologies for dense environments, enabling semantic search, localization, and natural-language routing, and achieving an 80% navigation success rate in real-world evaluation.
This research paper investigates how human personality traits and AI design characteristics jointly shape human-AI interactions in imperfectly cooperative scenarios, drawing on both simulated data (2,000 simulations) and human-subject experiments (290 participants). The study finds significant divergences between simulation and real-world interactions, with AI transparency emerging as a critical factor in actual human-AI encounters.
A multi-institutional study of 1,222 participants found that brief AI assistant use (10 minutes) led to measurable declines in cognitive performance and reduced effort on subsequent tasks compared to control groups, an effect the authors term the 'boiling frog' effect. The research provides causal evidence that even short-term AI reliance may impair independent problem-solving performance.
OpenAI is hosting the OpenAI Five Finals live event on April 13 in the Bay Area, showcasing its Dota 2 AI to demonstrate AI competence, scalability, and human-AI collaboration. The event aims to help the public better understand AI progress and its future impact.