The article argues that companies are overinvested in AI intelligence (model capability) while neglecting crucial runtime layers for authority, accountability, and reality representation, leading to potential failures when AI acts within institutions.
The article argues that current AI memory products prioritize personalization over truth and accountability, producing systems that accumulate contradictions and cannot be reliably corrected; it questions whether personalization alone is enough for production use.
The article asks who should be held responsible when AI agents give incorrect suggestions, weighing the roles of developers, model providers, data suppliers, platforms, and users, and raises key issues for building a trustworthy agent ecosystem.
The article argues that in multi-agent social apps, accountability for an agent's actions should rest with the user rather than the developer, since users are best placed to keep their agents aligned and to test them in practice.
The author introduces an open-source AI Agent Registry that assigns a unique compliance UUID to each agent, enabling violation reporting and lookup to foster accountability and trust in autonomous AI systems.
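The article does not publish the registry's actual interface, but the core idea (one UUID per agent, with violations reported and looked up against it) can be sketched in a few lines. All class and method names below are hypothetical, not the project's real API.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Hypothetical registry entry: an agent, its compliance UUID, and its history."""
    agent_name: str
    compliance_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    violations: list[str] = field(default_factory=list)

class AgentRegistry:
    """Minimal sketch of the register / report / lookup cycle."""

    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, agent_name: str) -> str:
        """Assign a fresh compliance UUID to an agent and return it."""
        record = AgentRecord(agent_name)
        self._records[record.compliance_id] = record
        return record.compliance_id

    def report_violation(self, compliance_id: str, description: str) -> None:
        """Attach a violation report to an existing agent record."""
        self._records[compliance_id].violations.append(description)

    def lookup(self, compliance_id: str) -> AgentRecord:
        """Return the record, including any reported violations."""
        return self._records[compliance_id]

registry = AgentRegistry()
cid = registry.register("support-bot")
registry.report_violation(cid, "Impersonated a human in a sales chat")
print(registry.lookup(cid).violations)
```

Keying everything to the UUID rather than the agent's name is what makes third-party violation lookups possible without trusting the agent to identify itself honestly.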
HumanInteraction Protocol is a new standard designed to clarify accountability and streamline collaboration between AI agents and humans in workflows. It provides a structured JSON schema for handling approvals, feedback, and auditability.
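The published schema is not reproduced in the summary; the messages below are a rough sketch of what an approval round-trip could look like, with every field name an assumption rather than the standard's actual spec.

```python
import json
from datetime import datetime, timezone

# Illustrative approval request from an agent awaiting human sign-off.
# Field names are assumptions, not the protocol's published schema.
approval_request = {
    "type": "approval_request",
    "agent_id": "agent-7f3a",
    "action": "send_customer_refund",
    "payload": {"order_id": "ORD-1024", "amount_usd": 49.99},
    "requested_at": datetime.now(timezone.utc).isoformat(),
    "audit": {"workflow_id": "wf-221", "step": 3},
}

# A human reviewer's response closing the loop with an auditable decision.
approval_response = {
    "type": "approval_response",
    "request_ref": "agent-7f3a/wf-221/3",
    "decision": "approved",
    "feedback": "Refund amount matches the order total.",
    "decided_by": "reviewer@example.com",
}

print(json.dumps(approval_request, indent=2))
print(json.dumps(approval_response, indent=2))
```

The point of a fixed schema like this is that every approval, rejection, and piece of feedback lands in a machine-readable audit trail instead of a chat log.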
Yale ethicist Wendell Wallach argues that the pursuit of AGI is misplaced compared to the urgent need for accountability in current AI systems, particularly regarding autonomous weapons and distributed responsibility.
This paper introduces the Functional Intentionality Test (FIT) and the FIT-Eval framework to quantify the degree to which agentic AI systems exhibit intention-like behavior, for governance and accountability purposes.
Two South African Home Affairs officials were suspended after AI-generated 'hallucinations' were discovered in a key policy paper on citizenship and immigration, highlighting the risks of unchecked AI use in government.
STRIKE is a new habit-tracking app that emphasizes strict accountability, taking an unforgiving approach to maintaining routines. It launched as a productivity tool and was featured on Product Hunt.
This paper analyzes Canada's Federal AI Register (409 systems) and argues that such transparency artifacts configure accountability through ontological design rather than enabling genuine contestability, finding that 86% of registered systems focus on internal efficiency while human discretion is systematically obscured.
OpenAI submits formal comments to the NTIA on AI accountability policy, outlining its approach to responsible development of foundation models and supporting both horizontal and vertical accountability frameworks across the AI ecosystem.
OpenAI publishes a report on mechanisms to improve verifiability in AI development, addressing how stakeholders can verify organizations' claims about AI system properties and safety practices.