The Document Foundation has revoked membership status from approximately 30 Collabora-affiliated developers, prompting Collabora to announce plans for a new, differentiated office suite project. This move has sparked controversy within the LibreOffice community regarding governance and power dynamics.
The article argues that increasing AI agent capability does not inherently improve reliability, emphasizing the need for robust control systems, audits, and human oversight, akin to accounting standards, to guard against failures that present convincingly as successes.
Two South African Home Affairs officials were suspended after AI-generated 'hallucinations' were discovered in a key policy paper on citizenship and immigration, highlighting the risks of unchecked AI use in government.
A practitioner highlights the under-discussed importance of agent governance for production AI agents and shares an article outlining a 5-layer governance stack.
AEGIS is a proposed open framework for collectively governed, distributed AI cyber-defense to counteract emergent autonomous vulnerability discovery, triggered by Anthropic’s unreleased Claude Mythos model that uncovered widespread zero-days.
A blog post argues that good software architecture should be self-evident and frictionless, advocating Netflix/Spotify-style “paved road” patterns over coercive governance boards or embedded architects.
An analysis of a recurring failure pattern in production AI systems in which technically correct decisions become contextually wrong as underlying assumptions shift, framed as the 'Formalisation Trap': meaning gets locked into outdated structures.
OpenAI has published its Raising Concerns Policy, which protects employees' rights to make protected disclosures about AI safety, legal issues, or company policies, including a 24/7 anonymous Integrity Line introduced in April 2024. The policy explicitly prohibits retaliation and allows employees to report concerns to government agencies, while maintaining confidentiality agreements around trade secrets.
OpenAI has completed its recapitalization, establishing the OpenAI Foundation as a nonprofit entity controlling the for-profit business with approximately $130 billion in equity, enabling the foundation to pursue a $25 billion philanthropic commitment in health breakthroughs and AI resilience while maintaining mission-focused governance.
OpenAI announces its nonprofit will retain control while gaining an equity stake exceeding $100 billion in a new Public Benefit Corporation (PBC), and launches a $50 million grant initiative to support AI literacy and community innovation. The restructuring aims to ensure the nonprofit benefits financially as OpenAI grows, while maintaining its mission of ensuring AGI benefits all of humanity.
OpenAI's Board of Directors released a statement acknowledging findings from an independent Nonprofit Commission report, which evaluated the organization's philanthropic initiatives and long-term impact strategy. The board committed to using the commission's recommendations to improve its nonprofit operations and mission alignment.
OpenAI released an updated Preparedness Framework with sharper focus on high-risk AI capabilities, introducing clearer criteria for prioritizing risks and new Research Categories for emerging threats like autonomous replication and sandbagging alongside established Tracked Categories for biological, chemical, and cybersecurity capabilities.
OpenAI is forming a commission of experts to guide the development of what it calls the world's best-equipped nonprofit, leveraging AI technology and financial resources for philanthropic impact in health, education, and public services. Commission members will be announced in April 2025, with insights submitted to the OpenAI Board within 90 days.
DeepMind has published an updated Frontier Safety Framework (v2.0) with stronger security protocols for frontier AI models, including new Critical Capability Level (CCL) security recommendations and enhanced approaches to deceptive alignment risks. The framework aims to prevent unauthorized model weight exfiltration and manage risks as AI systems become more powerful.
OpenAI appoints Scott Schools as Chief Compliance Officer to strengthen governance and navigate evolving AI regulatory environments while advancing responsible AI development.
OpenAI announced the establishment of an independent Board Safety and Security Committee chaired by Zico Kolter, with authority to oversee and delay model releases based on safety concerns. The company also introduced an integrated safety and security framework for model development and deployment, reorganizing teams to strengthen collaboration across research, safety, and policy functions.
Zico Kolter, a Professor and Director of the Machine Learning Department at Carnegie Mellon University, has joined OpenAI's Board of Directors and the Board's Safety and Security Committee. His expertise in AI safety, alignment, and robustness will contribute to OpenAI's governance and critical safety decisions.
OpenAI's Board of Directors has established a Safety and Security Committee led by Bret Taylor to oversee critical safety and security decisions as the company trains its next frontier model. The committee will evaluate and develop OpenAI's safety processes and safeguards over 90 days, then report recommendations to the full board for public disclosure.
OpenAI publishes a white paper on governing agentic AI systems, proposing definitions, lifecycle responsibilities, and baseline safety practices for autonomous AI agents. The paper addresses risks and indirect impacts of widespread agentic AI adoption while launching a research grant program.
OpenAI publishes details on its approach to frontier AI risks and announces progress on voluntary safety commitments made in July 2023, including the release of DALL-E 3 system card and the development of a new Preparedness Framework to manage catastrophic risks from advanced AI systems.