UK firms should take steps to limit risks from frontier AI models
Summary
UK firms are advised to implement measures that mitigate risks from frontier AI models, reflecting growing regulatory and safety concerns across the industry.
Similar Articles
Frontier AI regulation: Managing emerging risks to public safety
OpenAI proposes a regulatory framework for 'frontier AI' models that pose potential public safety risks, advocating for standard-setting processes, registration/reporting requirements, and compliance mechanisms including pre-deployment risk assessments and post-deployment monitoring.
Strengthening our Frontier Safety Framework
DeepMind published the third iteration of its Frontier Safety Framework, expanding risk domains to include harmful manipulation and misalignment risks, with refined risk assessment processes and enhanced governance protocols for advanced AI models.
OpenAI’s Approach to Frontier Risk
OpenAI publishes details of its approach to frontier AI risks and reports progress on the voluntary safety commitments it made in July 2023, including the release of the DALL-E 3 system card and the development of a new Preparedness Framework to manage catastrophic risks from advanced AI systems.
Frontier risk and preparedness
OpenAI announced the winners of its Preparedness Challenge, which solicited submissions identifying unique risks posed by frontier AI systems. The top ten entries highlighted concerns including financial system manipulation, information leakage, medical harm, cyberattacks, and persuasion-based threats, with 70% of all entries emphasizing AI's potential to enhance malicious persuasion capabilities.
Frontier Model Forum updates
The Frontier Model Forum announces the creation of a new AI Safety Fund with over $10 million in initial funding from major AI companies (Anthropic, Google, Microsoft, OpenAI) and philanthropic partners to support independent AI safety research. The fund will focus on developing model evaluations and red-teaming techniques to assess frontier AI systems' dangerous capabilities.