The article argues that Spain's creation of a national AI regulator is causing top AI talent to prefer stable government jobs over high-risk startups, potentially hindering the country's innovation ecosystem.
VP JD Vance held a closed-door call with top tech executives including Elon Musk, Sam Altman, and Dario Amodei to warn about AI cybersecurity threats, prompted by Anthropic's unreleased model 'Mythos' that demonstrated elite hacker-level ability to autonomously find and exploit security vulnerabilities. The White House is now considering an executive order for oversight of advanced AI models, marking a significant reversal of the administration's previously hands-off AI policy.
The U.S. government has established voluntary pre-release security review agreements with every major domestic AI lab, marking a significant step in federal oversight of frontier model development. The policy aims to proactively assess national security risks before powerful AI systems are publicly deployed.
The U.S. and China are weighing AI crisis-control measures ahead of a summit at which Trump and Xi may discuss AI risks.
Minnesota Governor Tim Walz signed a pioneering law aimed at preventing the use of AI to generate or distribute child sexual abuse material.
Chen Tianqiao draws on the Manus episode to outline the mindset and requirements for founding an AI company that straddles multiple legal systems amid rapidly evolving regulation, geopolitics, and public scrutiny.
A social-media post laments AI models being blocked in China and calls for global unity in the AI era.
OpenAI submitted a response to the UK's copyright consultation advocating for a broad text and data mining (TDM) exception to support AI innovation and competitiveness. The company argues that clear data access policies are essential for the UK to establish itself as Europe's AI leader while balancing creator and rightholder concerns.
OpenAI published an economic blueprint outlining its vision for US AI leadership, emphasizing infrastructure (chips, data, energy, talent), free-market competition, and sensible regulations to attract global investment and counter Chinese influence. The initiative includes a January 30 Washington DC event and a nationwide 'Innovating for America' program to drive AI economic benefits.
OpenAI submitted comments to the NTIA advocating for increased U.S. data center investment as critical to maintaining American AI leadership, citing potential economic benefits of $17–20 billion in state GDP and 40,000 jobs per 5 GW facility.
OpenAI appoints Scott Schools as Chief Compliance Officer to strengthen governance and navigate evolving AI regulatory environments while advancing responsible AI development.
OpenAI proposes a regulatory framework for 'frontier AI' models that pose potential public safety risks, advocating for standard-setting processes, registration/reporting requirements, and compliance mechanisms including pre-deployment risk assessments and post-deployment monitoring.
Sam Altman responds to Senate questions on AI regulation, advocating for balanced legislation, voluntary safety commitments, and registration/licensing requirements for highly capable foundation models. OpenAI details its safety evaluation approaches and System Card methodology for assessing dangerous capabilities in models like GPT-4.
Sam Altman testified before the U.S. Senate Judiciary Committee about OpenAI's work in AI development, safety practices, and governance structure. He outlined OpenAI's unique nonprofit-controlled structure designed to ensure safe and beneficial AI development while advocating for collaborative government regulation.