A Primer on the EU AI Act: What It Means for AI Providers and Deployers

OpenAI Blog News

Summary

OpenAI announces its decision to sign the EU AI Act's Code of Practice for General Purpose AI, which takes effect August 2, 2025, demonstrating commitment to compliance through industry-leading safety measures including its Preparedness Framework, System Cards, and Red Teaming Network.

We’re sharing a preliminary overview of the EU AI Act, including upcoming deadlines and requirements, with a particular focus on prohibited and high-risk use cases.
Original Article

Cached at: 04/20/26, 02:44 PM

# A Primer on the EU AI Act: What It Means for AI Providers and Deployers

Source: [https://openai.com/global-affairs/a-primer-on-the-eu-ai-act/](https://openai.com/global-affairs/a-primer-on-the-eu-ai-act/)

*Update from July 11, 2025: Following the publication of the final text of the Code of Practice for General Purpose AI, we’re sharing an overview of how we are approaching the entry into force of the provisions applicable to General Purpose AI models on August 2, 2025.*

*Last year, we published this primer on the EU AI Act to lay out preliminary insight into how we were preparing for the implementation of these new legal requirements. Since then, we have been actively involved in the implementation of the text by taking part in the elaboration of the Code of Practice for General Purpose AI, a framework for AI providers to comply with the EU AI Act. After months of collective effort alongside experts, civil society, and industry, a final Code has been published. Today, we are announcing our decision to sign the Code of Practice and to use it to demonstrate compliance with our relevant obligations under the EU AI Act.*

*By signing the Code, we are taking a concrete step in our broader plan for compliance with the EU AI Act. It reflects our commitment to ensuring continuity, reliability, and trust as the regulations take effect, while we continue to partner with European businesses and citizens, bringing them increasingly capable, safe, and secure AI models so they can reap the benefits of the AI revolution.*

*Signing the Code reinforces many of the industry-leading safety and transparency measures we have pioneered over the past several years. We were one of the first companies to publish a comprehensive safety and security protocol, our Preparedness Framework (2023), which outlines our approach to deploying frontier AI models safely. In keeping with our commitment to continuously review and improve our internal accountability and governance frameworks, we [published](https://openai.com/index/updating-our-preparedness-framework/) an updated Preparedness Framework in April 2025. As we continue to develop and deploy increasingly capable technology, we actively monitor and mitigate a broad range of novel risks and real-world safety concerns to keep our models reliable and secure, and we are constantly refining and improving these processes.*

- *We have long published detailed System Cards and technical documentation with our major releases that lay out what our models can and can’t do, what risks we’ve tested for, and where we’re still learning.*
- *The Safety Hub provides public access to safety evaluation results for our models.*
- *Our Red Teaming Network brings in external experts to pressure-test our models.*
- *The Model Spec offers a window into how we shape model behaviour to reflect human values and democratic norms.*

*Together, this work has been instrumental in setting security and safety standards for the industry and in informing the development of a workable Code of Practice based on industry best practices. Building safe and responsible AI is never finished. We will continue to iteratively improve our approach to safety to help ensure that our technology is used responsibly to benefit everyone, wherever they are in the world.*

*We will work closely with the EU AI Office, relevant authorities, and our customers as the AI Act is implemented in the coming months and years, so that we can collectively secure the benefits of AI for Europe’s society and economy.*

---

*Update: On September 25, 2024, we signed up to the three core commitments in the EU AI Pact:*

1. *Adopt an AI governance strategy to foster the uptake of AI in the organization and work towards future compliance with the AI Act;*
2. *carry out, to the extent feasible, a mapping of AI systems provided or deployed in areas that would be considered high-risk under the AI Act;*
3. *promote awareness and AI literacy among staff and other persons dealing with AI systems on their behalf, taking into account their technical knowledge, experience, education, and training, the context in which the AI systems are to be used, and the persons or groups of persons affected by their use.*

*We believe the AI Pact’s core focus on AI literacy, adoption, and governance targets the right priorities to ensure the gains of AI are broadly distributed. These commitments also align with our mission to provide safe, cutting-edge technologies that benefit everyone.*

---

The [EU AI Act](https://artificialintelligenceact.eu/) is a significant regulatory framework designed to manage the development, deployment, and use of AI across Europe. It has a substantial focus on safety, aiming to promote trustworthy AI adoption in Europe while protecting health, safety, and fundamental rights. It introduces new requirements based on the risks associated with AI systems, with a particular focus on high-risk and unacceptable-risk use cases, as well as special obligations for general purpose AI (GPAI) models and systems. While the legislative process is complete and the law will enter into force in August 2024, further guidance and implementing legislation will be required to define the scope of the law, especially as it applies to GPAI models like OpenAI’s. At OpenAI, we are committed to complying with the Act, not only because this is a legal obligation, but also because the goal of the law aligns with our mission to develop and deploy safe AI to benefit all of humanity.
We are proud to release models that are industry leading on both capabilities and safety. We believe in a balanced, scientific approach in which [safety measures](https://openai.com/index/openai-safety-update/) are integrated into the development process from the outset. Our teams span a wide spectrum of technical efforts tackling AI safety challenges, including evaluations of models under our [Preparedness Framework](https://openai.com/preparedness/) prior to their deployment, internal and external [red-teaming](https://openai.com/index/red-teaming-network/), post-deployment [monitoring](https://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors/) for abuse, [Bug Bounty](https://openai.com/index/bug-bounty-program/) and [Cybersecurity Grant](https://openai.com/index/openai-cybersecurity-grant-program/) programs, and contributions to [authenticity standards](https://openai.com/index/understanding-the-source-of-what-we-see-and-hear-online/), among others. We will work closely with the EU AI Office and other relevant authorities as the new law is implemented in the coming months, and we hope that the expertise we have built will help advance the objectives of the Act when it comes to deploying safe and beneficial AI. In this post, we provide an overview of some key topics in the AI Act, with a special focus on prohibited and high-risk use cases.
The AI Act principally applies to “AI systems,” which the Act defines as “a machine‑based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This definition is broadly consistent with the OECD’s definition of “AI systems” issued in 2023 and with the definition used in the Biden Administration’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Importantly, the AI Act differentiates between *providers* and *deployers* of AI systems. Providers are entities, like OpenAI, that develop an AI system or a general-purpose AI model. The term also covers entities that have an AI system or a general-purpose AI model developed and place it on the market, or that put the AI system into service under their own name or trademark, whether for payment or free of charge. Deployers are customers or partners who use these systems or models in their own applications, such as by integrating GPT‑4o into a specific use case. Although the majority of obligations under the AI Act fall on providers rather than deployers, it’s important to note that a deployer that integrates an AI model into their own AI system can become a provider under the Act, for example by using their own trademark on the AI system or by modifying it in ways that weren’t intended by the provider.

AI systems that pose neither unacceptable nor high risks face only limited requirements, such as transparency obligations. For example, the Act specifies that **individuals should be informed when they are interacting with an AI system** like a chatbot, and that artificially manipulated images, audio, or video content must be clearly labeled.
Most AI systems on the market are likely to fall under this category\.
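The tiered structure described above can be sketched, very loosely, in code. This is purely an illustrative simplification of the categories discussed in this post; the tier names and obligation summaries below are our own shorthand, not the Act's legal language:

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Simplified sketch of the AI Act's risk tiers (not a legal taxonomy)."""
    PROHIBITED = auto()  # unacceptable-risk practices: banned outright
    HIGH = auto()        # high-risk systems: the strictest requirements
    LIMITED = auto()     # e.g. chatbots, synthetic media: transparency duties
    MINIMAL = auto()     # where most AI systems on the market likely fall

def obligations(tier: RiskTier) -> list[str]:
    """Return a rough summary of what each tier entails under the Act."""
    return {
        RiskTier.PROHIBITED: ["use is banned"],
        RiskTier.HIGH: ["conformity requirements", "risk management", "human oversight"],
        RiskTier.LIMITED: ["disclose AI interaction", "label manipulated media"],
        RiskTier.MINIMAL: ["no specific obligations"],
    }[tier]

# A chatbot falls in the limited tier: users must be told they are
# interacting with an AI system, and synthetic media must be labeled.
print(obligations(RiskTier.LIMITED))
```

The point of the sketch is only that obligations scale with risk: the same system attracts very different duties depending on which tier its use case lands in.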

Similar Articles

The EU Code of Practice and future of AI in Europe

OpenAI Blog

OpenAI announces its intention to sign the EU's Code of Practice for General Purpose AI and launches the 'OpenAI for Countries European Rollout' to support Europe's AI development. The move aims to balance regulatory compliance with fostering innovation and economic growth across the European continent.

EU AI Act Compliance: How to Build It Into Your Product

Reddit r/artificial

The article discusses how companies can integrate EU AI Act compliance into their product development from the design phase, highlighting transparency, guardrails, and human oversight as key architectural changes.

The next chapter for AI in the EU

OpenAI Blog

OpenAI launches EU Economic Blueprint 2.0 with initiatives to accelerate AI adoption across Europe, including a program to train 20,000 SMEs in partnership with Booking.com, €500,000 in NGO grants for youth safety research, and new data on Europe's 'capability overhang'—the gap between AI capabilities and actual usage.

OpenAI’s EU Economic Blueprint

OpenAI Blog

OpenAI presents an EU Economic Blueprint proposing four pillars to drive AI-fueled growth in Europe: establishing foundational resources (chips, data, energy, talent), streamlining regulatory frameworks, maximizing AI adoption across sectors, and ensuring responsible development aligned with European values. The blueprint includes concrete initiatives like a 300% computing capacity increase by 2030, a €1 billion AI Accelerator Fund, and training 100 million Europeans in AI skills.

Accelerating AI adoption in Europe

OpenAI Blog

OpenAI and Allied for Startups released the Hacktivate AI report featuring 20 policy proposals to accelerate AI adoption across Europe, ahead of the European Commission's Apply AI Strategy launch. The initiative brought together 65 participants from EU institutions, governments, enterprises, and startups to design practical solutions for broader AI uptake and competitiveness.