EU AI Act Compliance: How to Build It Into Your Product

Reddit r/artificial News

Summary

The article discusses how companies can integrate EU AI Act compliance into their product development from the design phase, highlighting transparency, guardrails, and human oversight as key architectural changes.


Cached at: 05/15/26, 07:05 PM

Source: [https://shiftmag.dev/how-developers-should-build-ai-tools-so-the-eu-doesnt-lose-it-9482/](https://shiftmag.dev/how-developers-should-build-ai-tools-so-the-eu-doesnt-lose-it-9482/)

The August 2026 deadline for the [EU AI Act](https://artificialintelligenceact.eu/) is getting close, and companies and developers building AI products are starting to feel it. High-risk AI systems need to be compliant by then, and the ones doing it well aren’t treating it as a last-minute legal scramble. They’re **building compliance in from the start**. We sat down with **Ervin Jagatic** (AI Business Unit Director, Infobip) to talk about what that actually looks like at Infobip, and why compliance-by-design is turning into something engineers think about, not just lawyers.

## Compliance starts in the design phase

AI Act compliance doesn’t start at deployment. Ervin is clear on this: **it has to enter during system architecture, before a single line of agent code is written**:

> Compliance enters during the design phase – system architecture, data flow planning. Every layer of our AI Agents product, from planning to memory to tool execution, needs to be designed with traceability and human oversight in mind. We can’t bolt that on after the orchestrator is already coordinating multiple sub-agents autonomously.

## The AI Act is changing product development in 3 ways

That shift has already changed how Infobip’s teams design and ship AI-powered features. Ervin points to three major changes that came directly from the AI Act.

### 1. Transparency and auditability

Transparency is the first.
Infobip’s **AI Agents documentation is explicit**: “you cannot script exact responses” – agents “generate responses dynamically.” That unpredictability is exactly why the company expanded its logging and analytics infrastructure, Ervin explains:

> The AI Act’s transparency obligations pushed us to build comprehensive logging into our Insights and Analytics layer. Every agent execution now produces detailed logs – requests, responses, processing steps. That’s not just good engineering, it’s a direct response to auditability requirements.

### 2. Explicit guardrails instead of assumptions

The second shift relates to behavioral boundaries and guardrails. Infobip now **requires customers to define capability boundaries, mandatory restrictions, and compliance rules directly inside every agent’s system prompt**, Ervin points out:

> Our own documentation warns that if you do not explicitly define these constraints, the agent makes assumptions. That design philosophy, forcing explicit guardrails rather than relying on implicit model behavior, comes directly from the Act’s emphasis on risk mitigation by design.

### 3. Human oversight is a part of the architecture

The third shift is human oversight – not as an external policy layer, but **built directly into the product architecture**. Ervin explains:

> [AgentOS](https://www.infobip.com/agentos) uses a human-in-the-loop model where complex issues are escalated from AI agents to human agents. We are talking about a core architectural decision that applies human oversight requirements while also improving the product.

## Why compliance-by-design is becoming the standard

Ervin believes compliance-by-design is quickly becoming **the new industry standard**, particularly for teams building enterprise-grade AI systems:

> For developers and ML engineers at Infobip, compliance-by-design means several practical things.
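The per-execution audit logging Ervin describes, capturing requests, responses, and processing steps, can be sketched in a few lines of Python. This is a minimal illustrative sketch; the `AgentExecutionRecord` structure and its field names are assumptions for the example, not Infobip’s actual logging schema:

```python
import json
import time
import uuid
from dataclasses import asdict, dataclass, field

@dataclass
class AgentExecutionRecord:
    """One auditable record per agent execution: request, response, steps."""
    request: str
    response: str = ""
    processing_steps: list = field(default_factory=list)
    execution_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    started_at: float = field(default_factory=time.time)

    def add_step(self, name: str, detail: str) -> None:
        # Record each intermediate step (intent detection, tool call,
        # sub-agent handoff, ...) with a timestamp.
        self.processing_steps.append(
            {"step": name, "detail": detail, "at": time.time()}
        )

    def to_json(self) -> str:
        # Structured evidence a reviewer or auditor can query later.
        return json.dumps(asdict(self), default=str)

# Usage: wrap an agent call so every execution leaves a trace.
record = AgentExecutionRecord(request="What is my order status?")
record.add_step("intent_detection", "classified as order_status")
record.add_step("tool_call", "orders_api.lookup(order_id=...)")
record.response = "Your order shipped yesterday."
print(record.to_json())
```

The design point is that the log is structured (one record per execution, one entry per step), so it can back the kind of "show me how this system made this decision" query the article mentions, rather than being free-text log lines.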
> It means every AI agent we build has a defined architecture where an orchestrator coordinates sub-agents, each with explicit scope, tools, and behavioral rules.

It also **changes how engineering teams think about data**. “It means our engineers think about data lineage and provenance from the moment they design a training pipeline, not because someone from legal asked them to, but because the architecture demands it,” Ervin points out.

To support that approach, Infobip **invested heavily in tooling and analytics infrastructure** that now serves both operational and regulatory purposes, Ervin said:

> Our Insights and Analytics platform is our compliance infrastructure. When a regulator asks ‘show me how this AI system made this decision,’ we need to answer that question with structured evidence, not anecdotes.

## Risk assessment depends on the use case

Internally, the company approaches risk assessment through a framework closely aligned with the **AI Act’s four-tier classification model**: unacceptable, high, limited, and minimal risk. However, Ervin notes that Infobip applies this framework at the feature level rather than only at the system level:

> This is important because a platform like Infobip’s serves vastly different use cases. An AI gamification tool for lead generation on WhatsApp is a fundamentally different risk profile than an AI agent that handles authentication.

The company **evaluates risk based on several factors**, including the sensitivity of the data involved, the autonomy of the AI component, and the intended use case, Ervin explains:

> Our internal process follows a lifecycle approach. During identification, we map known and foreseeable risks, including risks from reasonably foreseeable misuse. During estimation, we assess probability and severity. During mitigation, we implement design controls, testing procedures, and human oversight.
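The four-tier model and the feature-level factors discussed above (data sensitivity, autonomy of the AI component, intended use case) could be sketched roughly like this. The scoring rules and use-case lists are purely illustrative assumptions, not the Act’s legal criteria and not Infobip’s internal process:

```python
from enum import Enum

class RiskTier(Enum):
    # The AI Act's four-tier classification referenced in the article.
    UNACCEPTABLE = 4
    HIGH = 3
    LIMITED = 2
    MINIMAL = 1

def classify_feature(handles_sensitive_data: bool,
                     autonomous_actions: bool,
                     use_case: str) -> RiskTier:
    """Toy feature-level classification from the factors the article lists."""
    prohibited = {"social_scoring"}                      # illustrative only
    high_risk_use_cases = {"authentication", "credit_decision"}  # illustrative
    if use_case in prohibited:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk_use_cases:
        return RiskTier.HIGH
    if handles_sensitive_data and autonomous_actions:
        return RiskTier.HIGH
    if autonomous_actions:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The article's own contrast: lead-gen gamification vs. authentication.
print(classify_feature(False, False, "lead_generation"))  # RiskTier.MINIMAL
print(classify_feature(True, True, "authentication"))     # RiskTier.HIGH
```

The point of classifying per feature rather than per platform, as the interview argues, is that the same function returns different tiers for different use cases running on identical infrastructure.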
**Monitoring continues after deployment** through analytics infrastructure designed for drift detection, incident investigation, and performance tracking. For enterprise customers, risk assessment also becomes a collaborative process between Infobip and client compliance teams.

> A bank using our AI agents to automate customer support has different risk considerations than a retail brand using the same technology for product recommendations. The platform is the same; the risk profile is not.

## August 2026 is approaching…

As August 2026 closes in, Ervin says the conversation has shifted:

> The question is no longer whether to integrate compliance into product development. The question is whether you’ve built the infrastructure to do it at speed.
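The human-in-the-loop model described earlier for AgentOS, where complex issues escalate from AI agents to human agents, can be sketched as a simple routing function. The confidence threshold and the always-escalate topic list are hypothetical illustrations, not AgentOS internals:

```python
from dataclasses import dataclass

@dataclass
class AgentReply:
    text: str
    confidence: float  # model's self-reported confidence, 0..1
    topic: str

# Topics that always require a human, regardless of confidence
# (illustrative; a real deployment would define these per use case).
ALWAYS_ESCALATE = {"complaint", "account_closure", "fraud"}
CONFIDENCE_THRESHOLD = 0.75  # assumed tunable threshold

def route(reply: AgentReply) -> str:
    """Return 'ai' to send the agent's reply, or 'human' to escalate."""
    if reply.topic in ALWAYS_ESCALATE:
        return "human"
    if reply.confidence < CONFIDENCE_THRESHOLD:
        return "human"
    return "ai"

print(route(AgentReply("Your order shipped.", 0.92, "order_status")))      # ai
print(route(AgentReply("I can close that.", 0.95, "account_closure")))     # human
```

Treating escalation as an explicit routing decision in the architecture, rather than a policy document, is the "core architectural decision" framing the interview describes.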

Similar Articles

A Primer on the EU AI Act: What It Means for AI Providers and Deployers

OpenAI Blog

OpenAI announces its decision to sign the EU AI Act's Code of Practice for General Purpose AI, which takes effect August 2, 2025, demonstrating commitment to compliance through industry-leading safety measures including its Preparedness Framework, System Cards, and Red Teaming Network.

The EU Code of Practice and future of AI in Europe

OpenAI Blog

OpenAI announces its intention to sign the EU's Code of Practice for General Purpose AI and launches the 'OpenAI for Countries European Rollout' to support Europe's AI development. The move aims to balance regulatory compliance with fostering innovation and economic growth across the European continent.

OpenAI’s EU Economic Blueprint

OpenAI Blog

OpenAI presents an EU Economic Blueprint proposing four pillars to drive AI-fueled growth in Europe: establishing foundational resources (chips, data, energy, talent), streamlining regulatory frameworks, maximizing AI adoption across sectors, and ensuring responsible development aligned with European values. The blueprint includes concrete initiatives like a 300% computing capacity increase by 2030, a €1 billion AI Accelerator Fund, and training 100 million Europeans in AI skills.

Auxilius.ai

Product Hunt

Auxilius.ai is a product that converts compliance requirements into code using agentic AI, streamlining compliance automation for enterprises.

Why responsible AI development needs cooperation on safety

OpenAI Blog

OpenAI publishes a policy research paper identifying four strategies to improve industry cooperation on AI safety norms: communicating risks/benefits, technical collaboration, increased transparency, and incentivizing standards. The analysis addresses how competitive pressures could lead to under-investment in safety and proposes mechanisms to align incentives toward safe AI development.