In less than 50 days, key provisions of the EU AI Act will become applicable, making it all the more critical for businesses to ensure compliance in their use of artificial intelligence systems.
Starting August 2, 2025, a critical group of obligations under the AI Act will become legally binding. These include:
- The designation of notifying and notified authorities for high-risk AI systems
- Requirements for General Purpose AI (GPAI) providers
- Foundational AI governance rules
- Penalties (with the exception of fines for GPAI model providers)
- Confidentiality obligations in the post-market monitoring context
These measures impact not only AI developers and providers, but also deployers, meaning any company that integrates or uses AI systems, especially generative AI, in its operations. With these obligations fast approaching, organizations must urgently ensure that their AI Act compliance strategy is fully implemented.
Why the AI Act Changes Everything
The AI Act represents a shift from soft law to enforceable obligations. It introduces a clear distinction between prohibited, high-risk, and general-purpose AI systems, each carrying specific duties. This structured approach requires companies to assess not only the AI tools they build, but also those they purchase, license, or simply integrate.
What's more, regulators will now have the ability to audit companies' AI practices, impose corrective measures, and initiate investigations into systems that pose significant risks to fundamental rights or safety.
A Structured Approach to AI Governance
To meet the demands of AI Act compliance, businesses need a robust governance model. It must align legal, technical, and operational stakeholders around a unified AI strategy.
1. Strategic Oversight from the Top
Governance begins with leadership. Executive teams should define how the organization uses AI, with guiding principles like trust, transparency, and risk minimization. These principles must then be translated into detailed internal policies and protocols by legal, risk, and compliance departments.
This top-down approach ensures that AI is aligned with the company's values while remaining adaptable to ongoing regulatory change.
2. Internal Governance Committees
Rather than relying on a single AI officer, companies are increasingly appointing multidisciplinary AI committees. These typically include representatives from legal, compliance, cybersecurity, data governance, and IT.
Their role is to evaluate risk, oversee internal use cases, and manage relationships with external AI vendors. These committees serve as the operational heart of AI Act compliance, making sure each deployment meets both legal and ethical standards.
3. Mapping and Classifying AI Use Cases
A surprising number of tools in everyday use, from automated HR platforms to creditworthiness scoring systems, qualify as AI under the Act's broad definition. Many of these may fall into the "high-risk" category without the organization realizing it.
The first step toward compliance is identifying every AI system in use and classifying it correctly. Failing to do so could result in deploying systems that trigger obligations the company is unaware of, which is particularly problematic in regulated industries or cross-border operations.
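By way of illustration only, such an inventory can be kept as structured records rather than scattered spreadsheets. The sketch below assumes simplified risk tiers and hypothetical system names and fields; it is not a classification method prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Simplified risk tiers loosely inspired by the AI Act's structure (illustrative)."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"


@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (fields are assumptions, not legal terms)."""
    name: str
    vendor: str
    business_owner: str
    purpose: str
    risk_tier: RiskTier
    processes_personal_data: bool = False
    notes: str = ""


# Hypothetical inventory entries for illustration.
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        vendor="ExampleVendor",
        business_owner="HR",
        purpose="Ranking job applications",
        risk_tier=RiskTier.HIGH_RISK,
        processes_personal_data=True,
        notes="Employment-related use cases are typical high-risk candidates.",
    ),
    AISystemRecord(
        name="Marketing copy generator",
        vendor="ExampleVendor",
        business_owner="Marketing",
        purpose="Drafting promotional text",
        risk_tier=RiskTier.MINIMAL_RISK,
    ),
]

# Surface the entries that will attract the heaviest obligations.
for record in (r for r in inventory if r.risk_tier is RiskTier.HIGH_RISK):
    print(f"{record.name} ({record.business_owner}): review against high-risk obligations")
```

Even a lightweight register like this gives the governance committee a single source of truth to work from when new obligations enter into force.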
4. AI Act Compliance Policies
To avoid regulatory risks, every organization must develop an internal AI policy, verify that it complies with the Act, and ensure that all internal and external processes align with its AI Act compliance strategy. If a business does not operationalize these requirements across departments, it risks non-compliance even when it believes it is fully aligned with the Act.
5. Risk Management and Controls
Once AI systems are mapped, they must be paired with controls that mitigate their associated risks. This includes:
- Human oversight
- Explainability mechanisms
- Bias detection
- Resilience and security protocols
- Vendor accountability through contracts
For high-risk AI systems, these controls are no longer optional; they are legal requirements under the AI Act.
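As a purely illustrative sketch, the controls listed above can be tracked per system in a simple control register. The system names, control labels, and statuses below are assumptions for illustration, not terminology from the Act.

```python
# Minimal sketch of a control register: which mitigation measures are in place
# for each inventoried AI system. All entries below are hypothetical.

REQUIRED_CONTROLS = {
    "human_oversight",
    "explainability",
    "bias_detection",
    "resilience_and_security",
    "vendor_contract_clauses",
}

control_register = {
    "CV screening assistant": {
        "human_oversight",
        "explainability",
        "vendor_contract_clauses",
    },
    "Credit scoring model": {
        "human_oversight",
        "bias_detection",
        "resilience_and_security",
        "vendor_contract_clauses",
    },
}

# Flag the gaps a compliance committee would need to close before deployment.
for system, implemented in control_register.items():
    missing = REQUIRED_CONTROLS - implemented
    if missing:
        print(f"{system}: missing controls -> {', '.join(sorted(missing))}")
```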
6. Continuous Monitoring
AI is not static, and neither are the risks it introduces. Companies need to maintain a dynamic governance model, which includes:
- Revalidating systems when software or data changes
- Tracking updates in the legal framework
- Revisiting risk assessments periodically
This process must be documented and auditable to meet the post-market monitoring and accountability expectations of regulators.
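A minimal sketch of what such a revalidation trigger could look like in practice is shown below; the review interval, system names, and change flags are assumptions chosen for illustration, not thresholds set by the Act.

```python
from datetime import date, timedelta

# Illustrative revalidation logic: a system is flagged for re-review when its
# software or data has changed since the last assessment, or when the
# assessment is older than a fixed review interval (assumed here: 180 days).

REVIEW_INTERVAL = timedelta(days=180)

systems = [
    {"name": "CV screening assistant", "last_assessed": date(2025, 1, 10), "changed_since_assessment": True},
    {"name": "Credit scoring model", "last_assessed": date(2024, 9, 1), "changed_since_assessment": False},
]


def needs_revalidation(system: dict, today: date) -> bool:
    """Return True when a documented re-assessment should be scheduled."""
    stale = today - system["last_assessed"] > REVIEW_INTERVAL
    return stale or system["changed_since_assessment"]


today = date(2025, 6, 20)
for s in systems:
    if needs_revalidation(s, today):
        print(f"{s['name']}: schedule revalidation and record it in the audit trail")
```

Keeping the output of such checks in an audit trail is what turns periodic reviews into the documented, auditable process regulators expect.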
The Strategic Value of Compliance
Organizations that treat AI Act compliance as a check-the-box exercise risk falling behind. Instead, those that build governance into their operations can gain long-term advantages: reducing exposure, improving internal efficiency, and increasing stakeholder trust.
Furthermore, with the EU AI Act expected to set the standard for AI regulation globally, companies that achieve compliance now will be better equipped to navigate upcoming frameworks in other jurisdictions.
With the countdown underway, the question is not whether the EU AI Act applies to your business (it likely does) but whether your business is prepared to meet its obligations.
On the topic, you can read the latest issues of our AI law journal available HERE and the presentation of our AI compliance tool available HERE.