The European Union has finally reached a political deal on the changes to the AI Act, ending weeks of uncertainty around one of the most controversial reforms of the EU’s digital rulebook.
After overnight negotiations, the European Parliament and the Council agreed on a compromise package that substantially reshapes how parts of the AI Act will apply in practice, particularly for industrial AI systems and products already regulated under sector-specific legislation.
The agreement is the first real political acknowledgment that the original AI Act created major implementation problems for companies operating across regulated sectors such as manufacturing, machinery, automotive, medical devices, and connected products.
And it arrives just months before the original 2 August 2026 deadline for high-risk AI obligations.
The AI Act Changes Were Driven by One Core Problem
The central issue behind the negotiations was not whether Europe should regulate AI. The real question was whether the AI Act, as originally drafted, had become unworkable for businesses already subject to extensive sectoral regulation. This became particularly evident for AI systems embedded into products already governed by European product safety frameworks.
Manufacturers were facing the prospect of complying simultaneously with:
- the AI Act;
- the Machinery Regulation;
- medical device rules;
- automotive safety frameworks;
- cybersecurity obligations;
- and broader product compliance requirements.
For many businesses, the result was not legal certainty, but overlapping and potentially duplicative compliance obligations. That is precisely where the negotiations became politically explosive.
Machinery Becomes the Biggest Winner of the Deal
The most significant aspect of the compromise concerns machinery. Under the AI Act agreement, machinery products will largely avoid direct overlap with the AI Act where equivalent obligations already exist under the Machinery Regulation.
Instead of imposing parallel compliance frameworks, AI-related health and safety requirements for machinery will now primarily be addressed through sectoral product legislation. This is a major shift in regulatory strategy.
The European Commission will still retain powers to adopt delegated acts introducing AI-specific health and safety requirements where needed, but the compromise clearly reflects pressure from industrial stakeholders, particularly Germany, to avoid regulatory duplication.
And Germany pushed extremely hard for this outcome. German Chancellor Friedrich Merz openly criticized what he described as an excessively restrictive regulatory framework for industrial AI, arguing that European competitiveness was at risk if industrial sectors were burdened with overlapping AI compliance obligations. In practice, the deal signals that Europe is starting to accept something businesses have been warning about for months:
AI regulation cannot operate in isolation from existing product regulation.
Different Compliance Deadlines Confirm the Shift
Another key part of the agreement is the postponement of certain AI Act obligations. The compromise introduces differentiated timelines depending on the type of AI system involved.
Under the new framework:
- high-risk AI systems involving biometrics, education, employment, law enforcement, border management, and critical infrastructure will apply from 2 December 2027;
- AI systems embedded into products will instead become subject to obligations from 2 August 2028.
This distinction is extremely important.
It effectively acknowledges that product-integrated AI systems involve significantly greater operational complexity than standalone AI applications. Many companies were still struggling to understand how AI Act conformity assessments would interact with existing product certification processes.
The reform is an attempt to redesign the operational interaction between the AI Act and Europe’s broader regulatory ecosystem.
The Deal Goes Beyond Industrial AI
While the debate largely focused on industrial sectors, the compromise also introduces several other important changes to the AI Act. One of the most politically visible reforms is the explicit prohibition of so-called “nudifier” applications and AI systems capable of generating child sexual abuse material or sexually explicit deepfake content involving identifiable individuals.
The prohibition will apply from 2 December 2026.
This reflects increasing political concern around generative AI misuse and synthetic content risks. The agreement also shortens the implementation timeline for transparency obligations relating to AI-generated content. Providers will now have only three months — instead of six — to implement transparency solutions once the relevant obligations become applicable.
Importantly, the compromise also restores the obligation to register high-risk AI systems in the EU database after earlier discussions had questioned whether that requirement would survive.
That point is particularly relevant because the registry is likely to become one of the main enforcement and transparency tools for regulators.
The Personal Data Point Could Become Highly Relevant
One aspect of the agreement that deserves particular attention from privacy professionals is the new wording relating to the use of personal data for bias detection and correction. The compromise expressly allows organizations to process personal data where strictly necessary to identify and mitigate bias in AI systems, provided appropriate safeguards are implemented.
This change could become one of the most operationally important elements of the reform. Many companies have been struggling with a fundamental tension: how do you properly test AI systems for discriminatory outcomes without processing sensitive or representative personal data?
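For privacy and compliance teams who want to see that tension concretely, here is a minimal, purely illustrative Python sketch of one common fairness test: comparing favourable-outcome rates across groups defined by a protected attribute. The column names and data are hypothetical, but the sketch shows why the metric simply cannot be computed without processing the protected attribute itself.

```python
# Purely illustrative sketch: computing a demographic parity gap.
# Note that the test requires the protected attribute ("gender" here),
# i.e. exactly the personal data the new wording would permit
# processing under strict safeguards.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           outcome_col: str,
                           protected_col: str) -> float:
    """Difference between the highest and lowest favourable-outcome
    rates across groups defined by the protected attribute."""
    rates = df.groupby(protected_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision data: 1 = favourable outcome (e.g. approved)
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "gender":   ["f", "f", "m", "m", "f", "m", "f", "m"],
})

print(f"Gap: {demographic_parity_gap(decisions, 'approved', 'gender'):.2f}")
# A gap of 0.00 would mean identical favourable-outcome rates per group.
```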
The agreement appears designed to create greater legal certainty on that point. But it also raises obvious questions regarding the interaction with GDPR principles, particularly purpose limitation and data minimization.
And this is another example of the broader issue emerging across the AI Act debate: AI regulation is increasingly colliding with existing European legal frameworks.
What Happens Next
The agreement is still provisional. It now needs formal endorsement by both the European Parliament and the Council before final adoption and publication in the Official Journal of the European Union.
The institutions are expected to finalize the process before 2 August 2026 to avoid legal uncertainty linked to the original entry into application of the high-risk AI provisions. But politically, the direction is now clear.
The European Union is not abandoning the AI Act. However, it is already recalibrating it before the most burdensome obligations have even entered into force. And that alone says a great deal about how difficult AI regulation becomes once it moves from political ambition to operational reality.
For businesses, the message is equally clear. The AI Act is no longer a static piece of legislation. It is rapidly evolving into a dynamic compliance framework where the interaction between AI rules and sector-specific regulation may become just as important as the AI Act itself.
What Companies Should Do Now
The biggest mistake companies can make at this stage is treating the AI Act changes as a reason to pause compliance projects. The opposite is true. The deal confirms that the regulatory framework is becoming more complex, not less. Businesses now need to reassess whether their AI systems fall directly under the AI Act, under sector-specific legislation, or under a combination of both.
For many organizations, particularly in manufacturing, automotive, medtech, connected products, and industrial technology, this exercise can no longer be handled only by legal teams in isolation.
Companies should now:
- map AI systems embedded into products and services (one illustrative way to structure such a register is sketched after this list);
- identify whether sectoral legislation may partially replace AI Act obligations;
- review conformity assessment and product compliance processes;
- reassess contractual allocation of compliance responsibilities across the supply chain;
- evaluate transparency obligations for AI-generated content;
- and align AI governance with GDPR, cybersecurity, product safety, NIS2, and DORA requirements where applicable.
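As a starting point for that mapping exercise, a structured inventory can help. The sketch below is hypothetical: the field names, regime labels, and example systems are illustrative assumptions rather than an official template, and the deadlines shown simply reflect the differentiated timelines described above.

```python
# Hypothetical AI system compliance register; field names and labels
# are illustrative, not drawn from the AI Act or any official template.
# Requires Python 3.10+ for the "date | None" union syntax.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    embedded_in_product: bool   # product-integrated vs standalone
    sectoral_regimes: list[str] = field(default_factory=list)
    ai_act_risk_class: str = "unclassified"
    compliance_deadline: date | None = None

# Product-embedded system: obligations expected from 2 August 2028
robot_vision = AISystemRecord(
    name="warehouse-robot-vision",
    embedded_in_product=True,
    sectoral_regimes=["Machinery Regulation"],
    ai_act_risk_class="high-risk (product-embedded)",
    compliance_deadline=date(2028, 8, 2),
)

# Standalone high-risk system: obligations expected from 2 December 2027
hiring_tool = AISystemRecord(
    name="cv-screening-tool",
    embedded_in_product=False,
    ai_act_risk_class="high-risk (employment)",
    compliance_deadline=date(2027, 12, 2),
)
```

However simple, a register like this makes it immediately visible which systems may be covered by sectoral legislation rather than, or in addition to, the AI Act.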
Another critical point is governance. The deal shows that the AI Act will continue evolving through delegated acts, implementing acts, and regulatory guidance. That means compliance cannot be approached as a one-off legal exercise completed before a deadline.
Businesses instead need governance structures capable of continuously monitoring regulatory developments and adapting internal controls accordingly. This is particularly important because many of the practical compliance expectations under the revised framework will likely emerge only over the next 12 to 24 months through secondary legislation and regulatory interpretation.
In practice, the companies that will manage the AI Act transition most effectively are not necessarily those waiting for complete legal certainty, but those already building operational AI governance frameworks that can evolve alongside the regulation.
For more updates on the EU AI Act, AI governance, and technology regulation, visit the AI section of GamingTechLaw.com.

