The European Parliament's Committee on the Internal Market and Consumer Protection (IMCO) and Committee on Civil Liberties, Justice and Home Affairs (LIBE) have now formally supported the postponement of certain obligations under the EU AI Act, according to the latest official press release. The proposal focuses on delaying the application of specific requirements—particularly those affecting high-risk AI systems—with the stated objective of ensuring that both companies and supervisory authorities are adequately prepared for implementation.
This development is significant.
It represents one of the first concrete adjustments to the AI Act timeline and signals that the transition from legislation to implementation is proving more complex than anticipated.
What exactly has been agreed
The committees’ position supports a targeted postponement, rather than a general delay of the AI Act.
The rationale is clear:
- companies need more time to operationalize compliance
- regulators need to build enforcement capabilities
- additional clarity is required on how certain provisions should be applied in practice
In particular, the postponement concerns obligations linked to high-risk AI systems, which remain the most demanding and complex part of the AI Act framework.
Why high-risk AI remains the central challenge
The AI Act is built around a risk-based approach, with high-risk AI systems subject to stringent requirements, including:
- risk management systems
- data governance and quality controls
- technical documentation and record-keeping
- human oversight mechanisms
- conformity assessments
However, in practice, companies are encountering difficulties in applying these rules.
The main challenge is not compliance itself, but qualification.
Determining whether a system qualifies as high-risk often requires:
- interpreting broadly drafted legal provisions
- assessing use cases that fall into grey areas
- understanding how AI components interact within complex products or services
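As a purely illustrative sketch of how the qualification step above is often operationalized internally, the questions can be turned into a first-pass screening checklist. The area names below loosely echo the Act's Annex III use-case areas, but the structure, thresholds, and field names are hypothetical assumptions, not a legal test — a positive flag means "escalate to counsel", never "this system is high-risk":

```python
from dataclasses import dataclass, field

# Illustrative screening areas; loosely inspired by Annex III of the
# AI Act, but NOT a legal checklist.
SENSITIVE_AREAS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    intended_purpose: str
    areas_of_use: set = field(default_factory=set)

def needs_legal_review(system: AISystem) -> bool:
    """Flag systems touching a sensitive area for detailed assessment.

    This is a triage step only: a True result routes the system to
    legal review; it does not itself classify the system as high-risk.
    """
    return bool(system.areas_of_use & SENSITIVE_AREAS)

cv_screener = AISystem(
    name="CV ranking tool",
    intended_purpose="rank job applicants",
    areas_of_use={"employment"},
)
print(needs_legal_review(cv_screener))  # prints True
```

The point of such a triage layer is precisely the grey-area problem described above: it cheaply surfaces candidates for expert review instead of asking every business team to interpret the legal provisions themselves.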
This creates legal uncertainty, which in turn slows down implementation.
A regulatory reality check
The postponement reflects a broader reality: regulatory ambition has outpaced operational readiness.
This applies not only to companies, but also to regulators.
Supervisory authorities across the EU are still in the process of:
- developing technical expertise
- coordinating enforcement approaches
- issuing guidance to ensure consistent interpretation
Without this preparation, there is a tangible risk that enforcement of the AI Act would become fragmented across Member States, undermining the objective of harmonization.
The risk for businesses: a false sense of security
One of the key risks associated with this postponement is how it will be interpreted by the market.
There is a natural tendency to see delays as additional time to prepare.
In reality, the situation is more nuanced.
AI adoption within organizations is accelerating rapidly and often occurs in a decentralized manner, driven by business needs rather than compliance considerations.
This creates a structural risk.
By the time legal or regulatory issues are identified:
- AI systems may already be embedded in core processes
- remediation may require significant operational changes
- costs may increase substantially
- reputational exposure may become material
In this context, postponing regulatory obligations does not reduce risk.
It may, in fact, increase it.
AI governance as the real differentiator
This is where the concept of AI governance becomes central.
The postponement does not change the direction of travel.
The AI Act will apply, and expectations will remain high.
The real differentiator for organizations will be their ability to:
- identify and map AI systems early
- assess legal and ethical risks before deployment
- implement scalable governance frameworks
- ensure accountability across functions
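To make the mapping and accountability steps above concrete, here is a minimal sketch of what an AI system inventory might record. All field names and the review-age threshold are illustrative assumptions; nothing here is prescribed by the AI Act:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory entry for an internal AI register.
# Field names are illustrative, not mandated by the AI Act.
@dataclass
class InventoryEntry:
    system_name: str
    business_owner: str      # accountability across functions
    deployment_status: str   # e.g. "pilot", "production"
    risk_assessed: bool      # legal/ethical review completed?
    last_reviewed: date

def overdue_reviews(inventory, today, max_age_days=365):
    """Return entries never assessed or reviewed too long ago."""
    return [
        e for e in inventory
        if not e.risk_assessed
        or (today - e.last_reviewed).days > max_age_days
    ]

inventory = [
    InventoryEntry("CV screener", "HR", "production", True, date(2025, 1, 10)),
    InventoryEntry("demand forecaster", "Sales", "pilot", False, date(2025, 6, 1)),
]
flagged = overdue_reviews(inventory, today=date(2025, 7, 1))
# the unassessed pilot system is flagged for review
```

Even a register this simple addresses the decentralized-adoption risk described earlier: systems cannot silently embed themselves in core processes if deployment requires an inventory entry with a named owner.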
Companies that adopt a proactive approach will be better positioned not only for compliance, but also for managing broader business risks.
A shift in the regulatory narrative
This development also signals a shift in how the AI Act is being approached at the political level.
The focus is no longer only on adopting rules, but on ensuring that those rules are implementable and enforceable in practice.
This raises important questions about the future of the framework:
- Will further adjustments be introduced?
- How will consistency across Member States be ensured?
- To what extent will guidance shape the practical application of the rules?
These questions will be central in the coming months.
Conclusion
The European Parliament’s support for postponing certain AI Act obligations should not be interpreted as a weakening of the regulatory framework. It is better understood as a necessary adjustment to align legal requirements with operational and enforcement realities. For businesses, the key message is clear: waiting is not a strategy. The postponement provides time—but also increases scrutiny on how that time is used.
On a similar topic, you can read the article “AI Liability under the Defective Products Directive: Software and AI as Products from 2026”.

