On 26 March 2026, the European Parliament approved changes to the EU AI Act, marking a critical step in reshaping the regulatory framework for artificial intelligence under the Digital Omnibus package.
With this vote, the Parliament has formally opened the trilogue phase with the Council of the EU and the European Commission. More importantly, it confirms a direction that is now becoming clear: the objective is not to weaken the AI Act, but to make it more operational without altering its underlying logic.
EU AI Act Update: Delayed Obligations for High-Risk AI Systems
One of the most significant elements of these changes to the EU AI Act is the postponement of obligations for high-risk AI systems.
European institutions have aligned on the need to delay enforcement. However, the Commission’s proposal to condition early application on the adoption of harmonised standards and supporting tools has not been accepted.
Instead, the Parliament and the Council opted for fixed deadlines:
- 2 December 2027 for standalone high-risk AI systems under Annex III
- 2 August 2028 for AI systems embedded in products regulated under Annex I
This approach is not merely about timing. It is fundamentally about ensuring legal certainty and predictability—two elements that are essential for businesses navigating a highly complex and evolving regulatory landscape.
Nudification Apps: Prohibition Confirmed with Operational Flexibility
Another key aspect of the EU AI Act update concerns so-called “nudification” systems.
The Parliament proposes to explicitly prohibit AI systems capable of generating sexually explicit content depicting identifiable individuals without their consent.
However, the wording introduces two important clarifications:
- The development of such capabilities is not prohibited per se
- The ban does not apply where effective technical safeguards prevent misuse
This creates a delicate balance. On the one hand, there is room for technological innovation. On the other, the risk exposure remains extremely high, especially considering potential fines of up to €35 million or 7% of global annual turnover, whichever is higher.
The real challenge will be the assessment of the adequacy of technical safeguards, which is likely to become a key enforcement battleground.
Streamlining Rules for Regulated Products
The update of the EU AI Act also addresses AI systems integrated into products already subject to EU harmonisation legislation (Annex I, Section A).
The Parliament proposes aligning these systems, from a compliance perspective, with those listed under Annex I, Section B.
The objective is clear: reduce regulatory duplication resulting from the overlap between the AI Act and sector-specific legislation such as:
- medical devices
- radio equipment
- toys
- in vitro diagnostic devices
This does not mean a reduction in obligations for high-risk AI systems. Instead, it reflects an effort to simplify compliance pathways while maintaining regulatory standards.
Companies will still be required to assess and ensure compliance—but within a more coherent framework.
AI Literacy: A More Flexible but Still Critical Obligation
Another important development in this update of the EU AI Act concerns AI literacy.
The Parliament appears to follow the direction suggested by the EDPB and EDPS by maintaining the obligation, but in a more flexible form.
Providers and deployers will be required to support the improvement of AI literacy among employees and other relevant stakeholders, albeit with less prescriptive requirements.
This should not be underestimated.
A lack of adequate AI expertise can directly impact a company’s ability to comply with other obligations under the AI Act—and, more broadly, its capacity to effectively leverage AI technologies.
Registration Obligations and Use of Sensitive Data
On registration obligations, the Parliament aligns with the Council in maintaining the requirement to register high-risk AI systems in the EU database—even where providers consider that the system does not fall within that category under Article 6(3).
However, the information burden is reduced, reflecting a more pragmatic approach.
Similarly, regarding the processing of sensitive data for bias detection and correction:
- The range of authorised entities is expanded
- The processing is strictly limited to cases where necessary to address risks affecting health, safety, fundamental rights, or discrimination
- The principle of strict necessity remains unchanged
This reflects a cautious approach, consistent with the position of European data protection authorities.
Conclusion: Simplification Without Lowering the Bar
The message emerging from this EU AI Act update is clear.
Both the Parliament and the Council are working toward simplification—not deregulation.
The ongoing adjustments are aimed at making the framework more workable, not less demanding. In practical terms, this means:
- less regulatory friction
- but not less accountability
For companies, this is a critical point.
The potential delay of deadlines and the rationalisation of obligations should not be interpreted as an opportunity to wait. On the contrary, they represent an opportunity to build robust and integrated AI governance frameworks in anticipation of increasing regulatory expectations.
For EU lawmakers, the challenge is equally complex: striking the right balance between ambition and enforceability.
Failure to reach an agreement would have immediate consequences. In that scenario, existing obligations would remain unchanged—and requirements for standalone high-risk AI systems could start applying as early as August 2026, without any postponement.
This is precisely why businesses cannot afford to take a passive approach.
Preparation must start now.
Because when the AI Act becomes fully operational, the real risk will not be non-compliance—it will be being unprepared for a regulatory system that is already in motion.
If you want to know more about how the EU approach to AI diverges from the US approach, you can read the article “AI Act vs US AI Policy Framework: Global AI Governance at a Crossroads”.

