The comparison between the EU AI Act and the US approach to artificial intelligence regulation is no longer just an academic legal question; it is rapidly becoming the defining factor shaping how artificial intelligence is developed, deployed, and governed globally.
What we are witnessing today is not merely regulatory evolution. It is the emergence of two competing models of AI governance that are already influencing investment decisions, compliance strategies, and technological design choices across industries. And for companies operating internationally, this divergence is no longer theoretical. It is operational.
AI Act vs US AI Policy Framework: two models, one global impact
The recently approved US AI Policy Framework highlights a deep structural divide between the US approach and the EU approach under the AI Act.
On one side, the European Union has introduced the AI Act, the first comprehensive, binding framework regulating artificial intelligence across sectors. On the other, the United States continues to rely on a combination of policy guidance, executive action, and sectoral enforcement. This divergence is not accidental. It reflects fundamentally different approaches to risk, innovation, and the role of regulation in shaping technological development. And the implications are already visible.
The EU AI Act: regulatory ambition meets implementation reality
The EU AI Act represents a landmark in technology regulation. It introduces a horizontal, risk-based framework that classifies AI systems into four categories:
- unacceptable risk (prohibited systems),
- high-risk systems,
- limited-risk systems,
- minimal-risk systems.
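For teams mapping an AI portfolio against these tiers, the classification logic can be pictured as a simple lookup. The sketch below is purely illustrative: the tier names follow the Act, but the example use cases and the conservative default are assumptions, not a legal mapping.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations before market access
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no specific obligations

# Illustrative (not legally authoritative) mapping of example use cases to tiers.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known use case; treat unknowns as HIGH pending legal review."""
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier reflects a conservative governance posture rather than anything the Act prescribes.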
However, what is particularly relevant today is not just the architecture of the AI Act, but how it is evolving.
Recent developments under the Digital Omnibus package, and the positions adopted by the European Parliament committees IMCO and LIBE, as well as the Council, indicate a clear trend: the EU is recalibrating the AI Act to ensure it is enforceable in practice.
Among the most relevant updates:
- the postponement of key obligations for high-risk AI systems, with timelines potentially shifting to 2027–2028;
- the strengthening of registration requirements in the EU database;
- stricter conditions for processing special categories of personal data;
- the introduction of new prohibited practices, including AI systems generating non-consensual intimate content.
This is not a step back.
It is regulatory refinement.
The EU is effectively acknowledging that overly complex compliance obligations risk undermining the effectiveness of the framework itself.
A compliance-by-design model
The EU approach embeds compliance directly into the lifecycle of AI systems.
For high-risk AI, companies are required to implement:
- risk management systems covering the entire lifecycle,
- data governance measures ensuring quality and bias mitigation,
- detailed technical documentation,
- human oversight mechanisms,
- conformity assessments prior to market access.
This is what can be defined as compliance by design. But this model comes at a cost.
It inevitably increases:
- time to market,
- operational burden,
- and governance complexity.
For many businesses, particularly those scaling AI solutions globally, this creates friction between innovation and compliance.
The US AI Policy Framework: flexibility and enforcement
If we shift the lens to the United States, the contrast is immediate. There is no equivalent to the AI Act.
Instead, the US relies on a fragmented but flexible ecosystem that includes:
- the National AI Policy Framework,
- executive orders (including those addressing AI safety and security),
- guidance from federal agencies such as the FTC and NIST,
- sector-specific regulation (e.g., healthcare, finance).
The underlying logic is fundamentally different. The US model prioritizes:
- innovation,
- speed to market,
- and technological leadership.
Regulation is largely ex post.
This means that enforcement typically occurs after harm or risk materializes, often through:
- consumer protection law,
- unfair or deceptive practices enforcement,
- competition law interventions.
This creates a more agile environment for developers, but it shifts legal and reputational risk downstream.
AI Act vs US AI Policy Framework: operational consequences for businesses
For global companies, the AI Act vs US AI Policy Framework divergence translates into a concrete operational challenge. They cannot choose one model. They must navigate both simultaneously. This requires a fundamental shift in how AI governance is structured internally.
In practice, companies need:
- centralized AI governance frameworks capable of managing multi-jurisdictional obligations;
- flexible compliance architectures adaptable to both ex ante and ex post regulatory models;
- robust risk assessment processes integrated into product development.
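To make the contrast concrete, a centralized governance layer might gate deployments differently depending on whether a jurisdiction follows an ex ante or an ex post model. The sketch below is an illustration only: the jurisdiction labels and check names are assumptions, not a compliance tool.

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentReview:
    system_name: str
    jurisdictions: list
    completed_checks: set = field(default_factory=set)

# Illustrative pre-deployment checks per regulatory model (assumptions, not legal advice).
EX_ANTE_CHECKS = {"risk_assessment", "technical_documentation", "conformity_assessment"}
EX_POST_CHECKS = {"risk_assessment", "incident_response_plan"}  # lighter gate, heavier monitoring

REQUIREMENTS = {"EU": EX_ANTE_CHECKS, "US": EX_POST_CHECKS}

def missing_checks(review: DeploymentReview) -> dict:
    """Return the outstanding checks per jurisdiction before deployment can proceed."""
    return {
        j: sorted(REQUIREMENTS[j] - review.completed_checks)
        for j in review.jurisdictions
        if REQUIREMENTS[j] - review.completed_checks
    }
```

The design point is that a single review object feeds both regimes: the EU gate is heavier before launch, while the US gate is lighter up front and relies on post-deployment monitoring.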
The absence of such governance is no longer sustainable. In many organizations, AI tools are being adopted at speed and embedded into core business processes without a full assessment of:
- regulatory exposure,
- data protection risks,
- liability implications.
And this is where the real issue emerges. When risks are identified after deployment:
- remediation becomes significantly more expensive,
- operational disruption increases,
- reputational damage becomes a tangible threat.
From a legal standpoint, this is the moment where lack of governance translates into real liability.
Is regulatory divergence slowing innovation?
There is a common narrative that regulation—particularly in Europe—slows innovation. But this argument oversimplifies the issue. The absence of regulation does not eliminate risk. It reallocates it. Often to companies. And often in less predictable ways.
In reality, well-designed regulatory frameworks can enhance innovation by providing:
- legal certainty,
- clear compliance pathways,
- and trust in AI systems.
The real challenge is not regulation per se. It is fragmentation. Because fragmentation increases compliance costs, creates uncertainty, and forces companies to build parallel governance models.
What comes next for global AI governance
Looking ahead, several developments will shape the trajectory of AI governance.
In the EU:
- further adjustments under the Digital Omnibus package,
- the gradual implementation of the AI Act,
- the development of technical standards and guidance.
In the US:
- continued reliance on policy frameworks and agency enforcement,
- potential sector-specific legislative initiatives,
- increasing coordination between federal and state authorities.
At the global level, we are likely to see:
- regulatory competition between jurisdictions,
- gradual convergence driven by market pressure,
- and the emergence of de facto global standards driven by large technology providers.
The AI Act vs US AI Policy Framework debate ultimately leads to a broader conclusion. The competitive advantage in AI is no longer determined solely by technological capability. It is increasingly defined by governance. Companies that invest early in structured, scalable AI governance frameworks will not only mitigate legal risks but also gain a strategic advantage in navigating regulatory complexity.
Those that do not will find themselves reacting to regulation rather than shaping it. And in a fragmented regulatory landscape, that is a position that is becoming increasingly difficult to sustain.

