The GPAI Code approved by the European Commission on 10 July 2025 is more than a symbolic move—it’s a powerful indicator of how the EU expects AI model developers to behave under the looming obligations of the AI Act. Although voluntary, this Code of Practice is poised to become a key instrument for navigating the compliance landscape surrounding general-purpose AI models.
By laying out principles for transparency, safety, and intellectual property compliance, the GPAI Code provides a structured, EU-endorsed path forward for companies that want to build and deploy AI models responsibly—and strategically.
Why the GPAI Code's Approval Changes the Game
With the GPAI Code approved, a new soft law standard now exists to guide providers of foundational AI models—such as large language models and multimodal systems—through their responsibilities under Articles 53 and 55 of the EU AI Act.
Its approval signals that EU authorities, including the European AI Office and national regulators, are ready to treat adherence to the Code as a trusted shortcut to compliance. If adopted, the Code can significantly reduce regulatory friction: enforcement bodies are expected to focus on whether a company fulfills the Code’s terms rather than conducting case-by-case investigations.
Put simply, signing up gives developers a strategic advantage. Failing to do so might expose them to greater legal uncertainty and higher scrutiny.
Transparency: Documentation Becomes Your First Line of Defense
Transparency is the cornerstone of the approved GPAI Code. It demands that developers compile a detailed Model Documentation Form, outlining everything from data sources and training methodology to licensing terms and security identifiers. This documentation must be shared with downstream users and regulators upon justified request.
Critically, the Code emphasizes data lineage and provenance—meaning developers must explain how the data was gathered (e.g., scraped, licensed, or user-contributed) and document any filtering or preprocessing techniques applied.
This matters especially for models approaching systemic impact. Even open-source models must comply with transparency obligations if they are later deemed high-risk. The GPAI Code ensures transparency is not just a gesture of public relations, but a legal and operational necessity.
Safety & Risk: From Theoretical Talk to Practical Safeguards
With the GPAI Code approved, EU regulators now expect developers to go beyond risk awareness and put structured safety frameworks in place.
This includes:
- Conducting systemic risk assessments at every model development stage;
- Defining risk tiers and linking them to pre-set mitigation plans;
- Ensuring independent audits validate safety measures;
- Establishing continuous post-deployment monitoring.
The Code effectively introduces a lifecycle approach to risk—emphasizing anticipation, prevention, and accountability. Should a major malfunction or unintended use arise, developers must promptly notify the EU AI Office and national authorities and implement corrective actions.
By operationalizing safety, the Code reflects a growing maturity in how Europe treats AI governance: not as a compliance checkbox, but as an ongoing obligation.
Copyright Compliance: No More Excuses
One of the most anticipated aspects of the approved GPAI Code is its strong stance on intellectual property rights. It requires developers to:
- Adopt formal internal copyright policies;
- Avoid scraping content protected by paywalls or access restrictions;
- Exclude data from blacklisted pirate sites;
- Prevent AI systems from replicating protected works in outputs;
- Establish complaint channels for rights-holders to raise concerns.
The Commission’s goal is not just to minimize copyright disputes—but to embed respect for IP directly into the AI development pipeline. This positions Europe at the forefront of balancing innovation with content ownership.
What Happens Now?
The Code's approval is just the beginning. In the coming weeks, we expect:
- Formal endorsement by EU Member States;
- New guidelines on key definitions—clarifying who qualifies as a GPAI provider, what constitutes systemic risk, and how collaborative development projects are treated under the Act;
- A push for broad industry adoption.
Early signatories may benefit from fewer inspections, reduced documentation burdens, and perhaps even leniency in enforcement decisions—while those who don’t engage may find themselves navigating an uphill compliance battle.
The Code Is Voluntary, But The Pressure Isn’t
The GPAI Code approved by the European Commission is a turning point in AI governance. It doesn’t carry the force of law—but its political, legal, and reputational weight makes it hard to ignore.
Companies serious about AI governance, ethical development, and EU market access should treat this as a strategic imperative. The question is no longer whether you’ll need to comply with the AI Act—it’s whether you want to do so on your own, or with the support and clarity the Code provides.
And if your company is building or deploying general-purpose AI in Europe? The time to decide is now.
On a similar topic, you can read the article AI Act Compliance Deadline Approaching: Are You Ready? and DLA Piper's AI law journal Diritto Intelligente.