On 17 September 2025, the Italian Senate approved a landmark law on artificial intelligence (AI), making Italy the first EU country to enact a national law specifically regulating AI while aligning with the EU AI Act.
This law, Senate Act No. 1146-B, brings with it ambition, opportunity—and significant compliance challenges, especially for lawyers, public bodies, and professionals. Below, we explore what the law says, how it relates to the EU framework, and what legal professionals must do.
Italy AI Law: What the New National Regulation Covers
Italy’s new AI law:
- Requires traceability, human oversight, and safety for AI decisions across sectors including healthcare, the workplace, education, justice, public administration, and sport.
- Imposes parental consent for children under 14 to access AI systems.
- Criminalizes misuse of AI-generated or manipulated content (e.g., deepfakes), with penalties of 1–5 years in prison if harm is caused, and provides tougher penalties for identity theft, fraud, and similar offences.
- Clarifies copyright rules: works made with AI can be protected only if there is a demonstrable, creative human contribution; it also limits AI-driven text and data mining to non-copyrighted materials or authorised scientific research.
- Assigns oversight of the law to AGID (Agenzia per l’Italia Digitale / Agency for Digital Italy) and ACN (National Cybersecurity Agency).
- Authorises up to €1 billion of funding to support companies in AI, cybersecurity, telecommunications, and related technologies.
This is a broad, ambitious package—but one that overlaps with European law in ways that practitioners must carefully navigate.
AGID and ACN Explained: The New Italian AI Authorities
The law gives regulatory powers to:
- AGID – Agenzia per l’Italia Digitale (Agency for Digital Italy), which coordinates the digital transformation of the public sector, ICT infrastructure, and innovation policy. Under the law, AGID acts as a notifying authority, accrediting conformity assessment bodies, promoting AI innovation, and supporting enforcement.
- ACN – Agenzia per la Cybersicurezza Nazionale (National Cybersecurity Agency), which oversees cybersecurity, inspections, and the sanctioning of AI systems, especially in critical sectors.
These are government agencies, not fully independent regulators like Italy’s Data Protection Authority (Garante per la protezione dei dati personali). That choice has raised concerns: the EU AI Act focuses strongly on fundamental rights, and independent oversight might have provided stronger guarantees of impartiality.
How Italy’s AI Law Aligns With and Differs From the EU AI Act
The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024. Its obligations are phased:
- Since February 2025: bans on prohibited AI practices and AI literacy duties.
- Since August 2025: rules for general-purpose AI (GPAI) models (transparency, training data summaries, cybersecurity).
- From August 2026: obligations for high-risk AI systems (conformity assessments, quality management, human oversight).
- By 2027: product-embedded AI systems fully regulated.
Italy’s national law is designed to complement, not contradict, this framework. Where it overlaps, companies must treat Italian provisions as additional layers—for example, the criminal deepfake rules or the under-14 parental consent obligation.
Obligations for Intellectual Professions: Lawyers, Notaries, and Accountants
A distinctive feature of the Italian law is the regulation of intellectual professions such as lawyers, notaries, accountants, and architects.
Key duties:
- AI only as support – Professionals may use AI tools only to support, not replace, their intellectual work. The professional must always remain the decision-maker.
- Engagement letter disclosure – Lawyers and other professionals must inform clients in writing, usually in the engagement letter, if AI will be used, explaining what type of AI, how it will be used, what data may be processed, and what safeguards apply.
- Human oversight and liability – Final responsibility remains with the professional. They must validate AI outputs and ensure they are consistent, fair, and lawful.
- Ethics and training – Professionals must consider ethical issues (bias, transparency, fairness) and be ready to prove that they exercised oversight.
This is one of the most concrete obligations in the law—and directly impacts how Italian lawyers draft their client contracts and manage client relationships.
RACI Matrix and AI Governance: A Practical Tool for Legal Compliance
With multiple regulators (AGID, ACN, Garante, sector authorities), compliance can get complex. This is where the RACI matrix comes in.
RACI stands for:
- Responsible: who executes the task
- Accountable: who has ultimate ownership and sign-off
- Consulted: who provides input
- Informed: who is updated on progress
For AI projects, this tool helps legal teams define:
- Who is responsible for AI disclosures in client letters
- Who is accountable for regulatory filings
- Who must be consulted (data protection officer, compliance officer, unions)
- Who must be informed (management, clients, auditors)

A purely illustrative sketch of how such a matrix might look, assuming a law firm introducing a generative AI drafting tool; the tasks and role assignments below are hypothetical and should be adapted to the organisation:
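| Task | Responsible | Accountable | Consulted | Informed |
| --- | --- | --- | --- | --- |
| Draft AI disclosure for engagement letters | Supervising lawyer | Managing partner | Data protection officer | Clients |
| Compile conformity documentation for the AI tool | IT/innovation team | General counsel | Compliance officer, vendor | Management, auditors |
| Ongoing review of AI outputs | Case team | Supervising lawyer | Data protection officer | Management |

A standard RACI convention worth keeping: each task has exactly one Accountable role, so sign-off responsibility is never split.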
This clarity reduces compliance risks and demonstrates good governance if regulators knock at the door.
AI in Healthcare and Research: Public Interest vs. Data Protection Risks
The law explicitly supports the use of AI in medical diagnostics and research, citing “relevant public interest” as a basis for broader data use.
But companies must also comply with:
- The GDPR and its strict data protection principles
- The upcoming European Health Data Space Regulation, which creates specific rules for the secondary use of health data
- The Medical Devices Regulation, where AI functions may trigger CE marking obligations
In practice, this means healthcare providers and life sciences companies must run multi-framework risk assessments for AI deployments.
Workplace AI and Employee Monitoring: What Employers Must Know
The Italian law requires transparency and oversight for AI systems in the workplace.
Employers must consider:
- Italy’s Workers’ Statute and privacy rules, which limit employee monitoring
- The Garante Privacy’s guidance on metadata and workplace surveillance
- The EU AI Act’s training and literacy duties for employees affected by AI
For example, using AI to assess productivity, monitor keystrokes, or predict absenteeism may be high-risk and subject to strict oversight. Employers should update collective agreements, policies, and retention schedules.
Copyright and AI-Generated Works: Human Creativity Remains Essential
The law confirms that copyright applies only to works with human creativity. AI-only outputs cannot be copyrighted.
For companies:
- Keep records of human involvement in creative projects
- Document dataset provenance for training and fine-tuning
- Respect opt-outs for text-and-data mining
- Secure licenses for copyrighted data
This aligns Italy with European jurisprudence and with international decisions such as Thaler v. Perlmutter in the US, where the courts upheld the Copyright Office’s refusal to register a work generated autonomously by AI.
AI in Public Administration and Justice: Transparency and Human Oversight
Public administration and the judiciary must ensure explainability and keep final decision-making in human hands.
Procurement contracts will increasingly require:
- AI risk assessments
- Documentation of conformity with EU AI Act obligations
- Monitoring of outcomes and appeal mechanisms for citizens
For suppliers, this means preparing detailed compliance files as part of bids.
EU AI Act Timeline vs. Italy AI Law: Key Compliance Deadlines
- February 2025: EU prohibitions and literacy obligations start
- August 2025: GPAI rules apply
- September 2025: Italy’s national AI law approved, entering into force shortly afterwards
- August 2026: high-risk system rules
- 2027: product-related rules fully phased in
Companies must treat the EU AI Act as the backbone and layer Italian obligations on top, especially in criminal law and child protection.
Practical Compliance Checklist for Lawyers and Businesses
- Update engagement letters with AI disclosures
- Map all AI tools in use across the organisation
- Ensure human oversight and audit trails for all AI outputs
- Train staff to meet AI literacy obligations
- Review procurement to demand GPAI documentation from providers
- Prepare conformity assessments for high-risk systems
- Align workplace AI with labour and privacy law
- Document copyright provenance for AI training datasets
Risks, Limits, and Future Revisions of the Italian AI Law
While ambitious, the law faces challenges:
- Lack of independent regulators may weaken enforcement credibility
- Overlap with the EU AI Act risks duplication of compliance duties
- Unclear definitions (e.g., intellectual professions, demonstrable creativity) may lead to litigation
- Future revisions will be inevitable once EU AI Act obligations fully apply by 2026–2027
In short: the law is a bold first step, but not the final word.
Listen to the full discussion on our podcast
For a deeper dive into Italy’s AI Law, its impact on lawyers, and the practical steps companies must take, listen to the full episode of my podcast.
On a similar topic, you can read the article How Can Your Organization Arrange AI Governance Properly? and DLA Piper’s AI law journal Diritto Intelligente.