The European Commission has launched a consultation on high-risk AI systems to support the preparation of its classification guidelines under Article 6 of the AI Act, a development with direct implications for any business developing or deploying AI in the EU.
As the AI Act begins its phased application across the European Union, one of its most critical components, the classification of high-risk AI systems, is now under the spotlight. The European Commission is gathering feedback through a public consultation on high-risk AI that will shape how the most stringent regulatory requirements are applied in practice.
Background: The AI Act and the High-Risk Category
The AI Act, which entered into force on 1 August 2024, establishes the first comprehensive EU-wide framework for the regulation of artificial intelligence. It aims to create a single market for safe and trustworthy AI while safeguarding fundamental rights, democracy, and the rule of law.
The Act adopts a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited), high risk, limited risk, and minimal risk. Among these, high-risk AI systems are subject to the most stringent obligations. They include AI systems that either:
- Act as safety components in products regulated under EU law (Article 6(1) and Annex I), or
- Pose significant risks to health, safety, or fundamental rights in sensitive areas such as education, employment, law enforcement, and public services (Article 6(2) and Annex III).
These systems must comply with detailed technical and organisational requirements, including risk management, transparency, human oversight, and conformity assessments prior to market deployment.
What Is the Consultation on High-Risk AI About?
Pursuant to Article 6(5) of the AI Act, the European Commission is tasked with adopting guidelines by 2 February 2026 that explain how to implement Article 6 in practice, including how to interpret the classification criteria and how to apply the exemptions under Article 6(3). The Commission must also provide practical examples of AI systems that are, and are not, to be classified as high-risk.
To support this process, the Commission has launched a consultation on high-risk AI, open for six weeks (from 6 June to 18 July 2025). The results will inform both the classification guidelines and the obligations that apply across the AI value chain.
Who Should Participate?
The consultation is targeted, but broadly inclusive. It welcomes feedback from:
- AI system providers and deployers
- Industry bodies and associations
- Public authorities and regulators
- Academia and independent experts
- Civil society organisations
Respondents can choose which parts of the survey to answer, and are strongly encouraged to provide practical examples and real-life scenarios that can inform the final guidelines.
Structure of the Consultation Questionnaire
The consultation is divided into five key sections:
1. Article 6(1) – AI in Regulated Products
Covers questions on AI systems embedded in regulated products (e.g., machinery, medical devices) and the concept of “safety components” under Annex I.
2. Article 6(2) – Sectoral Use Cases in Annex III
Focuses on use cases in areas such as biometric identification, education, employment, law enforcement, and public services. It also addresses exemptions under Article 6(3) for systems that, despite being listed, may not pose significant risk.
3. General Questions on Classification
Includes questions on the “intended purpose” of AI systems, overlaps between Annexes I and III, and the treatment of general-purpose AI systems.
4. Requirements and Value Chain Obligations
Seeks input on the technical and procedural obligations for high-risk AI, including quality management systems, conformity assessments, and the roles of various actors under Article 25 of the AI Act.
5. Annual Review of Annex III and Article 5
Gathers feedback for the mandatory annual review of the list of high-risk use cases and prohibited AI practices.
Why This Consultation on High-Risk AI Is Crucial
The stakes are high. AI systems classified as high-risk will be required to meet comprehensive standards covering:
- Data governance and quality
- Human oversight mechanisms
- Transparency obligations
- Robustness, accuracy, and cybersecurity
- Conformity assessment before market deployment
For providers, this means implementing quality management systems and ensuring full compliance before placing a system on the market. Deployers, in turn, are responsible for monitoring usage, ensuring appropriate oversight, and providing transparency to affected individuals.
By contributing to the consultation on high-risk AI, stakeholders have the opportunity to:
- Influence the scope and applicability of high-risk classification
- Avoid disproportionate regulatory burdens
- Clarify the interaction between the AI Act and other EU regulations
- Shape future enforcement strategies and legal certainty
Timeline and Next Steps
- Consultation deadline: 18 July 2025
- Implementation guidelines due: 2 February 2026
- Full compliance for high-risk AI systems required by: 2 August 2026
- Access the consultation here: https://ec.europa.eu/eusurvey/runner/AIhighrisk2025
Conclusion
The consultation on high-risk AI marks a crucial milestone in the operationalisation of the EU AI Act. Whether your business is developing AI, deploying it across critical sectors, or planning to adopt it, this is your opportunity to shape the AI Act’s future trajectory. At DLA Piper, we are assisting clients on this topic; feel free to reach out to us if you would like to discuss.
For more on this topic, read DLA Piper’s AI Law Journal, available HERE, and our other articles, available HERE.