Artificial intelligence in mental health raises complex legal issues under the EU AI Act, MDR, and GDPR, from medical device qualification to liability.
An AI risk assessment maps where risks emerge across an AI system's lifecycle, classifies them by severity and probability, and prioritizes which to mitigate first, all within the compliance framework imposed by the EU AI Act.
On 17 September 2025, the Italian Senate approved a landmark law on artificial intelligence (AI), making Italy the first EU country to enact a national law specifically regulating AI while aligning with the EU AI Act.
The relationship between AI, personal data, and the GDPR is under intense scrutiny, and an upcoming judgment from the Court of Justice of the European Union (CJEU) could redefine how businesses approach compliance, with substantial consequences either way depending on the outcome.
Compliance assessment of AI systems is rapidly becoming a critical task for businesses as artificial intelligence moves from pilot projects to full-scale integration in core operations.