The draft EU Artificial Intelligence Regulation has raised major concerns as to whether the European Union is heading in the right direction in regulating AI.
Following the announcements of recent months, the European Commission has now published a draft of the EU Artificial Intelligence Regulation, and a debate has mounted over whether this approach will foster the exploitation of AI within the European Union in the coming years.
The main terms of the EU Draft Artificial Intelligence Regulation
The draft AI regulation contains many concepts that are familiar to data protection experts: turnover-based fines, the establishment of national regulators, a European board that will settle disputes between authorities and coordinate their operation, the extraterritorial effect of the applicable rules, sensitive processing, and breach notifications.
Moreover, it splits AI systems into three broad categories:
- Prohibited AI systems, which include highly invasive mass surveillance systems as well as systems designed to manipulate human behavior or exploit people's vulnerabilities;
- Heavily regulated “high-risk” AI systems, a broad category including artificial intelligence used for credit scoring and risk assessment, biometric identification, and recruitment. These systems are allowed but are subject to a stringent certification and monitoring process, with third-party compliance assessments where biometric data are processed and an obligation for the manufacturer to track, justify, and predict any outcome of the system; and
- Less heavily regulated “other” AI systems, which are subject to lighter obligations, including transparency requirements.
What would be the impact of the AI Regulation on the EU market?
There is no doubt that the goal pursued by the European Union is to create a regulatory framework that empowers the growth of AI. Traditionally, the EU approach and the American/Asian approaches have been diametrically opposed.
EU regulators believe that a lack of regulation prevents the growth of a market, since it does not give certainty to the investors and companies exploiting it. The GDPR is one of the clearest expressions of this mindset. Some authorities might argue that they were right on this point, since the GDPR is becoming a benchmark reproduced in several jurisdictions seeking to regulate data protection worldwide. But the debate and the endless negotiations that arose, for instance, around the approval of the ePrivacy Regulation show that, when it comes to more technical rules, businesses are concerned about the negative impact on their operations.
The paramount question is then whether Elon Musk is right in saying that, in the absence of regulations on artificial intelligence, AI might become a threat to humankind. Indeed, the current draft of the AI regulation will certainly trigger higher costs in developing and certifying technologies, which will delay their launch on the market and lead some businesses not to invest in it at all, given the heavy monitoring and potential sanctions.
At the same time, would the Artificial Intelligence Regulation be sufficient to prevent the uncontrolled exploitation of AI? The draft regulation is very specific in identifying certain types of technologies. The risk is that it will need to be updated soon after its approval, as may prove to be the case with the ePrivacy Regulation.
It is difficult to strike the right balance between underregulating and overregulating, and to adopt a piece of legislation that can quickly adjust to the evolution of technology while still granting a sufficient level of legal certainty. However, it is a challenge that the EU needs to overcome to avoid each country adopting its own approach, which would prevent the European Union from becoming the hub of the future growth of artificial intelligence.
Image credit: Mike MacKenzie