
Artificial intelligence and machine learning: what privacy issues under the GDPR?


Artificial intelligence and machine learning technologies might face considerable hurdles in the privacy obligations provided by the GDPR.

Updated on 2 November 2017 after the publication of the draft guidelines on Automated individual decision making and Profiling by the Article 29 Working Party

Below is my view on a very hot topic at the moment; you can also watch a summary (in Italian) in my video below.

What are artificial intelligence technologies?

The definition from Wikipedia is that

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

Therefore, the main features of AI are:

  • the collection of large amounts of information, including from the surrounding environment; and
  • the ability to take autonomous decisions/actions aimed at maximizing the chances of success.

The perfect example of an artificial intelligence is a self-driving car, which needs to take autonomous decisions based on whatever happens on the street. And a confirmation of the current concerns (and prejudices) around AI is a new study from Germany’s Federal Highway Research Institute which found that the autopilot feature of the Tesla Model S constitutes a “considerable traffic hazard”.

This finding was unsurprisingly highly criticised by Tesla CEO Elon Musk, who said in a tweet that those reports were “not actually based on science” and repeated that “Autopilot is safer than manually driven cars”.

But it is not necessary to consider self-driving cars to deal with the issue above. It is sufficient to have a machine learning technology that is able to collect information about individuals, create a profile of those individuals, place them, for instance, into a “credit score” cluster and, on the basis of such classification, take decisions as to whether or not a mortgage or a loan shall be granted. This unveils new privacy-related issues that become more relevant following the adoption of the EU General Data Protection Regulation, especially after the publication of the guidelines on automated individual decision making and profiling by the Article 29 Working Party.
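To make the credit scoring scenario concrete, here is a minimal sketch in Python of the kind of fully automated decision the GDPR is concerned with. The feature names, training data and approval threshold are purely hypothetical and invented for illustration; a real system would be far more sophisticated, but the legal issue is the same:

```python
# Hypothetical, minimal sketch of a fully automated credit decision.
# All feature names, data and the approval threshold are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "existing debt", "years in current job"]

# Training data: one row per past applicant, columns matching feature_names.
X_train = np.array([
    [60.0, 5.0, 10.0],
    [25.0, 20.0, 1.0],
    [45.0, 10.0, 4.0],
    [30.0, 25.0, 2.0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = loan repaid, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)

# A new applicant is scored and the loan is granted or refused with no
# human involvement: a decision "based solely on automated processing".
applicant = np.array([[35.0, 15.0, 3.0]])
score = model.predict_proba(applicant)[0, 1]  # estimated probability of repayment
decision = "granted" if score >= 0.5 else "refused"
print(f"Credit score: {score:.2f} -> loan {decision}")
```

Note that no human reviews the output before the loan is granted or refused, which is precisely what brings such a decision within the scope of the prohibition discussed below.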

The prohibition of automated decisions

The EU Privacy Regulation provides that individuals

“shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.

The prohibition applies to decisions that are based “solely” on automated processing but, as stressed by the Article 29 Working Party, the human oversight of the conclusion reached by the machine needs to be meaningful. Otherwise, it would just be a way of bypassing the prohibition.

Exceptions to this rule apply when an automated decision:

  1. is provided for by the law, such as in the case of fraud prevention or money laundering checks;
  2. is necessary for the performance of, or entering into, a contract; or
  3. is based on the individual’s prior consent.

The applicability of the three exceptions above is not straightforward. For instance, fraud prevention and money laundering checks run by means of a machine learning technology might be considered to go beyond what is strictly provided by the law.

Likewise, according to the EU data protection authorities, the “necessity” of profiling for entering into a contract has to be interpreted narrowly. In particular,

“the controller must be able to show that this profiling is necessary, taking into account whether a less privacy-intrusive method could be adopted”.

However, the same data protection authorities mention, as an example of when the exception would apply, the case in which the technology enables

to deliver decisions within a shorter time frame and improves the efficiency of the process

Therefore, at least according to the current draft of the guidelines of the Article 29 Working Party, efficiency reasons are deemed sufficient to justify the usage of automated decision systems, provided that there are no less privacy-intrusive methods reaching the same result.

Is consent a viable option? What happens in case of health related data?

Automated decision systems can also be used with the prior consent of individuals. But

who would ever grant his consent to be subject to an automated decision?

My personal view is that this option is viable only in case of usage of such technologies for marketing purposes. In that case, individuals will be required to grant their consent to profiling, which will also be performed by means of automated decision systems.

The problem arises though when automated decision systems are used to process special categories of data, such as health-related data. In that case, the GDPR does not provide for the exception to the prohibition linked to the necessity for the performance of or entering into a contract. If you think about insurance companies that need to automatically process health data to assess the insurance risk, the freedom for individuals to decide whether or not they want to give their consent to the automatic processing of their health data might come at a massive cost for those companies.

This might be sorted by means of a local law limiting the scope of the prohibition provided by the GDPR, but such circumstance would create the same level of inconsistency among EU Member States that the European General Data Protection Regulation was meant to avoid!

You need to explain the logic followed by the AI and the ML technology!

The drafting of a privacy information notice, which was a sort of commodity work before the GDPR, is becoming like playing one of the highest levels of Tetris… The GDPR requires, in relation to machine learning, artificial intelligence and automated decision systems, that details be provided on:

  • the usage of such technologies;
  • the significance and envisaged consequences for the individual; and
  • “meaningful information about the logic involved”.

According to the Article 29 Working Party, the explanation of the logic involved would include details on the rationale behind, or the criteria relied on in reaching, the decision, without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm.

The clarification on the level of detail to be disclosed is important because otherwise individuals might understand the logic followed by the machine and act in a manner that allows them to take unfair advantage of it. However, the above also means that it is not possible to adopt a privacy information notice that would cover any type of machine learning or artificial intelligence technology. The privacy information notice shall outline the main characteristics considered in reaching the decision, the source of this information and their relevance.
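As a purely illustrative sketch of what such a disclosure could draw on, continuing from the hypothetical credit model above, the relative weight of each characteristic can be extracted from the model itself, so that the notice discloses the main criteria without disclosing the full algorithm:

```python
# Hypothetical sketch, continuing from the credit model above: ranking the
# main characteristics the model relies on, as raw material for a privacy
# information notice (the weights are disclosed, not the full algorithm).
weights = abs(model.coef_[0])
ranked = sorted(zip(feature_names, weights), key=lambda pair: -pair[1])
for name, weight in ranked:
    print(f"{name}: relative weight {weight:.2f}")
```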

Individuals can object to the automated decision

Even when the automated decision is necessary for the performance of a contract or was taken following the consent of the relevant individual, individuals will still have the right to obtain human intervention, to express their point of view and to contest the decision, which is commonly known as the right to receive a justification of the automated decision.

The most frequent example is when a mortgage or a recruiting application is turned down because, according to the system, the applying individual does not meet some parameters. This means that a procedure shall be put in place to manually review the matter. However, the main issue arises when AI becomes so complex, and its decisions are based on such a large number of data points, that it is not actually possible to give a justification of a specific decision.

The solution might be that artificial intelligence whose decisions might impact individuals shall be structured in a way that makes it possible to track the reasoning behind each decision. But this also depends on what level of justification would be sufficient to meet the criteria set out in the EU Privacy Regulation. Is it sufficient to say that the applicant for a mortgage did not meet the creditworthiness parameters? Or will it be required to identify the specific parameter, and what if the parameter became relevant only because it was linked to a number of other parameters?
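As a hedged illustration of what such tracking could look like, continuing again from the hypothetical credit model above, a simple linear model lets each parameter’s contribution to one specific refusal be broken down; a real system, in particular one based on deep learning, would need a dedicated explanation method:

```python
# Hypothetical sketch, continuing from the credit model above: decomposing one
# specific decision into per-parameter contributions, so a refusal can be
# justified beyond "the creditworthiness parameters were not met".
# (For simplicity the model's intercept is ignored here.)
contributions = model.coef_[0] * applicant[0]
for name, value in zip(feature_names, contributions):
    direction = "towards approval" if value > 0 else "towards refusal"
    print(f"{name}: contribution {value:+.2f} ({direction})")
```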

Is all data collected by the AI or ML legally processed?

An additional privacy issue is whether all the information about an individual which is used by an artificial intelligence system has been obtained with the consent of that individual, or on the basis of a different legal ground, and whether it is used for the purposes for which it was initially collected.

Indeed, AI is by definition based on the processing of a very large amount of data from different sources. And individuals might object to decisions taken about them also on the ground that they are based on unlawfully processed data.

What happens in case of wrong decisions?

The complexity of artificial intelligence is expected to escalate in the coming years. And such complexity might make it more difficult to determine when a cyber attack has occurred and, therefore, when a data breach notification obligation is triggered. This is a relevant circumstance, since the EU General Data Protection Regulation introduces the obligation to notify an unauthorised access to personal data to the competent privacy authority and to the individuals whose data was affected.

We recently saw the case involving the UK telecom provider TalkTalk, which was sanctioned by the Information Commissioner with a fine of £400,000 for not having prevented a cyber attack which led to access to the data of over 150,000 customers. But what would have happened if TalkTalk had not been able to determine whether a cyber attack had occurred and all of a sudden its systems had started taking “unusual” decisions? Given the potentially massive fines provided by the EU Privacy Regulation, this is a relevant issue.

And a common issue of smart technologies, such as those of the Internet of Things, but also of AI, relates to the difficulty of identifying the entity liable for a malfunctioning or a data breach.

A data protection impact assessment is an obligation and becomes a protection for your business

The GDPR provides that a data protection impact assessment is necessary when there is

a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person;

It is important to stress that the provision above does not refer only to evaluations that are “solely” based on automated processing. Therefore, a DPIA will be necessary in case of any automated profiling run by means of AI, ML or other technologies able to produce effects on individuals, even if there is human intervention in evaluating the findings of the machines.

This represents a quite burdensome obligation but, especially in the light of the principle of accountability, it is also a quite relevant protection in case of claims. Indeed, a privacy impact assessment will show that the controller considered all the factors involved and put in place adequate protections of individuals’ privacy rights.

This article is part of my series of blog posts on the major changes introduced by the EU Data Protection Regulation. You can review the other posts of this series below:

#1 Which companies shall care about it?

#2 Will fines be really massive?

#3 Did you run a privacy impact assessment?

#4 New risks for tech suppliers

#5 What changes with the one stop shop rule?

#6 How the new privacy data portability right impacts your industry

#7 What privacy issues for artificial intelligence and machine learning?

#8 How to get the best out of data?

#9 Are you able to monitor your suppliers, agents and shops?

#10 What liabilities for the data protection officer?

#11 Are you able to handle a data breach?

#12 Privacy by design, how to do it?

#13 How data on criminal convictions of employees become a privacy risk

#14 Red flag from privacy authorities on technologies at work

#15 Need a GDPR compliant data processing agreement?

#16 Is your customers’ data protected from your employees?

#18 Data retention periods, an intrigued rebus under the GDPR

#19 Legitimate interest and privacy consent, how to use them?

If you found this article interesting, please share it on your favourite social media!

@GiulioCoraggio

Follow me on LinkedIn – Facebook Page – Twitter – Telegram – YouTube – Google+

WRITTEN BY GIULIO CORAGGIO

IT, gaming, privacy and commercial lawyer at the leading law firm DLA Piper. You can contact me via email at giulio.coraggio@gmail.com or giulio.coraggio@dlapiper.com or via phone at +39 334 688 1147.
