A recent court case involving a person wrongly accused by an AI system has revived concerns about the reliability of the data behind artificial intelligence and the ethical side effects of its biases.
The individual wrongfully accused based on AI data
Last summer there was significant discussion of the case of Robert Julian-Borchak, who was wrongfully arrested based on a flawed match from an AI facial recognition system.
The subsequent legal investigation found that the artificial intelligence system worked quite well at identifying white men, but its results were less accurate for other demographics, in part because of a lack of diversity in the images used to develop the underlying databases.
Indeed, in 2019, the algorithms used to identify Mr. Julian-Borchak were included in a federal study of over 100 facial recognition systems, which found they were biased, falsely identifying African-American and Asian faces 10 to 100 times more often than Caucasian faces.
Data are pivotal to the use of AI in legal proceedings and investigations
The root of this fault is that an AI system must be fed data that raise the accuracy of its machine learning technology. Without sufficient data, artificial intelligence cannot reliably distinguish profiles and, in the case of facial recognition tools, individuals.
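The effect of thin data can be sketched with a deliberately simple simulation (all numbers are synthetic and purely illustrative, not drawn from any real system): identities are modeled as points on a line, each enrolled as the average of noisy "embedding" samples, and a probe image is matched to the nearest enrolled template. Identities enrolled from only a couple of images are misidentified far more often:

```python
import random
from statistics import mean

random.seed(42)

NOISE = 0.35                   # spread of each noisy embedding measurement
IDENTITIES = list(range(10))   # identity "positions" spaced 1.0 apart

def build_templates(samples_per_id):
    """Enroll each identity as the mean of noisy embedding samples."""
    return {i: mean(i + random.gauss(0, NOISE) for _ in range(samples_per_id))
            for i in IDENTITIES}

def error_rate(templates, n_probes=2000):
    """Fraction of probes matched to the wrong identity (nearest template)."""
    errors = 0
    for _ in range(n_probes):
        true_id = random.choice(IDENTITIES)
        probe = true_id + random.gauss(0, NOISE)
        match = min(templates, key=lambda i: abs(probe - templates[i]))
        errors += (match != true_id)
    return errors / n_probes

# Well-represented group: 50 enrollment images per identity.
err_majority = error_rate(build_templates(50))
# Under-represented group: only 2 enrollment images per identity.
err_minority = error_rate(build_templates(2))

print(f"error rate, 50 images/identity: {err_majority:.1%}")
print(f"error rate,  2 images/identity: {err_minority:.1%}")
```

With fifty enrollment images per identity the templates are well estimated and errors come mostly from probe noise; with only two, the templates themselves drift, mirroring how demographics under-represented in the underlying databases suffer higher false-match rates.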
The same conclusion was reached by a recent European Commission study, the “Study on the use of innovative technologies in the justice field”, which reviews the current status of artificial intelligence and blockchain in the justice field. The researchers conclude that further growth in the use of these technologies depends on the ability to collect high volumes of data, link them across different sources in a traceable manner, and do so in compliance with data protection laws.
AI systems already take automated decisions in several fields. But, under privacy laws, the relevant data protection information notice needs to lay out the criteria that drive automated decisions, and individuals have the right to object to an automated decision and have it manually reviewed.

In addition, data controllers need to document in their record of processing activities the categories of data processed, the purposes of the processing, and its legal basis, and they need to make sure that AI systems do not process data beyond what they are legally allowed to. These obligations are also relevant to limiting the potential liability of the entity relying on an artificial intelligence system, whose decisions might otherwise be easily challenged.
All the above requires that entities can track and control the operation of artificial intelligence systems, which might not always be possible.
AI decisions might be biased and not compliant with ethical principles
In recent weeks, everyone has been talking about the Netflix documentary “The Social Dilemma”, which raised concerns about ethical challenges in the operation of social media. Like social media AI systems, the artificial intelligence technologies exploited in legal and criminal investigations run statistical analyses of data originating from different sources, producing mathematical results elaborated through complex machine learning algorithms.
One of the most controversial tools in this field is the “criminal risk assessment algorithm”. These algorithms assess historical data on criminal precedents to create a defendant’s profile and generate a recidivism score. In some jurisdictions, a court then factors that score into countless decisions: what type of rehabilitation services a defendant should receive, whether they should be held in jail before trial, and the severity of their punishment.
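Mechanically, such a score is often little more than a weighted sum of history features squashed into a probability. A minimal sketch follows; the feature names, weights, and profiles below are invented for illustration and are taken from no real tool:

```python
from math import exp

# Purely illustrative weights -- NOT the parameters of any actual
# risk assessment product.
WEIGHTS = {
    "prior_arrests": 0.35,
    "age_under_25": 0.8,
    "prior_failures_to_appear": 0.5,
}
BIAS = -2.0

def recidivism_score(profile):
    """Map a defendant profile to a 0-1 'risk' via a logistic weighted sum."""
    z = BIAS + sum(WEIGHTS[f] * profile.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + exp(-z))

low = recidivism_score({"prior_arrests": 0, "age_under_25": 0})
high = recidivism_score({"prior_arrests": 4, "age_under_25": 1,
                         "prior_failures_to_appear": 2})
print(f"low-history profile:  {low:.2f}")
print(f"high-history profile: {high:.2f}")
```

The crucial point is that every input is itself a product of past enforcement decisions, so whatever skew sits in the historical record flows straight into the score a court later reads as an objective number.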
However, statistical analysis can amplify and perpetuate embedded biases. This is where an ethical review of AI systems comes into play.
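One way such amplification arises is a feedback loop: enforcement is targeted where the historical record shows the most incidents, and new records are only generated where enforcement looks. A toy simulation (entirely synthetic numbers; both districts have identical true offence rates) shows an initial recording skew growing over time:

```python
# Toy model: two districts with IDENTICAL true offence rates, but a
# historical arrest record that starts slightly skewed toward "A".
true_offence_rate = {"A": 0.10, "B": 0.10}
arrests = {"A": 60.0, "B": 40.0}

def share(district):
    return arrests[district] / sum(arrests.values())

initial_share_a = share("A")

for _ in range(10):
    # "Predictive" allocation: the district with more recorded arrests
    # receives the bulk of the 100 available patrols.
    hot = max(arrests, key=arrests.get)
    patrols = {d: (70.0 if d == hot else 30.0) for d in arrests}
    # Offences are only recorded where patrols are present, so the
    # skewed record feeds back into itself.
    for d in arrests:
        arrests[d] += patrols[d] * true_offence_rate[d]

final_share_a = share("A")
print(f"A's share of recorded arrests: {initial_share_a:.0%} -> {final_share_a:.0%}")
```

With these numbers, A's recorded share climbs from 60% to 65% even though behavior in the two districts never differed, illustrating how a purely statistical pipeline can entrench the bias it inherits.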
The recent Ethics Guidelines for Trustworthy AI issued by the European Commission’s High-Level Expert Group and the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment adopted by the Council of Europe’s CEPEJ are relevant to the topic. But are these measures enough if companies are not accountable for compliance with their rules?
My view on the current status of AI, potential biases, and ethical rules
I dissent from the view of the European Data Protection Supervisor, Wojciech Wiewiórowski, who in a recent article on artificial intelligence held:

“Let’s not rush AI, we have to get it straight so that it is fair and that it serves individuals and society at large.”
In my view, the European economy cannot afford to be left behind. The case of Robert Julian-Borchak, wrongfully arrested based on a flawed match from an AI facial recognition system, shows that in some fields artificial intelligence cannot be the sole decision-maker without subsequent manual review. It also underscores once again that the lack of binding rules on ethical principles is a fast-growing issue.
Legal rules have never been able to keep up with technological developments. But any solution that tries to slow those developments down would unwittingly undermine the European economy at a time when it is struggling to remain competitive and risks being eaten up by American or Chinese giants.
A common set of EU rules based on general principles, together with a central body able to ensure their consistent enforcement across the European Union, may be the solution, though EU Member States would have to forgo some of their sovereignty in the common interest of European citizens and companies.
What do you think about the above? On the same topic, you may find the article “Artificial intelligence – Not the evil but the New Electricity!” interesting.
Image Credit Ivan Rigamonti