10 Dec Can only AI successfully regulate Artificial Intelligence?
AI and the magic formula for regulating it are continually invoked, but can traditional regulations really set rules for artificial intelligence? What is missing?
One of my favorite childhood movies was “Back to the Future”, and you may remember the Wild Gunman scene in “Back to the Future Part II”.
The comment from the kids was:
“You mean you have to use your hands? That’s like a baby’s toy.”
The main difference between operating a video game with a traditional Atari 2600 joystick and operating it with your brain is that, with the joystick, you can see at every moment what the player is doing, whereas if the game is controlled by the player’s brain, everything is invisible to our eyes and beyond any sort of control.
That appeared to be the future at the time, but artificial intelligence systems are now going beyond human nature in a manner that cannot be controlled either with tools from the age of the Atari 2600 joystick or with the most recent technologies.
How AI is going beyond what human nature can understand
I had the privilege of giving a presentation at the Digital Legal Day of the German-Italian Chamber of Commerce on “AI and Human Laws” with Fabio Moioli, Head of Consulting & Services at Microsoft. In a few minutes, Fabio gave an effective snapshot of how our lives are changing due to artificial intelligence and how they will change in the near future thanks to the limitless potential of AI.
One of the ground-breaking events that he outlined was when the AI system AlphaGo, developed by Google’s DeepMind, defeated the Go world champion in 2016.
For those who do not know Go, it is a Chinese board game deemed to be the most complex game in the world, due to the vast number of variations in individual games. The defeat was so significant because Google had not simply fed AlphaGo a large body of literature on the game. It had given the system a limited amount of “direct” instruction and relied mostly on deep learning: AlphaGo played against itself in hundreds of millions of games so that it could understand the game intuitively.
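AlphaGo’s actual training combined deep neural networks with Monte Carlo tree search, but the self-play idea it relied on can be sketched at toy scale. The snippet below is a hypothetical, tabular sketch in Python (not DeepMind’s method): an agent plays both sides of a tiny Nim variant (a heap of 5 objects, take 1 or 2, whoever takes the last object wins) and learns move values from game outcomes alone, without ever being told the winning rule.

```python
import random

random.seed(0)  # reproducible toy run

def legal_moves(heap):
    return [m for m in (1, 2) if m <= heap]

# Q[(heap, move)]: learned value of playing `move` with `heap` objects left,
# from the mover's point of view (+1 = mover eventually wins, -1 = loses).
Q = {}

def best_move(heap):
    return max(legal_moves(heap), key=lambda m: Q.get((heap, m), 0.0))

def self_play_episode(epsilon=0.2, alpha=0.1):
    """One game where the agent plays both sides, then updates its values."""
    heap, history = 5, []
    while heap > 0:
        moves = legal_moves(heap)
        explore = random.random() < epsilon
        move = random.choice(moves) if explore else best_move(heap)
        history.append((heap, move))
        heap -= move
    # The player who took the last object wins; walk the game backwards,
    # flipping the reward sign at every ply because the players alternate.
    reward = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + alpha * (reward - old)
        reward = -reward

for _ in range(5000):
    self_play_episode()
```

After training, the agent prefers taking 2 from a heap of 5, leaving the opponent a losing position of 3. That “strategy” exists only as numbers in the Q table; nobody ever wrote it down as a rule.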
The playing strategy followed by AlphaGo was not based on logic. Indeed, during the game it made a move that appeared illogical, and some thought there was a “bug” in the system. But the illogical move turned out to be a winning move that led to AlphaGo’s victory, leaving the world champion so disappointed that he had to exit the room…
The word “intuition” is the keyword in analyzing the evolution of AI, since it means that machines are going beyond any type of reasoning and have added a component that CANNOT be logically explained.
Do we need to urgently regulate AI?
The fast evolution of artificial intelligence systems led a genius like Elon Musk to call for urgent regulation of AI in an extremely interesting, and “scary”, interview with Joe Rogan.
His view is that “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry“, but
“AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late. […] AI is a fundamental risk to the existence of human civilization.”
This is happening at a time when the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe adopted the first European charter setting out ethical principles relating to the use of artificial intelligence (AI) in judicial systems. You can read the LawBytes update from Tommaso Ricci explaining the contents of the charter (“EU Electronic Communications Code 📲 and AI 🤖 ethical charter“), but essentially it sets out the 5 ethical principles below to regulate AI:
- Respect of fundamental rights;
- Non-discrimination;
- Quality and security;
- Transparency, impartiality and fairness; and
- “Under user control”.
The issue that I see with these principles is that they have been drafted with “traditional” conduct in mind, behaviour that is visible to our eyes so that, if there is a violation, we can challenge it.
How can we control unexplainable artificial intelligence?
The title of this paragraph was inspired by an interesting speech from Andrew Burt at the University of Chicago named “Regulating Artificial Intelligence: How to Control the Unexplainable“.
It is possible to react to the “unexplainable” by simply prohibiting, rather than regulating, AI. This is, for instance, what European data protection regulators have been trying to do with the GDPR. You can read my article on the topic “Artificial intelligence – What privacy issues with the GDPR?“. But essentially, the European privacy regulation prohibits the use of AI to take automated decisions that produce legal effects concerning individuals, or that similarly significantly affect them, unless the decision is
- provided for by law, such as in the case of fraud prevention or money laundering checks;
- necessary for entering into, or the performance of, a contract; or
- based on the individual’s prior consent.
Those exceptions are interpreted narrowly, but the major obstacle is that individuals must be given a right to object to automated decisions, commonly known as the right to receive a justification, and the privacy information notice must outline the criteria according to which the machine will take automated decisions.
However, as explained above, AI sometimes cannot be justified. This is, for instance, the case with neural networks, a type of artificial intelligence able to mimic the human brain, adapting to changing inputs so that the network generates the best possible result without needing to redesign the output criteria.
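To see why such a justification is hard to produce, consider a minimal sketch in plain Python. The weights below are hypothetical stand-ins for the millions of parameters a real network learns: the “decision” is nothing but arithmetic over those numbers, with no human-readable rule that could be copied into a privacy notice.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hand-picked toy weights standing in for millions of learned parameters.
HIDDEN_W = [[0.9, -1.2], [-0.4, 0.8]]   # input features -> hidden layer
OUTPUT_W = [1.5, -2.0]                   # hidden layer -> output

def decide(features):
    """Return an approval score in (0, 1) for a two-feature applicant."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in HIDDEN_W]
    return sigmoid(sum(w * h for w, h in zip(OUTPUT_W, hidden)))

score = decide([0.7, 0.3])
# The only "explanation" available is the arithmetic itself: there is no
# rule of the form "rejected because feature X was below threshold Y".
print(score)
```

Scaling this up to real networks with many layers and millions of weights is exactly what makes the GDPR’s demand for stated decision criteria so difficult to satisfy.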
Artificial intelligence is going to disrupt any market
The conclusion cannot be to ban artificial intelligence, since, for instance, the same Elon Musk who is urging regulation of AI founded Neuralink, an American neurotechnology company reported to be developing implantable brain-computer interfaces.
According to a study from GlobalData, almost every industry will be disrupted by AI.
A number of new entrants are coming into the market, but also companies like Google, Amazon, Microsoft and IBM are heavily investing in AI and might become the new competitors of traditional businesses.
How shall we regulate AI properly?
AI cannot be ignored, as it will become part of our lives, so how should it be regulated? Traditional regulatory approaches risk being
- either ineffective, since without the support of technology in investigations it will be impossible to identify misconduct;
- or inefficient, since they might limit the growth of AI technologies, discriminating against some countries to the benefit of less regulated nations.
Some of the issues above were discussed as part of a consultation of the European Commission on how to regulate the Internet of Things (Read on the topic “How the IoT will change with new European regulations?“). But my view is that it is necessary to focus on 3 main aspects to regulate AI properly:
1. Liability rules need to be affordable
No software is bug-free, and artificial intelligence is expected to considerably reduce costs and accidents compared to any manually handled process. If we expect AI to be “perfect” before it is used, that will never happen; humans have never been perfect either, and human error is, for instance, the main source of cyber-attacks.
Liability rules shall make those who benefit from AI technologies accountable for them. But such rules cannot provide sanctions, fines, or penalties that businesses cannot afford, since this would hinder the exploitation of these technologies.
Countries that understand the relevance of artificial intelligence might create funds to support potential victims of AI errors, just as happens with funds created for victims of car accidents.
2. Ethical rules have to be objective
I previously published an article on the topic (Read “What ethics for IoT and artificial intelligence?“). Most of the major companies investing in AI have created an internal ethical committee, but ethical rules need to be coded in detail so that compliance with them can be verified. Also, such committees shall have an actual role within companies, rather than being merely internal consulting bodies with no control over the business.
Compliance with ethical rules shall also be audited by and reported to competent authorities, as otherwise compliance with such principles will become merely a sort of “advertising campaign“. On the contrary, the results achieved in ensuring compliance with ethical rules could become a competitive advantage in a market that will increasingly rely on trust between companies and their customers (Read on the topic “Trust is the backbone of IoT, and there is no shortcut to success“).
3. AI can be regulated only with AI
As happens with cybercrimes on the Internet, which can be investigated only by supporting investigations with technical tools, the same is true of artificial intelligence. However, as emphasized by Elon Musk in the comment above, the difference is that we cannot wait for the first misconduct before regulating AI; we shall adopt a proactive approach, setting rules and starting investigations now, while a number of these technologies are still being developed.
Such rules and investigations shall be run with the support of AI, since only AI can understand how to regulate AI and identify potential misconduct.
What is your view on the above? Some of the topics addressed in this article have been touched on in my previous posts listed below:
- Artificial intelligence – What privacy issues with the GDPR?
- How the IoT will change with new European regulations?
- What ethics for IoT and artificial intelligence?
- Trust is the backbone of IoT, and there is no shortcut to success
If you found this article interesting, share it on your favourite social media and register for our newsletter. Also, don’t forget to try Prisca, our GDPR chatbot described HERE.