Artificial intelligence is a massive opportunity, but it triggers risks that cannot be addressed through over-regulation, which might damage the market.
Three simultaneous technological revolutions unleashing AI
One of the main topics of the World Economic Forum 2017 was artificial intelligence (AI). I found fascinating the interview with Ginni Rometty, the Chairwoman, President and CEO of IBM. She said that we are in a unique period, since there are three technological revolutions happening simultaneously which make this point in time different from any before:
- The rise of cloud computing;
- The growth of data; and
- The increase in mobility.
Because of these three revolutions, there is a massive amount of information that humans alone cannot process; we need systems that can handle such data, reason about it, and learn. This scenario led to the rise of artificial intelligence.
I have read several articles on how artificial intelligence might represent a threat to workers. There was recent news that the Japanese insurance firm Fukoku Mutual Life Insurance is making 34 employees redundant and replacing them with IBM’s Watson Explorer AI. But the point raised by Ms. Rometty is that AI does not replace humans; it does something that humans cannot physically do, since no human would be able to deal with such a massive amount of information, and AI can deliver results from which everyone can benefit.
AI and robots are rapidly becoming part of our daily life, and humans cannot always control the potential of such technologies. Consider Google’s DeepMind project, where AI is not programmed or taught to solve problems but must learn by itself how to solve them. This means we will reach a stage where machines take decisions whose reasoning cannot be explained by humans!
The call for regulations on artificial intelligence
Ms. Rometty herself mentioned in her interview that a fourth revolution is taking place around security and privacy, and that such issues might still derail the revolution the three components mentioned above have combined to create.
And on this topic, it might not be a coincidence that the Legal Affairs Committee of the European Parliament approved a report calling on the European Commission to introduce a set of rules on robotics. Such provisions include:
1. Who is liable, and how to calculate damages?
The Committee favors the introduction of strict liability rules for damages caused by robots, requiring only proof that damage has occurred and the establishment of a causal link between the harmful behavior of the robot and the loss suffered by the injured party.
This approach would not resolve the issue of allocating responsibility for “autonomous” robots like Google DeepMind that did not receive instructions from the producer. This is why the Committee is proposing the introduction of a compulsory insurance scheme for robot producers or owners (e.g., in the case of producers of self-driving cars). The issue is whether such an obligation would represent an additional cost that customers would ultimately bear, or would even prevent the development of new technologies.
Robots treated as humans?
What sounds quite unusual and, honestly, a bit “scary” is that the Committee also calls for the introduction of a “legal status” of electronic persons for robots, “with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently“.
The report does not sufficiently clarify how such legal status would work in practice, but it seems we are already attempting to distinguish the liability of the artificial intelligence itself from that of its producer or owner. With autonomous robots this assessment will have to be made on a case-by-case basis, and civil law rules will need to evolve to accommodate such principles.
Are ethical rules needed?
The Committee stressed the need for a guiding ethical framework for the design, production, and use of robots. This framework would operate in conjunction with a code of conduct for robotics engineers, a code for research ethics committees when reviewing robotics protocols, and model licenses for designers and users.
I have already discussed the ethical issues around artificial intelligence in a previous post. My prediction is that most companies investing in the area will, sooner rather than later, establish an internal ethics committee. But the issue is whether statutory laws on ethics are necessary, since they might limit the growth of the sector.
Privacy as a “currency” cannot affect individuals
This is the first time I have seen privacy described as a “currency.” However, it is true that we provide our personal data to purchase services. The matter is even more complicated in the case of sophisticated robots whose reasoning cannot be mapped. Such circumstances might trigger the data protection issues that I discussed in a previous post. It is therefore essential that the Committee called for the guarantees necessary to ensure privacy and security, also through the development of standards.
The reaction from the industry
The European Robotics Association reacted immediately to the report, stating in a position paper that:
“Whereas it is true that the “European industry could benefit from a coherent approach to regulation at European level” and companies would profit from legal certainty in some areas, over-regulation would hamper further progress. This poses a threat to the competitiveness not only of the robotics sector but also of the entire European manufacturing industry“.
It can be hard to set such specific rules for technologies that are rapidly evolving. The concern is that regulation might restrict investment in the sector; in my view, however, we should welcome rules that create more certainty and foster innovation.
On the same topic, you can read “Can only AI successfully regulate Artificial Intelligence?“.