How can companies take the best from artificial intelligence, limiting its potential legal issues and exploiting the technology to the full? Here are my predictions for 2019. Is AI the new electricity?
1. AI from fiction will become a reality
Isaac Asimov’s novel I, Robot featured a world where robots are part of our everyday lives, and that world no longer appears far off. AI systems like the Google Assistant, Amazon Alexa, Apple Siri, and Microsoft Cortana are always with us in our smartphones and control a continually increasing number of devices in our homes.
The increase is also occurring in the industrial sector. According to a survey run by GlobalData on over 3,000 companies in early 2018, 54% had already prioritized investments in chatbots, machine learning, and deep learning technology. More importantly, the findings suggest that the penetration of AI will snowball, with more than 66% of respondents indicating that AI investments will be a priority by the end of 2019.
The potential of AI is vast. Twitter, for instance, recently announced that it had flagged 95% of the nearly 300,000 terrorism-related accounts it took down in the previous six months through algorithms rather than humans, and that it had removed 75% of the suspicious accounts before they posted their first content.
Governments have understood the relevance of AI for their countries. France, for example, launched an AI strategy providing for investments of €1.5 billion. The US Department of Health and Human Services ran a pilot using AI to process thousands of public comments on regulatory proposals. The UK’s Department for Work and Pensions deployed an AI system to handle incoming correspondence, and the Italian Ministry of Economy and Finance implemented an AI-driven helpdesk to deal with citizens’ calls.
Our personal experience also confirms the results of the survey mentioned above. We have advised clients on the legal issues of several AI projects. These related, among others, to the use of facial recognition to identify customers and potential fraudsters, of machine learning and chatbot technologies to automate customer relationships in the contracting and customer support process, and of IoT systems as part of Industry 4.0, smart home, connected car and telemedicine projects.
Our experience shows, though, that companies trying to exploit Industry 4.0 and AI technologies still often take a “3.0 approach”, in the sense that they underestimate that such technologies
- change companies’ business models, unveiling new legal risks (e.g. in terms of potential liabilities) that require new legal competencies and a cultural shift in the company’s management and legal department (Read on the topic “Blockchain, Artificial Intelligence and IoT – Ready for new models of business?”);
- require a more in-depth assessment of how to minimize risks and maximize benefits, both through the use of data, which is increasingly a company asset to be protected and exploited, and through the careful selection of suppliers and negotiation of agreements with them; and
- need the support of third-party providers since, as already experienced with Internet of Things technologies, the costs and effort of creating your own technology might be excessive and yield poorer results than cooperation with external suppliers (Read on the topic “You can’t do I(o)T alone”).
As Satya Nadella, Microsoft CEO, anticipated in 2015,
“Every business will become a software business”
and this is definitely one of our predictions for 2019.
2. AI regulations and their enforcement will become an urgency
The evolution of artificial intelligence systems has led Elon Musk to call for urgent regulation of AI in a fascinating and “scary” interview with Joe Rogan.
His view is that
“Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry”
“AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late. […] AI is a fundamental risk to the existence of human civilization.”
Isaac Asimov’s “Three Laws of Robotics” represent the first attempt to regulate artificial intelligence, and regulators are now at least trying to tackle this necessity. Initiatives such as the recent draft Ethics Guidelines for Trustworthy AI, the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment, and the 2017 resolution of the European Parliament containing recommendations to the European Commission on civil law rules on robotics are relevant. Indeed, they offer some suggestions for achieving proper regulatory solutions, including, for instance, compulsory insurance for the use of autonomous agents.
International cooperation is necessary to regulate AI, and this is the path followed by the EU Member States, which in April 2018 signed a Declaration of Cooperation on AI, agreeing to work together on the most critical issues raised by AI and to jointly address social, economic, ethical and legal questions.
Current attempts to regulate AI, though, look more like the identification of general principles of ethical behavior than actual regulations: they lack any binding effect and enforcement mechanisms.
It is hard to say whether this gap will be filled during 2019. But there is no doubt that if regulators do not move firmly towards binding regulations that can limit the potential misuse of AI and IoT without preventing their growth, we risk that their development
- will be hindered in countries such as those of the European Union, where, for instance, the GDPR already considerably constrains the exploitation of technologies able to make automated decisions based on personal data (Read on the topic “Artificial intelligence – What privacy issues with the GDPR?”); and
- will lead to significant negative consequences and potential risks on matters such as the allocation of liability for damages, if product liability rules are not “upgraded” to an environment where AI does not just follow the manufacturer’s instructions but performs independent reasoning that even its manufacturer might find hard to explain.
Besides, it is not possible to control AI through “traditional” technologies and actions: even police authorities will need to use AI in order to monitor it and enforce measures against it (Read on the topic “Can only AI successfully regulate Artificial Intelligence?”).
3. Ethical rules will become essential for AI
Asimov’s first law of robotics states that “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” but such a law might fall short with highly complex AI technologies. As stressed in the movie version of I, Robot, the potential divergence between analytical and ethical reasoning can become a significant issue.
Rational decisions depend on the likely outcome of an event, but ethics must also drive the decisions of artificial intelligence technologies, since logical choices do not necessarily match socially acceptable positions.
The most common example is that of a self-driving car which decides to run over pedestrians rather than swerve, because swerving would create a higher risk of injuring both the pedestrians and the passengers of the vehicle. Another relevant example is a company’s decision to invest in AI in sectors that might be profitable but could present considerable risks for humans if the technology goes out of control.
AI should be implemented with care and consideration to avoid misuse and unintended consequences. Governments have a unique role in ensuring that the economic and social impacts of AI are managed appropriately, and in setting the ethical and legislative frameworks for AI to be used safely in our communities.
The importance of an ethical approach to new technologies was particularly stressed by Tim Cook, Apple CEO, in his speech at the 40th International Conference of Data Protection and Privacy Commissioners. From Cook’s point of view:
“Platforms and algorithms that promised to improve our lives can actually magnify our worst human tendencies. […] Technology is capable of doing great things. But it doesn’t want to do great things. It doesn’t want anything. That part takes all of us.”
Companies like Microsoft have identified six ethical values – fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence, and large IT corporations are establishing ethics committees.
Regulators will need to understand that compliance with such ethical rules cannot be left to the discretion of companies, which shall, on the contrary,
- be obliged by applicable laws to comply with them;
- be required to prove compliance with them; and
- be accountable for them, with potential sanctions in case of breach.
A famous quote from Isaac Asimov is:
“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”
AI will grow in 2019, and my prediction is that regulators, as well as police and judicial authorities, will need to create an appropriate environment to ensure the proper exploitation of the technology.
At the same time, every business is expected to rely on artificial intelligence. As Andrew Ng famously put it,
“Artificial intelligence is the new electricity”
in the sense that it will lead to a new industrial revolution in which every company will rely on it. And those companies that do not change are unlikely to survive!
These predictions are an excerpt from our book 2019 Intellectual Property and Technology Predictions, and you can find below a presentation where I discussed the topic.