
Elon Musk, Jack Ma and the need to regulate the inexplicable side of AI

Elon Musk warns on the potential risks of AI if we do not control it now

The debate between Elon Musk and Jack Ma shows two different approaches to AI, but if Musk is right, we need to regulate artificial intelligence urgently.

Jack Ma, the founder of Alibaba, and Elon Musk, the CEO of Tesla, among other companies, debated artificial intelligence and its impact on human beings a few days ago at the AI Summit in Shanghai, expressing widely divergent positions.

Will artificial intelligence systems outperform humans?

Their positions are the following:

Elon Musk: Computers are much smarter than humans on so many dimensions.

Jack Ma: Computers may be clever, but human beings are much smarter. We invented the computer—I’ve never seen a computer invent a human being.

This contrast of views, and Elon Musk’s alarming message, is consistent with his position in a previous appearance on Joe Rogan’s podcast, where he stated:

Normally the way regulations are set up is when a bunch of bad things happens, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry, [but] AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late, […] AI is a fundamental risk to the existence of human civilization.

In Elon Musk’s view, we are underestimating the possibilities of AI. We assume that it is just another new technology that we will use for our benefit, that some human characteristics will never be reproducible by a machine, and that, for these reasons, regulation will become necessary only after the first incidents occur.

Are we properly evaluating the impact of AI?

Perhaps this is the typical human reaction to risk: we ignore it and avoid thinking about it.

However, artificial intelligence systems have already demonstrated that they can reach a level of complexity that reproduces uniquely human characteristics, such as intuition. The 2016 victory of AlphaGo, the AI system produced by Google’s DeepMind division, against the world champion of the ancient Chinese board game Go is considered a historic event in the evolution of intelligent machines.

The artificial intelligence was, in fact, able to play moves that did not appear logical; for this reason, observers initially suspected a bug in the software. It was later established that the move was correct, and it came to be regarded as a move of “genius”.

The following year, Google released an advanced version of the same system, called AlphaGo Zero, which defeated AlphaGo in all 100 games played. The peculiarity of the new software was that AlphaGo Zero relied entirely on deep learning: it had received no human instructions, but improved simply by playing against itself.
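To make the idea of self-play concrete, here is a minimal sketch, not AlphaGo Zero’s actual algorithm, of an agent that learns a game with no human-provided strategy, only by playing against itself. The game (Nim: take 1–3 objects from a pile, whoever takes the last one wins), the tabular learning rule, and all function names are illustrative assumptions chosen for brevity.

```python
import random

def train(pile_size=10, episodes=5000, alpha=0.5, epsilon=0.2, seed=0):
    """Self-play learning for Nim (hypothetical toy example).

    Both sides share one value table q: (pile, move) -> estimated value
    for the player about to move. No human strategy is ever supplied.
    """
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        pile = pile_size
        history = []  # the (pile, move) pair chosen at each ply
        while pile > 0:
            moves = [m for m in (1, 2, 3) if m <= pile]
            if rng.random() < epsilon:      # explore occasionally
                move = rng.choice(moves)
            else:                           # otherwise play greedily
                move = max(moves, key=lambda m: q.get((pile, m), 0.0))
            history.append((pile, move))
            pile -= move
        # Whoever made the last move wins: +1 for them, -1 for the
        # opponent, alternating back through the game's history.
        reward = 1.0
        for state in reversed(history):
            old = q.get(state, 0.0)
            q[state] = old + alpha * (reward - old)
            reward = -reward
    return q

def best_move(q, pile):
    """Greedy move according to the learned value table."""
    moves = [m for m in (1, 2, 3) if m <= pile]
    return max(moves, key=lambda m: q.get((pile, m), 0.0))
```

After training, the table encodes a strategy nobody programmed in: with a pile of 2 the agent takes both objects to win immediately, and with a pile of 5 it takes one, leaving the opponent a losing position. The point of the sketch is the one made in the article: the resulting behavior emerges from self-play, not from its creators’ instructions.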

The consequence is that not even the manufacturers of artificial intelligence systems can predict, control, or explain the behavior of such machines, because they do not act on the basis of the manufacturers’ instructions. As a result, manufacturers may be unable to guarantee or take responsibility for their actions.

How can we regulate the inexplicable side of AI?

Artificial intelligence cannot be ignored. It will become part of every business, and – as Elon Musk points out – if we do not regulate AI quickly, we may lose control of it. Machines could become a threat to human beings.

Liability rules must be sustainable, though.

The rules on strict product liability make producers responsible for the goods and services they produce, regardless of negligence or willful misconduct. These rules cannot, however, impose on producers of artificial intelligence systems liabilities and obligations that companies could not afford: given the unpredictability of AI’s actions, such exposure would discourage companies from exploiting the technology.

Some countries could, however, create funds to compensate potential victims of AI errors, similar to the funds established for victims of motor vehicle accidents.

Similarly, it is necessary to create objective ethical rules.

The leading companies investing in AI have set up internal ethics committees. But ethical rules must be detailed enough that compliance with them can be verified. Moreover, these committees must play an effective role within their companies, rather than being simple internal advisory bodies with no control over the companies’ activities.

Compliance with the ethical rules must also be verified and reported to the competent authorities, and sanctions must be provided for in the event of breaches. Otherwise, compliance with these principles will amount to little more than an “advertising campaign”. Conversely, demonstrable compliance with ethical rules could become a competitive advantage.

Finally, just as computer crimes committed on the Internet can be investigated with the support of technical tools, the same must happen with artificial intelligence. But, as Elon Musk points out, the difference is that we cannot wait for the first incidents of misconduct before regulating AI. We must intervene now, establishing rules and starting investigations while products using artificial intelligence are still under development.

Such rules and investigations must be conducted with the support of AI itself, since only artificial intelligence can work out how to regulate AI and identify potential misconduct.

On the topic above, you may find interesting the article Can only AI successfully regulate Artificial Intelligence?


Giulio Coraggio

I am the head of the Italian Technology sector and the global head of the IoT and Gaming and Gambling groups at the world-leading law firm DLA Piper. IoT and artificial intelligence influencer and FinTech and blockchain expert, finding solutions to what's next for our clients' success.
