Artificial intelligence (AI) developments have led to significant advances in several fields and introduced new cyber risks requiring more robust cybersecurity defenses.
The dual-use nature of AI allows for both benign and malicious applications, both civilian and military, making it essential to pay attention to the risks associated with these technologies.
The enthusiasm associated with AI tools accelerated dramatically in early 2023 thanks to the release of publicly accessible solutions that quickly went viral due to their ability to generate original textual or visual content that captured the public’s attention.
As companies seek to enhance their processes and products by integrating third-party solutions via APIs or by developing their own, it is critical to proceed with caution and to assess both the potential risks associated with these technologies and the opportunities arising from the application of AI to cybersecurity.
The cyber risks of artificial intelligence
AI systems have known vulnerabilities, such as the corruption or manipulation of input datasets, which can lead to biased results that are not representative of reality. In addition, AI systems are generally able to draw inferences and act autonomously without constant human involvement, which reduces the chance of detecting vulnerabilities that unscrupulous competitors or cybercriminals could exploit to compromise business systems.

Moreover, the reasons why a machine learning or AI program reaches a particular inference or decision are not always apparent to those responsible for oversight. The decision models and underlying data are not necessarily transparent or easily interpretable (although significant efforts are underway to improve the transparency of such tools), so even when a violation is detected, its purpose or cause may be difficult to reconstruct.

Finally, there are risks arising from the abuse of the technology itself: by exploiting generative language models, attackers find it increasingly easy to create highly sophisticated and convincing phishing emails that can fool even the most experienced users.
Such conduct violates the terms and conditions of the best-known generative language models, and with each update these services further restrict attempts to bypass their security controls. At the same time, however, lesser-known and less regulated parallel services keep being launched on the market. This is why companies benefit from policies governing the proper use of AI tools by their staff, allowing them to take advantage of the opportunities this technology offers while limiting the risk of leaking confidential data or of uses that could harm the company.
The opportunities of artificial intelligence in cybersecurity
From another perspective, however, organizations can rely on AI to update their cybersecurity practices and protect their AI-powered systems. AI improves existing threat detection and response capabilities and enables new preventive defenses. Through AI, companies can streamline and improve their security operating model by reducing time-consuming, complex manual inspection and intervention processes and redirecting human effort toward supervision and troubleshooting. In particular, AI can enhance current cybersecurity systems and practices in three main directions:
- Prevention and protection: AI offers a way to automate the threat detection process, augmenting, rather than replacing, the human analyst through machine learning and deep learning techniques. Many AI applications for threat detection and prevention use a form of machine learning called “unsupervised learning,” in which collected datasets are mined for patterns that, in turn, are used to detect anomalies, such as unusual file moves or changes.
- Detection: AI enables a shift from static detection methods (e.g., signature-based intrusion detection systems), which identify cybersecurity violations by scanning the system for known signs of a breach, to more dynamic and continuously improving methods. AI algorithms can detect abnormal changes without an advance definition of what is abnormal. In this way, AI is a potent threat qualification and investigation tool, particularly useful for monitoring high-risk activities, such as those in finance. Indeed, artificial intelligence can recognize significant changes in user behavior that may pose a security risk; and
- Reaction: thanks to AI, the workload of cybersecurity analysts can be reduced. For example, by intelligently automating routine, repetitive manual tasks, such as searching log files for signs of compromise, human resources can focus on higher-value work and prioritize risk areas. In addition, AI-powered response systems can proactively intervene and dynamically segment networks to isolate valuable information in secure locations or redirect attackers away from vulnerabilities or essential data.
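The unsupervised approach mentioned under prevention and detection can be illustrated with a minimal sketch: a statistical baseline is learned from unlabeled activity data, and observations that deviate strongly from it are flagged as anomalies. The features used here (file moves per hour, megabytes transferred) are hypothetical, and production systems rely on far richer telemetry and more capable models (e.g., clustering or isolation forests); this only shows the underlying idea.

```python
# Sketch of unsupervised anomaly detection: learn a per-feature baseline
# (mean and standard deviation) from unlabeled "normal" activity, then
# flag observations that deviate by more than a threshold number of sigmas.
import statistics

def fit_baseline(samples):
    """Learn a per-feature (mean, stdev) baseline from unlabeled data."""
    features = list(zip(*samples))
    return [(statistics.mean(f), statistics.stdev(f)) for f in features]

def is_anomalous(baseline, point, threshold=3.0):
    """Flag the point if any feature deviates more than `threshold` sigmas."""
    return any(abs(x - mu) / sigma > threshold
               for x, (mu, sigma) in zip(point, baseline))

# Hypothetical unlabeled activity: (file moves per hour, MB transferred).
normal_activity = [(12, 40), (15, 55), (9, 35), (14, 48), (11, 42), (13, 50)]
baseline = fit_baseline(normal_activity)

print(is_anomalous(baseline, (13, 45)))     # typical activity -> False
print(is_anomalous(baseline, (400, 9000)))  # burst of moves + large transfer -> True
```

Note that no example is labeled "malicious" in advance: the model only learns what normal looks like, which is exactly why such methods can catch behavior that no signature anticipates.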
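The routine triage task mentioned under reaction, searching log files for signs of compromise, can be sketched as a simple rule-based scan; AI-powered tools extend this kind of automation with learned scoring and prioritization. The indicator patterns and log lines below are invented for illustration (the IP range is a documentation address, not a real indicator).

```python
# Sketch of automated log triage: scan log lines for known indicators of
# compromise (IoCs) so analysts only review the matching lines.
import re

IOC_PATTERNS = [
    re.compile(r"198\.51\.100\.\d{1,3}"),     # stand-in "known bad" IP range
    re.compile(r"powershell.+-enc", re.I),    # encoded PowerShell invocation
    re.compile(r"/etc/shadow"),               # access to the password hash file
]

def triage(log_lines):
    """Return (line_number, line) pairs that match any indicator."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        if any(p.search(line) for p in IOC_PATTERNS):
            hits.append((lineno, line))
    return hits

logs = [
    "accepted connection from 10.0.0.5",
    "outbound POST to 198.51.100.23:443",
    "user jdoe ran powershell.exe -EncodedCommand aQBlAHgA",
]
for lineno, line in triage(logs):
    print(f"line {lineno}: {line}")  # flags lines 2 and 3 for review
```

Automating this first pass is what frees analysts to spend time on the flagged events rather than on reading every log line.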
How to develop and deploy AI in cybersecurity
According to Microsoft, 1,287 password attacks occur every second worldwide. At the same time, according to ENISA’s Threat Landscape Report 2022, the proliferation of bots creating virtual personas can easily compromise rule-making processes and community interaction by flooding government agencies with fake content and comments.
In such a hostile environment, companies of all sizes must build their cybersecurity strategy on solid foundations that take into account the specific requirements of their operating sector. For example, Legislative Decree No. 65/2018, which implemented the NIS 1 Directive in Italy, requires companies within its scope to adopt appropriate and proportionate technical and organizational measures for cyber risk management and to prevent and minimize the impact of any security incidents they suffer. The recently approved NIS 2 Directive (Read on the topic “NIS2 Directive published – New cybersecurity obligations for many companies“) introduces more detailed and stringent cybersecurity obligations and expands the scope to include, for example, companies that offer digital or healthcare services. In addition, the DORA Regulation introduces specific cybersecurity obligations for banks, financial institutions, insurance companies, and crypto providers, coupled with the obligation to implement a cyber risk monitoring and review framework that entails direct oversight duties, and liability, for board members in the event of cyber attacks (Read on the topic “DORA Regulation into force: new cybersecurity obligations for banks, insurance companies, and financial institutions“).
Designing effective enterprise security solutions requires developing IT security governance, management, and legal compliance processes based on the structured, systematic acquisition and analysis of information on applicable regulatory requirements and potential cyber threats, which in turn guides the design, verification, and monitoring of appropriate countermeasures. Cyber Threat Intelligence activities must also continue throughout the lifecycle of enterprise information systems, as essential drivers of their proper evolution and an enabling factor for effective defense and prevention measures.
On a similar topic, the article “ENISA Report on Cybersecurity of Artificial Intelligence warns on the lack of standardization on AI” may be interesting.