Artificial intelligence as an antidote to cybercrime

AI-generated summary

The rapid advancement of artificial intelligence (AI) presents a double-edged sword, particularly in cybersecurity. While AI has the potential to revolutionize the field positively, there are significant concerns about its misuse. On the harmful side, AI enables cybercriminals to quickly identify vulnerabilities and flaws in systems, accelerating attacks that once required considerable time and resources. This is especially perilous for critical infrastructure such as pipelines, power grids, and water treatment plants, where AI-driven cyberattacks can cause severe physical damage, like fires or explosions, making recovery far more complex than after conventional attacks.

Conversely, AI also serves as a powerful defense mechanism. Agencies like the US National Security Agency (NSA) employ AI tools to detect and thwart cyberattacks before they succeed. By analyzing behavioral patterns, AI can distinguish between legitimate users and hackers masquerading as authorized personnel, enhancing the ability to identify and mitigate threats swiftly. This ethical application of AI in cybersecurity shifts the balance, transforming it from a potential weapon into a vital shield against cybercrime. As cybersecurity expert Soledad Antelada notes, while cybercriminals seek maximum damage with minimal effort, AI can both facilitate and counteract these threats, underscoring the importance of ethical AI use in safeguarding digital infrastructure.

On the one hand, artificial intelligence powers cyberattacks. On the other, it opens new ways to stop the most critical ones.

We are living in a time when the accelerated expansion of artificial intelligence raises certain misgivings. It is a technology destined to transform the world, but we worry about whether it will do so for better or for worse. That is precisely the concern of the ethics of artificial intelligence, a field committed to the good use of this technology.

Its application to cybersecurity sits squarely within this Jedi-style dichotomy between the light side and the dark side, between beneficial and harmful uses of artificial intelligence.

On the one hand, artificial intelligence can multiply the damage of cyberattacks. On the other, it can become the perfect ally for detecting them before they are even carried out. There are already examples of this that seem lifted from a spy movie.

Sabotage that accelerates…

The dark side of using artificial intelligence in cybersecurity has a lot to do with how easily it lets criminals find vulnerabilities and implementation flaws.

In fact, artificial intelligence is very useful for accelerating a type of attack that previously took a great deal of time and resources to carry out. Algorithms can scan thousands of lines of code for bugs that can be exploited.

Once a flaw is detected, it is much easier to sneak in and take control of the chosen target.
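The simplest form of this automated bug hunting can be sketched as pattern matching over source code. Real AI-assisted tools are far more sophisticated than this, but the following minimal Python example, with invented patterns and sample input, illustrates the idea of flagging exploitable constructs at scale:

```python
import re

# Hypothetical patterns for illustration only; real scanners use far
# richer analyses than regular expressions.
RISKY_PATTERNS = {
    r"\beval\(": "arbitrary code execution via eval()",
    r"\bstrcpy\(": "unbounded copy, classic buffer overflow",
    r"\bos\.system\(": "shell command injection risk",
}

def scan_source(code: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs for risky constructs."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
for lineno, warning in scan_source(sample):
    print(f"line {lineno}: {warning}")
```

The point of automating this step is scale: what a human reviewer does over days, a scanner, and even more so a learned model, does over an entire codebase in minutes.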

… and aggravate

This possibility is especially dangerous in the case of critical infrastructure such as pipelines, water treatment plants, ports, electricity distribution networks or control centers.

To this must be added an additional risk. While a conventional cyberattack may take a computer system down for only a few hours, artificial intelligence tools can make it much easier for cyberattacks to inflict physical damage on the infrastructure itself, which is far harder to repair.

For example, these attacks can cause an engine to overheat until it triggers a fire or an explosion. An MIT lab has already successfully simulated attacks of this kind.

In addition, cyberattacks powered by artificial intelligence are so fast and sophisticated that, according to experts, they are harder to detect and mitigate.

The Cyberjedi Arrive

The NSA itself, the US National Security Agency, acknowledges that it already uses this technology to detect attempted attacks on critical infrastructure in the country.

Does all this mean that artificial intelligence is doomed to be Darth Vader's plaything? No, far from it. The ethics of artificial intelligence is working on exactly this problem. In fact, one of its goals is to turn these tools into an antidote to cybercrime.

The agency confirms that both foreign-backed hackers and independent criminals are already using artificial intelligence in their attacks. They also target critical infrastructure so they can cause disruptions whenever they choose, and they do not rely on conventional malware; instead, they look for vulnerabilities that let them pose as authorized users.

However, artificial intelligence, machine learning, and big data also help the NSA detect malicious activity and stop it before it succeeds. The reason: accounts operated by hackers posing as authorized users do not behave the way a normal commercial operator would.

An ethical approach

The difference in behavior between criminals and real users is a huge advantage for tools that are very skilled at detecting patterns.

In fact, they are far better than conventional techniques at catching anomalous behavior and, with it, users who are not who they claim to be. It is an ethical approach to artificial intelligence that can take defense against cyberattacks to a new dimension.
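The intuition behind this kind of behavioral detection can be sketched with a toy example. The account baseline, numbers, and threshold below are invented for illustration; real systems learn far richer behavioral models. The idea is simply that a compromised account's activity deviates sharply from the legitimate user's history:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the account's historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Invented baseline: a legitimate operator moves roughly 40-60 MB per session.
baseline = [42.0, 55.0, 48.0, 51.0, 44.0, 58.0, 47.0]
print(is_anomalous(baseline, 52.0))   # typical session, not flagged
print(is_anomalous(baseline, 900.0))  # exfiltration-sized transfer, flagged
```

A simple deviation score like this already separates the two sessions; production systems extend the same principle to login times, source addresses, command sequences, and many other behavioral signals at once.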

This is a perfect time to recall the words of a Jedi lady of cybersecurity, Soledad Antelada, a Security Technical Program Manager at Google. She said them on the Innoverse podcast of the Bankinter Innovation Foundation: "cybercriminals are very lazy, their goal is to cause maximum damage with minimum effort".

Under this philosophy, artificial intelligence can help them make their attacks faster and more sophisticated with minimal effort. But, fortunately, it can also stop them in their tracks.