AI and cybersecurity: The promise of artificial intelligence

How is artificial intelligence (AI) changing the cybersecurity landscape? Will AI make the cyber world more secure or less secure? I was able to explore these questions in a panel discussion at the “Potsdam Conference for National Cybersecurity 2024” together with Prof. Dr. Sandra Wachter, Dr. Kim Nguyen, and Dr. Sven Herpig. Does AI deliver today what it promises? And what does a future with AI look like?

HPI Security Panel

Cybersecurity is already difficult enough for many companies and institutions. Will the addition of artificial intelligence (AI) now make things even more dangerous for them, or will AI help to better protect IT systems? What do we know? And what risks are we looking at? Economic opportunities and social risks are the focus of both public attention and currently planned legislation. The EU's AI Act expresses many of the hopes and fears associated with AI.

Hopes and fears

We hope that many previously unsolved technical challenges can be overcome. Business and production processes are expected to accelerate, and machines to handle increasingly complex tasks autonomously. AI can also offer unique protection in the military sector and save many lives, for example in the form of AI-supported defense systems such as the Iron Dome.

On the other, darker side of AI are threats such as mass manipulation through deepfakes, sophisticated phishing attacks, or simply the fear of job losses that accompanies every technical innovation. More and more chatbots are replacing service employees, image generators are replacing photographers and graphic designers, text generators are replacing journalists and authors, and generated music is replacing musicians and composers. In almost every profession there is a fear of being affected sooner or later. This even applies to the IT sector, where an abundance of jobs was previously taken for granted. Often these fears are well justified; sometimes they are not.

In cybersecurity, however, it is not yet clear to what extent autonomous AI can create more security and replace either the urgently needed security experts or existing solutions. This applies to attackers and defenders alike. Of course, the unfair distribution of tasks remains: while defenders want (and need) to close as many security gaps as possible, a single vulnerability is enough for attackers to launch a successful attack. Fortunately, even today defenders can fall back on tools and mechanisms that automate a great deal of their work; without this automation, they would be lost. Unfortunately, AI does not yet help well enough, as the ever-increasing damage caused by conventional cyberattacks demonstrates, even though there are supposedly already plenty of AI defenses. At the same time, there is the assumption that attackers are becoming ever more powerful and threatening thanks to AI.

To achieve more cybersecurity, we need to take a closer look. We need a clearer view of the facts.

Where do we stand today?

So far, we know of no technical cyberattacks generated by artificial intelligence. There are currently no relevant, verifiable cases, only theoretically constructed scenarios. This may change, but it is where things stand today. We know of no AI that could currently generate sufficiently sophisticated attacks. What we do know is that phishing is very easy to implement with generative language models, and that the resulting spam and phishing emails appear more skillful, at least anecdotally. Whether this adds to the already considerable damage, however, is not known; the damage is bad enough today even without AI. What we also know is that phishing is only ever the first step toward exploiting a vulnerability.

Member of the Greenbone Board Elmar Geese at the Potsdam Conference for National Cybersecurity at the Hasso Plattner Institute (HPI), picture: Nicole Krüger

How can we protect ourselves?

The good news is that an exploited vulnerability can almost always be found and fixed beforehand, in which case even the best attack created with generative AI would come to nothing. And that is exactly what must be done. Whether I am threatened by a conventional attack today or by an AI in my network the day after tomorrow, a vulnerability in the software or in the security configuration will always be necessary for the attack to succeed. Two strategies then offer the best protection: first, being prepared for the worst case, for example through backups combined with the ability to restore systems quickly; second, searching for the gaps yourself every day and closing them before they can be exploited. A simple rule of thumb: every gap that exists can and will be exploited.
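As a minimal illustration of the second strategy, the sketch below uses Greenbone's python-gvm library to trigger a preconfigured vulnerability scan via the Greenbone Management Protocol (GMP). The socket path, credentials, and task UUID are placeholder assumptions that depend on your own installation; treat this as a sketch, not a drop-in script.

```python
# Minimal sketch: trigger a recurring vulnerability scan via the
# Greenbone Management Protocol (GMP) using the python-gvm library.
# Socket path, credentials and task UUID below are placeholders
# (assumptions) that depend on your own installation.

from gvm.connections import UnixSocketConnection
from gvm.protocols.gmp import Gmp
from gvm.transforms import EtreeTransform

GVMD_SOCKET = "/run/gvmd/gvmd.sock"  # typical default; adjust as needed
TASK_ID = "00000000-0000-0000-0000-000000000000"  # placeholder task UUID

connection = UnixSocketConnection(path=GVMD_SOCKET)

# EtreeTransform parses GMP's XML responses into lxml elements.
with Gmp(connection, transform=EtreeTransform()) as gmp:
    gmp.authenticate("admin", "password")  # placeholder credentials

    # Kick off the preconfigured scan task ...
    gmp.start_task(TASK_ID)

    # ... and list all tasks with their current status.
    for task in gmp.get_tasks().xpath("task"):
        print(task.findtext("name"), "->", task.findtext("status"))
```

Run from cron or a systemd timer, a job like this covers the "look for the gaps every day" half of the strategy; the backup half belongs to your regular operations tooling.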

Role and characteristics of AI

AI systems are themselves very attractive targets for attacks. Just like the internet, they were not designed with “security by design” in mind; they are just software and hardware, like any other target. Unlike AI systems, however, conventional IT systems, whose behavior can be more or less understood with sufficient effort, can be repaired with something like surgical precision: they can be “patched”. This does not work with AI. If a language model does not know what to do, it does not produce a status report or an error message; it “hallucinates”. Hallucinating, however, is just a fancy term for lying, guessing, making things up, or doing strange things. Such an error cannot be patched; it requires, for example, retraining the system, often without ever clearly identifying the cause of the error.

If the error is obvious, if an AI takes dogs for fish, for example, it is at least easy to recognize. But if the AI has to state a probability of whether an anomaly it has detected on an X-ray image is dangerous or harmless, things become more difficult. It is not uncommon for AI products to be discontinued because an error cannot be corrected. A prominent early example was Tay, a chatbot that Microsoft launched unsuccessfully twice and discontinued even faster the second time than the first.

What we can learn from this: lower the bar and focus on simple AI functions, and it will work. That is why many of the AI applications coming onto the market today are here to stay. They are useful little helpers that speed up processes and add convenience. Perhaps they will soon be able to drive cars really well and safely. Or maybe not.

The future with AI

Many AI applications today are anecdotally impressive. For use in critical fields, however, they can only be created with a great deal of effort and specialization. The Iron Dome works only because it is the result of well over ten years of development. Today it recognizes missiles with 99% probability and can shoot them down, rather than inadvertently hitting civilian objects, before they cause any damage. For this reason, AI is mostly used to support existing systems rather than to act autonomously. Even if, as the advertising promises, an AI can formulate emails better than we can (or want to) ourselves, nobody today wants to hand over their email, chat inboxes, and other communication channels to an AI that takes care of the correspondence and only informs us of important matters in summaries.

Will that happen in the near future? Probably not. Will it happen at some point? We don’t know. If that time comes, our bots will write messages to each other, our combat robots will fight our wars against each other, and AI cyberattackers and defenders will compete against each other. When they realize that what they are doing is pointless, they might ask themselves what kind of beings tasked them with it. Then perhaps they will simply stop, set up their own communication lines, leave our galaxy, and leave us behind, helpless. At least we will still have our AI Act and can continue to regulate whatever “weak AI” stays behind.