
Why the cyber security industry isn’t ready for AI

Most organisations unprepared for AI attacks, RSA Conference 2024 hears

25 June 2024

AI isn’t new to cyber security – most automated security tools rely on AI and machine learning in some capacity – but generative AI has everyone talking and worried. 

If cyber security professionals have yet to address the security implications of generative AI, they are already behind.

“The train has already left the station,” said Patrick Harr, CEO of SlashNext, in a conversation at RSA Conference 2024 in San Francisco. 

AI-generated threats have already impacted three-quarters of organisations, yet 60% admitted they aren’t prepared to handle AI-based attacks, according to a Darktrace study.

AI-powered cyber attacks are exposing gaps in cyber security talent. Organisations are already concerned about the skills gap, especially in areas such as cloud computing, zero trust implementation, and AI/ML.

With the growing threat AI poses, cyber security teams no longer have the luxury of waiting a few years to fill those talent gaps, Clar Rosso, CEO of ISC2, told an RSAC audience.

Right now, 41% of cyber security professionals have little to no experience securing AI, and 21% said they don’t know enough about AI to mitigate concerns, according to ISC2 research.

It’s no wonder, then, that these same professionals said AI would be the industry’s biggest challenge by 2025.

Why the industry isn’t ready… yet

Organisations have used AI to detect cyber threats for years. But what has changed the conversation is generative AI. 

For the first time, thinking about AI moves beyond the corporate network and beyond the threat actor; it now includes the customer. 

As organisations rely on AI for consumer interaction through tools like chatbots, security teams have to rethink detection and incident response to account for interactions between AI and third-party end users.

The problem is governance around generative AI. Cyber security teams – and organisations overall – lack a clear understanding of what data AI models are trained on, who has access to those models, and how AI fits into compliance.

In the past, if a third party asked for company information that might be deemed sensitive, no one would have given it out; it would have been a potential security risk. Now that information can be built into an AI model’s responses, yet who is responsible for governing it remains undefined.

As cyber security teams focus on how to thwart threat actors, they are missing the risks around the data they are sharing willingly.

“From a security standpoint, to safely adopt a technology, we need to understand what the ML model is, how is it connected to the data, is it pretrained, is it continuously learning, how do you drive importance?” said Nicole Carignan, VP of strategic cyber AI at Darktrace, during a conversation at RSAC. 

Building the security team’s expertise

It’s important to remember that generative AI is only one type of AI and, yes, its use cases are finite. Knowing what the AI tools are good at will help security teams begin to build skills and tools to address the AI threat landscape.

However, organisations need to be realistic. The skills gap isn’t going to magically shrink in two or five years just because the need is there. 

As the security team catches up on the skills it needs, managed service providers can step in. The benefit of using an MSP to manage AI security is the ability to see beyond a single organisation’s network: MSPs can observe how AI threats play out across many different environments.

But organisations will still want to train their internal AI systems. In this situation, it is best for the security team to start in a sandbox using synthetic data, said Narayana Pappu, CEO at Zendata. This will allow security practitioners to test their AI systems with safe data. 
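Pappu didn’t spell out an implementation, but a minimal sketch of the sandbox-with-synthetic-data idea might look like the following, assuming a Python environment with scikit-learn available. The login-event features and the IsolationForest detector are hypothetical illustrations of the technique, not a description of Zendata’s actual tooling.

```python
# A minimal sketch of the sandbox idea: fabricate synthetic "login event"
# records (no real user data) and use them to exercise an anomaly-detection
# model before it ever touches production data. The feature set and the
# IsolationForest detector are illustrative assumptions, not Zendata tooling.
import random

from sklearn.ensemble import IsolationForest

random.seed(42)

def synthetic_login_events(n: int) -> list[list[float]]:
    """Generate fake login events: [hour_of_day, failed_attempts, bytes_out]."""
    events = []
    for _ in range(n):
        events.append([
            random.gauss(13, 3) % 24,                          # working-hours logins
            random.choices([0, 1, 5], weights=[90, 8, 2])[0],  # mostly clean attempts
            abs(random.gauss(2_000, 800)),                     # outbound bytes
        ])
    return events

# Train the detector entirely on synthetic data inside the sandbox.
train = synthetic_login_events(1_000)
detector = IsolationForest(contamination=0.02, random_state=0).fit(train)

# Probe it with one deliberately suspicious event (3 a.m., repeated failures,
# an exfiltration-sized transfer) and one ordinary event.
probe = [[3.0, 5, 50_000], [13.0, 0, 2_100]]
print(detector.predict(probe))  # -1 flags an anomaly, 1 looks normal
```

The point of the exercise is that the model only ever sees fabricated records, so practitioners can probe its behaviour – and its failure modes – without exposing anything sensitive.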

No matter the skills in-house, managing AI threats will eventually come down to how AI is used in security toolkits. Security professionals will need to rely on AI to help implement basic security hygiene practices and add layers of governance to ensure compliance requirements are met.

“We still have a lot to learn about AI. It’s our job to educate ourselves,” said Rosso.
