Artificial Intelligence

Security concerns creep into generative AI adoption

Developers concerned AI's offensive capabilities have yet to be countered

1 April 2024

Generative AI lures enterprise leaders with potential advantages, such as expanding data analytics, speeding up work and reducing administrative burdens. But cyber security leaders are wary the novel tech can introduce new problems in an already precarious environment.

As vendors and CEOs push for CIOs to embark on swift implementation, cyber security pros are finding themselves in a familiar position – keep up or get pushed out.

“The CISOs that I talk with are more like holding up their hand saying, ‘Wait, stop, this is moving too fast,’ and historically cybersecurity people have always been the ones to question whether a new technology is ready for prime time,” said Ed Skoudis, president of the SANS Technology Institute.

 


Despite concerns, CISOs are eager to join CIOs in formulating plans and strategies focused on the technology, a potential net positive for the organisation. 

CIOs with the knowledge that AI platforms have flaws will be better off than those in blissful ignorance. Businesses can proactively address vulnerabilities by deploying automated tools to scan models, keeping cyber professionals in the loop and searching for signs of malicious activity. 

Cyber expertise is becoming especially critical as the menu of generative AI procurement options expands. 

CIOs can leverage platforms like Hugging Face, Vertex AI, Bedrock and others to build on top of AI models and train them with internal data. Teams could also go straight to off-the-shelf tools like GitHub Copilot, ChatGPT Enterprise or Gemini. Customised models from OpenAI’s GPT store, plugins and other vendors bring even more options.

As more generative AI platforms and tools connect to proprietary, internal data, CIOs that lean on their cyber counterparts could uncover areas of improvement in policies and plans – as well as vulnerabilities in tools and models – that otherwise wouldn’t have been identified. 

The generative AI ecosystem

There’s an evolving conversation in cyber security over whether AI and generative AI will benefit offence or defence more.

“I do think there’s potential for it helping defense in the next year or two or three, but right now it’s helping the offense much, much more,” Skoudis told CIO Dive. 

In recent weeks, multiple reports have provided cautionary tales for enterprises, from AI worms that can spread and steal data to API vulnerabilities. 

JFrog, a DevOps platform provider, said it had uncovered around 100 malicious models on Hugging Face. Hugging Face mostly serves developers and data scientists as an open source platform, but as generative AI hype seeped into budgets, the company has moved closer to enterprise IT through partnerships connecting it to Amazon, Google Cloud and IBM solutions. Hugging Face did not respond to a request for comment.

“One of the potential threats is code execution, which means that a malicious actor can run arbitrary code on the machine that loads or runs the model,” David Cohen, senior security researcher at JFrog, said in a February blog post. “This can lead to data breaches, system compromise or other malicious actions.”
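
To see why simply loading a model can be dangerous, consider the sketch below. It is a minimal illustration, not JFrog's actual finding: PyTorch's legacy .bin format is a Python pickle, and unpickling invokes a class's __reduce__ method. The class and file names here are hypothetical.

```python
import os
import pickle

class MaliciousModel:
    def __reduce__(self):
        # Whatever this returns is called at load time. Here a
        # harmless echo stands in for the attacker's payload.
        return (os.system, ("echo arbitrary code ran at load time",))

# "Attacker" side: publish a poisoned artifact.
with open("model.bin", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# "Victim" side: an innocent-looking load executes the payload.
with open("model.bin", "rb") as f:
    pickle.load(f)
```

Pickle-free formats such as safetensors avoid this class of attack entirely, which is one reason scanning tools flag pickle-based artifacts before they are loaded.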

More recently, Salt Security identified security flaws within ChatGPT plugins that allowed third-party access to accounts and sensitive data, according to a report published last week. ChatGPT plugins let the chatbot perform tasks on behalf of users, such as retrieving data from Google Drive or committing code to GitHub repositories.

While in some cases beneficial, these plugins also introduce a new attack vector, enabling bad actors to gain control of an organisation’s account on a third-party website or to access sensitive information and data stored within third-party applications, Salt Security research found.

Aviad Carmel, security researcher at Salt Security and one of the leads on the plugin research, said the team’s analysis found instances where an organisation could connect a plugin to the GitHub account holding its source code, after which an attacker could gain access via prompting.

“That would be devastating and that’s actually the impact,” Yaniv Balmas, VP of research at Salt Security, said. “The conclusions of our research are very much relevant to every other generative AI ecosystem.”
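
The class of flaw Salt Security describes typically comes down to a plugin install or login flow that fails to bind the authorisation callback to the user who started it. The sketch below is a minimal, hypothetical illustration of that binding, with all names and URLs invented; it is not Salt Security's actual proof of concept.

```python
import secrets

pending: dict[str, str] = {}  # state token -> user who started the flow

def start_plugin_install(user_id: str) -> str:
    # Issue an unguessable, single-use state token tied to this user.
    state = secrets.token_urlsafe(32)
    pending[state] = user_id
    return f"https://auth.example.com/authorize?state={state}"

def handle_callback(state: str, user_id: str) -> bool:
    # Reject callbacks whose state is missing, already used, or was
    # issued to a different user. Skipping this check is what lets
    # an attacker splice their own flow into a victim's session.
    return pending.pop(state, None) == user_id
```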

Security in software development

The cyber security conversation should span the generative AI lifecycle, from procurement to use cases. Companies interested in speeding up software development with generative AI can start by taking a look at the training sets that power coding assistants. 

“If you’re a developer and you’re writing software, the sources that these LLMs are using to predict what type of software you want to create based on a prompt are pulled, for the most part, from public documentation like Stack Overflow and blogs online talking about programming,” Randall Degges, head of developer relations and community at Snyk, said.

A lot of the coding help on the internet is not the best example of production-ready, secure code, Degges said, and AI coding tools powered by internal data will only reinforce the existing standard.
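
As a hypothetical example of the pattern Degges describes, string formatting in SQL queries is everywhere in public tutorials, and an assistant trained on them can reproduce the same flaw. The snippet below contrasts that style with the parameterised alternative.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: the input is spliced into the SQL text itself.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterised: the input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user_unsafe(conn, "x' OR '1'='1"))  # injection: returns every row
print(find_user_safe(conn, "x' OR '1'='1"))    # returns nothing
```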

As more companies add coding tools to their tech stacks, monitoring those tools to ensure results meet expectations is key.

“If you have a code base with a lot of issues, stylistic problems, when you ask generative assistants to add additional code to it, a lot of the time, it produces equally bad code because it has all that context that it’s using as the basis,” Degges said. 

On the other hand, CIOs that give generative coding tools a good code base to work from will find their teams’ generated code improves when using the same tools, according to Snyk’s research.

Tech chiefs can protect businesses by having automated and human reviews of generated code. More than half of organisations found security issues with AI-generated code sometimes or frequently, according to a Snyk survey published in January. Nearly nine in 10 developers are concerned about the security implications of using AI coding tools.
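
One way to operationalise such reviews is a build gate that scans new code before it can merge. The sketch below assumes a Python code base and uses Bandit, one example of an open source security scanner; the "src/" path is hypothetical.

```python
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src/"],  # recursively scan the source tree
    capture_output=True,
    text=True,
)
print(result.stdout)

# Bandit exits non-zero when it reports issues; propagating that
# code fails the pipeline so a human reviews the flagged code.
sys.exit(result.returncode)
```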

“It’s really hard to build production-secure code, which is why engineering is such an in-demand thing,” Degges said. App developers and programmers – a category that includes software engineers, quality assurance testers and database developers – had the highest year-on-year boost in annual compensation among IT roles, increasing by around 47%, according to Skillsoft’s IT Skills and Salary report in December.

“In the last year and a half or so, every developer who’s staying up to date with this stuff, their workflows have probably changed three or four times just because the tooling has advanced, the ways you interact have changed and the level of usefulness has dramatically changed,” Degges said.

News Wires
