
NSA sounds alarm on AI’s cyber security risks

Organisations warned to adopt a 'security-aware' culture

22 April 2024

The rapid adoption of artificial intelligence tools is potentially making them “highly valuable” targets for malicious cyber actors, the National Security Agency warned in a recent report.

Bad actors looking to steal sensitive data or intellectual property may seek to “co-opt” an organisation’s AI systems to achieve their aims, according to the report. The NSA recommended adopting defensive measures such as promoting a “security-aware” culture to minimise the risk of human error and ensuring the organisation’s AI systems are hardened to avoid security gaps and vulnerabilities.

“AI brings unprecedented opportunity, but also can present opportunities for malicious activity,” NSA cyber security director Dave Luber said in a press release.

The report comes amid growing concerns about potential abuses of AI technologies, particularly generative AI, including the Microsoft-backed OpenAI’s wildly popular ChatGPT model.

In February, OpenAI said it terminated the accounts of five state-affiliated threat groups who were using the startup’s large language models to lay the groundwork for malicious hacking efforts. The company acted in collaboration with Microsoft threat researchers. 

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a blog post. “On the defender side, hardening these same security controls from attacks and implementing equally sophisticated monitoring that anticipates and blocks malicious activity is vital.”

The threat activity uncovered by OpenAI and Microsoft could be just a precursor to state-linked and criminal groups rapidly deploying generative AI to strengthen their attack capabilities, cyber security and AI analysts told Cybersecurity Dive.

Malicious actors targeting AI systems may use attack vectors unique to AI, as well as standard techniques used against traditional information technology systems, the NSA said.

“In the end, securing an AI system involves an ongoing process of identifying risks, implementing appropriate mitigations, and monitoring for issues,” the agency’s report said.

The NSA said its guide, while intended for national security purposes, “has application for anyone bringing AI capabilities into a managed environment, especially those in high-threat, high-value environments.” 

The report was developed in partnership with other government agencies across the world, including the US Cybersecurity & Infrastructure Security Agency, as well as the UK National Cyber Security Centre and the Canadian Centre for Cyber Security.

News Wires


TechCentral.ie