Summary: The National Security Agency (NSA) has issued a cybersecurity information sheet (CIS) providing best practices for deploying secure and resilient AI systems, marking the first release from the NSA’s Artificial Intelligence Security Center (AISC).
Threat Actor: None mentioned.
Victim: None mentioned.
Key Points:
- The CIS emphasizes the need for AI systems to be secure by design and offers various best practices, including adopting a zero trust mindset, actively monitoring AI model behavior, and requiring threat models from developers.
- Other recommendations include hardening and updating the IT deployment environment, validating AI systems before deployment, and maintaining awareness of current and emerging threats in the AI field.
- The CIS was developed in collaboration with several cybersecurity agencies, and the AISC plans to develop further guidance on AI security topics.
The National Security Agency (NSA) issued a cybersecurity information sheet (CIS) on Monday to share best practices for deploying secure and resilient AI systems.
The guidance marks the first release from the NSA’s Artificial Intelligence Security Center (AISC), which the agency stood up last year to promote the secure development, integration, and adoption of AI technologies within national security systems (NSS) and the Defense Industrial Base (DIB).
The CIS is geared towards NSS owners and DIB companies that will be deploying AI systems developed by an external entity.
“AI brings unprecedented opportunity, but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis,” NSA Cybersecurity Director Dave Luber said in an April 15 press release.
The CIS stresses that AI systems are software systems, and organizations should therefore prefer systems that are secure by design.
The guidance offers a wide range of best practices, including that organizations adopt a zero trust mindset, actively monitor the AI model’s behavior, and require the primary developer of the AI system to provide a threat model for their system.
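The CIS does not prescribe a specific monitoring mechanism, but one minimal way to picture "actively monitoring the AI model's behavior" is a drift check that compares live inference confidence against a baseline recorded during pre-deployment validation. The sketch below is purely illustrative and not from the guidance; the threshold, baseline value, and function names are all assumptions.

```python
# Illustrative sketch only: a simple behavioral drift check for a deployed model.
# DRIFT_THRESHOLD and the baseline mean are hypothetical values, not NSA guidance.
import statistics

DRIFT_THRESHOLD = 0.15  # hypothetical alerting threshold, tuned per system

def check_drift(recent_scores: list[float], baseline_mean: float) -> bool:
    """Flag drift when the live mean confidence moves more than
    DRIFT_THRESHOLD away from the mean recorded during validation."""
    return abs(statistics.fmean(recent_scores) - baseline_mean) > DRIFT_THRESHOLD

if __name__ == "__main__":
    validation_mean = 0.82                 # stand-in for a stored validation baseline
    live = [0.61, 0.58, 0.64, 0.66, 0.59]  # stand-in for recent inference confidences
    if check_drift(live, validation_mean):
        print("ALERT: model confidence drifted from validation baseline; investigate.")
```

A production system would track richer signals than a single mean (output distributions, input anomalies, per-class rates), but the shape of the check, compare live behavior to a recorded baseline and alert on divergence, is the same.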
Other key recommendations include hardening and updating the IT deployment environment; validating the AI system before deployment; using robust logging, monitoring, and user and entity behavior analytics to identify threats; and maintaining awareness of current and emerging threats, especially in the rapidly evolving AI field.
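As a concrete illustration of "validating the AI system before deployment," an organization might pin a cryptographic digest of a model artifact at acquisition time, verify it before each deployment, and log the outcome for later analysis. This is a hedged sketch rather than a procedure the CIS mandates; the file name, logger setup, and pinned digest (shown here as the SHA-256 of an empty file, a placeholder) are all assumptions.

```python
# Illustrative sketch only: verify a model artifact's SHA-256 digest against a
# pinned value before deployment, and log the result. The pinned digest below is
# a placeholder (SHA-256 of an empty file), not a real model hash.
import hashlib
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("model-validation")

EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks so large model weights need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_artifact(path: str) -> bool:
    digest = sha256_of(path)
    if digest == EXPECTED_SHA256:
        log.info("model artifact %s matched pinned digest", path)
        return True
    log.error("model artifact %s digest mismatch: %s", path, digest)
    return False
```

A real deployment would source the pinned digest from a signed manifest rather than a hard-coded constant, but the streaming hash and the structured log record illustrate the shape of both the validation and the logging recommendations.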
“In the end, securing an AI system involves an ongoing process of identifying risks, implementing appropriate mitigations, and monitoring for issues. By taking the steps outlined in this report to secure the deployment and operation of AI systems, an organization can significantly reduce the risks involved,” the CIS says.
The CIS was developed in partnership with the Cybersecurity and Infrastructure Security Agency, the FBI, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre, and the United Kingdom National Cyber Security Centre.
The AISC – which is part of NSA’s Cybersecurity Collaboration Center – plans to work with global partners to develop a series of guidance on AI security topics as the field grows, “such as on data security, content authenticity, model security, identity management, model testing and red teaming, incident response, and recovery.”
The CIS follows one the NSA released last week with recommendations for maturing data security and enforcing access controls on data in transit and at rest.
Source: https://meritalk.com/articles/nsa-shares-best-practices-for-secure-ai-systems/