NIST Releases Four Draft Publications Focused on AI Security

The National Institute of Standards and Technology (NIST) released four draft publications designed to help organizations improve the safety, security, and trustworthiness of artificial intelligence (AI) systems. The new guidance from NIST follows President Biden’s 2023 Executive Order on the safe, secure, and trustworthy development of AI and the release of the NIST AI Risk Management Framework (AI RMF) in January 2023.

The new publications focus on multiple aspects of AI technology, including managing the risks of generative AI, transparency in digital content created or altered by AI, and developing global AI standards.

All the new NIST publications are initial public drafts, and NIST is soliciting public comments on each through June 2, 2024. Instructions for submitting comments can be found in the respective publications, which are linked in the descriptions below.

AI RMF Generative AI Profile

The AI RMF Generative AI Profile (NIST AI 600-1) is intended to be a companion resource to the AI RMF. It was designed to help organizations identify and manage the unique risks created by generative AI, the technology behind chatbots and tools that create images and video from text prompts.

The publication identifies the following risks as unique to or exacerbated by generative AI:

  • Easier access to data on chemical, biological, radiological, or nuclear weapons, or other dangerous biological materials
  • The production of false content
  • The production of violent, inciting, radicalizing, and threatening content
  • Reduced security of personal data
  • Environmental damage
  • Bias or deceptive behaviors from AI systems
  • The spread of disinformation or misinformation
  • Increased vulnerability to cyber-attacks, including hacking, phishing, and malware
  • Threats to the integrity and security of intellectual property
  • The production of obscene content
  • Public exposure to toxic or hate speech
  • Homogenization of data inputs that reduces the quality of outputs
  • Non-transparent or untraceable integration of upstream third-party components

Secure Software Development Practices for Generative AI and Dual-Use Foundation Models

NIST Special Publication (SP) 800-218A is a companion to the Secure Software Development Framework (SSDF) (SP 800-218). While the SSDF focuses on the security of software code, this new resource expands it to address the risk that malicious training data could negatively impact generative AI systems. It provides guidance on protecting the quality of training data and the data collection process.
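
To make the training-data concern concrete, below is a minimal Python sketch of one common safeguard: verifying collected training files against a manifest of cryptographic digests recorded at collection time, so tampering after collection becomes detectable. This is an illustrative example under assumed conventions (the JSON manifest format and file layout are hypothetical), not guidance drawn verbatim from SP 800-218A.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: str, manifest_path: str) -> list[str]:
    """Return the names of files whose digests differ from the manifest.

    The manifest is assumed (hypothetically) to be a JSON object mapping
    relative file names to SHA-256 digests recorded when the dataset was
    collected.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    root = Path(data_dir)
    return [
        name for name, expected in manifest.items()
        if sha256_of(root / name) != expected
    ]

if __name__ == "__main__":
    suspect = verify_training_data("training_data", "manifest.json")
    if suspect:
        raise SystemExit(f"Possible tampering detected: {suspect}")
    print("All training files match their recorded digests.")
```

Digest verification only detects changes made after collection; it says nothing about whether the collected data was trustworthy in the first place, which is why the publication also addresses the data collection process itself.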

Reducing Risks Posed by Synthetic Content

The Reducing Risks Posed by Synthetic Content publication (NIST AI 100-4) focuses on the potential harms and risks of synthetic content, which is content created or altered by AI. The publication provides guidance for detecting, authenticating, and labeling synthetic content. Methods explored in depth include digital watermarking, metadata recording (metadata can provide information about a piece of content’s origin), and strategies for identifying AI-generated images, video, text, and audio.
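
As a simple illustration of the metadata-recording idea, the Python sketch below writes a provenance “sidecar” file for a generated asset, binding origin information to the exact content via a SHA-256 hash. The sidecar convention and field names are hypothetical stand-ins; NIST AI 100-4 discusses standardized provenance and watermarking techniques rather than this ad hoc format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_provenance(asset_path: str, generator: str, model: str) -> Path:
    """Write a JSON sidecar describing how an asset was produced.

    All field names here are hypothetical; a real deployment would use a
    standardized provenance format instead of an ad hoc sidecar.
    """
    asset = Path(asset_path)
    record = {
        "asset": asset.name,
        # Hash binds this record to the exact bytes of the content.
        "sha256": hashlib.sha256(asset.read_bytes()).hexdigest(),
        "generator": generator,  # tool that created or altered the content
        "model": model,          # model used, if any
        "created": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    sidecar = asset.with_name(asset.name + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

if __name__ == "__main__":
    Path("sample.png").write_bytes(b"stand-in image bytes")  # demo asset
    print(record_provenance("sample.png", generator="demo-tool", model="demo-model"))
```

A sidecar like this only demonstrates the concept: because a separate metadata file can be stripped or replaced along with the content, the publication also examines more robust approaches such as embedded watermarks.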

A Plan for Global Engagement on AI Standards

A Plan for Global Engagement on AI Standards (NIST AI 100-5) is designed to drive the worldwide development of AI standards and to promote international cooperation, coordination, and information sharing. Public feedback is requested on issues critical to AI standardization. Topics the document identifies as “urgently needed and ready for standardization” include:

  • Terminology and taxonomy
  • Testing, evaluation, verification, and validation
  • Mechanisms for enhancing awareness and transparency about the origins of digital content
  • Risk-based management of AI systems
  • Security
  • Transparency among AI actors about system and data characteristics

NIST GenAI

Alongside the four draft publications, NIST also announced NIST GenAI, an umbrella program that supports research in generative AI by providing a platform for testing and evaluating the technology. These efforts will inform the work of the U.S. AI Safety Institute at NIST.

The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies. These evaluations will help identify strategies for ensuring digital content is used responsibly. Objectives of the NIST GenAI evaluations include:

  • Evolving benchmark dataset creation
  • Facilitating the development of content authenticity detection technologies for different modalities (text, audio, image, video, code)
  • Conducting a comparative analysis using relevant metrics
  • Promoting the development of technologies for identifying the source of fake or misleading information

Registration for participation in the pilot evaluation opens in May 2024. The pilot will seek to understand how human-produced content differs from synthetic content.
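
To picture what a comparative analysis with relevant metrics could look like, here is a minimal Python sketch that scores a synthetic-content detector on labeled human and AI-generated samples using precision, recall, and F1. The detector interface, the toy data, and the metric choices are assumptions for illustration only, not part of the NIST GenAI evaluation design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    text: str
    is_synthetic: bool  # ground-truth label

def score_detector(detector: Callable[[str], bool],
                   samples: list[Sample]) -> dict[str, float]:
    """Compute precision, recall, and F1 for a synthetic-content detector.

    `detector` is any callable that returns True when it judges the input
    to be AI-generated; the interface is a hypothetical stand-in.
    """
    tp = fp = fn = 0
    for s in samples:
        predicted = detector(s.text)
        if predicted and s.is_synthetic:
            tp += 1
        elif predicted and not s.is_synthetic:
            fp += 1
        elif not predicted and s.is_synthetic:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    # Toy detector and data, purely illustrative.
    naive_detector = lambda text: "as an ai" in text.lower()
    samples = [
        Sample("As an AI language model, I cannot browse the web.", True),
        Sample("Meeting notes from Tuesday's standup.", False),
        Sample("The quarterly report is attached for review.", False),
    ]
    print(score_detector(naive_detector, samples))
```

Scoring different detectors with the same harness and data is what makes the analysis comparative; in practice an evaluation like NIST GenAI’s would use far larger, multimodal benchmark datasets.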

CompliancePoint has helped businesses in various industries achieve compliance with the NIST standards that fit their needs. To learn more about our services, reach out to us at connect@compliancepoint.com.

Finding a credible expert with the appropriate background, expertise, and credentials can be difficult. CompliancePoint is here to help.