Early AI Security Standards: ISO/IEC 42001 & NIST AI RMF

Artificial Intelligence (AI) is a rapidly evolving technology that is expected to have a major impact on our business and personal lives. As adoption of the technology expands, so do security, privacy, and ethical concerns. Standards designed to help organizations secure their AI systems and address those concerns have begun to reach the market, most notably ISO/IEC 42001 and the NIST AI Risk Management Framework (AI RMF).

Here’s a look at the requirements and objectives of both AI security standards.

ISO/IEC 42001

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) jointly published ISO/IEC 42001 in December 2023. It is a certifiable standard that defines requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS), helping organizations mitigate the risks that come with developing and deploying AI. ISO/IEC 42001 applies to organizations of all sizes and industries that develop, provide, or use AI-based products or services.

ISO/IEC 42001 Requirements

ISO/IEC 42001 shares the harmonized structure of other ISO management system standards (ISO 27001, ISO 9001, etc.). Clauses 4-10 detail what is required of an AIMS to achieve certification. Meeting these requirements gives organizations confidence that their AI programs are secure while still allowing them to realize the benefits of the technology.

4. Context of the organization

4.1 Understanding the organization and its context
4.2 Understanding the needs and expectations of interested parties
4.3 Determining the scope of the AI management system
4.4 AI management system

5. Leadership

5.1 Leadership and commitment
5.2 AI policy
5.3 Roles, responsibilities, and authorities

6. Planning

6.1 Actions to address risks and opportunities
6.2 AI objectives and planning to achieve them
6.3 Planning of changes

7. Support

7.1 Resources
7.2 Competence
7.3 Awareness
7.4 Communication
7.5 Documented information

8. Operation

8.1 Operational planning and control
8.2 AI risk assessment
8.3 AI risk treatment
8.4 AI system impact assessment

9. Performance evaluation

9.1 Monitoring, measurement, analysis, and evaluation
9.2 Internal audit
9.3 Management review

10. Improvement

10.1 Continual improvement
10.2 Nonconformity and corrective action

ISO/IEC 42001 contains four annexes that provide detailed guidance on how organizations can comply with the standard.

Annex A: Provides a comprehensive list of the standard’s controls and their objectives.
Annex B: Provides guidance for the implementation of the controls and data management processes.
Annex C: Addresses AI objectives and risk sources.
Annex D: Addresses the use of AI systems across different domains and sectors.

ISO/IEC 42001 Certification Process

The certification process for ISO/IEC 42001 mirrors that of other ISO standards. An accredited third-party certification body audits your AIMS to determine whether it meets the standard's requirements. Certification is valid for three years; to maintain it over that period, the certification body must perform annual surveillance audits.

NIST AI Risk Management Framework

The National Institute of Standards and Technology (NIST) collaborated with public- and private-sector stakeholders to develop the NIST AI RMF. It is a voluntary framework designed to help organizations better manage the risks that the use of AI poses to individuals, organizations, and society.

The NIST AI RMF is meant to be used in conjunction with existing risk management frameworks that organizations may already be using. It’s a living document that users should expect to evolve along with AI technology.

Risks and Potential Harm

The AI RMF creates a path for organizations to reduce threats to civil liberties and other potential negative impacts of AI, while still allowing organizations to harness its potential. The framework identifies the following three categories of potential harm that need to be considered when using AI:

Harm to People

  • Harm to an individual’s rights, liberties, safety, or economic opportunity
  • Discrimination against a community
  • Loss of access to education and democratic participation

Harm to an Organization

  • Damage to business operations
  • Security breaches or financial loss
  • Reputational harm

Harm to an Ecosystem

  • Harm to the global financial system or supply chain
  • Environmental harm

Characteristics of Trustworthy AI Systems

The AI RMF identifies seven characteristics of a trustworthy AI system and provides guidance for achieving each.

  1. Valid and reliable: Validity and reliability for deployed AI systems are often assessed through ongoing testing or monitoring that confirms a system is performing as intended (a brief monitoring sketch follows this list).
  2. Safe: AI systems need to protect the safety of people, property, and the environment.
  3. Secure and resilient: AI systems should withstand unexpected adverse events or unexpected changes in their environment or use.
  4. Accountable and transparent: Transparent AI systems provide appropriate access to information based on a person's role and are accountable for their decisions.
  5. Explainable and interpretable: Explainability and interpretability allow those operating, overseeing, or using an AI system to gain deeper insight into its functionality and trustworthiness, including its outputs.
  6. Privacy-enhanced: Privacy-enhanced AI systems employ practices that safeguard freedom from intrusion, limit observation, and respect a person's consent to the use of personal information.
  7. Fair, with harmful bias managed: AI systems should account for equality and equity by addressing issues such as harmful bias and discrimination.
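
To make the first characteristic concrete, here is a minimal Python sketch of the kind of ongoing monitoring it describes: comparing live accuracy on labeled production samples against a validation-time baseline. The predict callable, the sample format, and the alert margin are illustrative assumptions, not something either framework prescribes.

```python
from dataclasses import dataclass

# Illustrative threshold: alert if accuracy drops more than 5 points
# below the level recorded when the model was validated.
ACCURACY_ALERT_MARGIN = 0.05

@dataclass
class MonitoringResult:
    accuracy: float
    degraded: bool

def check_model_validity(predict, samples, baseline_accuracy):
    """Compare live accuracy on labeled samples against a validation baseline.

    predict: callable mapping one input to a predicted label (hypothetical).
    samples: list of (input, true_label) pairs collected in production.
    baseline_accuracy: accuracy measured when the model was validated.
    """
    correct = sum(1 for x, y in samples if predict(x) == y)
    accuracy = correct / len(samples)
    degraded = accuracy < baseline_accuracy - ACCURACY_ALERT_MARGIN
    return MonitoringResult(accuracy=accuracy, degraded=degraded)

# Example: a trivial stand-in model that labels numbers as "big" or "small".
if __name__ == "__main__":
    model = lambda x: "big" if x > 10 else "small"
    live_samples = [(3, "small"), (15, "big"), (8, "small"), (20, "big")]
    result = check_model_validity(model, live_samples, baseline_accuracy=0.95)
    if result.degraded:
        print(f"ALERT: accuracy {result.accuracy:.2f} below baseline")
    else:
        print(f"OK: accuracy {result.accuracy:.2f} within tolerance")
```

In practice, the baseline, margin, and choice of metric would come from the system's documented validation results rather than hard-coded constants.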

AI RMF Core

The AI RMF Core provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems. The four functions of Govern, Map, Measure, and Manage serve as the foundation of the AI RMF Core. The Govern function applies to all stages of an organization's AI risk management processes and procedures, while the Map, Measure, and Manage functions can be applied in AI system-specific contexts and at specific stages of the AI lifecycle. Each function consists of categories and subcategories.

Govern

The Govern function creates a culture of AI risk management by aligning policies, procedures, organizational principles, resources, and strategic priorities. The Govern categories are:

GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.
GOVERN 2: Accountability structures are in place so the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.
GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.
GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.
GOVERN 5: Processes are in place for robust engagement with relevant AI actors.
GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

Map

The Map function establishes the context in which an AI system will operate, identifies the potential harm it could cause, and maps the associated risks. The Map categories are:

MAP 1: Establish and understand context.
MAP 2: Categorize the AI system.
MAP 3: Understand AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks.
MAP 4: Map risks and benefits for all components of the AI system including third-party software and data.
MAP 5: Characterize impacts to individuals, groups, communities, organizations, and society.

Measure

The Measure function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts; a brief metric sketch follows the list below. The Measure categories are:

MEASURE 1: Identify and apply appropriate methods and metrics.
MEASURE 2: Evaluate AI systems for trustworthy characteristics.
MEASURE 3: Track identified AI risks over time.
MEASURE 4: Gather and assess feedback about the efficacy of measurement.
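
As one illustration of MEASURE 1 and MEASURE 2, the sketch below computes demographic parity difference, a common quantitative fairness metric: the gap in favorable-outcome rates between groups. The data, group labels, and what counts as an acceptable gap are illustrative assumptions; real programs would select metrics suited to their context.

```python
def demographic_parity_difference(outcomes):
    """Absolute gap in favorable-outcome rates between groups.

    outcomes: list of (group, decision) pairs, where decision is
    1 (favorable) or 0 (unfavorable). A gap near 0 suggests similar
    treatment across groups; larger gaps flag potential harmful bias
    for human review.
    """
    tallies = {}
    for group, decision in outcomes:
        hits, total = tallies.get(group, (0, 0))
        tallies[group] = (hits + decision, total + 1)
    rates = {g: hits / total for g, (hits, total) in tallies.items()}
    return max(rates.values()) - min(rates.values())

# Example: loan approvals recorded as (group, approved?) pairs.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_difference(decisions)
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 for this sample
# Recording a metric like this per release also supports MEASURE 3
# (tracking identified AI risks over time).
```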

Manage

The Manage function entails regularly allocating risk-management resources to mapped and measured risks, as defined by the Govern function. The Manage categories are listed below, followed by a brief risk-register sketch:

MANAGE 1: Prioritize, respond to, and manage AI risks based on assessments and other analytical output from the Map and Measure functions.
MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.
MANAGE 3: Manage AI risks and benefits from third-party entities.
MANAGE 4: Regularly document and manage risk treatments, including response and recovery, and communication plans for the identified and measured AI risks.
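
As a sketch of how MANAGE 1 (prioritization) and MANAGE 4 (documented risk treatments) might be operationalized, here is a hypothetical AI risk register entry in Python. The field names, the likelihood-times-impact scoring, and the example risks are illustrative assumptions; neither framework prescribes this structure.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskRecord:
    """One entry in a hypothetical AI risk register.

    A mapped risk is scored for prioritization (MANAGE 1) and its
    treatment decisions are documented over time (MANAGE 4).
    """
    risk_id: str
    description: str
    likelihood: int              # 1 (rare) .. 5 (almost certain)
    impact: int                  # 1 (negligible) .. 5 (severe)
    treatment: str = "undecided" # e.g., mitigate, transfer, avoid, accept
    history: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring used for prioritization.
        return self.likelihood * self.impact

    def record_treatment(self, treatment: str, note: str) -> None:
        # Keep a dated audit trail of each treatment decision.
        self.treatment = treatment
        self.history.append((date.today().isoformat(), treatment, note))

# Example: prioritize the register, then document a treatment decision.
register = [
    AIRiskRecord("R1", "Training data includes unvetted third-party content", 4, 4),
    AIRiskRecord("R2", "Model outputs may leak personal information", 2, 5),
]
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.risk_id, risk.score, risk.description)
register[0].record_treatment("mitigate", "Add provenance checks to the data pipeline")
```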

NIST AI RMF Resources

NIST has produced a series of resources to help organizations better utilize the AI RMF. The AI RMF Playbook provides action items for achieving the framework's desired outcomes, and crosswalks are available to help organizations use the AI RMF alongside other standards. NIST has also published an explainer video introducing the framework.

Benefits of Compliance

Implementing the policies, procedures, and security controls needed to comply with the NIST AI RMF or certify against ISO/IEC 42001 positions your organization to get ahead of future regulations addressing AI security. Both frameworks provide a systematic approach that gives organizations, along with their partners and customers, confidence that the risks and potential harms of AI are being mitigated. Early compliance with ISO/IEC 42001 or the NIST AI RMF also signals to customers and prospects that you take the risks and responsibilities of AI seriously, providing an advantage over competitors.

CompliancePoint has helped businesses in a variety of industries achieve their ISO and NIST goals. To learn more about how we can help your organization become an early adopter of these AI security standards, reach out to us at connect@compliancepoint.com.

Finding a credible expert with the appropriate background, expertise, and credentials can be difficult. CompliancePoint is here to help.