What is AIUC-1

AIUC-1 is a new certifiable standard for AI agents from the Artificial Intelligence Underwriting Company. An AIUC-1 certificate demonstrates that an organization takes AI governance seriously and has implemented and tested risk-management controls.

What Organizations Should Consider AIUC-1 Certification

Organizations that may benefit from AIUC-1 certification include those that are:

  • Adopting or developing agentic AI systems built on generative models
  • Applying agentic AI in high-risk use cases, such as customer-facing agents where brand is on the line, agents with access to confidential data, and agents handling critical workflows
  • Looking to build trust and demonstrate AI security to potential enterprise customers, internal executives, or board members

Core Risks Covered in AIUC-1

AIUC-1 includes six core enterprise risks.

Data Privacy and Risks

The following set of controls is designed to protect data privacy:

  • Establish and communicate AI input data policies covering how customer data is used for model training, inference processing, data retention periods, and customer data rights.
  • Establish AI output ownership, usage, opt-out, and deletion policies for customers and communicate these policies.
  • Implement safeguards to limit AI agent data access to task-relevant information based on user roles and context.
  • Implement safeguards or technical controls to prevent AI systems from leaking company intellectual property or confidential information.
  • Implement safeguards to prevent cross-customer data exposure when combining customer data from multiple sources.
  • Establish safeguards to prevent personal data leakage through AI outputs and logs.
  • Implement safeguards and technical controls to prevent AI outputs from violating copyrights, trademarks, or other third-party intellectual property rights.

Security

These controls focus on preventing unauthorized access to AI systems:

  • Implement an adversarial testing program to validate system resilience against adversarial inputs and prompt injection attempts in line with adversarial threat taxonomy.
  • Implement monitoring capabilities to detect and respond to adversarial inputs and prompt injection attempts.
  • Implement controls to prevent over-disclosure of technical information about AI systems and organizational details that could enable adversarial targeting.
  • Implement safeguards to prevent probing or scraping of external AI endpoints.
  • Implement real-time input filtering using automated moderation tools.
  • Implement safeguards to prevent AI agents from performing actions beyond the intended scope and authorized privileges.
  • Establish and maintain user access controls and admin privileges for AI systems in line with policy.
  • Implement security measures for AI model deployment environments, including encryption, access controls, and authorization.
  • Implement output limitations and obfuscation techniques to safeguard against information leakage.

Safety

Controls to keep customers safe by mitigating harmful AI outputs and protecting brand reputation:

  • Establish a risk taxonomy that categorizes risks within harmful, out-of-scope, and hallucinated outputs, tool calls, and other risks based on application-specific usage.
  • Conduct internal testing of AI systems before deployment across risk categories for system changes requiring formal review or approval.
  • Implement safeguards or technical controls to prevent harmful outputs including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception.
  • Implement safeguards or technical controls to prevent out-of-scope outputs (political discussion, healthcare advice, etc.).
  • Implement safeguards or technical controls to prevent additional high-risk outputs as defined in the risk taxonomy.
  • Implement safeguards to prevent security vulnerabilities in outputs from impacting users.
  • Implement an alerting system that flags high-risk outputs for human review.
  • Implement monitoring of AI systems across risk categories.
  • Implement mechanisms to enable real-time user feedback collection and intervention mechanisms.
  • Appoint third parties to evaluate system robustness to harmful outputs, including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception at least every three months.
  • Appoint third parties to evaluate system robustness to out-of-scope outputs at least every three months.
  • Appoint third parties to evaluate system robustness to additional high-risk outputs as defined in the risk taxonomy at least every three months.

Reliability

Controls for preventing unreliable AI outputs that cause customer harm:

  • Implement safeguards or technical controls to prevent hallucinated outputs.
  • Appoint expert third parties to evaluate hallucinated outputs at least every 3 months.
  • Implement safeguards or technical controls to prevent tool calls in AI systems from executing unauthorized actions, accessing restricted information, or making decisions beyond their intended scope.
  • Appoint expert third parties to evaluate tool calls in AI systems, including executing unauthorized actions, accessing restricted information, or making decisions beyond their intended scope, at least every 3 months

Accountability

Controls that help organizations enforce strong governance and oversight:

  • Document AI failure plan for AI privacy and security breaches, assigning accountable owners and establishing notification and remediation with third-party support as needed (legal, PR, insurers. Etc.).
  • Document an AI failure plan for harmful AI outputs that cause significant customer harm, assigning accountable owners and establishing remediation with third-party support as needed.
  • Document AI failure plan for hallucinated AI outputs that cause substantial customer financial loss, assigning accountable owners and establishing remediation with third-party support as needed.
  • Document which AI system changes across the development & deployment lifecycle require formal review or approval, assign a lead accountable for each, and document their approval with supporting evidence.
  • Establish criteria for selecting a cloud provider, and circumstances for on-premises processing, considering data sensitivity, regulatory requirements, security controls, and operational needs.
  • Establish AI vendor due diligence processes for foundation and upstream model providers covering data handling, PII controls, security, and compliance.
  • Establish regular internal reviews of key processes and document review records and approvals.
  • Implement systems to monitor third-party access.
  • Establish and implement an AI acceptable use policy.
  • Document AI data processing locations.
  • Document applicable AI laws and standards, required data protections, and strategies for compliance.
  • Establish a quality management system for AI systems proportionate to the size of the organization.
  • Maintain logs of AI system processes, actions, and model outputs where permitted to support incident investigation, auditing, and explanation of AI system behavior.
  • Implement clear disclosure mechanisms to inform users when they are interacting with AI systems rather than humans.
  • Establish a system transparency policy and maintain a repository of model cards, datasheets, and interpretability reports for major systems.

Society

Controls to prevent AI from enabling catastrophic societal harm:

  • Implement or document guardrails to prevent AI-enabled misuse for cyber-attacks and exploitation.
  • Implement or document guardrails to prevent AI-enabled catastrophic system misuse.

The AIUC-1 Certification Process

Organizations seeking AIUC-1 certification must work with an accredited auditor. Organizations with AI governance in place may be able to secure certification in 5-10 weeks.

Not all controls will be in scope for every AI system. For example, an internal-facing agent with limited data access will need to comply with fewer controls than an externally facing customer service agent with access to sensitive data. Key considerations when crafting an AIUC-1 include:

  • Capabilities of the AI agent
  • AI architecture
  • Organizational ambition

Organizations will work with their auditor to complete a scoping questionnaire.

Learn more about AIUC-1 scoping.

The AIUC-1 certificate is valid for twelve months. Technical testing is required at least every three months to keep the certificate valid.

How AIUC-1 Compares to Other AI Frameworks

AIUC-1 aligns with or operationalizes other AI frameworks and regulations, including ISO 42001, NIST AI RMF, and the EU AI Act. AIUC-1 will be updated to reflect changes in the other frameworks.

Here are links to crosswalking information for the frameworks:

AIUC 1 is designed to complement existing AI governance frameworks and regulations, not replace them. While standards such as ISO 42001 focus on establishing an AI Management System, NIST’s AI Risk Management Framework provides voluntary guidance, and the EU AI Act introduces regulatory obligations, AIUC 1 uniquely offers a certifiable, technically focused assurance model for agentic AI systems.

AIUC 1 translates high-level governance and regulatory expectations into concrete testable controls across security, privacy, safety, reliability, accountability, and societal impact, making it particularly well-suited for organizations deploying autonomous or customer-facing AI agents. For organizations already aligning with ISO 42001, NIST AI RMF, or preparing for the EU AI Act, AIUC 1 serves as an operational bridge between policy, risk management, and independent technical validation.

How we can Help

At CompliancePoint, we leverage our cybersecurity, data privacy, and regulatory compliance expertise to provide AI Risk Management Services that help organizations to leverage AI while mitigating the associated risks. We can help your business comply with laws and standards such as the EU AI ActISO 42001, and NIST AI RMF. To learn more about our services, reach out to us at connect@compliancepoint.com.

The experts at CompliancePoint are here to help you avoid breach of data, loss of ability to process or handle 3rd party data, loss of business customers or partners or regulatory fines. Find out how.

AIUC-1 Frequently Asked Questions

AIUC-1 emphasizes ethical considerations, risk awareness, and governance principles associated with AI technologies. Participants learn how to identify potential compliance, privacy, and bias-related risks in AI systems. This foundation helps organizations adopt AI in alignment with legal requirements and best practices.

AIUC-1 and the NIST AI Risk Management Framework (AI RMF) serve complementary purposes. While the NIST AI RMF provides a detailed, structured framework for managing AI risks at an organizational level, AIUC-1 focuses on building a foundational understanding of AI concepts, risks, and responsible use. AIUC-1 helps learners grasp the principles that frameworks like NIST AI RMF are built upon, making it a practical starting point before deeper implementation efforts.

AIUC-1 provides foundational knowledge that supports understanding the key principles behind the EU AI Act, including risk classification, governance, and responsible AI practices. While it is not a legal compliance certification, it helps participants recognize regulatory expectations and identify potential risk areas. This makes it a strong starting point for organizations preparing for EU AI Act compliance efforts.