HITRUST AI Security and AI Risk Management: Which Option Fits Your Assessment?
Artificial intelligence is quickly becoming part of the operational fabric for healthcare organizations, SaaS providers, and business associates. From automation and analytics to customer-facing tools, AI now influences how sensitive data is processed and how decisions are made.
As adoption increases, organizations are facing a common challenge:
How do you demonstrate trust, security, and responsible AI practices to customers and regulators?
HITRUST has expanded its assessment framework to address this challenge by introducing two AI-focused options that can be incorporated into validated assessments. Understanding the difference between these options is key to selecting the right approach for your organization.
HITRUST’s Two AI Paths: Security Assurance vs. Risk Governance
HITRUST’s AI approach is intentionally flexible. Rather than mandating a single AI assessment model, HITRUST offers two distinct paths, each designed to support different assurance goals.
AI Security Assessment and Certification (ai1 / ai2)
The AI Security Assessment is designed for organizations that want validated, security-focused assurance over AI-enabled systems.
This option is added to a HITRUST validated assessment by selecting the “Security for AI Systems” compliance factor in MyCSF and can be paired with:
- e1 or i1 assessments (resulting in an ai1 designation)
- r2 assessments (resulting in an ai2 designation)
Key considerations:
- The AI Security Assessment is not standalone; it is integrated into an existing validated assessment.
- It introduces a defined set of AI-specific security requirements, tailored based on scope.
- Certification outcomes are tied to both the AI security requirements and the underlying e1/i1/r2 assessment results.
- Participation is optional but is particularly valuable when AI capabilities are part of the in-scope environment.
Best fit for:
Organizations that deploy AI-enabled platforms or services and need to provide independent, third-party assurance to customers regarding how AI systems are secured.
AI Risk Management Assessment and Insights (non-certified)
HITRUST’s AI Risk Management option takes a different approach. Rather than focusing on certification, it emphasizes governance, oversight, and lifecycle risk management for AI.
This option aligns with widely recognized guidance, including:
- The NIST AI Risk Management Framework (AI RMF)
- ISO/IEC 23894:2023 for AI risk management
Instead of producing a certification, the output is structured insights and reporting that organizations can use to:
- Assess AI risk maturity
- Strengthen internal governance programs
- Support regulatory readiness
- Communicate responsible AI practices to stakeholders
Best fit for:
Organizations that want visibility into AI risk and governance practices, but are not yet pursuing, or do not require, AI-specific certification.
Choosing the Right Option for Your HITRUST Assessment
When planning an e1, i1, or r2 assessment, the decision to include AI options should be based on scope, maturity, and assurance objectives, not simply the presence of AI tools.
A practical way to evaluate fit:
- Do you have AI-enabled systems in scope? If no, AI options may not be necessary today.
- Do customers or partners expect formal assurance over AI security? If yes, the AI Security Assessment (ai1/ai2) may be appropriate.
- Are you focused on strengthening governance and understanding AI-related risk? If yes, AI Risk Management insights may be the better starting point.
- Do you want both governance visibility and external assurance? Some organizations pursue both options, using AI risk insights to inform governance programs and AI security certification to support customer trust.
AI Does Not Automatically Expand Assessment Scope
One important clarification: AI options are not automatically applied to HITRUST assessments. They are optional, applied intentionally, and scoped based on relevance. This ensures organizations can address AI responsibly without unnecessarily increasing assessment complexity or cost.
Preparing for AI in a HITRUST-Aligned Future
AI expectations from regulators, customers, and partners continue to evolve. HITRUST’s AI options provide a structured way to respond, whether your goal is to demonstrate security assurance, risk governance, or both.
Organizations that evaluate AI applicability early in the assessment lifecycle are best positioned to:
- Control scope
- Streamline evidence collection
- Communicate AI assurance clearly to stakeholders
How CompliancePoint Can Help
CompliancePoint supports organizations in:
- Determining whether AI is in scope for HITRUST assessments
- Selecting the right AI option(s) based on business and regulatory needs
- Integrating AI considerations into e1, i1, and r2 workflows
- Preparing defensible documentation and validation-ready evidence
If you’re planning an upcoming HITRUST assessment and want to understand how AI fits into your assurance strategy, we’re here to help. Finding a credible expert with the appropriate background, expertise, and credentials can be difficult; CompliancePoint’s team brings all three. Reach out to us at connect@compliancepoint.com to learn more about our services.
