Not inclusive of risks introduced by simply using AI
Behaviors of end-users of AI technologies can intentionally or unintentionally lead to security incidents, just as in traditional IT. When members of the organization’s workforce leverage AI in the execution of their duties, controls in the AI usage layer (e.g., training, acceptable use policies) should be implemented to help ensure that AI is being used appropriately. Requirements addressing AI usage risks are being added to v12 of the HITRUST CSF (ETA H2 2025) so that they can potentially be included in all HITRUST assessments (not just assessments performed by AI system deployers).
Focused on a key AI risk (security), but not every AI risk
This certification focuses on mitigating the AI security threats that make up the cybersecurity risk accompanying the deployment of AI within an organization. Cybersecurity risk is one of many risks discussed in AI risk management frameworks such as the NIST AI RMF and ISO/IEC 23894:2023. AI risks that are peers to cybersecurity include those dealing with AI ethics (such as fairness and avoidance of detrimental bias), AI privacy (such as consent for using data to train AI models), and AI safety (i.e., ensuring the AI system does not harm individuals). HITRUST’s AI Risk Management Assessment and Insights Report is designed to help organizations assess and report on the larger AI risk management problem.
Focused on a key part of Trustworthy and Responsible AI, but not all of it
Trustworthy and Responsible AI is a collection of principles that help guide the creation, deployment, and use of AI, considering the broader societal impact of AI systems. Pillars of Trustworthy and Responsible AI include explainability, predictability, bias and fairness, safety, transparency, privacy, inclusiveness, accountability… and security. This assessment and certification aim to help organizations deploying AI nail the security pillar and prove that they’ve done so in a reliable and consistent way.
A complement to, not a complete compliance assessment for, the EU AI Act
The following is not intended to provide legal advice.
The aim of the EU AI Act is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.
The majority of the EU AI Act’s obligations fall on high-risk AI systems, which are subject to strict requirements before they can be put on the market, including:
- adequate risk assessment and mitigation systems
- high quality of the datasets feeding the system to minimize risks and discriminatory outcomes
- logging of activity to ensure traceability of results
- detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance
- clear and adequate information to the deployer
- appropriate human oversight measures to minimize risk
- a high level of robustness, security and accuracy
Some, but not all, of these obligations are security-related. For AI security risks specifically, most of these obligations are touched upon to some degree within the HITRUST AI Cybersecurity Certification assessment. However, this isn’t the case for the other focus areas of the EU AI Act (including respect for fundamental rights, safety, and ethics).
It is also important to note that the HITRUST AI Cybersecurity Certification is being released before the EU AI Act’s granular security guidance (ETA H2 2025). HITRUST has taken care to align this certification as closely with the EU AI Act’s security-specific expectations as is possible in the absence of that guidance. HITRUST will continue to monitor the development of additional EU AI Act security requirements and adjust this assessment and certification accordingly if needed.
A complement to, not a replacement for, ISO/IEC 42001:2023
See this page for discussion of how this assessment and certification pair well with ISO/IEC 42001:2023.
Not exhaustive
Although this assessment and certification intend to support organizations in demonstrating the strength of cybersecurity protections for deployed AI systems, they are not exhaustive and do not cover every use case or obligation given the rapidly changing AI technical, legal, and regulatory environment. While using this assessment and certification, organizations should extend cybersecurity, governance, and risk management practices beyond the scope of these requirements as needed for their use case or jurisdiction.