Search
About the HITRUST AI Security Certification
About the HITRUST AI Security Certification
*This HITRUST AI Security Certification is a key component of HITRUST’s larger AI Assurance Program. Key aspects of this assessment and certification at a glance: No. Consideration Answer at-a-glance 1 What problem does this certification help…
Threat modeling (What are we doing about it?)
AI security requirements included » TOPIC: AI security threat management » Threat modeling (What are we doing about it?)
HITRUST CSF requirement statement [?] (17.03bAISecOrganizational.5) The organization performs threat modeling for the AI system to (1) evaluate its exposure to identified AI security threats, (2) identify countermeasures currently in place to mitigate those threats,…
Limit the release of technical info about the AI system
AI security requirements included » TOPIC: Access to the AI system » Limit the release of technical info about the AI system
HITRUST CSF requirement statement [?] (19.09zAISecOrganizational.1) The organization limits the release of technical AI project details, including specific information on the (1) technical architecture of the overall AI system; (2) datasets used for training, testing,…
TOPIC: AI security threat management
AI security requirements included » TOPIC: AI security threat management
This topic includes the following: Identifying security threats to the AI system: What can go wrong, and where? Threat modeling: What are we doing about it? Security evaluations (e.g., AI red teaming): Are the countermeasures working?
TOPIC: Access to the AI system
AI security requirements included » TOPIC: Access to the AI system
This topic includes the following: Limit the release of technical info about the AI system Model Rate Limiting / Throttling GenAI model least privilege Restrict access to data used for AI Restrict access to AI models Restrict access to interact with the AI…
Crosswalks to other sources of AI guidance
Crosswalks to other sources of AI guidance
The pages dedicated to each AI security requirement in this specification include detailed crosswalks to various AI authoritative sources. Additionally, HITRUST has prepared crosswalks to and commentary on the following additional AI sources: ISO/IEC 23894:…
What problem does this certification help address?
About the HITRUST AI Security Certification » What problem does this certification help address?
*The new HITRUST AI Security Certification proactively addresses questions and concerns over AI security within deployed AI systems, which continue to mount in the third-party risk management space. AI changes the cybersecurity threat landscape Using any new…
What types of AI can certify?
About the HITRUST AI Security Certification » What types of AI can certify?
Deployed applications leveraging any or all of the following (very) broad types of AI can be included in the scope of the HITRUST AI Security Certification: AI type Also known as Description Rule-based AI heuristic models, traditional AI, expert systems, symbolic…
START HERE
START HERE
If you are interested in… Where to go An overview of this new certification Go here The 44 AI security requirements themselves Go here The assessment process, and assessment scoring Go here to visit the HITRUST Assessment…
ISO/IEC 23894:2023
Crosswalks to other sources of AI guidance » ISO/IEC 23894:2023
ISO/IEC 23894:2023 provides guidance on AI risk management. It is not a security-focused standard, but its guidance slightly overlaps with a small number of HITRUST CSF requirements included in the HITRUST AI Security Assessment and Certification given that security is…
Excessive agency
AI security threats considered » TOPIC: Threats inherent to language models » Excessive agency
Description: Generative AI systems may undertake actions outside of the developer intent, organizational policy, and/or legislative, regulatory, and contractual requirements, leading to unintended consequences. This issue is facilitated by excessive permissions,…
Added terms
AI updates to HITRUST’s glossary » Added terms
Term Definition Source Author(s) and/or Editor(s) adversarial examples Modified testing samples which induce misclassification of a machine learning model at deployment time NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and…
Model poisoning
AI security threats considered » TOPIC: Poisoning attacks » Model poisoning
Description: Model poisoning attacks attempt to directly modify the trained AI model to inject malicious functionality into the model. Once trained, a model is often just a file residing on a server. Attackers can alter the model file or replace it entirely with a…
Denial of AI service
AI security threats considered » TOPIC: Availability attacks » Denial of AI service
Description: Attacks aiming to disrupt the availability or functionality of the AI system by overwhelming it with a flood of requests for the purpose of degrading or shutting down the service. Several techniques can be employed here, including instructing the model to…
Compromised 3rd-party training datasets
AI security threats considered » TOPIC: Supply chain attacks » Compromised 3rd-party training datasets
Description: Adversaries may poison training data and publish it to a public location. The poisoned dataset may be a novel dataset or a poisoned variant of an existing open-source dataset. This data may be introduced to a victim system via supply chain compromise.…
Sensitive information disclosed in output
AI security threats considered » TOPIC: Threats inherent to language models » Sensitive information disclosed in output
Description: Without proper guardrails, generative AI outputs can contain confidential and/or sensitive information included in the model’s training dataset, RAG data sources, or data residing in data sources that the AI system is connected to (e.g., through…
Prompt injection
AI security threats considered » TOPIC: Input-based attacks » Prompt injection
Description: When an adversary crafts malicious user prompts as generative AI inputs that cause the AI system to act in unintended ways. These “prompt injections” are often designed to cause the model to bypass its original instructions and follow the…
Data poisoning
AI security threats considered » TOPIC: Poisoning attacks » Data poisoning
Description: Poisoning attacks in which a part of the training data is under the control of the adversary. Source: NIST AI 100-2 Glossary Impact: Affects the integrity of model outputs, decisions, or behaviors. Applies to which types of AI models? Data-driven models…
Model card publication (for model builders)
AI security requirements included » TOPIC: Documenting and inventorying AI systems » Model card publication (for model builders)
HITRUST CSF requirement statement [?] (06.10hAISecSystem.4) The organization publishes model cards for the AI models it produces, which (minimally) include the following elements: (1) model details; (2) intended use; (3) training details (e.g., data, methodology);…
Evasion (including adversarial examples)
AI security threats considered » TOPIC: Input-based attacks » Evasion (including adversarial examples)
Description: Evasion attacks consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be…
Model inversion
AI security threats considered » TOPIC: Input-based attacks » Model inversion
Description: A class of attacks that seeks to reconstruct class representatives from the training data of an AI model, which results in the generation of semantically similar data rather than direct reconstruction of the data (i.e., extraction). (Source: NIST AI…
Compromised 3rd-party models or code
AI security threats considered » TOPIC: Supply chain attacks » Compromised 3rd-party models or code
Description: Attacks that take advantage of compromised or vulnerable ML software packages and third-party pre-trained models used for fine tuning, plugins or extensions, including outdated or deprecated models or components. Impact: Use of models or AI…
Review the model card of models used by the AI system
AI security requirements included » TOPIC: AI supply chain » Review the model card of models used by the AI system
HITRUST CSF requirement statement [?] (14.05iAISecOrganizational.1) For externally sourced AI models deployed by the organization in production applications, the organization (1) reviews the model card prior to deployment. Evaluative elements in this requirement…
Model extraction and theft
AI security threats considered » TOPIC: Input-based attacks » Model extraction and theft
Description: Model extraction aims to extract model architecture and parameters. (Source: NIST AI 100-2 Glossary) Adversaries may extract a functional copy of a private model. (Source: MITRE ATLAS ) Impact: Seeks to breach the confidentiality of the model…
How to tailor the HITRUST assessment?
About the HITRUST AI Security Certification » How to tailor the HITRUST assessment?
*If you are unfamiliar with the concept of tailoring HITRUST CSF assessments, please read this page of the HITRUST assessment handbook. In v11.4.0 of the HITRUST CSF and later, a new HITRUST Security for AI Systems compliance factor will be made available. This factor…
Confabulation
AI security threats considered » TOPIC: Threats inherent to language models » Confabulation
Description: The production of confidently stated but incorrect content by which users or developers may be misled or deceived. Colloquially as AI “hallucinations” or “fabrications”. Impact: Inaccurate output (an integrity issue), the impact of which…
What is this assessment and certification… not?
About the HITRUST AI Security Certification » What is this assessment and certification… not?
Not inclusive of risks introduced by simply using AI Behaviors of end-users of AI technologies can intentionally or unintentionally lead to security incidents, just like in traditional IT. When members of the organization’s workforce leverage AI in the execution of…
Shared AI Responsibilities and Inheritance
About the HITRUST AI Security Certification » Shared AI Responsibilities and Inheritance
*See section 12.2 of the HITRUST assessment handbook to learn more about inheritance Shared responsibility for… AI? Risk management, security and assurance for AI systems is only possible if the multiple organizations contributing to the system share…
AI security threats considered
AI security threats considered
*For a more information about how HITRUST incorporates threats into the HITRUST Approach, see Appendix 9 of our Risk Management Handbook. No security control selection and rationalization effort can be performed without first considering the security risk and threat…