HITRUST CSF requirement statement (17.03bAISecOrganizational.4)
The organization identifies the relevant AI-specific security threats (e.g., evasion,
poisoning, prompt injection) to the deployed AI system
(1) prior to deployment of new models,
(2) regularly (at least semiannually) thereafter, and
(3) when security incidents related to the AI system occur.
The organization documents identified AI threat scenarios in a threat register which
minimally contains
(4) a description of the identified AI security threat and
(5) the associated component(s) of the AI system (e.g., training data, models, APIs).
- Evaluative elements in this requirement statement
1. The organization identifies the relevant AI-specific security threats (e.g., evasion, poisoning, prompt injection) to the deployed AI system prior to deployment of new models.
2. The organization identifies the relevant AI-specific security threats (e.g., evasion, poisoning, prompt injection) to the deployed AI system regularly (at least semiannually).
3. The organization identifies the relevant AI-specific security threats (e.g., evasion, poisoning, prompt injection) to the deployed AI system when security incidents related to the AI system occur.
4. The organization documents identified AI threat scenarios in a threat register which minimally contains a description of the identified AI security threat.
5. The organization documents identified AI threat scenarios in a threat register which minimally contains the associated component(s) of the AI system (e.g., training data, models, APIs).
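Elements (4) and (5) define the minimum fields of a threat register entry. The following is a minimal sketch of one way such an entry might be structured; the field names, identifier scheme, and example values are illustrative assumptions, not content mandated by the requirement.

```python
# Illustrative sketch only: field names, the identifier scheme, and example
# values are assumptions for illustration, not prescribed by HITRUST.
from dataclasses import dataclass
from datetime import date

@dataclass
class ThreatRegisterEntry:
    threat_id: str                     # hypothetical identifier scheme
    description: str                   # element (4): description of the AI security threat
    affected_components: list[str]     # element (5): e.g., training data, models, APIs
    threat_type: str                   # e.g., evasion, poisoning, prompt injection
    last_reviewed: date | None = None  # supports the cadence in elements (1)-(3)

example_entry = ThreatRegisterEntry(
    threat_id="AI-THR-0007",
    description="Prompt injection via untrusted user input reaching the model's system prompt",
    affected_components=["model", "inference API"],
    threat_type="prompt injection",
    last_reviewed=date(2024, 9, 1),
)
```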
- Illustrative procedures for use during assessments
- Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.
- Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures address the operational aspects of how to perform each evaluative element within the requirement statement.
- Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample-based test where possible for each evaluative element.
Example test(s):
- For example, select a sample from the threat register to confirm all AI-specific security threats are identified and documented. Further, confirm that the threat register contains a detailed description of the identified AI security threat and the associated component(s) of the AI system (e.g., training data, models, APIs). Further, confirm that the threat register was reviewed and updated if needed at least at the frequency mandated in the requirement statement.
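Such a sample-based test could be supported by scripting the minimum checks against an export of the register. A minimal sketch, assuming register entries are exported as dictionaries whose keys mirror the fields sketched above and assuming a six-month window for the semiannual review cadence:

```python
# Sketch of a per-entry completeness check; key names and the export format
# are assumptions, not defined by the requirement statement.
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=182)  # "at least semiannually"

def entry_meets_minimums(entry: dict, as_of: date) -> bool:
    """Return True if a sampled register entry has the documented minimums."""
    has_description = bool(str(entry.get("description", "")).strip())  # element (4)
    has_components = bool(entry.get("affected_components"))            # element (5)
    last_reviewed = entry.get("last_reviewed")                         # review cadence, element (2)
    reviewed_recently = last_reviewed is not None and (as_of - last_reviewed) <= REVIEW_WINDOW
    return has_description and has_components and reviewed_recently

# Example:
# entry_meets_minimums({"description": "Prompt injection via user input",
#                       "affected_components": ["model", "inference API"],
#                       "last_reviewed": date(2024, 9, 1)},
#                      as_of=date(2025, 1, 15))  # -> True
```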
- Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
- For example, measures indicate the percentage of the organization’s AI-specific security threats that are correctly documented in the threat register. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that relevant AI-specific security threats (e.g., evasion, poisoning, prompt injection) to the deployed AI system are identified prior to deployment of new models, regularly (at least at the frequency mandated in the requirement statement) thereafter, and when security incidents related to the AI system occur.
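A sketch of how such a percentage measure could be computed, assuming the organization can enumerate the threats it has identified and the threats documented in its register (the identifiers and function name are illustrative):

```python
# Illustrative coverage metric: percentage of identified AI-specific threats
# that are documented in the threat register. Inputs are assumed identifiers.
def register_coverage_pct(identified: set[str], documented: set[str]) -> float:
    """Percentage of identified AI security threats present in the threat register."""
    if not identified:
        return 100.0  # nothing identified, so nothing is missing from the register
    return 100.0 * len(identified & documented) / len(identified)

# Example: register_coverage_pct({"AI-THR-0001", "AI-THR-0007"}, {"AI-THR-0007"}) -> 50.0
```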
- Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.
- Placement of this requirement in the HITRUST CSF
- Assessment domain: 17 Risk Management
- Control category: 03.0 – Risk Management
- Control reference: 03.b – Performing Risk Assessments
- Specific to which parts of the overall AI system?
- N/A, not AI component-specific
- Discussed in which authoritative AI security sources?
- ISO/IEC 23894:2023 Information technology — Artificial intelligence — Guidance on risk management
2023, © International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)
- Where:
- Part 6. Risk management process > 6.4 Risk assessment > 6.4.2 Risk identification > 6.4.2.3 Identification of risk sources
- Part 6. Risk management process > 6.4 Risk assessment > 6.4.2 Risk identification > 6.4.2.4 Identification of potential events and outcomes
- OWASP AI Exchange
2024, © The OWASP Foundation
- LLM AI Cybersecurity & Governance Checklist
Feb. 2024, © The OWASP Foundation
- Where:
- 3. Checklist > 3.9. Using or implementing large language models > Bullet #9
- 3. Checklist > 3.9. Using or implementing large language models > Bullet #10
- Guidelines for Secure AI System Development
Nov. 2023, Cybersecurity & Infrastructure Security Agency (CISA)
- Where:
- 1. Secure design > Model the threats to your system
- Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems
Apr. 2024, National Security Agency (NSA)
- Where:
- Secure the deployment environment > Manage deployment environment governance > Bullet 1, sub-bullet 1
- Generative AI framework for HM Government
2023, Central Digital and Data Office, UK Government
- Where:
- Using generative AI safely and responsibly > Security > Security Risks > Practical security recommendations > Bullet 1
- Securing Machine Learning Algorithms
2021, © European Union Agency for Cybersecurity (ENISA)
- Where:
- 4.1- Security Controls > Specific ML > Implement processes to maintain security levels of ML components over time
- Discussed in which commercial AI security sources?
- Databricks AI Security Framework
Sept. 2024, © Databricks
- Where:
- Control DASF 38: Platform security — vulnerability management (Operations and Platform)
- Google Secure AI Framework
June 2023, © Google
- Where:
- Step 4. Apply the six core elements of the SAIF > Extend detection and response to bring AI into an organization’s threat universe > Develop understanding of threats that matter for AI usage scenarios, the types of AI used, etc.
- Step 4. Apply the six core elements of the SAIF > Adapt controls to adjust mitigations and create faster feedback loops for AI deployment > Stay on top of novel attacks including prompt injection, data poisoning and evasion attacks
- Step 4. Apply the six core elements of the SAIF > Contextualize AI system risks in surrounding business processes > Establish a model risk management framework and build a team that understands AI-related risks
- HiddenLayer’s 2024 AI Threat Landscape Report
2024, © HiddenLayer
- Where:
- Part 4: Predictions and recommendations > 2. Risk assessment and threat modeling
- Control functions against which AI security threats?
- Control function: Decision support
- Additional information
- Q: When will this requirement be included in an assessment?
- This requirement will always be added to HITRUST assessments which include the Security for AI systems regulatory factor.
- No other assessment tailoring factors affect this requirement.