HITRUST CSF requirement statement (07.06hAISecOrganizational.1)

The organization performs security assessments (e.g., AI red teaming, penetration testing) of the AI system which include consideration of AI-specific security threats (e.g., poisoning, model inversion)
(1) prior to deployment of new models,
(2) prior to deployment of new or significantly modified supporting infrastructure (e.g., migration to a new cloud-based AI platform), and
(3) regularly (at least annually) thereafter.
The organization
(4) takes appropriate risk treatment measures (including implementing any additional countermeasures) deemed necessary based on the results.
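
As an illustration only, "consideration of AI-specific security threats" during such an assessment might include a simple poisoning sensitivity check like the hedged sketch below. It trains a scikit-learn classifier on clean and deliberately label-flipped training data and flags an unusually sharp accuracy drop. The dataset, the 10% flip rate, and the 0.15 threshold are illustrative assumptions, not values taken from the HITRUST CSF.

    # Hypothetical poisoning sensitivity check (illustrative only; not part
    # of the HITRUST CSF). Flags a model whose accuracy collapses when a
    # small fraction of training labels are flipped.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    def accuracy_with_flipped_labels(flip_fraction: float) -> float:
        rng = np.random.default_rng(0)
        y_poisoned = y_tr.copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the sampled binary labels
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
        return model.score(X_te, y_te)  # accuracy on a clean test set

    clean = accuracy_with_flipped_labels(0.0)
    poisoned = accuracy_with_flipped_labels(0.10)
    print(f"clean accuracy: {clean:.3f}; with 10% flipped labels: {poisoned:.3f}")
    # Illustrative threshold: escalate for manual review if the drop is large.
    assert clean - poisoned < 0.15, "unusually sensitive to label poisoning"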

Evaluative elements in this requirement statement
1. The organization performs security assessments (e.g., AI red teaming, penetration testing) of the AI system which include consideration of AI-specific security threats (e.g., poisoning, model inversion) prior to deployment of new models.
2. The organization performs security assessments (e.g., AI red teaming, penetration testing) of the AI system which include consideration of AI-specific security threats (e.g., poisoning, model inversion) prior to deployment of new or significantly modified supporting infrastructure (e.g., migration to a new cloud-based AI platform).
3. The organization performs security assessments (e.g., AI red teaming, penetration testing) of the AI system which include consideration of AI-specific security threats (e.g., poisoning, model inversion) regularly (at least annually).
4. The organization takes appropriate risk treatment measures (including implementing any additional countermeasures) deemed necessary based on the results.
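
Elements 1 through 3 tie assessments to concrete triggers: a new model, new or significantly modified supporting infrastructure, and an annual cadence. One hedged sketch of how a deployment pipeline could enforce the pre-deployment triggers, assuming an in-house assessment register exists; every name, field, and date below is hypothetical:

    # Hypothetical deployment gate (illustrative): refuse to deploy a model
    # version that lacks a current, AI-threat-scoped security assessment.
    from datetime import date, timedelta

    # Stand-in for a GRC tool or assessment-tracking system of record.
    assessments = [
        {"model_version": "2.4.0", "assessed_on": date(2025, 3, 10),
         "scope": {"poisoning", "model_inversion", "prompt_injection"}},
    ]

    REQUIRED_SCOPE = {"poisoning", "model_inversion"}  # AI-specific threats
    MAX_AGE = timedelta(days=365)                      # "at least annually"

    def deployment_allowed(model_version: str, today: date) -> bool:
        return any(
            a["model_version"] == model_version
            and REQUIRED_SCOPE <= a["scope"]          # required threats covered
            and today - a["assessed_on"] <= MAX_AGE   # assessment still current
            for a in assessments
        )

    print(deployment_allowed("2.4.0", date(2025, 6, 1)))  # True
    print(deployment_allowed("2.5.0", date(2025, 6, 1)))  # False: unassessed model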


Illustrative procedures for use during assessments

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample-based test where possible for each evaluative element. Example test(s):
    • For example, select a sample of security assessment reports to confirm that security assessments (e.g., AI red teaming, penetration testing) of the AI system which include consideration of AI-specific security threats (e.g., poisoning, model inversion) are conducted at least annually. Further, confirm that assessments are conducted prior to deployment of new models and that appropriate risk treatment measures (including implementing any additional countermeasures) deemed necessary based on the results are implemented. A sketch of one way to automate the cadence portion of this test appears after this list.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the frequency and specifications of the security assessments performed on the AI system. Reviews, tests, or audits are completed by the organization to confirm that the requirements for AI system security testing are met and to measure the effectiveness of the implemented countermeasures. A sketch of one such coverage metric appears after this list.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.
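
The cadence portion of the Implemented example test above could be automated roughly as follows; a hedged sketch that assumes the assessor has extracted report dates from the sampled evidence (the dates shown are illustrative):

    # Hypothetical sample-based cadence test (illustrative): confirm that no
    # gap between consecutive sampled assessment reports exceeds one year.
    from datetime import date

    sampled_report_dates = [date(2023, 5, 2), date(2024, 4, 18), date(2025, 4, 30)]

    def annual_cadence_met(dates, max_gap_days=366):
        ordered = sorted(dates)
        return all((later - earlier).days <= max_gap_days
                   for earlier, later in zip(ordered, ordered[1:]))

    print(annual_cadence_met(sampled_report_dates))  # True for this sample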
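
Similarly, the coverage metric referenced in the Measured example above might be computed along these lines; the system names and the 365-day window are assumptions for illustration, not HITRUST-defined values:

    # Hypothetical operational metric (illustrative): percentage of in-scope
    # AI systems whose latest AI-specific security assessment is < 1 year old.
    from datetime import date

    latest_assessment = {
        "claims-triage-model": date(2025, 2, 1),
        "intake-chatbot": date(2023, 11, 20),
    }

    def assessment_coverage_pct(latest: dict, today: date) -> float:
        current = sum((today - d).days <= 365 for d in latest.values())
        return 100.0 * current / len(latest)

    print(f"{assessment_coverage_pct(latest_assessment, date(2025, 6, 1)):.0f}% current")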

Placement of this requirement in the HITRUST CSF

  • Assessment domain: 07 Vulnerability Management
  • Control category: 06 – Compliance
  • Control reference: 06.h – Technical Compliance Checking

Specific to which parts of the overall AI system?
  • N/A, not AI component-specific

Discussed in which authoritative AI security sources?
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM03: Training data poisoning > Prevention and mitigation strategies > Bullet #7
      • LLM05: Supply chain vulnerabilities > Prevention and mitigation strategies > Bullet #7
      • LLM05: Supply chain vulnerabilities > Prevention and mitigation strategies > Bullet #8

  • LLM AI Cybersecurity & Governance Checklist
    Feb. 2024, © The OWASP Foundation
    • Where:
      • 3. Checklist > 3.3. AI Asset Inventory > Bullet #4
      • 3. Checklist > 3.9. Using or implementing large language models > Bullet #7
      • 3. Checklist > 3.9. Using or implementing large language models > Bullet #8
      • 3. Checklist > 3.13. AI Red Teaming > Bullet #1

  • Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems
Apr. 2024, National Security Agency (NSA)
    • Where:
      • Continuously protect the AI system > Validate the AI system before and during use > Bullet 4
      • Continuously protect the AI system > Validate the AI system before and during use > Bullet 7
      • Continuously protect the AI system > Validate the AI system before and during use > Bullet 8
      • Secure AI operation and maintenance > Update and patch regularly > Bullet 1
      • Secure AI operation and maintenance > Conduct audits and penetration testing > Bullet 1

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Specific ML > Integrate poisoning control after the “model evaluation” phase
      • 4.1- Security Controls > Technical > Ensure ML projects follow the global process for integrating security into projects
      • 4.1- Security Controls > Technical > Assess the exposure level of the model used

Discussed in which commercial AI security sources?

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Adapt controls to adjust mitigations and create faster feedback loops for AI deployment > Conduct Red Team exercises to improve safety and security for AI-powered products and capabilities
      • Step 4. Apply the six core elements of the SAIF > Adapt controls to adjust mitigations and create faster feedback loops for AI deployment > Create a feedback loop

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 4. Model robustness and validation

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Attacks on the infrastructure hosting AI services > Mitigations > Security testing and penetration testing

Control functions against which AI security threats?

Additional information
  • Q: When will this requirement be included in an assessment?
    • This requirement will always be added to HITRUST assessments which include the Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable?
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.