The draft HITRUST CSF requirement statements presented in this section are organized by “topic”. This grouping will not be ported into the HITRUST CSF; instead, the CSF’s existing hierarchy (e.g., categories, domains) will be used. The suggested placement of each requirement statement within that existing hierarchy is shown.
The AI security topics used to organize these draft HITRUST CSF requirement statements are as follows:
Topic | Requirement statements |
---|---|
AI security threat management | 3 |
AI security governance and oversight | 3 |
Development of AI software | 6 |
AI legal and compliance | 3 |
AI supply chain | 4 |
Model robustness | 5 |
Access to the AI system | 7 |
Encryption of AI assets | 2 |
AI system logging and monitoring | 3 |
Documenting and inventorying AI systems | 3 |
Filtering and sanitizing AI data, inputs, and outputs | 3 |
Resilience of the AI system | 2 |
Total requirement statements | 44 |
To understand the mitigations commonly used to address AI security risks and threats to deployed AI systems, HITRUST analyzed the AI security-specific mitigations discussed in the following authoritative and commercial sources. In the HITRUST lexicon, an “authoritative source” is an externally developed, information-protection-focused framework, standard, guideline, publication, regulation, or law. The table below indicates which sources have been harmonized into the HITRUST CSF as of v11.4.0. HITRUST may harmonize more of these sources in future versions of the HITRUST CSF at our discretion and based on your feedback.
No. | Source and Link | Published by | Date or Version | Harmonized into the HITRUST CSF as of v11.4.0? |
---|---|---|---|---|
From the European Union Agency for Cybersecurity (ENISA) | ||||
1 | Securing Machine Learning Algorithms | European Union Agency for Cybersecurity (ENISA) | 2021 | No |
From the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) | ||||
2 | ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology | International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) | 2022 | No
3 | ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence | International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) | 2020 | No
4 | ISO/IEC 38507:2022: Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations | International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) | 2022 | No
5 | ISO/IEC 42001:2023: Information technology — Artificial intelligence — Management system | International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) | 2023 | No (being considered for v12.0)
From the National Institute of Standards and Technology (NIST) | ||||
6 | NIST AI 100-2 E2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations | National Institute of Standards and Technology (NIST) | Jan. 2024 | No
From the Open Worldwide Application Security Project (OWASP) | ||||
7 | OWASP AI Exchange | Open Worldwide Application Security Project (OWASP) | As of Q3 2024 (living document) | Yes |
8 | OWASP Machine Learning Top 10 | Open Worldwide Application Security Project (OWASP) | v0.3 | Yes |
9 | OWASP Top 10 for LLM Applications | Open Worldwide Application Security Project (OWASP) | v1.1.0 | Yes |
10 | LLM AI Cybersecurity & Governance Checklist | Open Worldwide Application Security Project (OWASP) | Feb. 2024, v1.0 | No |
From commercial entities | ||||
11 | The anecdotes AI GRC Toolkit | Anecdotes A.I Ltd. | 2024 | No |
12 | Databricks AI Security Framework | Databricks | Version 1.1, Sept. 2024 | No |
13 | Google Secure AI Framework | Google | June 2023 | No
14 | HiddenLayer’s 2024 AI Threat Landscape Report | HiddenLayer | 2024 | No |
15 | Snowflake AI Security Framework | Snowflake Inc. | 2024 | No |
From others | ||||
16 | Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems | National Security Agency (NSA) | April 2024 | No |
17 | Generative AI Framework for HM Government | Central Digital and Data Office, UK Government | Jan. 2024 | No |
18 | Guidelines for Secure AI System Development | UK National Cyber Security Centre (NCSC) and Cybersecurity & Infrastructure Security Agency (CISA) | Nov. 2023 | No
19 | Managing artificial intelligence-specific cybersecurity risks in the financial services sector | U.S. Department of the Treasury | March 2024 | No |
20 | Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators | U.S. Department of Homeland Security | April 2024 | No
21 | MITRE ATLAS (mitigations) | The MITRE Corporation | As of Q3 2024 (living document) | No |
Because these documents were created to satisfy different needs and audiences, they contain recommendations that fall outside the scope of the HITRUST AI Cybersecurity Certification effort. Specifically, we removed from consideration recommendations that:
- did not relate to AI security for deployed systems,
- applied to users of AI systems generally (and not to the deployers of AI systems), as these will be included in version 12 of the HITRUST CSF slated for release in H2 2025, or
- did not mitigate security threats specific to or exacerbated by AI but instead mitigated general cybersecurity threats to traditional IT systems (such as source code leaks via misconfigured repositories), as these are addressed in the underlying HITRUST CSF e1, i1, or r2 assessment with which the HITRUST Cybersecurity Certification for Deployed AI Systems is combined.
The goal of analyzing these sources was not to ensure 100% coverage of the AI mitigations discussed. Instead, comparing these sources against one another helped us:
- Understand the AI security control environment, as well as the applicability of various AI security mitigations to different AI deployment scenarios and model types.
- Minimize the subjectivity and personal bias we brought to this effort regarding these topics.
- Identify (by omission, consensus, and direct discussion) the AI security mitigations that generally are and are not employed by organizations that deploy AI systems.
- Identify the mitigations commonly recommended for identified AI security threats.
Other key inputs into our understanding of the AI security threat landscape included:
- Interviews with the authors of several of the documents listed above, as well as other cybersecurity leaders, on season 2 of HITRUST’s “Trust Vs.” podcast. These recordings are available here, as well as on podcast directories such as Apple Podcasts and YouTube Music.
- Contributions from HITRUST’s AI Assurance Working Group, described in this press release. HITRUST is grateful to the members of this working group:
- Arun Pamulapati, Databricks
- David Houlding, Microsoft
- Ed Schreibman, AWS
- Emily Soward, AWS
- Gerry Miller, Cloudticity
- Gretchen Block, Optum
- Mitesh Shah, Johnson & Johnson
- Phillip Draughan, Johnson & Johnson
- Teresa Godfroy, Silverthorn