No security control selection and rationalization effort can be performed without first considering the security risk and threat landscape. The HITRUST CSF requirement statements considered in the HITRUST AI Cybersecurity Certification have been mapped to the following AI security threats.
AI Security Threat | Applies to non-generative ML models? | Applies to rule-based models? | Applies to generative AI models? | Mitigated before AI system deployment? | Mitigated after AI system deployment?
---|---|---|---|---|---
**Availability attacks** | | | | | |
Denial of AI service | Yes | Yes | Yes | Yes | |
**Input-based attacks** | | | | | |
Prompt injection | Yes | Yes | | | |
Evasion | Yes | Yes | Yes | | |
Model inversion | Yes | Yes | | | |
Model extraction and theft | Yes | Yes | Yes | Yes | |
**Poisoning attacks** | | | | | |
Data poisoning | Yes | Yes | Yes | | |
Model poisoning | Yes | Yes | Yes | Yes | |
**Supply chain attacks** | | | | | |
Compromised 3rd-party training datasets | Yes | Yes | Yes | | |
Compromised 3rd-party models or code | Yes | Yes | Yes | Yes | |
**Threats inherent to current-state language models** | | | | | |
Confabulation | Yes | Yes | | | |
Sensitive information disclosure from model | Yes | Yes | | | |
Excessive agency | Yes | Yes | | | |
Copyright-infringing output | Yes | Yes | | | |
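For readers who want to work with this mapping programmatically (for example, to filter the threat list down to the model types present in a particular deployment), the table lends itself to a simple machine-readable encoding. The Python sketch below is illustrative only and is not part of the HITRUST CSF or its tooling; the `ThreatProfile` schema, its field names, and the `threats_for` helper are hypothetical, and only the availability row is transcribed from the table above.

```python
from dataclasses import dataclass

# Dimension values mirroring the table's columns (hypothetical names).
MODEL_TYPES = {"non_generative_ml", "rule_based", "generative"}
PHASES = {"pre_deployment", "post_deployment"}

@dataclass(frozen=True)
class ThreatProfile:
    """One row of the threat-applicability table above (hypothetical schema)."""
    name: str
    category: str
    applies_to: frozenset       # subset of MODEL_TYPES
    mitigated_during: frozenset # subset of PHASES

# Example row transcribed from the table above; a full encoding would carry all rows.
THREATS = [
    ThreatProfile(
        name="Denial of AI service",
        category="Availability attacks",
        applies_to=frozenset({"non_generative_ml", "rule_based", "generative"}),
        mitigated_during=frozenset({"pre_deployment"}),
    ),
]

def threats_for(model_type: str) -> list:
    """Return the names of mapped threats that apply to the given model type."""
    if model_type not in MODEL_TYPES:
        raise ValueError(f"unknown model type: {model_type!r}")
    return [t.name for t in THREATS if model_type in t.applies_to]

print(threats_for("generative"))  # ['Denial of AI service']
```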
To understand the AI security risk and threat landscape specifically, HITRUST harmonized the AI security-specific threats discussed in the following authoritative and commercial sources. The table below indicates which of these sources have been harmonized into the HITRUST CSF as of v11.4.0. HITRUST may harmonize additional sources in future versions of the HITRUST CSF at our discretion and based on your feedback.
No. | Source and Link | Published by | Date or Version | Harmonized into the HITRUST CSF as of v11.4.0? |
---|---|---|---|---|
**From the Open Worldwide Application Security Project (OWASP)** | | | | |
1 | OWASP Machine Learning Top 10 | Open Worldwide Application Security Project (OWASP) | v0.3 | Yes |
2 | OWASP Top 10 for LLM Applications | Open Worldwide Application Security Project (OWASP) | v1.1.0 | Yes |
3 | OWASP AI Exchange | Open Worldwide Application Security Project (OWASP) | As of Q3 2024 (living document) | Yes |
**From the European Union Agency for Cybersecurity (ENISA)** | | | | |
4 | Securing Machine Learning Algorithms | European Union Agency for Cybersecurity (ENISA) | 2021 | No |
5 | Cybersecurity of AI and Standardization | European Union Agency for Cybersecurity (ENISA) | March 2023 | No |
6 | Multilayer Framework for Good Cybersecurity Practices for AI | European Union Agency for Cybersecurity (ENISA) | June 2023 | No |
**From the National Institute of Standards and Technology (NIST)** | | | | |
7 | NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile | National Institute of Standards and Technology (NIST) | July 2024 | No |
8 | NIST AI 100-2 E2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations | National Institute of Standards and Technology (NIST) | Jan. 2024 | No |
**From commercial entities** | | | | |
9 | The anecdotes AI GRC Toolkit | Anecdotes A.I Ltd. | 2024 | No |
10 | Databricks AI Security Framework | Databricks | Version 1.1, Sept. 2024 | No |
11 | Failure Modes in Machine Learning | Microsoft | Nov. 2022 | No |
12 | HiddenLayer’s 2024 AI Threat Landscape Report | HiddenLayer | 2024 | No |
13 | IBM watsonx AI Risk Atlas | IBM | As of Aug. 2024 (living document) | No |
14 | The StackAware AI Security Reference | StackAware | As of Aug. 2024 (living document) | No |
15 | Snowflake AI Security Framework | Snowflake Inc. | 2024 | No |
**From others** | | | | |
16 | Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators | US Department of Homeland Security | April 2024 | No |
17 | MITRE ATLAS (mitigations) | The MITRE Corporation | As of Q3 2024 (living document) | Yes |
18 | Attacking Artificial Intelligence | Harvard Kennedy School | Aug. 2019 | No |
19 | Engaging with Artificial Intelligence | Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) | Jan. 2024 | No |
20 | CSA Large Language Model (LLM) Threats Taxonomy | Cloud Security Alliance (CSA) | June 2024 | No |
21 | Securing Artificial Intelligence (SAI); AI Threat Ontology | European Telecommunications Standards Institute (ETSI) | 2022 | No |
22 | ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence | International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) | 2020 | No |
Relevant to this analysis: Because these documents were created to satisfy different needs and audiences, some discuss threats that fall outside the scope of the HITRUST AI Cybersecurity Certification effort. Specifically, we removed from consideration threats that (see the sketch following this list):
- did not relate to AI security for deployed systems, or
- applied to users of AI systems generally (and not to the deployers of AI systems), as these will be addressed through new AI usage requirements in version 12 of the HITRUST CSF, slated for release in H2 2025.
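As a concrete illustration of how these two screening rules compose, the short Python sketch below encodes them as a predicate over a candidate threat. It is a hypothetical rendering of the editorial process described here, not HITRUST tooling; the `CandidateThreat` fields and the `in_certification_scope` name are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class CandidateThreat:
    """A threat harvested from one of the sources above (hypothetical schema)."""
    name: str
    is_deployed_system_security_threat: bool  # relates to AI security for deployed systems?
    concerns_ai_users_not_deployers: bool     # a usage concern rather than a deployment concern?

def in_certification_scope(threat: CandidateThreat) -> bool:
    """Apply the two exclusion rules above to one candidate threat."""
    # Rule 1: keep only threats to the security of deployed AI systems.
    if not threat.is_deployed_system_security_threat:
        return False
    # Rule 2: defer user-side (usage) concerns to the AI usage
    # requirements planned for HITRUST CSF v12.
    if threat.concerns_ai_users_not_deployers:
        return False
    return True

# Example: a user-side concern is screened out; a deployment-side threat is kept.
print(in_certification_scope(CandidateThreat("Over-reliance on AI output", True, True)))   # False
print(in_certification_scope(CandidateThreat("Model extraction and theft", True, False)))  # True
```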
The goal of analyzing these sources was not to ensure 100% coverage of the AI threats discussed. Instead, comparing these sources against one another helped us:
- Understand the AI security threat landscape, attack surface, and threat actors, as well as the applicability of various AI security threats to different AI deployment scenarios and model types
- Minimize any subjectivity or personal bias we brought with us into the effort regarding these topics
- Identify (by omission, minimal coverage, or direct discussion) the AI security threats which are generally not considered high risk or high impact to deployed AI systems
- Identify (by consensus and heavy discussion) the AI security threats which are generally considered high risk or high impact to deployed AI systems
- Identify the mitigations commonly recommended for identified AI security threats
Other key inputs into our understanding of the AI security threat landscape included:
- Interviews with the authors of several of the documents listed above, as well as with other cybersecurity leaders, on season 2 of HITRUST’s “Trust Vs.” podcast. These recordings are available here as well as in podcast directories such as Apple Podcasts and YouTube Music.
- Contributions from HITRUST’s AI Assurance Working Group, described in this press release. HITRUST is grateful to the members of this working group.