Description:
- Attacks that exploit compromised or vulnerable ML software packages, third-party pre-trained models used for fine-tuning, and plugins or extensions, including outdated or deprecated models or components.
Impact:
- Use of models or AI software packages that were poisoned upstream in the AI supply chain can lead to integrity issues such as biased outcomes, confidentiality issues such as redirected AI system outputs or leaked API keys, or availability issues such as an outage of the AI system. The impact depends heavily on the context of the overall AI system.
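One preventative measure implied by this threat is to treat third-party model artifacts like any other dependency and verify their integrity before loading. The sketch below is a minimal Python example, assuming a locally downloaded model file and a hypothetical pinned digest recorded when the model was first vetted; it is not tied to any particular framework.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest, recorded when the model was first vetted.
# In practice this would live in version control alongside the pipeline config.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to use a model file whose SHA-256 digest has changed."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
    actual = digest.hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(
            f"Model artifact {path} failed integrity check: "
            f"expected {expected_sha256}, got {actual}"
        )

# Usage (hypothetical path):
# verify_model_artifact(Path("models/encoder-v3.safetensors"), PINNED_SHA256)
```

The same idea applies to ML software packages: pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) rejects any package whose digest does not match the one pinned in the requirements file.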
Applies to which types of AI models? Any
- Which AI security requirements function against this threat?
  - Control function: Decision support
    - Identifying security threats to the AI system
    - Threat modeling
    - Security evaluations such as AI red teaming
    - Identify and evaluate any constraints on data used for AI
    - Identify and evaluate compliance and legal obligations for AI system development and deployment
    - Inventory deployed AI systems
    - Model card publication
    - Linkage between dataset, model, and pipeline config (see the first sketch after this list)
    - Review the model cards of models used by the AI system (see the second sketch after this list)
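To make the linkage between dataset, model, and pipeline config concrete, the first sketch below writes a small provenance record next to a trained artifact, pinning the digests of the dataset snapshot, the model file, and the configuration that produced it. This is a minimal Python illustration; the file layout and field names are assumptions, not a standard format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_provenance(dataset: Path, model: Path, pipeline_config: Path, out: Path) -> None:
    """Record which dataset and config produced which model artifact."""
    record = {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "dataset": {"path": str(dataset), "sha256": sha256_of(dataset)},
        "model": {"path": str(model), "sha256": sha256_of(model)},
        "pipeline_config": {"path": str(pipeline_config), "sha256": sha256_of(pipeline_config)},
    }
    out.write_text(json.dumps(record, indent=2))

# Usage (hypothetical paths):
# write_provenance(Path("data/train-v1.parquet"),
#                  Path("models/encoder-v3.safetensors"),
#                  Path("configs/train.yaml"),
#                  Path("models/encoder-v3.provenance.json"))
```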
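For the model card controls, a first automated pass can at least confirm that a third-party model publishes a card carrying the metadata a reviewer needs. The second sketch assumes the `huggingface_hub` library and its `ModelCard.load` helper; the required-field policy is an illustrative assumption, not a standard.

```python
# Requires: pip install huggingface_hub
from huggingface_hub import ModelCard

# Illustrative policy: metadata a reviewer wants present before approving a model.
REQUIRED_FIELDS = ("license", "datasets")

def review_model_card(repo_id: str) -> list[str]:
    """Return a list of problems found in a model's published card."""
    try:
        card = ModelCard.load(repo_id)
    except Exception as exc:  # no card, private repo, network error, ...
        return [f"could not load model card for {repo_id}: {exc}"]
    metadata = card.data.to_dict()
    problems = []
    for field in REQUIRED_FIELDS:
        if not metadata.get(field):
            problems.append(f"model card for {repo_id} is missing '{field}'")
    return problems

# Usage:
# for issue in review_model_card("bert-base-uncased"):
#     print(issue)
```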
- Discussed in which authoritative sources?
  - CSA Large Language Model (LLM) Threats Taxonomy
    2024, © Cloud Security Alliance
    - Where:
      - 4. LLM Service Threat Categories > 4.6. Insecure Supply Chain
      - 4. LLM Service Threat Categories > 4.7. Insecure Apps/Plugins
  - Mitigating Artificial Intelligence (AI) Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators
    April 2024, © Department of Homeland Security (DHS)
    - Where:
      - Appendix A: Cross-sector AI risks and mitigation strategies > Risk category: AI design and implementation failures > Supply chain vulnerabilities
      - Appendix A: Cross-sector AI risks and mitigation strategies > Risk category: Attacks on AI > Adversarial manipulation of AI algorithms or data
  - MITRE ATLAS
    2024, © The MITRE Corporation
  - Multilayer Framework for Good Cybersecurity Practices for AI
    2023, © European Union Agency for Cybersecurity (ENISA)
    - Where:
      - 2. Framework for good cybersecurity practices for AI > 2.2. Layer II – AI fundamentals and cybersecurity > Compromise of ML application components
  - OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    - Where:
      - LLM05: Supply Chain Vulnerabilities
  - OWASP 2025 Top 10 for LLM Applications
    Oct. 2025, © The OWASP Foundation
  - OWASP Machine Learning Security Top 10
    2023, © The OWASP Foundation
  - OWASP AI Exchange
    2024, © The OWASP Foundation
  - Securing Artificial Intelligence (SAI); AI Threat Ontology
    2022, © European Telecommunications Standards Institute (ETSI)
    - Where:
      - 6. Threat landscape > 6.4. Threat modeling > 6.4.2.3 > Implementation
  - Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    - Where:
      - 3. ML Threats and Vulnerabilities > 3.1. Identification of Threats > Compromise of ML Application and Components
- Discussed in which commercial sources?
  - AI Risk Atlas
    2024, © IBM Corporation
  - Databricks AI Security Framework
    Sept. 2024, © Databricks
    - Where:
      - Risks in AI System Components > Algorithms 5.4: Malicious libraries
      - Risks in AI System Components > Model 7.3: ML supply chain vulnerabilities
      - Risks in AI System Components > Model 7.4: Source code control attack
  - Failure Modes in Machine Learning
    Nov. 2022, © Microsoft
    - Where:
      - Intentionally-Motivated Failures > Attacking the ML supply chain
      - Intentionally-Motivated Failures > Backdoor Machine Learning
  - HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    - Where:
      - Part 2: Risks faced by AI-based systems > Supply chain attacks
      - Part 2: Risks faced by AI-based systems > Security of public model repositories
  - Snowflake AI Security Framework
    2024, © Snowflake Inc.
    - Where:
      - Self-hosted OSS LLMs Security