Description:
- Adversaries may poison training data and publish it to a public location. The poisoned dataset may be a novel dataset or a poisoned variant of an existing open-source dataset. This data may be introduced to a victim system via supply chain compromise.
Source: MITRE ATLAS
Impact:
- Use of datasets poisoned upstream in the AI supply chain can lead to integrity issues, such as biased or manipulated outcomes, or even availability issues, such as an outage of the AI system. The impact depends heavily on the context of the overall AI system.
Applies to which types of AI models? Data-driven models (e.g., predictive ML models, generative AI models)
- Which AI security requirements help prevent, detect, or correct?
- AI security threat management
- AI security governance and oversight
- Development of AI software
- AI supply chain
- Model robustness
- Documenting and inventorying AI systems
- Filtering and sanitizing AI data, inputs, and outputs
- Resilience of the AI system
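One concrete control behind the "AI supply chain" requirement above is integrity verification of third-party datasets before they enter a training pipeline. The sketch below pins a SHA-256 digest of a downloaded dataset artifact and refuses to use it on a mismatch; the function names are illustrative (not taken from any cited source), and in practice the expected digest would come from the dataset publisher's signed release metadata.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large dataset archives need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_dataset(path: Path, expected_sha256: str) -> bool:
    """Return True only if the downloaded artifact matches the
    pinned digest; a False result should block the training run."""
    return sha256_of_file(path) == expected_sha256.lower()
```

This detects tampering with a known artifact in transit or at rest; it does not detect a dataset that was poisoned before the publisher computed the digest, which is why it complements, rather than replaces, data filtering and sanitization.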
- Discussed in which authoritative sources?
- CSA Large Language Model (LLM) Threats Taxonomy
  2024, © Cloud Security Alliance
  - Where:
    - 4. LLM Service Threat Categories > 4.2. Data Poisoning
    - 4. LLM Service Threat Categories > 4.6. Insecure Supply Chain
- Mitigating Artificial Intelligence (AI) Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators
  April 2024, © Department of Homeland Security (DHS)
  - Where:
    - Appendix A: Cross-sector AI risks and mitigation strategies > Risk category: AI design and implementation failures > Supply chain vulnerabilities
    - Appendix A: Cross-sector AI risks and mitigation strategies > Risk category: Attacks on AI > Adversarial manipulation of AI algorithms or data
- MITRE ATLAS
  2024, © The MITRE Corporation
- Multilayer Framework for Good Cybersecurity Practices for AI
  2023, © European Union Agency for Cybersecurity (ENISA)
  - Where: 2. Framework for good cybersecurity practices for AI > 2.2. Layer II – AI fundamentals and cybersecurity > Compromise of ML application components
- OWASP AI Exchange
  2024, © The OWASP Foundation
- OWASP Top 10 for LLM Applications
  Oct. 2023, © The OWASP Foundation
  - Where:
    - LLM03: Training Data Poisoning
    - LLM05: Supply Chain Vulnerabilities
- OWASP Machine Learning Security Top 10
  2023, © The OWASP Foundation
  - Where:
    - ML02: Data Poisoning Attack
    - ML06: AI Supply Chain Attacks
    - ML08: Model Skewing
- Discussed in which commercial sources?
- Databricks AI Security Framework
  Sept. 2024, © Databricks
  - Where: Risks in AI System Components > Raw data 1.7: Lack of data trustworthiness
- 2024, © HiddenLayer
  - Where:
    - Part 2: Risks faced by AI-based systems > Supply chain attacks
    - Part 2: Risks faced by AI-based systems > Data poisoning in supply chain attacks