Description: Poisoning attacks in which a part of the training data is under the control of the adversary. Source: NIST AI 100-2 Glossary
Impact: Affects the integrity of model outputs, decisions, or behaviors.
Applies to which types of AI models? Data-driven models (e.g., predictive ML models, generative AI models)
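To make the threat concrete, the sketch below shows the simplest flavor of training-data poisoning, a label-flipping attack, in which an adversary who controls a fraction of the training labels degrades the trained model. The scikit-learn workflow, the synthetic dataset, and the poisoning fractions are illustrative assumptions, not taken from any source cited on this page.

```python
# Minimal label-flipping poisoning sketch (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Clean, binary-labeled training data (synthetic stand-in for a real corpus).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction, rng):
    """An adversary controlling `fraction` of the training set flips those labels."""
    y_poisoned = y.copy()
    poison_idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]  # binary label flip
    return y_poisoned

for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, flip_labels(y_train, fraction, rng))
    print(f"poisoned fraction={fraction:.1f}  test accuracy={model.score(X_test, y_test):.3f}")
```

As the poisoned fraction grows, test accuracy typically drops, which is the integrity impact noted above. Targeted and backdoor variants (see the NIST AI 100-2 entries below) are subtler but rest on the same premise of adversary-controlled training data.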
- Which AI security requirements help prevent, detect, or correct?
- AI security threat management
- AI security governance and oversight
- Development of AI software
- AI supply chain
- Model robustness
- Access to the AI system
- Encryption of AI assets
- AI system logging and monitoring
- Documenting and inventorying AI systems
- Filtering and sanitizing AI data, inputs, and outputs (see the sketch after this list)
- Resilience of the AI system
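As a worked example of the "Filtering and sanitizing AI data, inputs, and outputs" requirement, below is a minimal sanitization sketch that drops feature-space outliers before training. IsolationForest and the 5% contamination budget are illustrative choices, not a control mandated by any source on this page.

```python
# Minimal data-sanitization sketch (illustrative; see lead-in for assumptions).
from sklearn.ensemble import IsolationForest

def sanitize_training_data(X, y, contamination=0.05):
    """Drop training rows an anomaly detector flags as outliers."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    keep = detector.fit_predict(X) == 1  # fit_predict: 1 = inlier, -1 = outlier
    return X[keep], y[keep]

# Usage: sanitize before fitting, and log what was dropped for review.
# X_clean, y_clean = sanitize_training_data(X_train, y_train)
# model.fit(X_clean, y_clean)
```

Note the limits: feature-space outlier filtering catches poison that looks anomalous, but label-flipped points with in-distribution features evade it, so this control belongs alongside provenance checks (AI supply chain) and AI system logging and monitoring rather than replacing them.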
- Discussed in which authoritative sources?
- AI Risk Atlas
  2024, © IBM Corporation
  - Where: Data poisoning risk for AI
- Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It
  August 2019, Belfer Center for Science and International Affairs, Harvard Kennedy School
  - Where: Part I. Technical Problem > Poisoning Attacks > Dataset Poisoning
- CSA Large Language Model (LLM) Threats Taxonomy
  2024, © Cloud Security Alliance
  - Where: 4. LLM Service Threat Categories > 4.2. Data Poisoning
- Cybersecurity of AI and Standardization
  March 2023, © European Union Agency for Cybersecurity (ENISA)
  - Where: 4. Analysis of coverage > 4.1. Standardization in support of cybersecurity of AI – Narrow sense
- Engaging with Artificial Intelligence
  Jan. 2024, Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC)
  - Where: Challenges when engaging with AI > 1. Data poisoning of an AI model
- ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence
  2020, © International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)
  - Where: 8. Vulnerabilities, Risks, and Challenges > 8.2. AI-specific security threats > 8.2.2. Data poisoning
- Multilayer Framework for Good Cybersecurity Practices for AI
  2023, © European Union Agency for Cybersecurity (ENISA)
  - Where: 2. Framework for good cybersecurity practices for AI > 2.2. Layer II – AI fundamentals and cybersecurity > Poisoning
- Mitigating Artificial Intelligence (AI) Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators
  April 2024, © Department of Homeland Security (DHS)
  - Where: Appendix A: Cross-sector AI risks and mitigation strategies > Risk category: Attacks on AI > Adversarial manipulation of AI algorithms or data
- MITRE ATLAS
  2024, © The MITRE Corporation
- NIST AI 100-2 E2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
  Jan. 2024, National Institute of Standards and Technology (NIST)
  - Where:
    - 2. Predictive AI Taxonomy > 2.3. Poisoning Attacks and Mitigations > 2.3.1. Availability Poisoning
    - 2. Predictive AI Taxonomy > 2.3. Poisoning Attacks and Mitigations > 2.3.2. Targeted Poisoning
    - 2. Predictive AI Taxonomy > 2.3. Poisoning Attacks and Mitigations > 2.3.3. Backdoor Poisoning
    - 3. Generative AI Taxonomy > 3.2. AI Supply Chain Attacks and Mitigations > 3.2.2. Poisoning Attacks
- OWASP Top 10 for LLM Applications
  Oct. 2023, © The OWASP Foundation
  - Where: LLM03: Training Data Poisoning
- OWASP Machine Learning Security Top 10
  2023, © The OWASP Foundation
  - Where:
    - ML02: Data Poisoning Attack
    - ML08: Model Skewing
- OWASP AI Exchange
  2024, © The OWASP Foundation
- Securing Artificial Intelligence (SAI); AI Threat Ontology
  2022, © European Telecommunications Standards Institute (ETSI)
  - Where: 6. Threat landscape > 6.4. Threat modeling > 6.4.2 > Data acquisition and curation
- Securing Machine Learning Algorithms
  2021, © European Union Agency for Cybersecurity (ENISA)
  - Where:
    - 3. ML Threats and Vulnerabilities > 3.1. Identification of Threats > Poisoning
    - 3. ML Threats and Vulnerabilities > 3.1. Identification of Threats > Poisoning > Label Modification
- Discussed in which commercial sources?
- Databricks AI Security Framework
  Sept. 2024, © Databricks
  - Where:
    - Risks in AI System Components > 2.2 Data Prep
    - Risks in AI System Components > Data Prep 2.3: Raw data criteria
    - Risks in AI System Components > Datasets 3.1: Data poisoning
    - Risks in AI System Components > Evaluation 6.1: Evaluation data poisoning
    - Risks in AI System Components > Model serving – Inference requests 9.9: Input resource control
- Failure Modes in Machine Learning
  Nov. 2022, © Microsoft
  - Where: Intentionally-Motivated Failures > Poisoning attacks
- HiddenLayer’s 2024 AI Threat Landscape Report
  2024, © HiddenLayer
  - Where: Part 2: Risks faced by AI-based systems > Data poisoning
- Snowflake AI Security Framework
  2024, © Snowflake Inc.
  - Where: Training data poisoning
- StackAware AI Security Reference
  2024, © StackAware
  - Where: AI Risks > Training data poisoning