Description: Attacks that disrupt the availability or functionality of the AI system by overwhelming it with a flood of requests, with the goal of degrading or shutting down the service. Several techniques can be employed here, including instructing the model to perform a time-consuming and/or computationally expensive task before answering the request (i.e., a sponge attack).
Impact: Impacts the availability of the overall AI system by rendering it inaccessible to legitimate users through performance degradation and/or system outage.
Applies to which types of AI models? Any
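The flooding and sponge behaviors described above are usually countered first at the serving layer, before a request ever reaches the model. As a hedged illustration only, the Python sketch below shows two such preventative controls, a per-client token-bucket rate limit and a cap on prompt size; the names (`handle_request`, `MAX_PROMPT_CHARS`) and the thresholds are illustrative assumptions, not drawn from any of the sources cited below.

```python
# Minimal sketch of two preventative controls against model DoS:
# per-client rate limiting and a cap on request size, applied before
# the (expensive) model is invoked. All names and limits here are
# illustrative assumptions, not taken from any cited framework.

import time
from dataclasses import dataclass, field

MAX_PROMPT_CHARS = 8_000      # reject oversized prompts outright
REFILL_RATE = 1.0             # bucket tokens added per second
BUCKET_CAPACITY = 10.0        # burst allowance per client


@dataclass
class TokenBucket:
    """Classic token-bucket limiter, one instance per client."""
    tokens: float = BUCKET_CAPACITY
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(BUCKET_CAPACITY,
                          self.tokens + (now - self.last_refill) * REFILL_RATE)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


_buckets: dict[str, TokenBucket] = {}


def handle_request(client_id: str, prompt: str) -> str:
    """Gate a single inference request; raise on policy violations."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum allowed size")
    bucket = _buckets.setdefault(client_id, TokenBucket())
    if not bucket.allow():
        raise RuntimeError("rate limit exceeded; retry later")
    # Placeholder for the actual (potentially expensive) model call.
    return f"model response to {len(prompt)} characters of input"


if __name__ == "__main__":
    for i in range(12):
        try:
            handle_request("client-a", "summarize this document")
            print(f"request {i}: accepted")
        except RuntimeError as exc:
            print(f"request {i}: rejected ({exc})")
```

A production gateway would typically keep the rate state in a shared store and also meter output tokens or GPU time, but the shape of the control is the same.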
- Which AI security requirements function against this threat?
- Control function: Corrective
- Control function: Decision support
- Control function: Detective
- Control function: Directive
- Control function: Preventative
- Control function: Resistive
- Control function: Variance reduction
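As a minimal, illustrative sketch of the detective function in this list (the interface and thresholds below are assumptions, not taken from the cited sources), one common approach is to record the compute cost of each inference request and alert when a request is a strong outlier against a rolling baseline, which is how sponge-style requests often surface.

```python
# Minimal sketch of a detective control: track per-request compute
# cost (here, wall-clock latency) and flag requests whose cost is an
# outlier relative to a rolling baseline. The record_request interface
# and threshold values are illustrative assumptions.

from collections import deque
from statistics import mean, pstdev

WINDOW = 200          # number of recent requests in the baseline
Z_THRESHOLD = 4.0     # flag requests this many std deviations above the mean

_latencies: deque[float] = deque(maxlen=WINDOW)


def record_request(client_id: str, latency_s: float) -> bool:
    """Return True if the request looks like sponge-style amplification."""
    suspicious = False
    if len(_latencies) >= 30:  # wait for a minimal baseline
        baseline = mean(_latencies)
        spread = pstdev(_latencies) or 1e-9
        if (latency_s - baseline) / spread > Z_THRESHOLD:
            suspicious = True
            print(f"ALERT: {client_id} request took {latency_s:.2f}s "
                  f"(baseline {baseline:.2f}s)")
    _latencies.append(latency_s)
    return suspicious


if __name__ == "__main__":
    # Simulate normal traffic followed by one expensive sponge-like request.
    for _ in range(50):
        record_request("client-a", 0.4)
    record_request("client-b", 9.5)
```

The same signal can be computed over output-token counts or accelerator time instead of latency, and in practice would feed an alerting pipeline rather than a print statement.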
- Discussed in which authoritative sources?
- CSA Large Language Model (LLM) Threats Taxonomy
  2024, © Cloud Security Alliance
  - Where: 4. LLM Service Threat Categories > 4.8. Denial of Service (DoS)
- Cybersecurity of AI and Standardization
  March 2023, © European Union Agency for Cybersecurity (ENISA)
  - Where: 4. Analysis of coverage > 4.1. Standardization in support of cybersecurity of AI – Narrow sense
- MITRE ATLAS
  2024, © The MITRE Corporation
- Multilayer Framework for Good Cybersecurity Practices for AI
  2023, © European Union Agency for Cybersecurity (ENISA)
  - Where: 2. Framework for good cybersecurity practices for AI > 2.2. Layer II – AI fundamentals and cybersecurity > Failure or malfunction of an ML application
- NIST AI 100-2 E2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
  Jan. 2024, National Institute of Standards and Technology (NIST)
  - Where: 3. Generative AI Taxonomy > 3.4. Indirect Prompt Injection Attacks and Mitigations > 3.4.1. Availability Violations
- OWASP 2023 Top 10 for LLM Applications
  Oct. 2023, © The OWASP Foundation
  - Where: LLM04: Model Denial of Service
- OWASP 2025 Top 10 for LLM Applications
  2025, © The OWASP Foundation
  - Where: LLM10: Unbounded Consumption
- OWASP AI Exchange
  2024, © The OWASP Foundation
- Securing Artificial Intelligence (SAI); AI Threat Ontology
  2022, © European Telecommunications Standards Institute (ETSI)
  - Where: 6. Threat landscape > 6.4. Threat modeling > 6.4.1 > Attacker objectives
- Securing Machine Learning Algorithms
  2021, © European Union Agency for Cybersecurity (ENISA)
  - Where: 3. ML Threats and Vulnerabilities > 3.1. Identification of Threats > Failure or malfunction of ML application > Denial of service due to inconsistent data or a sponge example
- Discussed in which commercial sources?
- Databricks AI Security Framework
  Sept. 2024, © Databricks
  - Where: Risks in AI System Components > Model serving – Inference requests 9.7: Denial of service (DoS)
- Snowflake AI Security Framework
  2024, © Snowflake Inc.
  - Where:
    - Sponge samples
    - Fuzzing
    - Distributed denial of service on ML model
- StackAware AI Security Reference
  2024, © StackAware
  - Where: AI Risks > Resource exhaustion