Description: Attacks that aim to disrupt the availability or functionality of the AI system by overwhelming it with a flood of requests in order to degrade or shut down the service. Several techniques can be employed here, including instructing the model to perform a time-consuming and/or computationally expensive task before answering the request (a so-called sponge attack).
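
A minimal sketch (Python) of the kind of server-side guardrails that counter this class of threat: a per-client request-rate limit plus caps on prompt size, output length, and wall-clock time. All names, limit values, and the `generate_fn` interface are illustrative assumptions, not a prescribed implementation.

```python
import time
from collections import defaultdict, deque

# Illustrative limits -- the concrete values are assumptions, not prescriptions.
MAX_REQUESTS_PER_MINUTE = 30   # per-client request-rate cap
MAX_INPUT_TOKENS = 4096        # reject oversized prompts up front
MAX_OUTPUT_TOKENS = 1024       # bound generation cost per request
REQUEST_TIMEOUT_SECONDS = 30   # abort long-running ("sponge") generations

_request_log = defaultdict(deque)  # client_id -> timestamps of recent requests


def allow_request(client_id):
    """Sliding-window rate limit: True if the client may send another request."""
    now = time.monotonic()
    window = _request_log[client_id]
    # Drop timestamps older than 60 seconds.
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True


def guarded_generate(client_id, prompt_tokens, generate_fn):
    """Wrap an inference call (generate_fn is a hypothetical callable) with DoS guards."""
    if not allow_request(client_id):
        raise RuntimeError("rate limit exceeded")
    if len(prompt_tokens) > MAX_INPUT_TOKENS:
        raise ValueError("prompt exceeds input token limit")
    # Output length and wall-clock time are both capped so that a single
    # request cannot monopolize compute.
    return generate_fn(
        prompt_tokens,
        max_new_tokens=MAX_OUTPUT_TOKENS,
        timeout=REQUEST_TIMEOUT_SECONDS,
    )
```

The design intent is that no single client, and no single request, can consume unbounded compute: the rate limit bounds request volume, while the token and timeout caps bound the cost of each individual request.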

Impact: Affects the availability of the overall AI system by rendering it inaccessible to legitimate users through performance degradation and/or a system outage.

Applies to which types of AI models? Any

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]