Description:

  • Evasion attacks consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam, in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. (Source: Wikipedia)
  • Evasion attacks attempt to fool the AI model with inputs designed to mislead it into performing its task incorrectly.
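The mechanism described in the bullets above can be sketched with a toy gradient-based evasion against a linear classifier. Everything here is a hypothetical illustration: the weights, the input, and the single fast-gradient-style step stand in for real models and real evasion techniques. Note that only the input is modified at inference time; the training data is never touched.

```python
import numpy as np

# Hypothetical toy "malware/spam detector": logistic regression with
# fixed, made-up weights. All values are illustrative assumptions.
w = np.array([2.0, -1.0, 3.0])   # feature weights
b = -0.5                          # bias

def predict_proba(x):
    """Probability that the input is malicious (sigmoid of linear score)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def evade(x, eps=1.0):
    """Fast-gradient-style evasion sketch: perturb x against the gradient
    of the malicious score, pushing the sample toward the 'legitimate'
    class. This happens purely at inference time."""
    p = predict_proba(x)
    grad = p * (1.0 - p) * w      # d(probability)/dx for logistic regression
    return x - eps * np.sign(grad)

x = np.array([1.0, 0.2, 1.5])     # originally classified as malicious
x_adv = evade(x)

print(predict_proba(x))           # above 0.5 -> flagged as malicious
print(predict_proba(x_adv))       # below 0.5 -> misclassified as legitimate
```

In practice the attacker may not know the model's weights; black-box variants estimate the gradient from the model's outputs, or simply try obfuscations (as in the image-based spam example) until one slips past the detector.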

Impact: Affects the integrity of model outputs, decisions, or behaviors.

Applies to which types of AI models? Predictive (non-generative) machine learning models as well as rule-based / heuristic AI models.

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]