Description:

  • Adversaries may poison training data and publish it to a public location. The poisoned dataset may be a novel dataset or a poisoned variant of an existing open-source dataset. This data may be introduced to a victim system via supply chain compromise.
    Source: MITRE ATLAS
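
  A common control against this threat is verifying the provenance and integrity of third-party datasets before training. The minimal sketch below illustrates this idea; the file name and digest are hypothetical placeholders, and it assumes the expected digest is obtained out of band from the dataset publisher (e.g., via a signed release note). It refuses to proceed when the SHA-256 digest of the downloaded dataset does not match the pinned value, which would catch a poisoned variant substituted somewhere in the supply chain:

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical values: in practice the expected digest is obtained
# out of band from the dataset publisher (e.g., a signed release note).
DATASET_PATH = Path("data/training_corpus.tar.gz")  # hypothetical file
EXPECTED_SHA256 = "0" * 64  # placeholder digest, not a real value

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large datasets fit in constant memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(DATASET_PATH)
    if actual != EXPECTED_SHA256:
        # A mismatch means the artifact differs from the published one,
        # possibly a poisoned variant introduced in the supply chain.
        sys.exit(f"Dataset integrity check failed: {actual} != {EXPECTED_SHA256}")
    print("Dataset digest verified; proceeding with training.")
```

  Note that such an integrity check only detects tampering after publication; it does not help when the publisher's own copy was poisoned from the start (the "novel dataset" case above), so provenance and content review of the dataset itself remains necessary.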

Impact:

  • Use of datasets that were poisoned upstream in the AI supply chain can compromise the integrity of the AI system (e.g., biased or manipulated outcomes) or its availability (e.g., an outage of the AI system). The severity of the impact depends heavily on the context of the overall AI system.

Applies to which types of AI models? Data-driven models (e.g., predictive ML models, generative AI models)

Which AI security requirements function against this threat?
Discussed in which authoritative sources?
Discussed in which commercial sources?