Description:

  • Adversaries may poison training data and publish it to a public location. The poisoned dataset may be a novel dataset or a poisoned variant of an existing open-source dataset. This data may be introduced to a victim system via supply chain compromise.
    Source: MITRE ATLAS

Impact:

  • Use of poisoned datasets compromised upstream in the AI supply chain can lead to integrity issues, such as biased outcomes, or even availability issues, such as an outage of the AI system. The impact depends heavily on the context of the overall AI system.

Applies to which types of AI models? Data-driven models (e.g., predictive ML models, generative AI models)

Which AI security requirements help prevent, detect, or correct?
Discussed in which authoritative sources?
Discussed in which commercial sources?
Databricks AI Security Framework
Sept. 2024, © Databricks
  • Where: Risks in AI System Components > Raw data 1.7: Lack of data trustworthiness

HiddenLayer’s 2024 AI Threat Landscape Report
2024, © HiddenLayer
  • Where:
    • Part 2: Risks faced by AI-based systems > Supply chain attacks
    • Part 2: Risks faced by AI-based systems > Data poisoning in supply chain attacks
