Description:

  • Attacks that exploit compromised or vulnerable ML software packages and third-party pre-trained models used for fine-tuning, plugins, or extensions, including outdated or deprecated models or components.
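
One common safeguard against the supply-chain risk described above is verifying the integrity of a downloaded model artifact against a known-good digest before loading it. The sketch below is illustrative only: the function names and the placeholder digest are assumptions, not part of the source, and real deployments would obtain the expected digest from a trusted channel (e.g. the model provider's signed release notes).

```python
import hashlib
from pathlib import Path

# Placeholder value for illustration; in practice this digest would come
# from a trusted, out-of-band source such as a signed release manifest.
EXPECTED_SHA256 = "0" * 64


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: Path, expected: str) -> None:
    """Refuse to proceed if the artifact's digest does not match the expected one."""
    actual = sha256_of(path)
    if actual != expected:
        raise ValueError(f"model digest mismatch: {actual} != {expected}")
```

This addresses only artifact tampering in transit or at rest; it does not help when the upstream model was poisoned before the digest was published, which is why provenance and vetting of the source itself remain necessary.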

Impact:

  • Use of models or AI software packages poisoned upstream in the AI supply chain can lead to integrity issues such as biased outcomes, confidentiality issues such as redirected AI system outputs or loss of API keys, or availability issues such as an outage of the AI system. The impact depends heavily on the context of the overall AI system.

Applies to which types of AI models? Any

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]