Description: Model poisoning attacks directly modify a trained AI model to inject malicious functionality. Once trained, a model is often just a file residing on a server, so an attacker can alter that file or replace it entirely with a poisoned one. Even a model trained on a thoroughly vetted dataset can therefore be swapped for a poisoned model at various points in the AI SDLC or in the runtime environment.
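
Because the threat reduces to tampering with a model artifact on disk, one way to illustrate it is an integrity check performed before the artifact is deserialized. The sketch below is a minimal, stdlib-only Python example; the model path and the expected digest are hypothetical placeholders, not values from this document, and the deserialization step (e.g. torch.load) is only mentioned in a comment as an assumed follow-on step.

    import hashlib
    from pathlib import Path

    # Hypothetical artifact location and the digest recorded when the vetted
    # model was produced. Both values are illustrative placeholders.
    MODEL_PATH = Path("models/classifier.pt")
    EXPECTED_SHA256 = "<digest recorded at training time>"

    def sha256_of(path: Path) -> str:
        """Hash the file in chunks so large model artifacts need not fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_model_artifact(path: Path, expected: str) -> Path:
        actual = sha256_of(path)
        if actual != expected:
            # The file on disk is not the artifact that was vetted: it may have
            # been altered or replaced after training.
            raise RuntimeError(f"Model integrity check failed: {actual} != {expected}")
        # Only deserialize after the check passes (e.g. torch.load(path));
        # some formats such as pickle can execute code during loading.
        return path

A check like this only detects a swap if the expected digest is stored and distributed separately from the model file itself; otherwise an attacker who can replace the model can replace the digest as well.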

Impact: Affects the integrity of model outputs, decisions, or behaviors.

Applies to which types of AI models? Any

Which AI security requirements protect against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]