Description: Poisoning attacks in which a part of the training data is under the control of the adversary. Source: NIST AI 100-2 Glossary
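To make the threat concrete, below is a minimal illustrative sketch of one common form of data poisoning: label flipping, in which the adversary corrupts the labels of the training records under their control. The scikit-learn models, the synthetic dataset, and the 20% poisoning fraction are all assumptions chosen for demonstration, not part of the NIST definition.

```python
# Illustrative label-flipping poisoning sketch (assumed setup:
# scikit-learn, synthetic binary classification, 20% adversary control).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def train_and_score(X_tr, y_tr):
    """Train on the given data and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean data.
clean_acc = train_and_score(X_train, y_train)

# Poisoned: the adversary controls a fraction of the training data
# (20% here, an arbitrary illustrative value) and flips those labels.
poison_frac = 0.20
n_poison = int(poison_frac * len(y_train))
poison_idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_acc = train_and_score(X_train, y_poisoned)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

Comparing the two accuracy figures shows the integrity degradation described under Impact; more sophisticated poisoning (e.g., backdoor triggers) can leave overall accuracy intact while corrupting behavior on attacker-chosen inputs.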

Impact: Compromises the integrity of model outputs, decisions, or behaviors.

Applies to which types of AI models? Data-driven models (e.g., predictive ML models, generative AI models).

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]