Description: Poisoning attacks in which a part of the training data is under the control of the adversary (source: NIST AI 100-2 Glossary).

Impact: Compromises the integrity of model outputs, decisions, or behaviors.

Applies to which types of AI models? Data-driven models (e.g., predictive ML models, generative AI models)
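To make the definition concrete, the following minimal sketch (not from the source, which only defines the attack class) shows one simple poisoning strategy, label flipping, against a predictive ML model. The `flip_labels` helper and the chosen fractions are hypothetical illustrations; it assumes an adversary who controls a fraction of the training labels.

```python
# Hypothetical illustration of a label-flipping data-poisoning attack.
# Assumption: the adversary controls `fraction` of the training labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary-classification data, split into train and held-out test sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction, rng):
    """Flip the labels of a randomly chosen adversary-controlled fraction."""
    y_poisoned = y.copy()
    n_poison = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

# Train on increasingly poisoned data and measure held-out accuracy.
for fraction in (0.0, 0.1, 0.3):
    y_train_poisoned = flip_labels(y_train, fraction, rng)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train_poisoned)
    print(f"poisoned fraction={fraction:.1f}  test accuracy={clf.score(X_test, y_test):.3f}")
```

As the poisoned fraction grows, test accuracy degrades, which is the integrity impact noted above. Label flipping is only one strategy; real attacks may instead inject crafted training examples.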

Which AI security requirements help prevent, detect, or correct? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
