Description:

  • The production of confidently stated but incorrect content, which may mislead or deceive users or developers. Colloquially known as AI “hallucinations” or “fabrications”.

Impact:

  • Inaccurate output (an integrity issue), the impact of which varies greatly depending on the context. The issue is exacerbated by overreliance on the AI system.

Applies to which types of AI models? Generative AI specifically

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Additional information:
  • HITRUST is intentionally focusing on the threat of LLM confabulation (which is almost always undesired) instead of hallucination (which is often a feature—not a bug—of stochastic systems).
    • See this document for further discussion of the difference between these related but distinct concepts in the context of generative AI.
    • This distinction is also addressed in NIST AI 600-1 which states, “Some commenters have noted that the terms hallucination and fabrication anthropomorphize GAI, which itself is a risk related to GAI systems as it can inappropriately attribute human characteristics to non-human entities.”