Description:

  • Generative AI systems may take actions outside of developer intent, organizational policy, or legislative, regulatory, and contractual requirements, leading to unintended consequences. This issue is facilitated by excessive permissions, excessive functionality, excessive autonomy, poorly defined operational parameters, or granting the AI system the ability to make decisions or act without human intervention or oversight.
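
    A common countermeasure to the facilitating factors above is to deny tool access by default and to require human sign-off for high-impact actions. The following is a minimal, hypothetical sketch of such a gate; the tool names, the `invoke_tool` function, and the `approve` callback are illustrative assumptions, not part of any particular framework.

    ```python
    # Hypothetical gate limiting an agent's autonomy: tools are denied by
    # default (countering excessive functionality/permissions), and any
    # high-impact tool additionally requires human approval (countering
    # excessive autonomy).

    ALLOWED_TOOLS = {"search_docs", "summarize"}       # low-impact, pre-approved
    HIGH_IMPACT_TOOLS = {"send_email", "delete_file"}  # need per-call human sign-off

    def invoke_tool(name, args, approve=lambda name, args: False):
        """Run a tool only if allowlisted; escalate high-impact calls to a human."""
        if name not in ALLOWED_TOOLS | HIGH_IMPACT_TOOLS:
            raise PermissionError(f"tool {name!r} is not permitted")
        if name in HIGH_IMPACT_TOOLS and not approve(name, args):
            raise PermissionError(f"tool {name!r} requires human approval")
        return f"executed {name} with {args}"  # placeholder for the real tool call
    ```

    In this sketch the `approve` callback stands in for a human-in-the-loop check; defaulting it to "deny" means autonomy over high-impact actions must be granted explicitly rather than assumed.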

Impact:

  • Heavily dependent on which systems the overall AI system is connected to and can interact with (e.g., messaging systems, file servers, command prompts). Can lead to confidentiality, availability, or integrity issues.

Applies to which types of AI models? Generative AI specifically

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Additional information:
  • See this post for an overview of the difference between autonomy and agency, paraphrased as follows:
    • Autonomy, in the context of technology, generally refers to the ability to perform tasks without human intervention. Expanding autonomy can set the stage for the emergence of agency: when a system gains the capability to perform a network of tasks (constituting a decision situation) autonomously, that capability can serve as a foundation on which agency might build.
    • Agency implies a higher order of function: not just carrying out tasks, but also making choices about which tasks to undertake and when.