Shared responsibility for… AI?

Risk management, security, and assurance for AI systems are only possible if the multiple organizations contributing to the system share responsibility for identifying risks to the system, managing those risks, and measuring the maturity of controls and safeguards.

AI systems are typically made up of an application that leverages an AI model, very often delivered by an AI platform-as-a-service provider. Additional parties, such as data brokers, data scientists, and architects, support the model and data pipelines. Understanding the context of the overall system through which AI capabilities are delivered and consumed is critical. Equally critical is partnering with high-quality AI service providers that offer clear, objective, and understandable documentation of their AI risks and how those risks, including security, are managed in their platforms.

The rapid pace of AI adoption requires industry leadership to deliver security assurances that scale and to bring stakeholders together to demonstrate that the overall, combined AI system can be trusted. HITRUST has years of experience uniting leaders across the private sector around practical shared responsibility, based on an inheritable control framework proven daily in security compliance and cloud computing. Shared AI assurances between stakeholders, grounded in proven, practical, and achievable approaches, are essential to maintaining trust in AI systems, which must be designed, implemented, and managed in a secure, trustworthy manner.

How HITRUST helps

Through the HITRUST AI Cybersecurity Certification, we are extending our proven Shared Responsibility and Inheritance Program to support the needs of organizations adopting and deploying AI technologies. We’re helping simplify the challenge of shared AI responsibilities by bringing together the following:

Inheritability across HITRUST’s AI security requirements

The ability to inherit validation results for AI security requirements is critical to enabling meaningful AI cybersecurity assurances. Several key AI cybersecurity requirements must be performed before the AI application is actually deployed (such as during “training time,” when the AI model is being created), and several others are enforced at the AI platform layer and are therefore the responsibility of AI platform providers.

Each AI security requirement in this document has been assigned an “inheritability” value. Inheritability and accompanying rationale are shown in the Additional information area of each requirement’s page. Consistent with the approach used for inheriting relevant HITRUST assessment results from cloud service providers, these AI security requirements are either not inheritable, partially inheritable, or fully inheritable from the organization’s AI service provider (e.g., an AI platform-as-a-service provider). Two prerequisites apply: the organization’s AI service provider must participate in the HITRUST external inheritance program, and it must include the AI security requirements in an externally inheritable assessment. These inheritability values will be reflected in the HITRUST Shared Responsibility Matrix Baseline for CSF v11.4.0 and later.
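
To make these mechanics concrete, the following is a minimal Python sketch of how an assessed organization might model an inheritability value and check the two prerequisites above before claiming inheritance. All class, field, and function names here are illustrative assumptions, not HITRUST tooling.

  # Hypothetical sketch of the inheritance-eligibility logic described above.
  # All names and rules are illustrative assumptions, not HITRUST tooling.
  from dataclasses import dataclass
  from enum import Enum

  class Inheritability(Enum):
      NOT_INHERITABLE = "not inheritable"
      PARTIALLY_INHERITABLE = "partially inheritable"
      FULLY_INHERITABLE = "fully inheritable"

  @dataclass
  class AIServiceProvider:
      name: str
      in_external_inheritance_program: bool     # prerequisite 1
      externally_inheritable_requirements: set  # prerequisite 2: requirement IDs

  def can_inherit(requirement_id: str,
                  value: Inheritability,
                  provider: AIServiceProvider) -> bool:
      """Return True if validation results for this requirement can be
      inherited (fully or partially) from the given AI service provider."""
      if value is Inheritability.NOT_INHERITABLE:
          return False
      if not provider.in_external_inheritance_program:
          return False
      return requirement_id in provider.externally_inheritable_requirements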

Inheritability across these 44 AI security requirements is as follows. The takeaway: as the HITRUST AI Cybersecurity Certification is added to the HITRUST assessments performed by CSPs and AI service providers, over half of the requirements (57%) will be at least partially inheritable (see the arithmetic sketch after this list).

  • Not inheritable: 19 / 44 (43%)
  • Partially inheritable: 11 / 44 (25%)
  • Fully inheritable: 14 / 44 (32%)
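
The percentages above are simple arithmetic over the 44 requirements. As a quick check, this Python sketch reproduces them using only the counts quoted in this document:

  # Reproduce the inheritability breakdown quoted above (counts from this document).
  TOTAL = 44
  counts = {
      "not inheritable": 19,
      "partially inheritable": 11,
      "fully inheritable": 14,
  }
  for label, n in counts.items():
      print(f"{label}: {n}/{TOTAL} ({n / TOTAL:.0%})")

  at_least_partial = counts["partially inheritable"] + counts["fully inheritable"]
  print(f"at least partially inheritable: {at_least_partial}/{TOTAL} "
        f"({at_least_partial / TOTAL:.0%})")  # prints 25/44 (57%)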

Approach to assigning inheritability of AI security requirements

To assist in assigning inheritability values, each AI security requirement was categorized into one of the following “AI SRM Types,” listed below with rationale and examples (see also the sketch after the table):

Not inheritable
  • AI.NI.a. Rationale: Implementing and/or configuring the requirement is the AI application provider’s sole responsibility. Example: Designing the AI application such that humans can intervene if needed.
  • AI.NI.b. Rationale: The AI application provider and its AI service providers are responsible for independently performing the requirement outside of the AI system’s technology stack; in other words, it is a dual responsibility. Example: Assigning roles and responsibilities for the organization’s deployed AI systems.
  • AI.NI.c. Rationale: The AI application provider and its AI service providers are responsible for jointly performing the requirement outside of the AI system’s technology stack (e.g., through a jointly executed agreement/contract); in other words, it is a joint responsibility. Example: Contractually agreeing on AI security requirements.

Partially inheritable
  • AI.PI.a. Rationale: Performing the requirement may be a responsibility shared between an AI application provider and their AI platform provider, performed independently on separate layers/components of the overall AI system. Example: Logging AI system inputs and outputs.

Fully inheritable
  • AI.FI.a. Rationale: The requirement may be the sole responsibility of the AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI model creator apply. Examples: Distorting training data; adding adversarial examples to training data.
  • AI.FI.b. Rationale: The requirement may be the sole responsibility of the AI platform provider. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider apply. Example: Restricting access to AI models.
  • AI.FI.c. Rationale: The requirement may be the sole responsibility of the AI platform provider and/or AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider and/or AI model creator apply. Example: Training, tuning, and RAG data minimization.
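
Read one way, the taxonomy above is simply a lookup from AI SRM Type to inheritability category. The Python sketch below encodes that mapping; the type codes and categories come directly from the table, while the dictionary and function names are illustrative assumptions.

  # Map each AI SRM Type from the table above to its inheritability category.
  AI_SRM_TYPES = {
      # Not inheritable: responsibility sits with the AI application provider
      # or is performed outside the AI system's technology stack
      "AI.NI.a": "not inheritable",        # sole responsibility of the AI application provider
      "AI.NI.b": "not inheritable",        # dual responsibility, performed independently
      "AI.NI.c": "not inheritable",        # joint responsibility (e.g., contractual agreement)
      # Partially inheritable: shared across separate layers of the AI system
      "AI.PI.a": "partially inheritable",
      # Fully inheritable: performed upstream of the AI application provider
      "AI.FI.a": "fully inheritable",      # sole responsibility of the AI model creator
      "AI.FI.b": "fully inheritable",      # sole responsibility of the AI platform provider
      "AI.FI.c": "fully inheritable",      # platform provider and/or model creator
  }

  def inheritability_of(srm_type: str) -> str:
      """Look up the inheritability category for a given AI SRM Type."""
      return AI_SRM_TYPES[srm_type]

For example, inheritability_of("AI.PI.a") returns "partially inheritable", matching the logging example in the table.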
