AI systems are made up of the application leveraging an AI model and, very often, an AI platform-as-a-service provider that delivers the AI model itself. Additional service providers, such as data brokers, data scientists, and architects, support the model and data pipelines. Understanding the context of the overall system through which AI capabilities are delivered and consumed is critical.
Consistent with the approach taken in the Microsoft Shared AI Responsibility Model as well as in the book Guardians of AI (R. Diver, 2024), we found it helpful to think of an AI system in terms of the following three layers. This approach can generally be applied to any AI scenario, regardless of model type (generative, predictive, or rules-based).
| AI System Layer | Description | Security Considerations | Addressed in this certification? |
| --- | --- | --- | --- |
| AI Usage Layer | This is where the end-user interacts with the AI application and where the AI capabilities are consumed. | As in traditional IT, the behavior of end-users of AI technologies can intentionally or unintentionally lead to security incidents. When members of the organization’s workforce leverage AI in the execution of their duties, controls in the AI usage layer (e.g., training, acceptable use policies) should be implemented to help ensure that AI is used appropriately. | No. HITRUST CSF requirements addressing AI usage risks are being added to v12 of the HITRUST CSF (ETA H2 2025) for consideration in all HITRUST assessments. |
| AI Application Layer | The AI application provides the AI service or interface consumed by the end-user. This layer could be as simple as a command-line interface interacting with an AI service provider’s API or as complex as a full-featured web application. This is where the end-user’s inputs to the AI model are captured and where the AI model’s responses are displayed to the user. Techniques used to ground AI outputs (e.g., RAG) or extend AI capabilities (e.g., using language model tools such as agents and plugins) also occur at this layer. | This layer includes several key security controls that the AI application provider is either fully or partially responsible for (e.g., AI application safety and security systems such as input filters; see the sketch following this table). | Yes |
| AI Platform Layer | This layer provides AI capabilities to AI applications. In this layer the AI model is served to the AI application, typically through APIs. In addition to the trained model and model-serving infrastructure, this layer also includes the AI engineering tools used to create and deploy the model, any model tuning performed by the AI platform provider, and any specialized AI compute infrastructure used. | Depending on the AI system architecture and the AI model used, the AI platform provider may be responsible for several key AI cybersecurity controls residing in this layer (e.g., model safety systems such as output filters implemented by the AI platform provider). Also residing in this layer are the controls performed during model creation and tuning (e.g., dataset sanitization). | Yes |
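To make the boundary between the AI application layer and the AI platform layer more concrete, the following minimal Python sketch shows an application-layer input filter screening an end-user's prompt before it is passed to a platform-layer model-serving API. The endpoint URL, blocked-term list, and response schema are illustrative assumptions only and do not reflect any particular provider's API.

```python
import requests

# Hypothetical model-serving endpoint; a stand-in for the AI platform
# provider's real, documented API. Not a reference to any specific product.
PLATFORM_API_URL = "https://ai-platform.example.com/v1/completions"

# Illustrative application-layer input-filter rule; real filters are far
# more sophisticated (e.g., prompt-injection and jailbreak detection).
BLOCKED_TERMS = ("ignore previous instructions",)


def input_filter(prompt: str) -> bool:
    """AI application layer control: screen end-user input before it is
    passed to the AI model."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)


def call_model(prompt: str) -> str:
    """AI application layer: send the end-user's input to the AI platform
    layer, where the model is served through an API."""
    response = requests.post(PLATFORM_API_URL, json={"prompt": prompt}, timeout=30)
    response.raise_for_status()
    return response.json()["output"]  # assumed response schema


def handle_user_request(prompt: str) -> str:
    """End-to-end flow: AI usage layer input -> application-layer controls
    -> platform-layer model serving."""
    if not input_filter(prompt):
        return "Request blocked by the application-layer input filter."
    # The platform provider may apply its own safety systems (e.g., output
    # filters) before the response is returned to the application layer.
    return call_model(prompt)
```

In practice, responsibility for such controls is shared: the application provider owns input capture and response display, while the platform provider typically implements model-side safety systems such as output filters.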
NOTE: The underlying IT platform and infrastructure that comprise the IT system using the AI model must be scoped into the HITRUST e1, i1, or r2 assessment to which the HITRUST AI Security Certification assessment is attached. In other words, these layers are additive to the IT technology layers typically included in the scope of a HITRUST assessment.