HITRUST AI Security Assessment and Certification Specification
Table of Contents
HITRUST AI Security Assessment and Certification Specification
  • Content disclaimers
  • START HERE
  • About the HITRUST AI Security Certification
    • What problem does this certification help address?
    • Why HITRUST for AI assurances?
    • Who can obtain this certification?
    • What types of AI can certify?
    • What is this assessment and certification… not?
    • Which AI system layers are considered?
    • Shared AI Responsibilities and Inheritance
    • Is this a stand-alone assessment?
    • How to tailor the HITRUST assessment?
    • Guidance for External Assessors
    • How big / how many requirements?
  • AI security requirements included
    • TOPIC: AI security threat management
      • Identifying security threats to the AI system (What can go wrong and where?)
      • Threat modeling (What are we doing about it?)
      • Security evaluations such as AI red teaming (Are the countermeasures working?)
    • TOPIC: AI security governance and oversight
      • Assign roles and responsibilities for AI
      • Augment written policies to address AI specificities
      • Humans can intervene if needed
    • TOPIC: Development of AI software
      • Provide AI security training to AI builders and deployers
      • Version control of AI assets
      • Inspection of AI software assets
      • Change control over AI models
      • Change control over language model tools
      • Documentation of AI specifics during system design and development
      • Linkage between dataset, model, and pipeline config
      • Verification of origin and integrity of AI assets
    • TOPIC: AI legal and compliance
      • Identify and evaluate compliance and legal obligations for AI system development and deployment
      • Identify and evaluate any constraints on data used for AI
    • TOPIC: AI supply chain
      • Due diligence review of AI providers
      • Review the model card of models used by the AI system
      • AI security requirements communicated to AI providers
    • TOPIC: Model robustness
      • Data minimization or anonymization
      • Limit output specificity and precision
      • Additional Training Data Measures
    • TOPIC: Access to the AI system
      • Limit the release of technical info about the AI system
      • Model Rate Limiting / Throttling
      • GenAI model least privilege
      • Restrict access to data used for AI
      • Restrict access to AI models
      • Restrict access to interact with the AI model
      • Restrict access to the AI engineering environment and AI code
    • TOPIC: Encryption of AI assets
      • Encrypt traffic to and from the AI model
      • Encrypt AI assets at rest
    • TOPIC: AI system logging and monitoring
      • Log AI system inputs and outputs
      • Monitor AI system inputs and outputs
      • Monitoring data, models, and configs for suspicious changes
    • TOPIC: Documenting and inventorying AI systems
      • Inventory deployed AI systems
      • Maintain a catalog of trusted data sources for AI
      • AI data and data supply inventory
      • Model card publication (for model builders)
    • TOPIC: Filtering and sanitizing AI data, inputs, and outputs
      • Dataset sanitization
      • Input filtering
      • Output encoding
      • Output filtering
    • TOPIC: Resilience of the AI system
      • Updating incident response for AI specifics
      • Backing up AI system assets
  • AI security threats considered
    • TOPIC: Availability attacks
      • Denial of AI service
    • TOPIC: Input-based attacks
      • Evasion (including adversarial examples)
      • Model extraction and theft
      • Model inversion
      • Prompt injection
    • TOPIC: Poisoning attacks
      • Data poisoning
      • Model poisoning
    • TOPIC: Supply chain attacks
      • Compromised 3rd-party models or code
      • Compromised 3rd-party training datasets
    • TOPIC: Threats inherent to language models
      • Confabulation
      • Excessive agency
      • Sensitive information disclosed in output
      • Harmful code generation
  • Crosswalks to other sources of AI guidance
    • ISO/IEC 23894:2023
    • ISO/IEC 42001:2023
  • AI updates to HITRUST’s glossary
    • Added terms
    • Added acronyms

TOPIC: Documenting and inventorying AI systems

This topic includes the following requirements; an illustrative inventory-record sketch follows the list:

  • Inventory deployed AI systems
  • Maintain a catalog of trusted data sources for AI
  • AI data and data supply inventory
  • Model card publication (for model builders)
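
To make these requirements concrete, here is a minimal sketch of what one entry in an inventory of deployed AI systems might capture. This is an assumption-laden illustration, not a schema HITRUST prescribes: the record type (AISystemRecord) and every field name (system_name, owner, model_card_url, trusted_data_sources, data_supply) are hypothetical choices made to mirror the topic's requirements.

```python
# Illustrative sketch only: HITRUST does not prescribe an inventory schema.
# All names below are assumptions chosen to mirror this topic's requirements.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AISystemRecord:
    """One entry in an inventory of deployed AI systems."""
    system_name: str       # the deployed AI system being tracked
    owner: str             # accountable role or team
    model_card_url: str    # published model card, if the model builder provides one
    trusted_data_sources: List[str] = field(default_factory=list)  # catalog of approved sources
    data_supply: List[str] = field(default_factory=list)           # upstream datasets and feeds


# Example usage: registering a hypothetical customer-support assistant.
inventory: List[AISystemRecord] = [
    AISystemRecord(
        system_name="support-assistant",
        owner="AI Platform Team",
        model_card_url="https://example.com/model-cards/support-assistant",
        trusted_data_sources=["internal-kb", "product-docs"],
        data_supply=["kb-export-v3", "ticket-archive-2024"],
    )
]

for record in inventory:
    print(record.system_name, "->", record.owner)
```

Keeping the system record, its trusted data sources, and its data supply together in one entry is one way to satisfy the linkage these requirements imply; an organization might equally maintain them in a GRC tool or asset database.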
© 2024 HITRUST All rights reserved. Reproduction, re-use, and creation of derivative works are prohibited.