This page contains guidelines for the composition of External Assessor teams engaged to perform cybersecurity assessments of AI systems.
Recommended expertise, knowledge, and experience
Given the rapid emergence of AI technologies and the associated data privacy and security risks, independent assessments by qualified external assessors are crucial. The assessment team should have diverse skills, including knowledge of AI systems and industry-specific security requirements. These guidelines outline the essential expertise required to perform such assessments, including a deep understanding of AI technology and cybersecurity principles, risk management skills, and regulatory compliance knowledge.
While no single external assessor is expected to meet all of the following attributes, these attributes should be met by the combined expertise, knowledge, and experience of the collective external assessor team. These attributes are additive to the requisite expertise, knowledge, and experience necessary to competently perform cybersecurity consulting and/or attestations outside of the AI context.
The HITRUST External Assessor AI Working Group recommends that these guidelines be reviewed and updated annually by future subcommittees of the HITRUST External Assessor Council, incorporating lessons learned from HITRUST's quality reviews of completed AI security assessments.
Essential expertise
- Understanding of AI technologies and their business context
  - Knowledge of AI model types (e.g., open source vs. closed source, predictive AI vs. generative AI), platforms, patterns (e.g., RAG), and technical architectures
  - The knowledge and expertise of the team performing the assessment should align with the complexity of the environment being assessed, and ongoing education should be implemented to stay current with the latest developments
  - Familiarity with AI development frameworks and tools, as well as with the AI software development lifecycle
  - Understanding of the business drivers for the rapid emergence of AI in the marketplace
  - Understanding of the risks associated with the adoption of AI without proper risk mitigation techniques
- Cybersecurity expertise
- Risk management skills
  - Ability to identify and assess risks associated with AI implementations
  - Experience in developing risk mitigation strategies for AI initiatives
- Professional certifications relating to AI
  - Not specifically recommended by this working group due to the novelty of the subject matter in the governance, risk, and compliance domain; this will be revisited in future iterations of this document
Specific knowledge
- AI security threats
  - Awareness of common security threats to AI systems
  - Understanding of potential vulnerabilities in AI models and datasets
  - Understanding of the expanded attack surface of AI-enabled systems
  - Understanding of the impacts associated with exploited vulnerabilities
- Regulatory compliance
  - Knowledge of relevant AI-specific regulations (e.g., EU AI Act) and standards (e.g., ISO 42001)
  - Knowledge of, and familiarity with, ensuring AI systems comply with regulatory requirements
Experience
- Prior assessments
  - Experience conducting engagements focused on security and/or risk assessment of AI systems
    - If unable to meet this attribute due to the novelty of the subject matter in the governance, risk, and compliance domain, consider a letter/attestation to file describing actions taken to overcome this experience gap (e.g., required pre-engagement training)
  - Track record of evaluating risk management controls in AI projects
- Industry experience
  - Familiarity with various industries implementing AI technologies
  - Understanding of sector-specific security and compliance requirements for AI
Working group membership
These guidelines are a deliverable from the 2024 HITRUST External Assessor AI Working Group, a subcommittee of the 2024 HITRUST External Assessor Council. HITRUST is deeply appreciative of the contributions from each member of the 2024 HITRUST External Assessor AI Working Group:
- Adrian Leung, PWC
- Andrew Hicks, Frazier & Deeter
- Emily Di Nardo, Baker Tilly
- James Whitfield, A-lign
- Jared Hamilton, Crowe
- Jesse Goodale, LBMC
- Nicole Janko, Coalfire
- Paul Johnson, Wipfli
- Ryan Winkler, 360 Advanced
- Sean Brennan, Grant Thornton
- Sean Dowling, Accorian
- Stephanie Madhok, Accorian
- Uday Ali Pabrai, ecfirst