ISO/IEC 42001:2023 is a management system standard jointly published by ISO and IEC. Its stated purpose is to “provide guidance for establishing, implementing, maintaining and continually improving an AI (artificial intelligence) management system within the context of an organization.” The scope of ISO/IEC 42001:2023 is broad. For example, it directs organizations to “determine whether climate change is a relevant issue” and to understand the “competitive landscape and trends for new products and services using AI systems.”

Because the scope of an organization’s AI management system must consider far more AI risks than cybersecurity alone, ISO/IEC 42001:2023 intentionally does not go very deep into AI cybersecurity. Rather than going deep into any single area (cybersecurity or otherwise), it states that it “provides guidelines for the deployment of applicable controls” while “avoid[ing] specific guidance on management processes.”

ISO/IEC 42001:2023’s introduction explains that “the organization can combine generally accepted frameworks, other international standards, and its own experience to implement crucial processes.” In other words, ISO/IEC 42001:2023 is designed to be compatible with and implemented alongside more prescriptive frameworks such as the HITRUST CSF. By working alongside the organization’s AI management system, this HITRUST AI cybersecurity assessment and certification helps organizations get AI security right and prove that they have done so in a reliable and consistent way.

To assist adopters of both ISO/IEC 42001:2023 and the HITRUST AI Cybersecurity Certification, mappings between the two documents’ content have been captured where possible. These mappings are shown in the following pages of this document. Note that the mapped HITRUST AI cybersecurity requirements support, but may not fully cover, the mapped ISO/IEC 42001:2023 expectations, given the documents’ different purposes explained above.

Crosswalk

HITRUST AI security requirement (title) | Mapping(s) to ISO/IEC 42001:2023
AI security threat management
Security evaluations such as AI red teaming
  • 8. Operation > Operational planning and control > Paragraph 3
AI legal and compliance
ID and evaluate compliance & legal obligations for AI system development and deployment
  • 4. Context of the organization > 4.1. Understanding the organization and its context > Note 2, bullet a, sub-bullet 1
AI security governance and oversight
Assign roles and responsibilities for AI
  • 4. Governance implications of the organizational use of AI > 4.3. Maintaining accountability when introducing AI
  • 5. Overview of AI and AI systems > 5.5. Constraints on the use of AI
  • 6. Policies to address the use of AI >
    • 6.2. Governance oversight of AI
    • 6.3. Governance of decision-making
  • 6. Policies to address the use of AI > 6.7. Risk > 6.7.3. Objectives
Augment written policies to address AI specificities
  • 4. Governance implications of the organizational use of AI > 4.3. Maintaining accountability when introducing AI
  • 5. Overview of AI and AI systems > 5.2. How AI systems differ from other information technologies > 5.2.3. Adaptive systems
  • 6. Policies to address the use of AI >
    • 6.2. Governance oversight of AI
    • 6.4. Governance of data use
    • 6.7.2. Risk management
Development of AI software
Provide AI security training to AI builders and AI deployers
  • 7. Support >
    • 7.2. Competence
    • 7.3. Awareness
Change control over AI models
  • 8. Operation > Operational planning and control > Paragraph 5
  • Annex A > A.6. AI system life cycle >
    • A.6.2.2. AI system requirements and specification
    • A.6.2.3. Documentation of AI system design and development
    • A.6.2.4. AI system verification and validation
    • A.6.2.5. AI system deployment
Change control over language model tools
  • 8. Operation > Operational planning and control > Paragraph 5
  • Annex A > A.6. AI system life cycle >
    • A.6.2.2. AI system requirements and specification
    • A.6.2.3. Documentation of AI system design and development
    • A.6.2.4. AI system verification and validation
    • A.6.2.5. AI system deployment
Documentation of AI specifics during system design and development
  • Annex A > A.4. Resources for AI systems (all items)
AI supply chain
AI security requirements communicated to AI providers
  • Annex A > A.10. Third-party and customer relationships >
    • A.10.2. Allocating responsibilities
    • A.10.3. Suppliers
AI system logging and monitoring
Log AI system inputs and outputs
  • Annex A > A.6. AI system life cycle > A.6.2.8. AI system recording of event logs
Documenting and inventorying AI systems
AI data and data supply inventory
  • Annex A > A.7. Data for AI systems >
    • A.7.3. Acquisition of data
    • A.7.5. Data provenance
Resilience of the AI system
Updating incident response for AI specifics
  • Annex A > A.8. Information for interested parties of AI systems > A.8.4. Communication of incidents
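
For teams that want to track this crosswalk programmatically (for example, to feed a GRC tool or generate coverage reports), the table above can be represented as simple structured data. The sketch below is illustrative only: it assumes a minimal in-code representation, abbreviates the ISO/IEC 42001:2023 references, uses the requirement titles from the crosswalk above rather than official HITRUST identifiers, and includes only a subset of rows.

```python
# Illustrative sketch: a subset of the crosswalk above, represented as a
# mapping from HITRUST AI security requirement titles to the ISO/IEC
# 42001:2023 clauses / Annex A controls they map to. Titles and references
# are abbreviated from the table; this is not an official HITRUST or ISO
# data format.

CROSSWALK: dict[str, list[str]] = {
    "Security evaluations such as AI red teaming": [
        "8 Operation > Operational planning and control, paragraph 3",
    ],
    "Provide AI security training to AI builders and AI deployers": [
        "7.2 Competence",
        "7.3 Awareness",
    ],
    "Log AI system inputs and outputs": [
        "Annex A, A.6.2.8 AI system recording of event logs",
    ],
    "AI security requirements communicated to AI providers": [
        "Annex A, A.10.2 Allocating responsibilities",
        "Annex A, A.10.3 Suppliers",
    ],
}


def iso_references_for(requirement_title: str) -> list[str]:
    """Return the mapped ISO/IEC 42001:2023 references for a HITRUST AI
    security requirement title, or an empty list if none are mapped."""
    return CROSSWALK.get(requirement_title, [])


if __name__ == "__main__":
    # Print each requirement followed by its mapped ISO/IEC 42001:2023 references.
    for title, references in CROSSWALK.items():
        print(title)
        for ref in references:
            print(f"  -> {ref}")
```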
