As a longstanding leader in information security and cybersecurity risk management with 17 years of practical experience and demonstrable results, HITRUST serves as an IT certification body, an assessor accreditation body, and an information protection standards harmonization body, a combination that uniquely positions us to help address the growing need for AI security assurances.

We commend the efforts of global standards development organizations such as ENISA, NIST, and ISO; non-profit foundations such as the OWASP Foundation; and governments across the globe that work tirelessly to provide guidance on using artificial intelligence technologies responsibly. However, a key missing element is a robust assurance framework to ensure the relevance, implementation, operation, and maturity of the guardrails and safeguards needed to mitigate AI security threats. The technology industry does not lack AI standards; it lacks appropriate and reliable mechanisms to define, measure, and report on their implementation and effectiveness.

A reliable AI assurance solution must ensure that the relevant controls embodied in AI standards and applicable regulations are implemented and operating properly to deliver effective risk mitigation. Relevant frameworks, backed by reliable assurances, provide the confidence and transparency that this requires.

Specifically, consider the following key points:

  1. Strong assurance program: Proven assurance programs should be leveraged to validate not only that controls are in place to mitigate the risks and threats introduced through the adoption of AI, but also that these controls are effective and operationally mature. In addition, assurance systems must be transparent, scalable, consistent, accurate, and efficient; these qualities are essential for trust and integrity.

  2. Continuous improvement: Standards and frameworks must be kept relevant to the rapidly evolving AI risk and threat landscape. Regulatory standards that are updated infrequently cannot keep pace with AI; approaches that adapt actively as threats evolve are therefore essential.

  3. Measurable outcomes: The industry needs a consistent, transparent, and accurate method to measure and benchmark the effectiveness of AI controls. As the adage goes, “if it is important, you need to measure it.” Measurement enables continuous optimization and better risk management (a minimal scoring sketch follows this list).

  4. Support for control selection for different AI deployment scenarios: All entities deploying AI models into new and existing IT platforms are susceptible to novel cyber-attacks unique to AI. The industry therefore needs an approach that begins with a baseline of good security hygiene and leading AI security practices applicable to all organizations. Tailoring additional control selections on top of that baseline supports additional requirements and outcomes based on inherent risk and on the approach needed to provide cybersecurity and resiliency for different types of AI models and the capabilities they unlock (see the control-selection sketch after this list). This model works because the eventual set of requirements is backed and validated by the same transparent, consistent, accurate, and efficient assurance system. It permits regulatory consistency without a ‘one size fits all’ approach, which is inherently suboptimal given differences in organizational complexity and maturity.

  5. Support for diversity through inheritance and shared responsibility: Smaller organizations, such as health systems supporting rural or underserved communities, need the same level of cybersecurity as larger organizations with more resources. Because organizations of all sizes rely heavily on cloud service providers for their technology and cybersecurity needs, inheritance from those providers can accelerate cybersecurity capability adoption: today, 85% of the requirements for a HITRUST assessment may be inheritable by health industry companies from a HITRUST-certified cloud service provider such as Amazon AWS, Microsoft Azure, or Google Cloud (see the inheritance sketch after this list). Making robust AI cybersecurity capabilities available to all organizations deploying AI systems increases efficiency and reduces cost while streamlining security compliance.

  6. Protecting the IT infrastructure enabling AI capabilities: In addition to addressing the cybersecurity threats specific to AI, appropriate security controls across the entire technology stack are necessary to deliver AI capabilities in a secure manner. The world’s largest cloud and AI service providers have already demonstrated their commitment to foundational IT assurances by achieving HITRUST r2 certifications scoped to include their AI computing infrastructure and AI PaaS platforms. HITRUST continues to collaborate actively with these industry leaders on AI risk management and security requirements, including an AI Assurance Program built on our proven assurance model that incorporates shared responsibility and inheritance of security controls available from leading AI service providers.

  7. Risk management, not absolute security: It is critical to shift the culture and mindset from seeking absolute security to managing risk. This involves applying relevant controls and using reliable assurance methodologies to reduce risks to acceptable levels, with the remaining residual risk covered by cyber insurance. Regulation and policymaking based on data-driven evidence of control implementation, as provided by assurance systems, can create powerful incentives for regulated entities that demonstrate the maturity of their cybersecurity programs in a provable manner.
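
To make point 3 concrete, the following is a minimal sketch of how a weighted control maturity score might be computed. The maturity levels, weights, and ratings shown are assumptions for illustration only and do not represent HITRUST’s actual scoring methodology.

```python
# Illustrative only: a hypothetical weighted maturity score for one control.
# Levels and weights are assumptions, not HITRUST's scoring model.
MATURITY_WEIGHTS = {
    "policy": 0.15,
    "procedure": 0.20,
    "implemented": 0.40,
    "measured": 0.10,
    "managed": 0.15,
}

def control_score(ratings: dict[str, float]) -> float:
    """Combine per-level ratings (0.0-1.0) into a single 0-100 score."""
    return 100 * sum(MATURITY_WEIGHTS[level] * ratings.get(level, 0.0)
                     for level in MATURITY_WEIGHTS)

# Example: strong policy and implementation, weaker measurement and management.
print(control_score({"policy": 1.0, "procedure": 1.0, "implemented": 0.75,
                     "measured": 0.25, "managed": 0.5}))  # -> 75.0
```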
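
For point 4, this sketch shows risk-tiered control selection: a universal baseline of security hygiene plus additional controls driven by inherent risk. The control IDs and risk tiers are hypothetical placeholders, not entries from any HITRUST catalog.

```python
# A minimal sketch of risk-tiered control selection; IDs are hypothetical.
BASELINE = {"AI.SEC.01", "AI.SEC.02", "AI.SEC.03"}    # hygiene for everyone
TIERED = {
    "moderate": {"AI.SEC.10", "AI.SEC.11"},           # e.g., internal AI tools
    "high": {"AI.SEC.10", "AI.SEC.11", "AI.SEC.20"},  # e.g., patient-facing AI
}

def select_controls(inherent_risk: str) -> set[str]:
    """Every organization gets the baseline; higher inherent risk adds controls."""
    return BASELINE | TIERED.get(inherent_risk, set())

print(sorted(select_controls("low")))   # baseline only
print(sorted(select_controls("high")))  # baseline plus high-risk additions
```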
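
For point 5, this sketch shows how a shared responsibility model might partition an organization’s required controls into those inherited from a certified cloud service provider and those the customer must assess itself. The provider’s certified control set and all IDs are hypothetical.

```python
# A minimal sketch of control inheritance under shared responsibility;
# the provider's certified control set and IDs are hypothetical.
PROVIDER_CERTIFIED = {"AI.SEC.01", "AI.SEC.02", "AI.SEC.10"}

def split_responsibility(required: set[str]) -> tuple[set[str], set[str]]:
    """Partition required controls into inherited vs. customer-assessed."""
    inherited = required & PROVIDER_CERTIFIED
    return inherited, required - inherited

required = {"AI.SEC.01", "AI.SEC.02", "AI.SEC.10", "AI.SEC.20"}
inherited, own = split_responsibility(required)
print(f"inherited from CSP: {sorted(inherited)}")  # validated once, reused
print(f"customer assesses:  {sorted(own)}")
```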

We know that the approach outlined in our recommendations can be effective. As demonstrated and documented in HITRUST’s latest Trust Report, the organizations holding our current certifications, which span varying sizes and many industries, did not report a breach over the past two-year period while operating in one of the most aggressive cyber-attack environments in history. This is a testament to the significance of relevant controls and a strong assurance program, one that ensures the appropriate security controls are validated through reliable testing to earn objective certification. The HITRUST framework is continually updated in response to the changing AI threat landscape, ensuring that organizations can implement and maintain controls that remain effective in mitigating AI risk.

The standards, frameworks, and guidance needed to identify and effectively mitigate novel AI-specific risks and threats continue to emerge and mature. What the AI security space needs is an assurance approach that is proven effective. HITRUST has long championed the concepts behind, and implemented solutions for, cyber-threat-adaptive control and assurance frameworks that support comprehensive information risk management, emphasizing relevant controls backed by proven, measurable operational maturity of sufficient strength. As discussed above, a proactive and proven approach to AI security assurance is essential.

Early adopters of emerging technologies will continue to be frequent targets of criminals and nation states until we implement approaches that make information security validation and assurance an inherent part of technological innovation and new system design. Compliance motivations alone do not solve the problem, as the speed of change in cyber threats outpaces compliance systems. Only a proactive, threat-adaptive approach can ensure that relevant controls are in place and operating before entities are attacked.

We urge cybersecurity leaders to consider these points as they work to enhance the cybersecurity posture of new and planned AI deployments. HITRUST stands ready to support these efforts and to work with you to respond with urgency to the AI cybersecurity and risk management challenge we collectively face. We look forward to continuing our dialogue and to working together to strengthen initial AI assessment and certification in this important area.