Deployed applications leveraging any or all of the following (very) broad types of AI can be included in the scope of the HITRUST AI Cybersecurity Certification:
| AI type | Also known as | Description |
|---|---|---|
| Rule-based AI | heuristic models, traditional AI, expert systems, symbolic AI, classical AI | Rule-based AI systems encode human expertise as explicit rules and solve complex problems by reasoning over that knowledge. Instead of procedural code, knowledge is expressed as if-then/else rules. |
| Predictive AI | PredAI, non-generative machine learning models | These are traditional machine learning models that make inferences such as predictions or classifications, typically trained on an organization’s enterprise tabular data. They extract insights from historical data to predict the most likely upcoming event, result, or trend. In this context, a prediction does not necessarily refer to the future: predictions can also describe various kinds of data analysis applied to new or historical data. |
| Generative AI | GenAI, GAI | Generative AI is artificial intelligence that responds to a user’s prompt or request with generated original content, such as audio, images, software code, text, or video. Most generative AI models start with a foundation model, a type of deep learning model that “learns” to generate statistically probable outputs when prompted. Large language models (LLMs) and small language models (SLMs) are common foundation models for text generation, but other foundation models exist for other types of content generation. |
The table above is intentionally broad to encompass a wide variety of AI solutions. For the purposes of this certification, AI systems range from an LLM to a linear regression function to a carefully curated rule-based inference engine.
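To make the rule-based category concrete, the sketch below shows how an expert system might express knowledge as if-then rules rather than procedural code, with a simple forward-chaining step that fires rules until no new facts are derived. The rule set and fact names are purely illustrative assumptions, not taken from any real product:

```python
# Hypothetical sketch of a rule-based (expert-system) inference step.
# The rules and fact names below are illustrative only.

RULES = [
    # (conditions that must all hold, conclusion to assert)
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_flu", "high_risk_patient"}, "recommend_antiviral"),
]

def forward_chain(facts):
    """Repeatedly fire if-then rules until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "cough", "high_risk_patient"})))
```

Because every conclusion follows deterministically from explicit human-written rules, systems like this behave very differently from the statistically trained predictive and generative models in the same table.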
Further, the assessment considers security issues brought about by implementing popular generative AI development patterns, including the use of:
- embeddings
- language model tools such as agents and plugins
- retrieval augmented generation (RAG)
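As a minimal sketch of how these patterns fit together, a RAG pipeline embeds documents, retrieves the closest match to a user prompt, and passes it to a language model as context. The bag-of-words "embedding" and the stubbed `generate()` below are assumptions standing in for a real embedding model and LLM API:

```python
# Toy retrieval-augmented generation (RAG) sketch. The bag-of-words
# "embedding" and the stubbed generate() stand in for a real embedding
# model and LLM call; both are illustrative assumptions only.
import math
import re
from collections import Counter

def embed(text):
    """Word-count vector as a stand-in for a learned embedding."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    "Encrypt model artifacts at rest.",
    "Rotate API keys for the inference endpoint.",
]

def retrieve(query):
    """Return the document whose embedding is closest to the query's."""
    q = embed(query)
    return max(DOCS, key=lambda d: cosine(q, embed(d)))

def generate(prompt, context):
    # Stub for an LLM call: a real system would send prompt + context
    # to a model, which is where risks such as prompt injection via
    # retrieved content arise.
    return f"Answer based on: {context}"

query = "How should we protect API keys?"
print(generate(query, retrieve(query)))
```

The security-relevant point is the data flow: retrieved content is concatenated into the model's input, so anything in the document store can influence the model's output.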
NOTE: HITRUST will not award the HITRUST AI Cybersecurity Certification to any AI deployment categorized as unacceptable, or otherwise banned, by applicable AI regulation in the jurisdiction of the assessed entity.
Madan Singhal wrote: Sep 12, 2024
Great job, team, for bringing everything together for securing AI. Each type of AI may drive compliance in its own way. I have a few suggestions:
- It would be helpful to add a few examples for each AI type.
- I assumed Rule-based AI primarily refers to model-free AI like unsupervised learning. I suggest adding examples for rule-based AI.
- Generative AI models bring major and unique security concerns that differ from the previous world of software and traditional AI. This category could evolve further and be categorized into subtypes like Text Generation Models, Image/Video Generation Models, and Speech Generation Models. These three types of content generation bring different security challenges as they align with three core human abilities: writing, seeing, and speaking.
- AI Agents could be a fourth type that covers embedding models and RAG.
Walter Haydock wrote: Sep 11, 2024
Excluding from certification all systems banned by the EU AI Act, irrespective of jurisdiction, is too broad a prohibition. For example, the EU AI Act prohibits real-time, remote biometric identification in public places for law enforcement purposes. It is quite conceivable, however, that a hospital system in the United States could use a facial recognition system to identify individuals who have acted violently in the past and contact local police as a preemptive measure. This practice could be legal in the jurisdiction (and even compliant with ISO/IEC 42001:2023), so banning it due to the EU AI Act does not seem appropriate.
Furthermore, the phrase “categorized as unacceptable” does not make clear who decides on the characterization. It is conceivable that an activist group might categorize a vast swath of AI systems as “unacceptable” and then argue that companies deploying them should not receive a HITRUST AI Security Certification.
To resolve these issues, I would simply remove the phrases “categorized as unacceptable or are otherwise” and “or by the EU AI Act.”
Walter Haydock wrote: Sep 11, 2024
I recommend excluding the first category, “Rule-based AI,” because:
- Systems covered by it lack the ability to learn from data and adapt based on new inputs. They are completely deterministic.
- These systems can’t generalize and only handle scenarios explicitly covered by rules provided by humans.
- There is no machine-driven optimization process (e.g., minimizing error, maximizing likelihood) involved, nor do these systems rely on probabilistic or statistical models to infer relationships or make predictions.
- Including these systems would cover most software, making the scope of the certification potentially unworkable.