Term Definition Source Author(s) and/or Editor(s)
adversarial examples Modified testing samples which induce misclassification of a machine learning model at deployment time NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
AI agent Entity that senses and responds to its environment and takes actions to achieve its goals ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
AI application An AI application is a software system that utilizes an artificial intelligence or machine learning model as a core component in order to automate complex tasks. These tasks might require language understanding, reasoning, problem-solving, or perception, for example to automate an IT helpdesk, assist with finances, or answer health insurance questions. AI models alone may not be directly beneficial to end-users for many tasks. But they can be used as a powerful engine to produce compelling product experiences. In such an AI-powered application, end-users interact with an interface that passes information to the model. Robust Intelligence Robust Intelligence, Inc.
AI assurance A combination of frameworks, policies, processes and controls that measure, evaluate and promote safe, reliable and trustworthy AI. AI assurance schemes may include conformity, impact and risk assessments, AI audits, certifications, testing and evaluation, and compliance with relevant standards. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
AI component Functional element that constructs an AI system ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
AI governance A system of laws, policies, frameworks, practices and processes at international, national and organizational levels. AI governance helps various stakeholders implement, manage, oversee and regulate the development, deployment and use of AI technology. It also helps manage associated risks to ensure AI aligns with stakeholders’ objectives, is developed and used responsibly and ethically, and complies with applicable legal and regulatory requirements. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
AI platform An integrated collection of technologies to develop, train, and run machine learning models. This typically includes automation capabilities, machine learning operations (MLOps), predictive data analytics, and more. Think of it like a workbench: it lays out all of the tools you have to work with and provides a stable foundation on which to build and refine. Redhat.com Red Hat, Inc.
AI red teaming Red teaming is a way of interactively testing AI models to protect against harmful behavior, including leaks of sensitive data and generated content that’s toxic, biased, or factually inaccurate. IBM Research Blog: What is GenAI Red Teaming? Martineau, Kim
AI system Engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives. The engineered system can use various techniques and approaches related to artificial intelligence to develop a model to represent data, knowledge, processes, etc. which can be used to conduct tasks. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
algorithm A set of computational rules to be followed to solve a mathematical problem. More recently, the term has been adopted to refer to a process to be followed, often by a computer. Comptroller’s Handbook: Model Risk Management, Version 1.0 Office of the Comptroller of the Currency (OCC)
API rate limiting API rate limiting refers to controlling or managing how many requests or calls an API consumer can make to your API. You may have experienced something related as a consumer with errors about “too many connections” or something similar when you are visiting a website or using an app. An API owner will include a limit on the number of requests or amount of total data a client can consume. This limit is described as an API rate limit. An example of an API rate limit could be the total number of API calls per month or a set metric of calls or requests during another period of time. Axway Blog: What is an API rate limit? Defranchi, Lydia
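The mechanics of an API rate limit can be sketched with a sliding-window limiter. This is a minimal illustration, not the implementation of any particular API gateway; the class and parameter names are hypothetical:

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window rate limiter: allow at most `limit` calls per `window` seconds."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.calls = deque()  # timestamps of calls accepted so far

    def allow(self, now=None):
        """Return True if a call made at time `now` is within the rate limit."""
        now = time.monotonic() if now is None else now
        # Evict timestamps that have fallen outside the window
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False
```

A real API owner would pair the `False` branch with an HTTP 429 response and headers indicating when the counter resets.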
bias Favoritism towards some things, people, or groups over others ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
confabulation (context: AI) A false, degraded, or corrupted memory: a stable pattern of activation in an artificial neural network or neural assembly that does not correspond to any previously learned pattern. The same term is also applied to the (non-artificial) neural mistake-making process leading to a false memory (confabulation). Wikipedia: Confabulation (neural networks)  
data catalog A Data Catalog is a collection of metadata, combined with data management and search tools, that helps analysts and other data users to find the data that they need, serves as an inventory of available data, and provides information to evaluate fitness of data for intended uses. Alation Blog: What Is a Data Catalog? – Importance, Benefits & Features Wells, Dave
data poisoning Poisoning attacks in which a part of the training data is under the control of the adversary NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
data provenance A process that tracks and logs the history and origin of records in a dataset, encompassing the entire life cycle from its creation and collection to its transformation to its current state. It includes information about sources, processes, actors and methods used to ensure data integrity and quality. Data provenance is essential for data transparency and governance, and it promotes better understanding of the data and eventually the entire AI system. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
data science Methodology for the synthesis of useful knowledge directly from data through a process of discovery or of hypothesis formulation and hypothesis testing. NIST Big Data Interoperability Framework Chang, Wo L.;Grady, Nancy
data scientist A practitioner who has sufficient knowledge in the overlapping regimes of business needs, domain knowledge, analytical skills, and software and systems engineering to manage the end-to-end data processes in the analytics life cycle. NIST Big Data Interoperability Framework Chang, Wo L.;Grady, Nancy
differential privacy Differential privacy is a method for measuring how much information the output of a computation reveals about an individual. It is based on the randomized injection of “noise”. Noise is a random alteration of data in a dataset so that values such as direct or indirect identifiers of individuals are harder to reveal. An important aspect of differential privacy is the concept of “epsilon” or ɛ, which determines the level of added noise. Epsilon is also known as the “privacy budget” or “privacy parameter”. DRAFT Anonymisation, pseudonymisation and privacy enhancing technologies guidance: Chapter 5: Privacy-enhancing technologies (PETs) Information Commissioner’s Office (UK Government)
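The role of epsilon can be illustrated with the classic Laplace mechanism applied to a counting query. This is a toy sketch (the function names are illustrative, not from the ICO guidance): a counting query has sensitivity 1, so noise is drawn from a Laplace distribution with scale 1/ε, and smaller ε means more noise and stronger privacy.

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via inverse-CDF on a uniform draw."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon):
    """Differentially private count: true count plus Laplace(1/epsilon) noise.

    The sensitivity of a counting query is 1 (one person changes the count
    by at most 1), so the noise scale is 1/epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

With ε = 1 the reported count stays close to the truth; with ε = 0.01 the same query would return wildly varying answers, spending far less of the privacy budget per query.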
embedding An embedding is a representation of a topological object, manifold, graph, field, etc. in a certain space in such a way that its connectivity or algebraic properties are preserved. For example, a field embedding preserves the algebraic structure of plus and times, an embedding of a topological space preserves open sets, and a graph embedding preserves connectivity. One space X is embedded in another space Y when the properties of Y restricted to X are the same as the properties of X. Wolfram MathWorld  
ensemble A machine learning paradigm where multiple models (often called “weak learners”) are trained to solve the same problem and combined to get better results. The main hypothesis is that when weak models are correctly combined we can obtain more accurate and/or robust models. Towards Data Science: Ensemble methods: bagging, boosting and stacking Rocca, Joseph
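The combining step can be sketched with hard (majority) voting, one of the simplest ensemble strategies; the classifiers here are toy threshold functions invented for the example:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine weak learners by hard voting: the most common label wins."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three deliberately weak threshold "learners" over a 1-D input
weak_learners = [lambda x: x > 1, lambda x: x > 3, lambda x: x > 5]
```

Bagging, boosting, and stacking differ mainly in how the weak learners are trained and how their votes are weighted, but all reduce to some form of this combination step.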
evasion Evasion attacks consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. Wikipedia: Adversarial Machine Learning > Evasion  
expert system A computer system emulating the decision-making ability of a human expert through the use of reasoning, leveraging an encoding of domain-specific knowledge most commonly represented by sets of if-then rules rather than procedural code. The term “expert system” was used largely during the 1970s and ’80s amidst great enthusiasm about the power and promise of rule-based systems that relied on a “knowledge base” of domain-specific rules and rule-chaining procedures that map observations to conclusions or recommendations. National Security Commission on Artificial Intelligence: The Final Report National Security Commission on Artificial Intelligence (NSCAI)
fine-tuning Refers to the process of adapting a pre-trained model to perform specific tasks or to specialize in a particular domain. This phase follows the initial pre-training phase and involves training the model further on task-specific data. NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
foundation model A large language model that is trained on a broad set of diverse data to operate across a wide range of use cases. OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
generative AI A field of AI that uses deep learning trained on large datasets to create content, such as written text, code, images, music, simulations and videos, in response to user prompts. Unlike discriminative models, which make predictions about existing data, generative AI produces new data. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
graphical processing unit (GPU) A specialized chip capable of highly parallel processing. GPUs are well-suited for running machine learning and deep learning algorithms. GPUs were first developed for efficient parallel processing of arrays of values used in computer graphics. Modern-day GPUs are designed to be optimized for machine learning. National Security Commission on Artificial Intelligence: The Final Report National Security Commission on Artificial Intelligence (NSCAI)
guardrail (context: AI) An AI guardrail is a safeguard that is put in place to prevent artificial intelligence from causing harm. AI guardrails are a lot like highway guardrails – they are both created to keep people safe and guide positive outcomes. Techopedia Explains: AI Guardrail Techopedia
ground truth Information provided by direct observation as opposed to information provided by inference Collins Dictionary: ‘Ground truth’ HarperCollins Publishers
grounding The practice of ensuring that generative AI tools return results that are accurate (‘grounded’ in facts) rather than just those which are statistically probable or pleasing to a user. The Causeit Guide to Digital Fluency: Concept: Grounding (in AI) Causeit, Inc.
hallucination (context: AI) A response generated by AI which contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneous responses or beliefs rather than perceptual experiences. For example, a chatbot powered by large language models (LLMs) may embed plausible-sounding random falsehoods within its generated content. Some researchers believe the specific term “AI hallucination” unreasonably anthropomorphizes computers. Wikipedia: Hallucination (artificial intelligence) (adapted)  
heuristic AI See rule-based AI    
hyperparameters Characteristic of a machine learning algorithm that affects its learning process. Hyperparameters are selected prior to training and can be used in processes to help estimate model parameters. Examples of hyperparameters include the number of network layers, width of each layer, type of activation function, optimization method, learning rate for neural networks; the choice of kernel function in a support vector machine; number of leaves or depth of a tree; the K for K-means clustering; the maximum number of iterations of the expectation maximization algorithm; the number of Gaussians in a Gaussian mixture. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
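The distinction between a hyperparameter (chosen before training) and a learned parameter can be made concrete with a tiny k-nearest-neighbours classifier, where k is the hyperparameter. This is an illustrative 1-D sketch, not drawn from the ISO/IEC text:

```python
def knn_predict(train, x, k):
    """1-D k-nearest-neighbours classifier.

    `train` is a list of (feature, label) pairs; `k` is a hyperparameter
    fixed before any prediction is made."""
    neighbours = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
    votes = [label for _, label in neighbours]
    return max(set(votes), key=votes.count)
```

Changing k alters the model's behaviour without retraining anything, which is why hyperparameters are typically tuned on validation data rather than learned from training data.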
inference The stage of ML in which a model is applied to a task. For example, a classifier model produces the classification of a test sample. NIST IR 8269: A Taxonomy and Terminology of Adversarial Machine Learning (Draft) Tabassi, Elham;Kevin J. Burns; Michael Hadjimichael; Andres D. Molina-Markham; Julian T. Sexton
input Data received from an external source ISO/IEC/IEEE 24765:2017 — Systems and software engineering — Vocabulary International Standards Organization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE)
language model A language model is a probabilistic model of a natural language. In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text. Language models are useful for a variety of tasks, including speech recognition (helping prevent predictions of low-probability (e.g. nonsense) sequences), machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval. Wikipedia: Language model  
large language model (LLM) A type of artificial intelligence (AI) that is trained on a massive dataset of text and code. LLMs use natural language processing to process requests and generate text. OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
LLM agent A piece of code that formulates prompts to an LLM and parses the output in order to perform an action or a series of actions (typically by calling one or more plugins/tools). OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
LLM tool A piece of code that exposes external functionality to an LLM Agent; e.g., reading a file, fetching the contents of a URL, querying a database, etc. OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
LLM plugin Similar to LLM Tool but more often used in the context of chatbots (e.g., ChatGPT) OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
machine learning (ML) A branch of Artificial Intelligence (AI) that focuses on the development of systems capable of learning from data to perform a task without being explicitly programmed to perform that task. Learning refers to the process of optimizing model parameters through computational techniques such that the model’s behavior is optimized for the training task. EU-U.S. Terminology and Taxonomy for Artificial Intelligence – Second Edition EU-US Trade and Technology Council (TTC) Working Group 1 (WG1)
metaprompt The metaprompt or system message is included at the beginning of the prompt and is used to prime the model with context, instructions, or other information relevant to your use case. You can use the system message to describe the assistant’s personality, define what the model should and shouldn’t answer, and define the format of model responses. Microsoft Artificial Intelligence and Machine Learning Blog: Creating effective security guardrails with metaprompt/system message engineering Young, Sarah
minimization (Part of the ICO framework for auditing AI) AI systems generally require large amounts of data. However, organizations must comply with the minimization principle under data protection law if using personal data. This means ensuring that any personal data is adequate, relevant and limited to what is necessary for the purposes for which it is processed. […] The default approach of data scientists in designing and building AI systems will not necessarily take into account any data minimization constraints. Organizations must therefore have in place risk management practices to ensure that data minimization requirements, and all relevant minimization techniques, are fully considered from the design phase, or, if AI systems are bought or operated by third parties, as part of the procurement process due diligence. A guide to ICO Audit: Artificial Intelligence (AI) Audits Information Commissioner’s Office (UK Government)
modality In the context of human–computer interaction, a modality is the classification of a single independent channel of input/output between a computer and a human. Such channels may differ based on sensory nature (e.g., visual vs. auditory), or other significant differences in processing (e.g., text vs. image). Wikipedia: Modality (human–computer interaction)  
model A core component of an AI system used to make inferences from inputs in order to produce outputs. A model characterizes an input-to-output transformation intended to perform a core computational task of the AI system (e.g., classifying an image, predicting the next word for a sequence, or selecting a robot’s next action given its state and goals). EU-U.S. Terminology and Taxonomy for Artificial Intelligence – Second Edition EU-US Trade and Technology Council (TTC) Working Group 1 (WG1)
model card A brief document that discloses information about an AI model, like explanations about intended use, performance metrics and benchmarked evaluation in various conditions, such as across different cultures, demographics or race. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
model extraction Type of privacy attack to extract model architecture and parameters NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
model inversion A class of attacks that seeks to reconstruct class representatives from the training data of an AI model, which results in the generation of semantically similar data rather than direct reconstruction of the data (i.e., extraction). NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
model poisoning Poisoning attacks in which the model parameters are under the control of the adversary NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
model training Process to determine or to improve the parameters of a machine learning model, based on a machine learning algorithm, by using training data ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
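The parameter-improvement loop can be sketched with gradient descent on a one-feature linear model. This is a generic illustration of the ISO definition, not a method it prescribes; the function name and learning rate are invented for the example:

```python
def train_linear(data, lr=0.05, epochs=500):
    """Fit y = w*x + b by gradient descent on mean squared error.

    Each epoch nudges the parameters (w, b) in the direction that
    reduces the average squared prediction error over the training data."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Here the learning rate `lr` and `epochs` are hyperparameters, while `w` and `b` are the parameters that training determines.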
model validation Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled ISO/IEC 27043:2015: Information technology — Security techniques — Incident investigation principles and processes International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
open-source AI An AI system that makes its components available under licenses that individually grant the freedoms to: Study how the system works and inspect its components, use the system for any purpose and without having to ask for permission, modify the system to change its recommendations, predictions or decisions to adapt to your needs, and share the system with or without modifications for any purpose. These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is to have access to the preferred form to make modifications to the system. Opensource.org: The Open Source AI Definition Open Source Initiative
output Process by which an information processing system, or any of its parts, transfers data outside of that system or part ISO/IEC/IEEE 24765:2017 — Systems and software engineering — Vocabulary International Standards Organization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE)
parameter Internal variable of a model that affects how it computes its outputs ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
prediction (context: AI) Primary output of an AI system when provided with input data or information. Predictions can be followed by additional outputs, such as recommendations, decisions and actions. Prediction does not necessarily refer to predicting something in the future. Predictions can refer to various kinds of data analysis or production applied to new data or historical data (including translating text, creating synthetic images or diagnosing a previous power failure). ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
predictive AI Artificial intelligence systems that utilize statistical analysis and machine learning algorithms to make predictions about potential future outcomes, causation, risk exposure, and more. Systems of this kind have been applied across numerous industries. For example:
• Healthcare: leveraging patient data to diagnose diseases and model disease progression
• Finance: predicting movements of markets and analyzing transaction data to detect fraud
• Retail and e-commerce: examining sales data, seasonality, and non-financial factors to optimize pricing strategies or forecast consumer demand
• Insurance: streamlining claims management or forecasting potential losses to ensure adequate reserves are maintained
Predictive AI Carnegie Council for Ethics in International Affairs
prompt A prompt is natural language text describing the task that an AI should perform: a prompt for a text-to-text language model can be a query such as “what is Fermat’s little theorem?”, a command such as “write a poem about leaves falling”, or a longer statement including context, instructions, and conversation history. Wikipedia: Prompt engineering  
prompt extraction An attack in which the objective is to divulge the system prompt or other information in an LLM’s context that would nominally be hidden from a user NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
prompt injection Attacker technique in which a hacker enters a text prompt into an LLM or chatbot designed to enable the user to perform unintended or unauthorized actions NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
rate limiting A limit on how often a client can call a service within a defined window of time. When the limit is exceeded, the client—rather than receiving an application-related response—receives a notification that the allowed rate has been exceeded as well as additional data regarding the limit number and the time at which the limit counter will be reset for the requestor to resume receiving responses. NIST Special Publication 800-204: Security Strategies for Microservices-based Application Systems Chandramouli, Ramaswamy
randomized smoothing A method that transforms any classifier into a certifiable robust smooth classifier by producing the most likely predictions under Gaussian noise perturbations. This method results in provable robustness for ℓ2 evasion attacks, even for classifiers trained on large-scale datasets, such as ImageNet. Randomized smoothing typically provides certified prediction to a subset of testing samples (the exact number depends on the radius of the ℓ2 ball and the characteristics of the training data and model). Recent results have extended the notion of certified adversarial robustness to ℓ2-norm bounded perturbations by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier [50]. NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
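The core voting step can be sketched in one dimension. Real randomized smoothing perturbs high-dimensional inputs (e.g., images) and derives a certified ℓ2 radius from the vote margin; this toy Monte-Carlo sketch, with invented names, shows only the "most likely prediction under Gaussian noise" part:

```python
import random

def smoothed_classify(base_classifier, x, sigma, n_samples=500):
    """Monte-Carlo sketch of randomized smoothing: classify many
    Gaussian-perturbed copies of the input and return the most
    frequent prediction of the base classifier."""
    counts = {}
    for _ in range(n_samples):
        noisy_x = x + random.gauss(0.0, sigma)
        label = base_classifier(noisy_x)
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)
```

Intuitively, a small adversarial perturbation of `x` barely changes the distribution of `noisy_x`, so the majority vote, and hence the smoothed prediction, is stable.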
red-team A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment. Also known as Cyber Red Team. Information Technology Laboratory Computer Security Resource Center (CSRC) Glossary National Institute of Standards and Technology (NIST)
responsible AI An AI system that aligns development and behavior to goals and values. This includes developing and fielding AI technology in a manner that is consistent with democratic values. National Security Commission on Artificial Intelligence: The Final Report National Security Commission on Artificial Intelligence (NSCAI)
retrieval augmented generation (RAG) RAG is an AI framework for retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information and to give users insight into LLMs’ generative process. IBM Research Blog: Retrieval Augmented Generation Martineau, Kim
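The retrieve-then-prompt flow can be sketched with a toy keyword retriever. Production RAG systems use embedding-based vector search rather than word overlap; the retriever and prompt template here are invented for illustration:

```python
def retrieve(query, documents, top_k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Assemble a grounded prompt: retrieved context, then the question."""
    context = "\n".join(retrieve(query, documents))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )
```

The assembled prompt is then sent to the LLM, which grounds its answer in the retrieved passages rather than relying solely on its training data.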
robust AI An AI system that is resilient in real-world settings, such as an object-recognition application that is robust to significant changes in lighting. The phrase also refers to resilience when it comes to adversarial attacks on AI components. National Security Commission on Artificial Intelligence: The Final Report National Security Commission on Artificial Intelligence (NSCAI)
robustness The ability of a machine learning model/algorithm to maintain correct and reliable performance under different conditions (e.g., unseen, noisy, or adversarially manipulated data). NIST IR 8269: A Taxonomy and Terminology of Adversarial Machine Learning Tabassi, Elham;Kevin J. Burns; Michael Hadjimichael; Andres D. Molina-Markham; Julian T. Sexton
rule-based AI Rule-based systems are a basic type of AI model that uses a set of prewritten rules to make decisions and solve problems. Developers create rules based on human expert knowledge, which then enable the system to process input data and produce a result. To build a rule-based system, a developer first creates a list of rules and facts for the system. An inference engine then measures the information given against these rules. Here, human knowledge is encoded as rules in the form of if-then statements. The system follows the rules set and only performs the programmed functions. For example, a rule-based algorithm or platform could measure a bank customer’s personal and financial information against a programmed set of levels. If the numbers match, the bank grants the applicant a home loan. TechTarget.com Tip: Choosing between a rule-based vs. machine learning system Carew, Joseph M.;Foster, Emily;Wisbey, Olivia
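The bank-loan example above can be sketched as a minimal inference engine over if-then rules (the thresholds and rule set are invented for illustration):

```python
# Each rule is a (condition, conclusion) pair; the inference engine
# fires the first rule whose condition matches the given facts.
def infer(rules, facts):
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return "no decision"

# Human expert knowledge encoded as if-then rules
loan_rules = [
    (lambda f: f["income"] >= 50000 and f["credit_score"] >= 700, "approve"),
    (lambda f: f["credit_score"] < 600, "deny"),
]
```

Unlike a machine learning model, nothing here is learned from data: the system only ever performs what the rule author wrote down.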
sensitive data Data with potentially harmful effects in the event of disclosure or misuse ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
small language model (SLM) A smaller counterpart to the better-known large language models. Small refers to the size of the model: SLMs have fewer parameters and require a much smaller training dataset, optimizing them for efficiency and better suiting them for deployment in environments with limited computational resources or for applications that require faster training and inference times. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
system prompt See metaprompt    
test data (context: AI) Test data is the data used to evaluate the performance of the AI system, before its deployment. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
threat modeling Threat modeling is analyzing representations of a system to highlight concerns about security and privacy characteristics. At the highest levels, when we threat model, we ask four key questions: (1) What are we working on? (2) What can go wrong? (3) What are we going to do about it? (4) Did we do a good enough job? Threat Modeling Manifesto Braiterman, Zoe;Shostack, Adam;Marcil, Jonathan; de Vries, Stephen;Michlin, Irene;Wuyts, Kim; Hurlbut, Robert;Schoenfield, Brook SE;Scott, Fraser;Coles, matthew;Romeo, Chris;Miller, Alyssa;Tarandach, Izar;Douglen, Avi;French, Mark
training data Training data consists of data samples used to train a machine learning algorithm. Typically, the data samples relate to some particular topic of concern and they can consist of structured or unstructured data. The data samples can be unlabelled or labelled. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
trustworthy AI Trustworthy AI has three components: (1) it should be lawful, ensuring compliance with all applicable laws and regulations; (2) it should be ethical, demonstrating respect for, and ensuring adherence to, ethical principles and values; and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. Characteristics of Trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. Trustworthy AI concerns not only the trustworthiness of the AI system itself but also comprises the trustworthiness of all processes and actors that are part of the AI system’s life cycle. Trustworthy AI is based on respect for human rights and democratic values. EU-U.S. Terminology and Taxonomy for Artificial Intelligence – Second Edition EU-US Trade and Technology Council (TTC) Working Group 1 (WG1)
validation (context: AI) In software assessment frameworks, validation is the process of checking whether certain requirements have been fulfilled. It is part of the evaluation process. In AI-specific context, the term “validation” is used to refer to the process of leveraging data to set certain values and properties relevant to the system design. It is not about assessing the system with respect to its requirements, and it occurs before the evaluation stage. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
validation data Validation data corresponds to data used by the developer to make or validate some algorithmic choices (hyperparameter search, rule design, etc.). It has various names depending on the field of AI; for instance, in natural language processing it is typically referred to as development data. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
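The relationship between training, validation, and test data (defined in their own entries above) can be sketched as a three-way split. The fractions and function name are illustrative, not prescribed by the ISO standard:

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle and split a dataset into training, validation, and test subsets.

    Training data fits model parameters; validation data guides
    algorithmic choices such as hyperparameter search; test data is
    held out to evaluate the system before deployment."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

Keeping the three subsets disjoint is what makes the test-set evaluation an honest estimate of deployment performance.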
