The National Institute of Standards and Technology’s National Cybersecurity Center of Excellence (NCCoE) released Draft NIST Interagency/Internal Report (NISTIR) 8269, A Taxonomy and Terminology of Adversarial Machine Learning, for public comment. This draft is a step toward securing applications of artificial intelligence (AI), especially against adversarial manipulations of Machine Learning (ML), NIST said.
Although AI includes various knowledge-based systems, the data-driven approach of ML introduces additional security challenges in both the training and the testing (inference) phases of system operations. Inference refers to deploying a pre-trained model, for example on an embedded system, to run its algorithm on new inputs. In the inference phase, the pre-trained model processes live application requests using the weights computed during training. It is the inference phase that consumes the most computing resources in deployment and therefore offers the most value for performance optimization.
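The two phases can be sketched with a toy linear model. This is an illustrative assumption, not an example from the NIST draft: closed-form least squares stands in for a real training loop, and a single function stands in for a serving system.

```python
import numpy as np

# Minimal sketch (illustrative, not from Draft NISTIR 8269) of the two
# phases the text distinguishes: training computes weights once;
# inference applies those frozen weights to each live request.

rng = np.random.default_rng(0)

# --- Training phase: weights are computed from labeled data. ---
X = rng.normal(size=(200, 3))          # synthetic training features
w_true = np.array([1.5, -2.0, 0.5])    # hidden ground-truth direction
y = np.sign(X @ w_true)                # +/-1 labels
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# --- Inference phase: weights are fixed; each request is a forward
# pass, which is why this phase dominates deployed computing cost. ---
def infer(request):
    """Classify one incoming feature vector with precomputed weights."""
    return int(np.sign(request @ weights))

print(infer(np.array([3.0, -3.0, 0.0])))  # strongly positive request -> 1
```

The split matters for security: training-time attacks corrupt how `weights` is computed, while inference-time attacks craft the `request` itself.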
Security challenges include the potential for adversarial manipulation of ML training data and for adversarial exploitation of model sensitivities, either of which can degrade the performance of ML classification and regression. Adversarial machine learning (AML) is concerned with the design of ML algorithms that can withstand these security challenges, the study of the capabilities of attackers, and the understanding of attack consequences, according to Draft NISTIR 8269. (See Figure 1.)
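Both attack classes named above can be demonstrated on the same toy linear classifier. This sketch is an assumption for illustration only, not material from the draft: label-mislabeled injected points stand in for training-data poisoning, and a boundary reflection stands in for an evasion perturbation that exploits model sensitivity.

```python
import numpy as np

# Illustrative sketch (not from Draft NISTIR 8269): how training-data
# poisoning and a small evasion perturbation degrade a toy linear model.

rng = np.random.default_rng(1)

def train(X, y):
    """Least-squares fit of +/-1 labels; stands in for real training."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict(w, X):
    return np.sign(X @ w)

def accuracy(w, X, y):
    return float(np.mean(predict(w, X) == y))

# Clean training data: labels follow a ground-truth direction.
w_true = np.array([2.0, -1.0])
u = w_true / np.linalg.norm(w_true)
X = rng.normal(size=(400, 2))
y = np.sign(X @ w_true)

# Poisoning: the adversary injects mislabeled points deep inside each
# class region, dragging the learned weights toward the wrong direction.
X_bad = np.vstack([np.tile(4 * u, (200, 1)), np.tile(-4 * u, (200, 1))])
y_bad = np.concatenate([-np.ones(200), np.ones(200)])
w_clean = train(X, y)
w_poisoned = train(np.vstack([X, X_bad]), np.concatenate([y, y_bad]))

X_test = rng.normal(size=(400, 2))
y_test = np.sign(X_test @ w_true)
print(accuracy(w_clean, X_test, y_test))     # high: model learned the rule
print(accuracy(w_poisoned, X_test, y_test))  # collapses under poisoning

# Evasion: reflecting a test point across the decision boundary is a
# minimal perturbation that flips the clean model's prediction.
x = X_test[0]
uh = w_clean / np.linalg.norm(w_clean)
x_adv = x - 2 * (x @ uh) * uh
print(predict(w_clean, x[None])[0], predict(w_clean, x_adv[None])[0])
```

The contrast mirrors the taxonomy's split: poisoning is a training-phase attack on the data the model learns from, while evasion is an inference-phase attack on the inputs a deployed model sees.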
Attacks are launched by adversaries with malevolent intent, and security of ML refers to defenses intended to prevent or mitigate the consequences of such attacks. Although ML components can also be adversely affected by unintentional factors, such as design flaws or data biases, these are not intentional adversarial attacks and fall outside the scope of security addressed in the AML literature.
NIST welcomes public comments on the findings and considerations published in Draft NISTIR 8269. The document develops a taxonomy of concepts and defines terminology in the field of AML. The taxonomy builds on and integrates previous AML survey work, and is arranged in a conceptual hierarchy that includes key types of attacks, defenses, and consequences.
The terminology, arranged in an alphabetical glossary, defines key terms associated with the security of the ML components of an AI system. Used together, the taxonomy and terminology aim to inform future standards and best practices for assessing and managing the security of ML components by establishing a common language and understanding of the rapidly developing AML landscape, NIST explained.
Publication Details (Draft NISTIR 8269):