Researchers have demonstrated effective attacks on machine learning (ML) algorithms. These attacks can cause high-confidence misclassifications of input data, even when the attacker lacks detailed knowledge of the ML classifier algorithm or its training data. Developing effective defenses against such attacks is essential if ML is to be used for defense, security, or health and safety applications.
Recent evidence suggests that diverse ensembles of ML classifiers are more robust to adversarial inputs. However, practice has outpaced theory in this area. The objective of the Quantifying Ensemble Diversity for Robust Machine Learning (QED for RML) AI Exploration topic is to develop the theoretical foundations for understanding the behavior of diversified ensembles of ML classifiers and for quantifying their utility under attack. This foundation is necessary for building ML classifiers with provable defenses against whole classes of attacks or across regions of input space. QED for RML will explore what types of diversity metrics could enable formal guarantees of ensemble-based classifier performance against various classes of attack.
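To make the notion of a diversity metric concrete, here is an illustrative sketch (not part of the announcement, and not a QED for RML deliverable) of one candidate metric: mean pairwise disagreement among ensemble members' predicted labels. The classifiers and their predictions below are hypothetical; the point is only that such a quantity can be computed and, in principle, related to ensemble robustness.

```python
# Illustrative sketch: a simple pairwise-disagreement diversity metric
# for an ensemble of classifiers. The predictions are hypothetical.
from itertools import combinations


def disagreement(preds_a, preds_b):
    """Fraction of inputs on which two classifiers assign different labels."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)


def ensemble_diversity(all_preds):
    """Mean pairwise disagreement over every pair of ensemble members."""
    pairs = list(combinations(all_preds, 2))
    return sum(disagreement(a, b) for a, b in pairs) / len(pairs)


# Three hypothetical classifiers' labels on the same five inputs.
preds = [
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
]
print(ensemble_diversity(preds))  # mean of pairwise disagreements
```

A theory of the kind QED for RML seeks would characterize how metrics like this one bound an ensemble's worst-case error against a specified class of adversarial perturbations.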