Defense Advanced Research Projects Agency

Artificial Intelligence and Human-Computer Symbiosis Technologies

Technology to facilitate more intuitive interactions between humans and machines

May 17, 2019, DARPA Conference Center
The Strategic Technology Office is holding a Proposers Day meeting to provide information to potential proposers on the objectives of the new Air Combat Evolution (ACE) program and to facilitate teaming. The goal of ACE is to automate air-to-air combat, enabling reaction times at machine speeds and freeing pilots to concentrate on the larger air battle. Turning aerial dogfighting over to AI is less about dogfighting, which should be rare in the future, and more about giving pilots the confidence that AI and automation can handle a high-end fight.
To transform machine learning systems from tools into partners, users need to trust their machine counterparts. One component of building that trust is knowledge of a partner's competence: an accurate insight into its skills, experience, and reliability in dynamic environments. While state-of-the-art machine learning systems can perform well when applied in contexts similar to their learning experiences, they are unable to communicate their task strategies, the completeness of their training relative to a given task, the factors that may influence their actions, or their likelihood of success under specific conditions.
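To make the idea concrete, below is a minimal, hypothetical sketch of a competence-aware wrapper: alongside each prediction it reports how far the input sits from the model's training data, as a crude stand-in for communicating the completeness of training relative to a task. The class name, the nearest-neighbor heuristic, and the scikit-learn model in the usage lines are illustrative assumptions, not the program's actual methods.

import numpy as np

class CompetenceAwareClassifier:
    """Wraps any estimator with fit/predict and adds a competence score."""

    def __init__(self, base_model):
        self.base_model = base_model

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.base_model.fit(X, y)
        self.train_X = X
        # Typical nearest-neighbor spacing inside the training set, used to
        # scale "distance from anything I was trained on" into (0, 1].
        self.scale = float(np.median(self._nn_dist(X, exclude_self=True))) + 1e-12
        return self

    def _nn_dist(self, X, exclude_self=False):
        # Distance from each row of X to its nearest training example.
        d = np.linalg.norm(X[:, None, :] - self.train_X[None, :, :], axis=2)
        if exclude_self:
            np.fill_diagonal(d, np.inf)   # ignore each point's zero self-distance
        return d.min(axis=1)

    def predict_with_competence(self, X):
        X = np.asarray(X, dtype=float)
        preds = self.base_model.predict(X)
        # Competence decays smoothly as inputs drift away from the training data.
        competence = np.exp(-self._nn_dist(X) / self.scale)
        return preds, competence

# Hypothetical usage with a scikit-learn model:
from sklearn.linear_model import LogisticRegression
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
model = CompetenceAwareClassifier(LogisticRegression()).fit(X, y)
labels, competence = model.predict_with_competence([[0.2, -0.1], [9.0, 9.0]])
# The in-distribution point keeps a meaningful competence score; the far-away
# point scores near zero, flagging that the model is outside its experience.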
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
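As one simple, concrete example of a machine-generated explanation, the sketch below computes permutation importance: it shuffles each input feature in turn and measures how much the model's accuracy drops, giving a rough account of which factors drove the decisions. This is a generic post-hoc technique offered only for illustration, not the XAI program's approach; the function name and parameters are assumptions.

import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when one feature is shuffled; a bigger drop
    means the model leaned on that feature more heavily."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    baseline = np.mean(model.predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])   # break feature j's link to the labels
            drops.append(baseline - np.mean(model.predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Hypothetical usage: works with any fitted model exposing predict(), e.g.
# importances = permutation_importance(fitted_model, X_test, y_test)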
The growing sophistication and ubiquity of machine learning (ML) components in advanced systems dramatically expands capabilities, but also increases the potential for new vulnerabilities. Current research on adversarial AI focuses on approaches in which imperceptible perturbations to ML inputs can deceive an ML classifier, altering its response.
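A standard textbook illustration of such a perturbation is the fast gradient sign method (FGSM), sketched below against a hand-rolled logistic-regression classifier. The weights and inputs are hypothetical and nothing here describes any DARPA defense; the point is only how a small, targeted nudge to an input can flip a classifier's output.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """One-step FGSM: nudge x in the direction that most increases the loss."""
    p = sigmoid(w @ x + b)        # model's probability of class 1
    grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0]); b = 0.0    # hypothetical trained weights
x = np.array([0.4, 0.1]); y = 1       # correctly classified example
print(sigmoid(w @ x + b))             # ~0.60: classified as class 1
x_adv = fgsm(x, y, w, b, eps=0.3)
print(sigmoid(w @ x_adv + b))         # ~0.34: flipped by a small perturbation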
What is opaque to outsiders is often obvious – even if implicit – to locals. Habitus aims to capture local knowledge and make it available to military operators, providing them with an insider's view to support decision making.