Defense Advanced Research Projects Agency
Tagged Content List

Supervised Autonomy

Automated capabilities with human supervision; "human in the loop"

Showing 9 results for Autonomy + Trust
The ACE program seeks to increase trust in combat autonomy by using human-machine collaborative dogfighting as its challenge problem. This also serves as an entry point into complex human-machine collaboration. ACE will apply existing artificial intelligence technologies to the dogfight problem in experiments of increasing realism. In parallel, ACE will implement methods to measure, calibrate, increase, and predict human trust in combat autonomy performance.
In order to transform machine learning systems from tools into partners, users need to trust their machine counterparts. One component of building a trusted relationship is knowledge of a partner’s competence (an accurate insight into a partner’s skills, experience, and reliability in dynamic environments). While state-of-the-art machine learning systems can perform well when their behaviors are applied in contexts similar to their learning experiences, they are unable to communicate their task strategies, the completeness of their training relative to a given task, the factors that may influence their actions, or their likelihood of succeeding under specific conditions.
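The competence idea above can be made concrete with a toy sketch. The following is not any program's actual method; it is a minimal illustration, with invented names, of a model that reports how "familiar" an input is relative to its training data, so a user can judge when to rely on its prediction.

```python
# Hypothetical sketch: a toy 1-nearest-neighbor classifier that also
# reports a competence estimate based on distance to its training data.
# All names and the decay formula are illustrative, not from the source.
import math

class CompetenceAwareModel:
    def __init__(self, familiarity_radius=1.0):
        self.examples = []                      # (features, label) pairs
        self.familiarity_radius = familiarity_radius

    def fit(self, features, labels):
        self.examples = list(zip(features, labels))

    def _nearest(self, x):
        # Training example closest to the query point.
        return min(self.examples, key=lambda ex: math.dist(x, ex[0]))

    def predict(self, x):
        return self._nearest(x)[1]

    def competence(self, x):
        # Near 1.0 close to training data, decaying toward 0.0 far from it.
        d = math.dist(x, self._nearest(x)[0])
        return math.exp(-d / self.familiarity_radius)

model = CompetenceAwareModel()
model.fit([(0.0, 0.0), (1.0, 1.0)], ["a", "b"])
print(model.predict((0.9, 1.1)))       # prediction near known data
print(model.competence((0.9, 1.1)))    # high competence
print(model.competence((10.0, 10.0)))  # low competence: unfamiliar input
```

Real competence-aware systems would go well beyond distance to training data, but even this sketch shows the shape of the output a partner system could expose: a prediction paired with a self-assessment of reliability.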
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines’ current inability to explain their decisions and actions to human users (Figure 1). The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
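One simple form of machine explanation, shown here as an illustrative sketch rather than the program's actual approach, is decomposing a linear model's score into per-feature contributions so a user can see which inputs drove a decision. The feature names and weights below are invented for the example.

```python
# Hypothetical sketch of a basic explainability technique: attribute a
# linear model's decision to individual features (weight * value),
# ranked by magnitude. Names and numbers are illustrative only.

def explain_linear_decision(weights, feature_values):
    """Return (feature, contribution) pairs, largest magnitude first."""
    contributions = {name: weights[name] * feature_values[name]
                     for name in weights}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

weights = {"altitude": 0.8, "closing_speed": -0.5, "fuel": 0.1}
inputs  = {"altitude": 2.0, "closing_speed": 3.0, "fuel": 1.0}
for name, contribution in explain_linear_decision(weights, inputs):
    print(f"{name}: {contribution:+.2f}")
```

Explaining modern deep models is far harder than this, which is precisely the gap explainable-machine-learning research targets; the sketch only shows the kind of human-readable output such explanations aim to produce.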
Program Manager
Dr. Bartlett Russell joined DARPA as a program manager in April 2019. Her work focuses on understanding the variability of human cognitive and social behavior in order to support the decision-maker, improve analytics, and develop autonomous and AI systems that enable human adaptability. Prior to joining DARPA, Russell was a senior program manager and lead of the human systems and autonomy research area in Lockheed Martin’s Advanced Technology Laboratories.