Defense Advanced Research Projects Agency

Artificial Intelligence and Human-Computer Symbiosis Technologies

Technology to facilitate more intuitive interactions between humans and machines

Showing 14 results for AI + Trust
March 14, 2019, 9:00 AM ET,
DARPA Conference Center
The Information Innovation Office is holding a Proposers Day meeting to provide information to potential proposers on the objectives of the new Artificial Social Intelligence for Successful Teams (ASIST) program. ASIST will explore human-machine teaming and machine social intelligence in a teaming context. DARPA envisions computer-based agents that observe their surroundings; build and maintain rich representations of the environment, team, and individuals; infer teammates' goals (e.g., from non-verbal behavior); predict teammates' actions; and assist teams by planning interventions and executing them at appropriate times. During demonstrations in a virtual testbed, ASIST agents will operate in increasingly complex environments and will have to adapt to challenges such as a change in strategy.
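Purely as an illustration of that observe/infer/intervene cycle, the Python sketch below shows one minimal shape such an agent loop could take. The class names, event schema, and one-line inference rules are hypothetical placeholders for this example, not the ASIST design.

```python
from dataclasses import dataclass


@dataclass
class TeammateModel:
    """Hypothetical per-teammate belief state a social agent might keep."""
    inferred_goal: str = "unknown"
    predicted_action: str = "unknown"


class SocialAgent:
    """Illustrative observe -> infer -> predict -> intervene loop (a sketch)."""

    def __init__(self):
        self.environment = {}  # representation of the surroundings
        self.teammates = {}    # teammate name -> TeammateModel

    def observe(self, events):
        # Update the environment map and per-teammate models from observations.
        for event in events:
            if event["type"] == "world":
                self.environment[event["key"]] = event["value"]
            elif event["type"] == "behavior":
                model = self.teammates.setdefault(event["who"], TeammateModel())
                # Stand-in goal inference from a non-verbal behavior cue.
                model.inferred_goal = event["cue"]

    def predict_action(self, who):
        # Naive prediction: assume the teammate acts toward the inferred goal.
        model = self.teammates.get(who, TeammateModel())
        model.predicted_action = f"move toward {model.inferred_goal}"
        return model.predicted_action

    def plan_intervention(self, who):
        # Intervene only when the agent has something concrete to offer.
        model = self.teammates.get(who)
        if model and model.inferred_goal != "unknown":
            return f"suggest a route to {who} for goal '{model.inferred_goal}'"
        return None


agent = SocialAgent()
agent.observe([
    {"type": "world", "key": "room_A", "value": "blocked"},
    {"type": "behavior", "who": "medic", "cue": "triage victim in room B"},
])
print(agent.predict_action("medic"))
print(agent.plan_intervention("medic"))
```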
February 20, 2019,
Webcast
The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is sponsoring a Proposers Day webcast to provide information to potential proposers on the objectives of an anticipated Broad Agency Announcement (BAA) for the Competency-Aware Machine Learning (CAML) program. The Proposers Day will be held via prerecorded webcast on February 20, 2019, at 11:00 AM EST and will be reposted at 3:00 PM EST. Advance registration is required to view the webcast.
May 17, 2019,
DARPA Conference Center
The Strategic Technology Office is holding a Proposers Day meeting to provide information to potential proposers on the objectives of the new Air Combat Evolution (ACE) program and to facilitate teaming. The goal of ACE is to automate air-to-air combat, enabling reaction times at machine speeds and freeing pilots to concentrate on the larger air battle. Turning aerial dogfighting over to AI is less about dogfighting, which should be rare in the future, and more about giving pilots the confidence that AI and automation can handle a high-end fight.
To transform machine learning systems from tools into partners, users need to trust their machine counterparts. One component of building a trusted relationship is knowledge of a partner's competence: an accurate insight into its skills, experience, and reliability in dynamic environments. While state-of-the-art machine learning systems can perform well when applied in contexts similar to their learning experiences, they are unable to communicate their task strategies, the completeness of their training relative to a given task, the factors that may influence their actions, or their likelihood to succeed under specific conditions.
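As a rough illustration of that gap, the sketch below pairs an ordinary scikit-learn classifier with a crude self-assessed competence signal: its predicted-class confidence plus a nearest-neighbor check of whether the input resembles anything in the training data. The predict_with_competence helper and the max_familiar_dist threshold are invented for this example and are far simpler than what CAML targets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

# Toy stand-in for competence reporting: pair a classifier's confidence
# with a coverage signal (distance to the training data) so the system
# can flag inputs unlike anything it was trained on.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
nn = NearestNeighbors(n_neighbors=1).fit(X)

def predict_with_competence(x, max_familiar_dist=3.0):
    """Return (prediction, confidence, in_competence_region)."""
    x = x.reshape(1, -1)
    probs = clf.predict_proba(x)[0]
    dist, _ = nn.kneighbors(x)
    familiar = bool(dist[0, 0] <= max_familiar_dist)  # crude coverage test
    return int(np.argmax(probs)), float(np.max(probs)), familiar

# In-distribution query: the confidence score is meaningful.
print(predict_with_competence(X[0]))
# Far-from-training query: the coverage flag warns the user.
print(predict_with_competence(X[0] + 25.0))
```

The point of the second query is that a raw confidence score alone can stay high even on inputs the model has never seen anything like; a competence-aware system needs some explicit notion of where its training applies.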
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
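One widely used explanation technique, shown here purely as an illustration and not as the XAI program's method, is to fit a small interpretable surrogate to a black-box model's predictions and present the surrogate's rules to the user. The sketch below assumes scikit-learn and synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Global surrogate explanation: approximate an opaque model with a shallow
# decision tree, then show the tree's rules as a human-readable account
# of what the opaque model appears to have learned.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate mimics the black box on this data.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")

# The rule list is the "explanation" a user can actually inspect.
print(export_text(surrogate, feature_names=feature_names))
```

The fidelity score matters as much as the rules themselves: an explanation that does not faithfully track the underlying model's behavior can mislead rather than build trust.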