Defense Advanced Research Projects Agency
Tagged Content List

Artificial Intelligence and Human-Computer Symbiosis Technologies

Technology to facilitate more intuitive interactions between humans and machines

Showing 13 results for AI + Trust
February 20, 2019
Webcast
The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is sponsoring a Proposers Day webcast to provide information to potential proposers on the objectives of an anticipated Broad Agency Announcement (BAA) for the Competency-Aware Machine Learning (CAML) program. The Proposers Day will be held via prerecorded webcast on February 20, 2019, at 11:00 AM EST and rebroadcast at 3:00 PM EST. Advance registration is required to view the webcast.
May 17, 2019
DARPA Conference Center
The Strategic Technology Office is holding a Proposers Day meeting to provide information to potential proposers on the objectives of the new Air Combat Evolution (ACE) program and to facilitate teaming. The goal of ACE is to automate air-to-air combat, enabling reaction times at machine speeds and freeing pilots to concentrate on the larger air battle. Turning aerial dogfighting over to AI is less about dogfighting, which should be rare in the future, and more about giving pilots the confidence that AI and automation can handle a high-end fight.
In order to transform machine learning systems from tools into partners, users need to trust their machine counterpart. One component of building a trusted relationship is knowledge of a partner's competence: an accurate insight into a partner's skills, experience, and reliability in dynamic environments. While state-of-the-art machine learning systems can perform well when their behaviors are applied in contexts similar to their learning experiences, they are unable to communicate their task strategies, the completeness of their training relative to a given task, the factors that may influence their actions, or their likelihood of success under specific conditions.
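The gap CAML targets can be made concrete with a small sketch. The wrapper below pairs each prediction with two self-assessment signals: the classifier's own confidence and a check of whether the query resembles anything seen during training. Everything here (the class name, the nearest-neighbor distance check, the quantile threshold) is an illustrative assumption layered on a scikit-learn-style classifier, not a method from the CAML program.

```python
# A minimal sketch of competence-aware prediction, assuming a
# scikit-learn-style classifier. All names and thresholds are
# illustrative, not part of the CAML program.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

class CompetenceAwareModel:
    """Pairs each prediction with a self-assessed competence signal."""

    def __init__(self, distance_quantile=0.95):
        self.clf = LogisticRegression(max_iter=1000)
        # Two neighbors so calibration can skip each training point itself.
        self.nn = NearestNeighbors(n_neighbors=2)
        self.distance_quantile = distance_quantile
        self.threshold = None

    def fit(self, X, y):
        self.clf.fit(X, y)
        self.nn.fit(X)
        # Calibrate an "in-distribution" distance threshold from each
        # training point's distance to its nearest other training point.
        dist, _ = self.nn.kneighbors(X)
        self.threshold = np.quantile(dist[:, 1], self.distance_quantile)
        return self

    def predict_with_competence(self, X):
        proba = self.clf.predict_proba(X)
        labels = proba.argmax(axis=1)
        confidence = proba.max(axis=1)
        # Queries far from anything seen in training lie outside the
        # model's experience, whatever its softmax confidence says.
        dist, _ = self.nn.kneighbors(X)
        in_distribution = dist[:, 0] <= self.threshold
        return labels, confidence, in_distribution

# Toy usage: two Gaussian classes. The distant query is flagged as
# outside the model's competence even if it receives a confident label.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
model = CompetenceAwareModel().fit(X, y)
print(model.predict_with_competence(np.array([[2.0, 2.0], [50.0, 50.0]])))
```

The point of the sketch is the failure it exposes: a plain classifier would report high confidence on the far-away query, while the distance check reveals that the model has no relevant experience there.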
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by their current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
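As one concrete example of the kind of post-hoc explanation at issue, the sketch below computes permutation feature importance: a model-agnostic technique that measures how much held-out accuracy drops when each input feature is shuffled. It is a minimal illustration of one common explanation method, not the XAI program's approach, and the data and model are synthetic stand-ins.

```python
# A minimal sketch of permutation feature importance, one simple
# post-hoc explanation technique. Data and model are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Only features 0 and 1 drive the label; feature 2 is pure noise.
y = ((X[:, 0] + X[:, 1]) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
baseline = clf.score(X_te, y_te)

# Importance of a feature = accuracy drop when its values are shuffled,
# which breaks that feature's relationship to the label.
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - clf.score(X_perm, y_te)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Run as written, features 0 and 1 should show large accuracy drops and feature 2 a negligible one, giving a human user a rough, model-agnostic account of which inputs the model actually relies on.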
Media generation and manipulation technologies are advancing rapidly, and purely statistical detection methods are quickly becoming insufficient for identifying falsified media assets. Detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources (algorithm development, data, or compute).
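To make the limitation concrete, the sketch below implements the kind of brittle statistical-fingerprint detector the passage describes: it thresholds a single high-frequency residual statistic calibrated on authentic samples. The feature, the synthetic "authentic" and "fake" data, and the threshold are all illustrative assumptions; a generator tuned to match this one statistic would evade the detector, which is exactly the failure mode noted above.

```python
# A minimal sketch of a statistical-fingerprint detector. The statistic,
# data, and threshold are illustrative assumptions, not a real forensic tool.
import numpy as np

def residual_energy(img):
    """Mean squared Laplacian response: a crude high-frequency fingerprint."""
    lap = (img[1:-1, :-2] + img[1:-1, 2:] + img[:-2, 1:-1] + img[2:, 1:-1]
           - 4.0 * img[1:-1, 1:-1])
    return float(np.mean(lap ** 2))

def flag_as_synthetic(img, threshold):
    # Flag images whose high-frequency energy falls below the range
    # calibrated on authentic samples (generators often oversmooth).
    return residual_energy(img) < threshold

# Toy calibration: white noise stands in for authentic sensor noise; a
# box-filtered image stands in for a generator output whose high
# frequencies are suppressed.
rng = np.random.default_rng(0)
authentic = [rng.normal(size=(64, 64)) for _ in range(50)]
threshold = 0.9 * min(residual_energy(a) for a in authentic)

fake = rng.normal(size=(64, 64))
fake = 0.25 * (fake + np.roll(fake, 1, axis=0) + np.roll(fake, 1, axis=1)
               + np.roll(fake, (1, 1), axis=(0, 1)))
print(flag_as_synthetic(fake, threshold))  # True: the fingerprint differs
```

A single fixed statistic like this is cheap to defeat: retraining or post-processing the generator to restore the expected high-frequency energy erases the fingerprint, which is why purely statistical detection is becoming insufficient.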
Tags: AI, Analytics, Trust