Defense Advanced Research Projects Agency: Tagged Content List

Supervised Autonomy

Automated capabilities with human supervision; "human in the loop"

Showing 33 results for Autonomy + Artificial Intelligence
Humans intuitively combine pre-existing knowledge with observations and contextual clues to construct rich mental models of the world around them and use these models to evaluate goals, perform thought experiments, make predictions, and update their situational understanding. When the environment contains other people, humans use a skill called theory of mind (ToM) to infer their mental states from observed actions and context, and predict future actions from those inferred states.
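To make the inference step concrete, the following toy sketch (not drawn from the program description itself) frames theory-of-mind-style reasoning in a common way: an observer watches an agent move along a one-dimensional corridor and applies Bayes' rule with an assumed noisily rational action model to infer which of two hypothetical goals the agent is pursuing. The goal positions, the rationality parameter BETA, and the observed trajectory are all illustrative assumptions.

# A minimal sketch of goal inference in the spirit of theory of mind: infer an
# agent's goal (its "mental state") from observed actions plus a model of how a
# goal-directed agent would act. All quantities here are illustrative assumptions.
import math

GOALS = {"door_left": 0, "door_right": 10}   # hypothetical goal positions in a 1-D corridor
BETA = 2.0                                   # assumed rationality (softmax temperature)

def action_likelihood(position, action, goal_pos):
    """P(action | state, goal) under a softmax over one-step progress toward the goal."""
    utilities = {a: -abs((position + a) - goal_pos) for a in (-1, +1)}
    z = sum(math.exp(BETA * u) for u in utilities.values())
    return math.exp(BETA * utilities[action]) / z

def infer_goal(trajectory):
    """Update a uniform prior over goals from observed (position, action) pairs."""
    posterior = {g: 1.0 / len(GOALS) for g in GOALS}
    for position, action in trajectory:
        for g, goal_pos in GOALS.items():
            posterior[g] *= action_likelihood(position, action, goal_pos)
        total = sum(posterior.values())
        posterior = {g: p / total for g, p in posterior.items()}
    return posterior

# Observing three rightward steps from position 4 shifts belief toward door_right.
print(infer_goal([(4, +1), (5, +1), (6, +1)]))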
Expanded global access to diverse means of communication is resulting in more information being produced in more languages more quickly than ever before. The volume of information encountered by DoD, the speed at which it arrives, and the diversity of languages and media through which it is communicated make identifying and acting on relevant information a serious challenge. At the same time, there is a need to communicate with non-English-speaking local populations of foreign countries, but it is at present costly and difficult for DoD to do so.
The Communicating with Computers (CwC) program aims to enable symmetric communication between people and computers in which machines are not merely receivers of instructions but collaborators, able to harness a full range of natural modes including language, gesture, and facial or other expressions. For the purposes of the CwC program, communication is understood to be the sharing of complex ideas in collaborative contexts. Complex ideas are assumed to be built from a relatively small set of elementary ideas, and language is thought to specify such complex ideas, but not completely, because language is ambiguous and depends in part on context, which can augment language and improve the specification of complex ideas.
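The sketch below (illustrative only, not the CwC architecture or any CwC API) shows the under-specification idea in miniature: a toy parser maps an utterance to a partially specified idea with an unresolved deictic slot, and a simulated pointing gesture from a non-linguistic channel supplies the missing referent. The Idea frame, slot names, and gesture format are assumptions introduced for the example.

# A minimal sketch: language specifies a complex idea incompletely, and context
# (here, a pointing gesture) completes it. Structures and names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Idea:
    action: str
    obj: str
    destination: Optional[str]  # None means the utterance alone did not specify it

def parse_utterance(text: str) -> Idea:
    """Toy parser: maps one fixed pattern to a partially specified idea."""
    if text == "put the red block there":
        return Idea(action="put", obj="red block", destination=None)
    raise ValueError("utterance not covered by this toy grammar")

def ground_with_context(idea: Idea, gesture_target: str) -> Idea:
    """Use a non-linguistic channel (a pointing gesture) to fill the open slot."""
    if idea.destination is None:
        idea.destination = gesture_target
    return idea

partial = parse_utterance("put the red block there")
complete = ground_with_context(partial, gesture_target="table, left corner")
print(complete)  # Idea(action='put', obj='red block', destination='table, left corner')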
In order to transform machine learning systems from tools into partners, users need to trust their machine counterpart. One component of building a trusted relationship is knowledge of a partner's competence (an accurate insight into a partner's skills, experience, and reliability in dynamic environments). While state-of-the-art machine learning systems can perform well when their behaviors are applied in contexts similar to their learning experiences, they are unable to communicate their task strategies, the completeness of their training relative to a given task, the factors that may influence their actions, or how likely they are to succeed under specific conditions.
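One way to picture what such a competence report could look like, offered purely as an assumption rather than as the program's method, is a classifier that returns a rough familiarity score alongside each prediction, based on how far the query lies from its training experience.

# A minimal sketch of a model that reports its own competence: the prediction is a
# simple nearest-neighbour vote, and the competence score decays with the distance
# from the query to the training data. The scoring rule is an illustrative assumption.
import numpy as np

class SelfAwareClassifier:
    def __init__(self, distance_scale=1.0):
        self.distance_scale = distance_scale

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        return self

    def predict_with_competence(self, x):
        x = np.asarray(x, float)
        d = np.linalg.norm(self.X - x, axis=1)          # distance to every training point
        nearest = np.argsort(d)[:3]                     # 3 nearest neighbours vote on the label
        label = int(np.bincount(self.y[nearest]).argmax())
        competence = float(np.exp(-d.min() / self.distance_scale))  # ~1 near training data, ~0 far away
        return label, competence

clf = SelfAwareClassifier().fit([[0, 0], [0, 1], [5, 5], [6, 5]], [0, 0, 1, 1])
print(clf.predict_with_competence([0.2, 0.5]))     # near its training experience: higher competence score
print(clf.predict_with_competence([40.0, -30.0]))  # far outside it: competence near zero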
CREATE aims to explore the utility of artificial intelligence (AI) in the autonomous formation of scalable machine-to-machine teams capable of reacting to and learning from unexpected missions in the absence of centralized communication and control. CREATE seeks to develop the theoretical foundations of autonomous AI teaming to enable a system of heterogeneous, contextually aware agents to act in a decentralized manner and satisfy multiple, simultaneous, and unplanned mission goals.
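As a rough illustration of decentralized teaming, offered as an assumption for exposition and not as CREATE's actual algorithm, the sketch below has heterogeneous agents score the open mission goals locally, exchange bids peer to peer, and then independently apply the same greedy rule to the shared bid pool, so a consistent assignment emerges without a central controller.

# A minimal sketch of decentralized task allocation among heterogeneous agents.
# Agent names, capabilities, tasks, and the scoring rule are illustrative assumptions.
AGENTS = {"uav_1": {"sensor"}, "ugv_2": {"cargo"}, "uav_3": {"sensor", "relay"}}
TASKS = {"scan_area": {"sensor"}, "resupply": {"cargo"}, "comms_bridge": {"relay"}}

def local_bids(agent, capabilities):
    """Each agent scores only the tasks it can actually perform."""
    return {(agent, task): 1.0 / (1 + len(capabilities - needs))
            for task, needs in TASKS.items() if needs <= capabilities}

def gossip(all_local_bids):
    """Stand-in for peer-to-peer exchange: merge every agent's bid table."""
    merged = {}
    for bids in all_local_bids:
        merged.update(bids)
    return merged

def greedy_assignment(bid_pool):
    """Deterministic rule each agent applies identically to the shared bids."""
    assignment, taken_agents, taken_tasks = {}, set(), set()
    for (agent, task), score in sorted(bid_pool.items(), key=lambda kv: -kv[1]):
        if agent not in taken_agents and task not in taken_tasks:
            assignment[task] = agent
            taken_agents.add(agent)
            taken_tasks.add(task)
    return assignment

bid_pool = gossip([local_bids(a, caps) for a, caps in AGENTS.items()])
print(greedy_assignment(bid_pool))  # {'scan_area': 'uav_1', 'resupply': 'ugv_2', 'comms_bridge': 'uav_3'}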