Defense Advanced Research Projects Agency
Tagged Content List

Technologies for Trustworthy Computing and Information

Confidence in the integrity of information and systems

Showing 64 results for Trust
Unreliable software imposes enormous costs on both the military and the civilian economy. Currently, most Commercial Off-the-Shelf (COTS) software contains roughly one to five bugs per thousand lines of code. Formal verification of software provides the highest level of confidence that a given piece of software is free of errors that could disrupt military and government operations. Unfortunately, traditional formal verification methods do not scale to the size of software found in modern computer systems, and they currently require highly specialized engineers with deep knowledge of software technology and mathematical theorem-proving techniques.
| Cyber | Formal | Trust |
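As a toy illustration of why scale matters (a sketch in plain Python, not any program's actual tooling): the check below exhaustively verifies a bounds specification for a midpoint function over simulated signed 8-bit integers, catching the classic `(lo + hi) // 2` overflow bug. Even at 8 bits the enumeration visits thousands of input pairs, and the count grows exponentially with word size, which is why practical formal verification relies on symbolic theorem-proving rather than enumeration.

```python
BITS = 8
MAX = 2 ** (BITS - 1) - 1  # 127, the largest signed 8-bit value

def wrap(v):
    """Simulate signed 8-bit two's-complement wraparound."""
    return (v + 2 ** (BITS - 1)) % 2 ** BITS - 2 ** (BITS - 1)

def mid_buggy(lo, hi):
    # Overflows when lo + hi exceeds MAX, wrapping negative.
    return wrap(lo + hi) // 2

def mid_safe(lo, hi):
    # hi - lo never exceeds MAX here, so no wraparound occurs.
    return wrap(lo + wrap(hi - lo) // 2)

def check(mid_fn):
    """Spec: for all 0 <= lo <= hi <= MAX, lo <= mid_fn(lo, hi) <= hi.
    Returns a counterexample, or None if the spec holds everywhere."""
    for lo in range(MAX + 1):
        for hi in range(lo, MAX + 1):
            m = mid_fn(lo, hi)
            if not (lo <= m <= hi):
                return (lo, hi, m)
    return None

counterexample = check(mid_buggy)  # first failure: (1, 127, -64)
assert check(mid_safe) is None     # the safe version passes the whole domain
```

Exhaustive checking proves the property only for this tiny domain; a theorem prover establishes it for all word sizes at once, but demands exactly the specialized expertise the paragraph above describes.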
Embedded computing systems are ubiquitous in critical infrastructure, vehicles, smart devices, and military systems. Conventional wisdom once held that cyberattacks against embedded systems were not a concern since they seldom had traditional networking connections on which an attack could occur. However, attackers have learned to bridge air gaps that surround the most sensitive embedded systems, and network connectivity is now being extended to even the most remote of embedded systems.
| Cyber | Formal | Trust |
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines’ current inability to explain their decisions and actions to human users (Figure 1). The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
The growing sophistication and ubiquity of machine learning (ML) components in advanced systems dramatically expands capabilities, but also increases the potential for new vulnerabilities. Current research on adversarial AI focuses on settings in which imperceptible perturbations to ML inputs can deceive an ML classifier and alter its response.
What is opaque to outsiders is often obvious, even if implicit, to locals. Habitus aims to capture local knowledge and make it available to military operators, providing them with an insider’s view to support decision-making.