Defense Advanced Research Projects Agency
Tagged Content List

Technologies for Trustworthy Computing and Information

Confidence in the integrity of information and systems

Showing 48 results for Trust
The Anomaly Detection at Multiple Scales (ADAMS) program creates, adapts, and applies technology for anomaly characterization and detection in massive data sets. Anomalies in data cue the collection of additional, actionable information in a wide variety of real-world contexts. The initial application domain is insider threat detection, in which malevolent (or possibly inadvertent) actions by a trusted individual are detected against a background of everyday network activity.
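A minimal sketch of what anomaly detection over a background of everyday activity can look like, using a simple z-score test on per-user activity counts. The data, the z-score method, and the threshold are illustrative assumptions, not ADAMS's actual algorithms, which operate at far larger scale.

```python
# Hedged sketch: flag observations that deviate sharply from a user's
# baseline activity. Illustrative only; not the ADAMS approach.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of observations whose z-score exceeds threshold."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# A trusted user's daily file-access counts (hypothetical data);
# the spike on day 6 would cue collection of further information.
daily_accesses = [12, 15, 11, 14, 13, 12, 480, 13, 14, 12]
print(flag_anomalies(daily_accesses))  # prints [6]
```

In practice a single z-score is far too crude for insider threat detection; the point is only that anomalies serve as cues, not verdicts, triggering collection of additional context before any action is taken.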
To be effective, Department of Defense (DoD) cybersecurity solutions require rapid development times. The shelf life of systems and capabilities is sometimes measured in days. Thus, to a greater degree than in other areas of defense, cybersecurity solutions require that DoD develop the ability to build quickly, at scale, and over a broad range of capabilities.
The process of determining that a software system’s risk is acceptable is referred to as “certification.” Current certification practices within the Department of Defense (DoD) are antiquated and unable to scale with the amount of software deployed. Two factors prevent scaling: (a) the use of human evaluators to determine if the system meets certification criteria, and (b) the lack of a principled means to decompose evaluations.
| Cyber | Formal | Trust |
The Clean-Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program will pursue innovative research into the design of new computer systems that are highly resistant to cyber-attack, can adapt after a successful attack to continue rendering useful services, can learn from previous attacks how to guard against and cope with future ones, and can repair themselves after an attack has succeeded. Exploitable vulnerabilities originate from a handful of known sources (e.g., memory safety); they remain because of deficits in the tools, languages, and hardware that could address and prevent vulnerabilities at the design, implementation, and execution stages.
| Cyber | Trust |
In order to transform machine learning systems from tools into partners, users need to trust their machine counterpart. One component of building a trusted relationship is knowledge of a partner's competence (an accurate insight into a partner's skills, experience, and reliability in dynamic environments). While state-of-the-art machine learning systems can perform well when their behaviors are applied in contexts similar to their learning experiences, they are unable to communicate their task strategies, the completeness of their training relative to a given task, the factors that may influence their actions, or their likelihood of success under specific conditions.
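One simple form of competence communication can be sketched as a wrapper that reports a confidence score alongside each prediction and abstains when that score is low. The toy scoring function, labels, and threshold below are illustrative assumptions; they stand in for the richer self-assessment a competence-aware partner would provide.

```python
# Hedged sketch: report competence alongside a prediction, deferring to
# the human partner when confidence is low. Illustrative only.

def predict_with_competence(scores, threshold=0.8):
    """Return (label, confidence), or ('abstain', confidence) when the
    normalized score of the best label falls below threshold.

    scores: mapping of label -> raw non-negative score (hypothetical).
    """
    total = sum(scores.values())
    label, best = max(scores.items(), key=lambda kv: kv[1])
    confidence = best / total if total else 0.0
    if confidence < threshold:
        return ("abstain", confidence)  # defer rather than guess
    return (label, confidence)

print(predict_with_competence({"benign": 9.0, "malicious": 1.0}))  # confident
print(predict_with_competence({"benign": 5.0, "malicious": 4.0}))  # abstains
```

Even this crude mechanism changes the interaction: instead of silently extrapolating beyond its training, the system signals when a task falls outside the conditions under which it is likely to succeed.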