
Technologies for Trustworthy Computing and Information

Confidence in the integrity of information and systems

Showing 8 results for Trust + Data
03/06/2014
During the past decade, information technologies have driven the productivity gains essential to U.S. economic competitiveness, and computing systems now control significant elements of critical national infrastructure. As a result, tremendous resources are devoted to ensuring that programs are correct, especially at scale. Unfortunately, in spite of developers’ best efforts, software errors remain at the root of most execution failures and security vulnerabilities.
June 8, 2018
Executive Conference Center
DARPA’s Defense Sciences Office (DSO) is hosting a Proposers Day to provide information to potential proposers on the objectives of the Systematizing Confidence in Open Research and Evidence (SCORE) program. SCORE aims to develop and deploy automated tools to assign "confidence scores" to different social and behavioral science (SBS) research results and claims. Confidence scores are quantitative measures that should enable a DoD consumer of SBS research to understand the degree to which a particular claim or result is likely to be reproducible or replicable. The event will be available via a live webcast for those who would like to participate remotely.
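To make the idea of a confidence score concrete, the minimal sketch below combines a few study attributes into a single score between 0 and 1 with a logistic function. The attributes, weights, and formula are illustrative assumptions invented for this example; they are not SCORE's actual tools or criteria, which the program aims to develop.

    import math

    def confidence_score(sample_size, p_value, preregistered, replications):
        """Map a few study attributes to a 0-1 confidence score (illustrative only)."""
        # Assumed weights; a real tool would have to calibrate these against replication outcomes.
        z = (0.4 * math.log10(max(sample_size, 1))   # larger samples raise confidence
             - 3.0 * p_value                         # results near p = 0.05 score lower
             + 0.8 * (1 if preregistered else 0)     # pre-registration raises confidence
             + 0.5 * replications)                   # prior successful replications help
        return 1.0 / (1.0 + math.exp(-z))            # squash to a score between 0 and 1

    print(round(confidence_score(sample_size=1200, p_value=0.001,
                                 preregistered=True, replications=2), 2))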
The Anomaly Detection at Multiple Scales (ADAMS) program creates, adapts, and applies technology to anomaly characterization and detection in massive data sets. Anomalies in data cue the collection of additional, actionable information in a wide variety of real-world contexts. The initial application domain is insider threat detection, in which malevolent (or possibly inadvertent) actions by a trusted individual are detected against a background of everyday network activity.
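As a rough illustration of flagging unusual activity against an everyday baseline, the sketch below scores one user's daily activity with a robust z-score built from the median and median absolute deviation. The activity measure, the data, and the alert threshold are assumptions made up for this example; they do not describe the ADAMS system itself.

    from statistics import median

    def robust_z(value, history):
        """How far a value sits from the user's typical behavior, in MAD units."""
        med = median(history)
        mad = median(abs(x - med) for x in history) or 1.0  # median absolute deviation
        return abs(value - med) / (1.4826 * mad)            # comparable to a z-score

    # Files copied per day by one hypothetical trusted user over two weeks.
    baseline = [12, 9, 14, 11, 10, 13, 8, 12, 15, 11, 9, 13, 10, 12]
    today = 240                                             # a sudden bulk copy

    score = robust_z(today, baseline)
    if score > 6.0:                                         # assumed alert threshold
        print(f"anomaly: today's activity ({today}) is {score:.1f} MADs from baseline")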
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by their current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
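A minimal sketch of the underlying idea, not DARPA's XAI tooling: a model that returns, alongside each prediction, the per-feature contributions a human user can inspect. The feature names and weights below are hypothetical and chosen only to show the shape of such an explanation.

    # Hypothetical linear "threat" model whose weights double as its explanation.
    FEATURES = {"speed": 0.9, "proximity_to_convoy": 1.4, "known_transponder": -2.1}

    def predict_with_explanation(observation):
        """Return a score plus the per-feature contributions behind it."""
        contributions = {name: w * observation[name] for name, w in FEATURES.items()}
        score = sum(contributions.values())
        ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        return score, ranked

    score, why = predict_with_explanation(
        {"speed": 0.8, "proximity_to_convoy": 0.9, "known_transponder": 0.0})
    print(f"threat score {score:+.2f}")
    for feature, contribution in why:
        print(f"  {feature}: {contribution:+.2f}")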
As computing devices become more pervasive, the software systems that control them have become increasingly complex and sophisticated. Consequently, despite the tremendous resources devoted to making software more robust and resilient, ensuring that programs are correct, especially at scale, remains a difficult endeavor. Unfortunately, uncaught errors triggered during program execution can lead to potentially crippling security violations, unexpected runtime failures, or unintended behavior, all of which can have profound negative consequences for economic productivity, the reliability of mission-critical systems, and the correct operation of important and sensitive cyber infrastructure.
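As a toy illustration of how an unvalidated input can turn into unintended behavior at runtime, the sketch below parses a message whose one-byte length field may lie, first without and then with a validity check. The message format is an assumption invented for this example and is not drawn from any particular system.

    def parse_unchecked(packet: bytes) -> bytes:
        """Trusts the declared length; in Python this silently truncates, while the
        same pattern in a lower-level language can over-read memory."""
        length = packet[0]
        return packet[1:1 + length]

    def parse_checked(packet: bytes) -> bytes:
        """Validates the declared length before using it."""
        length = packet[0]
        payload = packet[1:]
        if length != len(payload):
            raise ValueError(f"declared length {length} != actual {len(payload)}")
        return payload

    good = bytes([5]) + b"hello"
    bad = bytes([200]) + b"hi"          # lies about its own length
    print(parse_checked(good))          # b'hello'
    print(parse_unchecked(bad))         # accepted without complaint
    try:
        parse_checked(bad)
    except ValueError as err:
        print("rejected:", err)         # the checked parser refuses to proceed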