Defense Advanced Research Projects Agency
Tagged Content List

Technologies for Trustworthy Computing and Information

Confidence in the integrity of information and systems

Showing 9 results for Trust + Analytics
June 8, 2018
Executive Conference Center
DARPA’s Defense Sciences Office (DSO) is hosting a Proposers Day to provide information to potential proposers on the objectives of the Systematizing Confidence in Open Research and Evidence (SCORE) program. SCORE aims to develop and deploy automated tools to assign "confidence scores" to different social and behavioral science (SBS) research results and claims. Confidence scores are quantitative measures that should enable a DoD consumer of SBS research to understand the degree to which a particular claim or result is likely to be reproducible or replicable. The event will be available via a live webcast for those who would like to participate remotely.
The Anomaly Detection at Multiple Scales (ADAMS) program creates, adapts, and applies technology to anomaly characterization and detection in massive data sets. Anomalies in data cue the collection of additional, actionable information in a wide variety of real-world contexts. The initial application domain is insider threat detection, in which malevolent (or possibly inadvertent) actions by a trusted individual are detected against a background of everyday network activity.
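The listing does not say how ADAMS actually characterizes anomalies. As a minimal sketch of the kind of detection it describes, the example below flags users whose aggregate activity deviates sharply from a population baseline; the synthetic data, the robust z-score method, and the threshold are illustrative assumptions, not the program's approach.

```python
# Illustrative anomaly detection over per-user activity counts.
# Hypothetical data and threshold; not ADAMS's actual method.
import numpy as np

def flag_anomalies(daily_counts: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Return indices of users whose total activity is anomalous.

    daily_counts: (n_users, n_days) matrix of, e.g., file accesses.
    Uses a robust z-score (median / MAD) so a few extreme users
    do not distort the baseline itself.
    """
    totals = daily_counts.sum(axis=1)
    median = np.median(totals)
    mad = np.median(np.abs(totals - median)) or 1.0  # guard divide-by-zero
    robust_z = 0.6745 * (totals - median) / mad
    return np.where(np.abs(robust_z) > threshold)[0]

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=(1000, 30))  # ordinary background activity
counts[7] += 400                               # one user with a burst of activity
print(flag_anomalies(counts))                  # flags user 7
```

Real insider-threat detection operates over far richer features (logins, email, file movement) and at multiple time scales; this only shows the core idea of separating one actor's behavior from the everyday background.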
To be effective, Department of Defense (DoD) cybersecurity solutions require rapid development times; the shelf life of systems and capabilities is sometimes measured in days. Thus, to a greater degree than in other areas of defense, cybersecurity demands that DoD develop the ability to build quickly, at scale, and across a broad range of capabilities.
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
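The paragraph names no particular explanation technique. As one minimal illustration of explainable machine learning, the sketch below uses permutation feature importance, a standard model-agnostic method available in scikit-learn, to surface which input features a trained model actually relies on; the synthetic data and model choice are assumptions for illustration only.

```python
# One simple, model-agnostic explanation technique: shuffle each
# feature in turn and measure the drop in held-out accuracy. Large
# drops mark the features the model actually depends on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:+.3f}")
```

Feature importances are only one narrow form of explanation; the program's goal, per the description above, is human-understandable accounts of a system's decisions and actions.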
Historically, the U.S. Government deployed and operated a variety of collection systems that provided imagery with assured integrity. In recent years, however, consumer imaging technology (digital cameras, mobile phones, etc.) has become ubiquitous, allowing people the world over to take and share images and video instantaneously. Mirroring this rise in digital imagery is the ability of even relatively unskilled users to manipulate and distort the message of the visual media.