Defense Advanced Research Projects Agency
Tagged Content List

Data Analysis at Massive Scales

Extracting information and insights from massive datasets; "big data"; "data mining"

Showing 11 results for Data + Trust
03/06/2014
During the past decade, information technologies have driven the productivity gains essential to U.S. economic competitiveness, and computing systems now control significant elements of critical national infrastructure. As a result, tremendous resources are devoted to ensuring that programs are correct, especially at scale. Unfortunately, despite developers’ best efforts, software errors remain the root cause of most execution failures and security vulnerabilities.
01/30/2020
U.S. forces operating in remote, under-governed regions around the world often find that an area’s distinct cultural and societal practices are opaque to outsiders but obvious to locals. Commanders can be hindered from making optimal decisions because they lack knowledge of how local socio-economic, political, religious, health, and infrastructure factors interact to shape a specific community.
June 8, 2018
Executive Conference Center
DARPA’s Defense Sciences Office (DSO) is hosting a Proposers Day to provide information to potential proposers on the objectives of the Systematizing Confidence in Open Research and Evidence (SCORE) program. SCORE aims to develop and deploy automated tools to assign "confidence scores" to different social and behavioral science (SBS) research results and claims. Confidence scores are quantitative measures that should enable a DoD consumer of SBS research to understand the degree to which a particular claim or result is likely to be reproducible or replicable. The event will be available via a live webcast for those who would like to participate remotely.
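To make the idea of a confidence score concrete, here is a minimal, hypothetical sketch in Python: a toy classifier that maps a few study-level features to a predicted probability that a claim replicates. The features (sample size, reported p-value, pre-registration), the synthetic training data, and the logistic-regression model are all illustrative assumptions, not the SCORE program's actual method.

```python
# Hypothetical sketch: a "confidence score" as a predicted probability
# that a research claim replicates. Features, data, and model are
# illustrative assumptions, not the SCORE program's actual approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy study-level features: [log10(sample size), reported p-value, pre-registered?]
X = np.column_stack([
    rng.uniform(1.5, 3.5, 200),     # log10 of sample size
    rng.uniform(0.001, 0.05, 200),  # reported p-value
    rng.integers(0, 2, 200),        # pre-registration flag (0 or 1)
])
# Toy labels (did the claim replicate?) faked from a plausible relationship:
# bigger samples, smaller p-values, and pre-registration help.
logits = 1.2 * (X[:, 0] - 2.5) - 40 * (X[:, 1] - 0.025) + 0.8 * X[:, 2]
y = (logits + rng.normal(0, 1, 200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The confidence score for one new claim is its predicted replication probability.
claim = np.array([[np.log10(450), 0.012, 1]])
print(f"confidence score: {model.predict_proba(claim)[0, 1]:.2f}")
```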
The Anomaly Detection at Multiple Scales (ADAMS) program creates, adapts, and applies technology to anomaly characterization and detection in massive data sets. Anomalies in data cue the collection of additional, actionable information in a wide variety of real-world contexts. The initial application domain is insider threat detection, in which malevolent (or possibly inadvertent) actions by a trusted individual are detected against a background of everyday network activity.
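As an illustration of the kind of task ADAMS targets, the following is a minimal sketch that flags unusual user activity against a background of ordinary network behavior using an isolation forest. The per-user features, the synthetic data, and the choice of model are assumptions made for illustration, not ADAMS technology.

```python
# Hypothetical sketch: isolating anomalous user activity from everyday
# network behavior. Features and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy per-user daily features: [logins, MB transferred, after-hours accesses]
normal = rng.normal(loc=[20, 150, 1], scale=[5, 40, 1], size=(1000, 3))
# A handful of insider-like outliers: heavy off-hours data movement.
insiders = rng.normal(loc=[25, 900, 12], scale=[5, 100, 3], size=(5, 3))
activity = np.vstack([normal, insiders])

# Fit on the full stream; the forest isolates points that split off easily.
model = IsolationForest(contamination=0.01, random_state=0).fit(activity)
flags = model.predict(activity)  # -1 marks a record worth a closer look

print(f"{(flags == -1).sum()} of {len(activity)} records flagged for review")
```

Flagged records would not be treated as verdicts; as the summary above notes, anomalies serve as cues for collecting additional, actionable information.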
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
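To show what an explanation of a machine learning model's behavior can look like in practice, here is a small sketch using permutation feature importance, one common, generic explanation technique. The classification task, the feature names, and the choice of method are illustrative assumptions, not the XAI program's approach.

```python
# Hypothetical sketch: explaining which inputs a model relies on via
# permutation feature importance. Task and feature names are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
feature_names = ["speed", "heading", "altitude", "signal_strength"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy: a large
# drop means the model leans heavily on that feature for its decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

A ranked importance list like this is only one narrow form of explanation; the program's goal of letting users understand and appropriately trust a system's decisions is a broader problem than any single technique.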