Defense Advanced Research Projects Agency
Tagged Content List

Analytics for Data at Massive Scales

Extracting information from large data sets

Showing 11 results for Analytics + Trust
September 3, 2019
The threat of manipulated multi-modal media – which includes audio, images, video, and text – is increasing as automated manipulation technologies become more accessible, and social media continues to provide a ripe environment for viral content sharing. The creators of convincing media manipulations are no longer limited to groups with significant resources and expertise. Today, an individual content creator has access to capabilities that could enable the development of an altered media asset that creates a believable, but falsified, interaction or scene.
| AI | Analytics | Trust |
August 28, 2019, 8:00 AM EDT
DARPA Conference Center
The Information Innovation Office is holding a Proposers Day meeting to provide information to potential performers on the new Semantic Forensics (SemaFor) program. SemaFor seeks to develop innovative semantic technologies for automatically analyzing multi-modal media assets (i.e., text, audio, image, video) to defend against large-scale, automated disinformation attacks. Semantic detection algorithms will determine if media is generated or manipulated; attribution algorithms will infer if media originates from a particular organization or individual; characterization algorithms will reason about whether media was generated or manipulated for malicious purposes. The results of detection, attribution, and characterization algorithms will be used to develop explanations for system decisions and prioritize assets for analyst review. SemaFor technologies could help to identify, understand, and deter adversary disinformation campaigns.
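The three analytic stages described above (detection, attribution, characterization) feed a prioritization step for analyst review. A minimal sketch of that flow is below; the class fields and the `prioritize` function are illustrative assumptions, not actual SemaFor interfaces.

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    """Hypothetical per-asset output of the three SemaFor analytic stages."""
    asset_id: str
    detection: float   # likelihood in [0, 1] that the asset is generated or manipulated
    attribution: str   # inferred originating organization or individual
    malicious: bool    # characterization: manipulated for malicious purposes?

def prioritize(analyses):
    """Rank assets for analyst review: malicious characterizations first,
    then by descending detection score."""
    return sorted(analyses, key=lambda a: (not a.malicious, -a.detection))
```

In this sketch, prioritization is a simple lexicographic sort; a deployed system would also weigh the explanations generated for each system decision.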
June 8, 2018,
Executive Conference Center
DARPA’s Defense Sciences Office (DSO) is hosting a Proposers Day to provide information to potential proposers on the objectives of the Systematizing Confidence in Open Research and Evidence (SCORE) program. SCORE aims to develop and deploy automated tools to assign "confidence scores" to different social and behavioral science (SBS) research results and claims. Confidence scores are quantitative measures that should enable a DoD consumer of SBS research to understand the degree to which a particular claim or result is likely to be reproducible or replicable. The event will be available via a live webcast for those who would like to participate remotely.
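As a toy illustration of the kind of quantitative measure SCORE describes, one crude confidence score is the fraction of replication attempts that reproduced a claim. This is an assumption for illustration only; SCORE envisions far richer automated scoring.

```python
def confidence_score(replications):
    """Crude confidence score: fraction of replication attempts (booleans)
    that reproduced the original result. Illustrative only."""
    if not replications:
        return 0.0  # no replication evidence, no confidence
    return sum(replications) / len(replications)
```

For example, a claim reproduced in two of three attempts would score roughly 0.67 under this toy measure.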
The Anomaly Detection at Multiple Scales (ADAMS) program creates, adapts, and applies technology to anomaly characterization and detection in massive data sets. Anomalies in data cue the collection of additional, actionable information in a wide variety of real-world contexts. The initial application domain is insider threat detection, in which malevolent (or possibly inadvertent) actions by a trusted individual are detected against a background of everyday network activity.
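A minimal sketch of the core idea, detecting outliers against a background of everyday activity, is a z-score test over per-user activity counts. This is a generic statistical baseline, not the ADAMS approach itself.

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations
    from the mean. A generic baseline, not the ADAMS method."""
    mu = statistics.mean(counts)
    sigma = statistics.pstdev(counts)
    if sigma == 0:
        return []  # no variation in the background activity
    return [i for i, x in enumerate(counts) if abs(x - mu) / sigma > threshold]
```

A background of twenty routine sessions with one hundred-fold spike would flag only the spike; real insider-threat detection must instead fuse many weak signals across scales.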
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machines' current inability to explain their decisions and actions to human users (Figure 1). The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.