Defense Advanced Research Projects Agency

Tagged Content List

Data Analysis at Massive Scales

Extracting information and insights from massive datasets; "big data"; "data mining"

Showing 117 results for Data
Deep Purple aims to advance the modeling of complex dynamic systems using new information-efficient approaches that make optimal use of data and known physics at multiple scales. The program is investigating next-generation deep learning approaches that use not only high-throughput multimodal scientific data from observations and controlled experiments (including behaviors such as phase transitions and chaos), but also the known science of such systems at whatever scales it exists.
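To make the general idea concrete (this is a minimal illustration of physics-informed fitting, not the Deep Purple method itself), a model can be trained by minimizing a loss that combines its disagreement with observed data and its violation of a known physical law. In this sketch the assumed "law" is exponential decay, dy/dt = -k*y, and all names, observations, and values are illustrative assumptions:

```python
import math

def loss(k, data, t_grid):
    """Data term: squared mismatch at observed points.
    Physics term: squared residual of dy/dt + k*y = 0,
    evaluated for the candidate model y(t) = exp(-k*t)."""
    def y(t):
        return math.exp(-k * t)

    def dydt(t, h=1e-5):
        # Central finite difference stands in for autodiff.
        return (y(t + h) - y(t - h)) / (2 * h)

    data_term = sum((y(t) - obs) ** 2 for t, obs in data)
    physics_term = sum((dydt(t) + k * y(t)) ** 2 for t in t_grid)
    return data_term + physics_term

# Sparse, noisy observations of a decay process (true k is about 0.5).
data = [(0.0, 1.0), (1.0, 0.61), (2.0, 0.37)]
t_grid = [0.5 * i for i in range(5)]

# A crude grid search stands in for gradient-based training.
best_k = min((loss(k / 100, data, t_grid), k / 100)
             for k in range(10, 100))[1]
```

Because the physics term penalizes candidate models that break the known equation everywhere on `t_grid`, not just at the few observed points, this style of loss can recover the right parameter from far less data than a purely data-driven fit.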
As a result of combat exposure, warfighters may return home from deployments with psychological health challenges and find it difficult to reconnect with family and society at large. According to the Department of Veterans Affairs’ National Center for PTSD, studies show that between 12 and 25 percent of military personnel who had returned from Afghanistan and Iraq as of 2008 may suffer from PTSD. (1) Despite best efforts to improve awareness and care, additional studies reveal that only a small fraction of warfighters seek help dealing with psychological health issues.
Training, which is conducted in classroom, field, and virtual settings, is a critical element of military readiness. Large-scale social networks, interactive content, and ubiquitous mobile access are emerging as driving technologies in education and training. At the same time, education analytics presents new opportunities for assessing the effectiveness of training strategies, understanding trends and effects in large volumes of education data, and relating these back to alternative modes of instruction.
Malicious actors in cyberspace currently operate with little fear of being caught because it is extremely difficult, and in some cases perhaps impossible, to reliably and confidently attribute actions in cyberspace to individuals. This difficulty stems at least in part from a lack of end-to-end accountability in the current Internet infrastructure.
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines’ current inability to explain their decisions and actions to human users (Figure 1). The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
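One simple form the explanation above can take (a minimal sketch of additive feature attribution, not any specific DARPA system) is a model whose prediction decomposes exactly into per-feature contributions, so each decision comes with a ranked account of which inputs drove it. The weights and feature names below are purely illustrative assumptions:

```python
def explain_prediction(weights, bias, features):
    """Return the model score and a per-feature contribution
    breakdown: for a linear model the score is exactly the bias
    plus the sum of the contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical model and input; names are illustrative only.
weights = {"speed": 0.8, "altitude": -0.3, "heading_change": 1.2}
features = {"speed": 2.0, "altitude": 1.0, "heading_change": 0.5}

score, why = explain_prediction(weights, bias=0.1, features=features)

# Rank features by how strongly each pushed the score up or down.
ranked = sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

For opaque models such as deep networks, explainable-ML research pursues analogous attributions indirectly, e.g. by fitting an interpretable surrogate like this one around an individual prediction.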