Defense Advanced Research Projects Agency
Tagged Content List

Analytics for Data at Massive Scales

Extracting information from large data sets

Showing 10 results for Analytics + Automation
Military operations depend upon the unimpeded flow of accurate and relevant information to support timely decisions related to battle planning and execution. To address these needs, numerous intelligence systems and technologies have been developed over the past 20 years, but each of these typically provides only a partial picture of the battlefield, and integrating the information has proven to be burdensome and inefficient.
Bonnie Dorr (left), program manager in DARPA’s Information Innovation Office (I2O), shakes hands with Henry Kautz, past president of the Association for the Advancement of Artificial Intelligence (AAAI), upon her recent induction as an AAAI Fellow. Each year, AAAI bestows the lifetime honor of Fellow on only a handful of researchers for their exceptional leadership, research and service contributions to the field of artificial intelligence.
Popular search engines are great at finding answers for point-of-fact questions like the elevation of Mount Everest or current movies running at local theaters. They are not, however, very good at answering what-if or predictive questions—questions that depend on multiple variables, such as “What influences the stock market?” or “What are the major drivers of environmental stability?” In many cases that shortcoming is not for lack of relevant data. Rather, what’s missing are empirical models of complex processes that influence the behavior and impact of those data elements.
Understanding the complex and increasingly data-intensive world around us relies on the construction of robust empirical models, i.e., representations of real, complex systems that enable decision makers to predict behaviors and answer “what-if” questions. Today, construction of complex empirical models is largely a manual process requiring a team of subject matter experts and data scientists.
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machines' current inability to explain their decisions and actions to human users (Figure 1). The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.