Defense Advanced Research Projects Agency
Tagged Content List

Algorithms

A process or rule set used for calculations or other problem-solving operations

Showing 40 results for Algorithms + Programs
Current artificial intelligence (AI) systems excel at tasks defined by rigid rules – such as mastering the board games Go and chess with proficiency surpassing world-class human players. However, AI systems aren’t very good at adapting to constantly changing conditions commonly faced by troops in the real world – from reacting to an adversary’s surprise actions, to fluctuating weather, to operating in unfamiliar terrain.
Serial Interactions in Imperfect Information Games Applied to Complex Military Decision Making (SI3-CMD) builds on recent developments in artificial intelligence and game theory to enable more effective decisions in adversarial domains. SI3-CMD will explore several military decision-making applications at the strategic, tactical, and operational levels and develop AI/game theory techniques appropriate for their problem characteristics.
In modern warfare, decisions are driven by information. That information can come in the form of thousands of sensors providing intelligence, surveillance, and reconnaissance (ISR) data; logistics/supply-chain and personnel performance measurements; or a host of other sources and formats. The ability to exploit this data to understand and predict the world around us is an asymmetric advantage for the Department of Defense (DoD).
As new defensive technologies make old classes of vulnerability difficult to exploit successfully, adversaries move to new classes of vulnerability. Vulnerabilities based on flawed implementations of algorithms have been popular targets for many years. However, as defensive technologies make such implementation flaws less common and harder to exploit, adversaries will turn their attention to vulnerabilities inherent in the algorithms themselves.
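To illustrate the distinction (this sketch is not drawn from any DARPA program), a classic algorithmic vulnerability is a complexity attack: the code is implemented correctly, yet an adversary can force the algorithm into its worst case. The hash function, table, and colliding-key construction below are all hypothetical, chosen only to make the degradation visible.

```python
# Illustrative sketch of an algorithmic-complexity vulnerability:
# the implementation has no bug, but the algorithm's worst case
# (all keys hashing to one bucket) can be triggered on purpose.

def weak_hash(s: str, buckets: int = 64) -> int:
    # Deliberately weak hash: sum of character codes mod bucket count.
    return sum(map(ord, s)) % buckets

class ChainedTable:
    """Hash table with separate chaining: O(1) average, O(n) worst case."""
    def __init__(self, buckets: int = 64):
        self.buckets = [[] for _ in range(buckets)]

    def insert(self, key: str) -> None:
        self.buckets[weak_hash(key, len(self.buckets))].append(key)

    def contains(self, key: str) -> bool:
        # Worst case: scans one long chain instead of a short bucket.
        return key in self.buckets[weak_hash(key, len(self.buckets))]

table = ChainedTable()
# Adversarial keys: anagrams share the same character sum, so every
# key collides into the same bucket and lookups degrade to O(n).
colliding = ["ab" * i + "ba" * (1000 - i) for i in range(1000)]
for k in colliding:
    table.insert(k)

longest_chain = max(len(b) for b in table.buckets)
print(longest_chain)  # prints 1000 -- every key landed in one bucket
```

Defenses such as keyed or randomized hashing (as modern language runtimes use for string hashing) address this by denying the attacker knowledge of which inputs collide, i.e., by fixing the algorithm rather than the implementation.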
The Department of Defense (DoD) often leverages social and behavioral science (SBS) research to design plans, guide investments, assess outcomes, and build models of human social systems and behaviors as they relate to national security challenges in the human domain. However, a number of recent empirical studies and meta-analyses have revealed that many SBS results vary dramatically in terms of their ability to be independently reproduced or replicated, which could have real-world implications for DoD’s plans, decisions, and models. To help address this situation, DARPA’s Systematizing Confidence in Open Research and Evidence (SCORE) program aims to develop and deploy automated tools to assign "confidence scores" to different SBS research results and claims.