Defense Advanced Research Projects Agency
Tagged Content List


Algorithms: a process or rule set used for calculations or other problem-solving operations

Showing 33 results for Algorithms + Data
Machine common sense has long been a critical—but missing—component of AI. Its absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general, human-like AI systems we would like to build in the future. The Machine Common Sense (MCS) program seeks to create the computing foundations needed to develop machine commonsense services that enable AI applications to understand new situations, monitor the reasonableness of their actions, communicate more effectively with people, and transfer learning to new domains.
The goal of the Modeling Adversarial Activity (MAA) program is to develop mathematical and computational techniques for modeling adversarial activity for the purpose of producing high-confidence indications and warnings of efforts to acquire, fabricate, proliferate, and/or deploy weapons of mass terror (WMTs). MAA assumes that an adversary’s WMT activities will result in observable transactions.
The Physics of Artificial Intelligence (PAI) program is part of a broad DARPA initiative to develop and apply "Third Wave" AI technologies that are robust to sparse data and adversarial spoofing, and that incorporate domain-relevant knowledge through generative contextual and explanatory models.
Serial Interactions in Imperfect Information Games Applied to Complex Military Decision Making (SI3-CMD) builds on recent developments in artificial intelligence and game theory to enable more effective decisions in adversarial domains. SI3-CMD will explore several military decision-making applications at the strategic, tactical, and operational levels and develop AI/game theory techniques appropriate to their problem characteristics.
The Department of Defense (DoD) often leverages social and behavioral science (SBS) research to design plans, guide investments, assess outcomes, and build models of human social systems and behaviors as they relate to national security challenges in the human domain. However, a number of recent empirical studies and meta-analyses have revealed that many SBS results vary dramatically in terms of their ability to be independently reproduced or replicated, which could have real-world implications for DoD’s plans, decisions, and models. To help address this situation, DARPA’s Systematizing Confidence in Open Research and Evidence (SCORE) program aims to develop and deploy automated tools to assign "confidence scores" to different SBS research results and claims.