Defense Advanced Research Projects Agency
Tagged Content List

Algorithms

A process or rule set used for calculations or other problem-solving operations

Showing 60 results for Algorithms
As new defensive technologies make old classes of vulnerability difficult to exploit successfully, adversaries move to new classes of vulnerability. Vulnerabilities based on flawed implementations of algorithms have been popular targets for many years, but as those defenses make implementation flaws less common and harder to exploit, adversaries will turn their attention to vulnerabilities inherent in the algorithms themselves.
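As a minimal, self-contained illustration of the kind of weakness at issue (a generic Python sketch, not drawn from any specific program): a backtracking regular-expression engine evaluates the pattern below in time that grows roughly exponentially with the length of a crafted, almost-matching input, so the weakness lies in the matching algorithm itself rather than in a coding mistake.

import re
import time

# Classic "catastrophic backtracking" pattern: nested repetition over the
# same character. The engine explores an exponential number of ways to
# split the run of 'a's before concluding there is no match.
pattern = re.compile(r"^(a+)+$")

for n in (18, 21, 24):
    payload = "a" * n + "!"            # crafted input that never matches
    start = time.perf_counter()
    pattern.match(payload)
    elapsed = time.perf_counter() - start
    print(f"n={n:2d}  {elapsed:.3f}s")  # each extra 'a' roughly doubles the time

Input sanitization does not remove this class of weakness; it disappears only when the matching algorithm (or the pattern) is changed, which is the sense in which such vulnerabilities are inherent to the algorithm rather than to its implementation.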
The Department of Defense (DoD) often leverages social and behavioral science (SBS) research to design plans, guide investments, assess outcomes, and build models of human social systems and behaviors as they relate to national security challenges in the human domain. However, a number of recent empirical studies and meta-analyses have revealed that many SBS results vary dramatically in how well they can be independently reproduced or replicated, which could have real-world implications for DoD’s plans, decisions, and models. To help address this situation, DARPA’s Systematizing Confidence in Open Research and Evidence (SCORE) program aims to develop and deploy automated tools to assign "confidence scores" to different SBS research results and claims.
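As a purely hypothetical toy (not the SCORE program's methodology), a "confidence score" can be pictured as a weighted combination of observable evidence about a claim; every indicator, weight, and threshold below is invented for illustration.

from dataclasses import dataclass

@dataclass
class Claim:
    sample_size: int      # participants in the original study
    p_value: float        # reported significance level
    preregistered: bool   # was the analysis plan registered in advance?
    replications: int     # independent replication attempts that succeeded

def confidence_score(c: Claim) -> float:
    # Invented weights: larger samples, stronger significance, preregistration,
    # and successful replications all push the score toward 1.0.
    score = 0.0
    score += 0.3 * min(c.sample_size / 1000, 1.0)
    score += 0.3 * (1.0 if c.p_value < 0.005 else 0.1 if c.p_value < 0.05 else 0.0)
    score += 0.2 * (1.0 if c.preregistered else 0.0)
    score += 0.2 * min(c.replications / 3, 1.0)
    return round(score, 2)

print(confidence_score(Claim(sample_size=120, p_value=0.04,
                             preregistered=False, replications=0)))  # low score
print(confidence_score(Claim(sample_size=2500, p_value=0.001,
                             preregistered=True, replications=2)))   # high score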
In a target-dense environment, the adversary has the advantage of using sophisticated decoys and background traffic to degrade the effectiveness of existing automatic target recognition (ATR) solutions. Airborne strike operations against relocatable targets require that pilots fly close enough to obtain confirmatory visual identification before weapon release, putting the manned platform at extreme risk. Radar provides a means for imaging ground targets at safer and far greater standoff distances, but the false-alarm rate of both human and machine-based radar image recognition is unacceptably high. Existing ATR algorithms also require impractically large computing resources for airborne applications.
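For context on the metrics driving that trade-off, the sketch below (generic Python with invented counts, not tied to any fielded ATR system) shows why a recognizer that looks accurate overall can still be operationally unusable when real targets are rare and decoys plentiful.

# Invented confusion-matrix counts for 100,000 radar detections.
true_positives  = 45      # real targets correctly declared targets
false_negatives = 5       # real targets missed
false_positives = 900     # decoys/clutter wrongly declared targets
true_negatives  = 99_050  # decoys/clutter correctly rejected

p_detect      = true_positives / (true_positives + false_negatives)
p_false_alarm = false_positives / (false_positives + true_negatives)
precision     = true_positives / (true_positives + false_positives)
accuracy      = (true_positives + true_negatives) / 100_000

print(f"Pd        = {p_detect:.2%}")       # 90% of real targets are found ...
print(f"Pfa       = {p_false_alarm:.2%}")  # ... but ~1% of clutter raises an alarm
print(f"precision = {precision:.2%}")      # fewer than 1 alarm in 20 is a real target
print(f"accuracy  = {accuracy:.2%}")       # yet headline accuracy still looks excellent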
New manufacturing technologies such as additive manufacturing have vastly improved the ability to create shapes and material properties previously thought impossible. Generating new designs that fully exploit these properties, however, has proven extremely challenging. Conventional design technologies, representations, and algorithms are inherently constrained by outdated presumptions about material properties and manufacturing methods. As a result, today’s design technologies are simply not able to bring to fruition the enormous level of physical detail and complexity made possible with cutting-edge manufacturing capabilities and materials.
The Understanding Group Biases (UGB) program seeks to develop and prove out capabilities that can radically enhance the scale, speed, and scope of automated, ethnographic-like methods for capturing group biases and cultural models from increasingly available large digital datasets.