Defense Advanced Research Projects Agency
Tagged Content List


Algorithms: A process or rule set used for calculations or other problem-solving operations

Showing 40 results for Algorithms + Programs
The general-purpose computer has remained the dominant computing architecture for the last 50 years, driven largely by the relentless pace of Moore’s Law. As this trajectory shows signs of slowing, however, it has become increasingly challenging to achieve performance gains from generalized hardware, setting the stage for a resurgence in specialized architectures. Today’s specialized, application-specific integrated circuits (ASICs) — hardware customized for a specific application — offer limited flexibility and are costly to design, fabricate, and program.
Machine learning has shown remarkable success across many application areas in recent years, leveraging advances in computing power and the availability of large sets of training data. It provides a tremendous opportunity to deploy data-driven systems in more complex and interactive tasks including personalized autonomy, agile robotics, self-driving vehicles, and smart cities. Despite dramatic progress, the machine learning community still lacks an understanding of the trade-offs and mathematical limitations of related technologies for a given domain, problem, or dataset.
FunCC aims to uncover fundamental principles of resilient self-organized complex systems applicable to domains ranging from autonomous systems to biological networks, the immune system, and ecosystems. The dynamics and evolution of complex collectives are explored using new frameworks that embrace agent heterogeneity, stochasticity, distributed control, and diffusion of (mis)information.
The Gamebreaker program seeks to develop and apply Artificial Intelligence (AI) to existing open-world video games to quantitatively assess game balance, identify parameters that significantly contribute to balance, and explore new capabilities, tactics, and rule modifications that are most destabilizing to the game.
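Quantitative game-balance assessment can be illustrated with a toy sketch. The win-probability model, match simulator, and deviation-from-50/50 score below are illustrative assumptions for a two-sided game, not Gamebreaker's actual methodology:

```python
import random

def simulate_match(p_side_a_wins, rng):
    """One match between sides A and B; p_side_a_wins encodes the
    (unknown) true advantage held by side A."""
    return 'A' if rng.random() < p_side_a_wins else 'B'

def balance_score(p_side_a_wins, n_matches=10_000, seed=0):
    """Estimate imbalance from simulated matches.
    0.0 = perfectly balanced; 0.5 = one side always wins."""
    rng = random.Random(seed)
    wins_a = sum(simulate_match(p_side_a_wins, rng) == 'A'
                 for _ in range(n_matches))
    return abs(wins_a / n_matches - 0.5)

print(balance_score(0.5))   # near 0: the game is balanced
print(balance_score(0.7))   # near 0.2: side A is strongly favored
```

Sweeping a game parameter (a unit cost, a weapon stat) and re-running such a score is one simple way to identify which parameters most affect balance.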
The growing sophistication and ubiquity of machine learning (ML) components in advanced systems dramatically expands capabilities, but also increases the potential for new vulnerabilities. Current research on adversarial AI focuses on approaches where imperceptible perturbations to ML inputs could deceive an ML classifier, altering its response.
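The perturbation attack described above can be sketched minimally. The linear classifier, weights, and step size below are illustrative assumptions (an FGSM-style sign-of-gradient step, not any specific program's method); for a linear model the gradient of the score with respect to the input is simply the weight vector:

```python
import numpy as np

def classify(w, b, x):
    """Toy linear classifier: 1 if w.x + b > 0, else 0."""
    return int(np.dot(w, x) + b > 0)

def perturb(w, x, eps):
    """Shift x by eps against the sign of the score gradient (which is w
    for a linear model), pushing the score toward the decision boundary."""
    return x - eps * np.sign(w)

w = np.array([0.5, -0.3, 0.8])
b = 0.0
x = np.array([0.2, -0.1, 0.1])      # originally classified as 1

x_adv = perturb(w, x, eps=0.2)      # small max-norm change per feature

print(classify(w, b, x))            # → 1
print(classify(w, b, x_adv))        # → 0: the small perturbation flips the label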