Defense Advanced Research Projects Agency
Tagged Content List

Software Programming

Pushing the boundaries of computer coding, including language development

Showing 24 results for Programming
Program Manager
Dr. Randy Garrett joined DARPA in February 2019 as a program manager in the Strategic Technology Office. Prior to arriving at DARPA, he worked for commercial cybersecurity companies.
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
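The kind of explanation this points to can be as simple as surfacing which inputs a model relied on. As a minimal illustration only (a hypothetical sketch assuming scikit-learn is available, with made-up feature names and data, not material from the XAI program), an interpretable model can report feature attributions alongside its predictions:

    # Hypothetical sketch: one simple form of model explanation is reporting
    # which input features an interpretable model relied on most.
    from sklearn.tree import DecisionTreeClassifier

    feature_names = ["speed", "altitude", "signal_strength"]  # made-up inputs
    X = [[10, 100, 0.9], [80, 2000, 0.2], [15, 150, 0.8], [90, 2500, 0.1]]
    y = [0, 1, 0, 1]  # made-up labels: 0 = benign, 1 = alert

    model = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # Report feature importances as a rudimentary "explanation" of the model.
    for name, weight in zip(feature_names, model.feature_importances_):
        print(f"{name}: importance {weight:.2f}")

Richer explanation techniques pursue the same end: giving a human user a defensible reason to trust, question, or override the machine's decision.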
Modern computing systems cannot provide security protections sufficient for them to be trusted with the most sensitive data while simultaneously being exposed to untrusted data streams. In certain settings, the Department of Defense (DoD) and commercial industry have therefore adopted a series of air-gaps – physical breaks between computing systems – to prevent the leakage and compromise of sensitive information.
Managing complexity is a central problem in software engineering. A common approach to addressing this challenge is concretization, in which a software engineer chooses among a set of apparently or nearly equivalent options so that the resulting code can compile. Concretization makes the process of software development more controllable, allowing the engineer to define and implement an architecture, divide the development tasks into manageable parts, establish conventions to enable their integration, and combine the parts into a cohesive software system.
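As a rough illustration (a hypothetical Python sketch, not drawn from any DARPA program materials), concretization might amount to picking one of several nearly equivalent data structures and letting the surrounding code bind to that choice:

    # Hypothetical sketch of concretization: the abstract intent is "keep a
    # collection of unique session IDs with fast membership checks." Several
    # options are nearly equivalent (a set, dict keys, a sorted list); the
    # engineer commits to one so the code can be written and run, and later
    # code then depends on that concrete choice.

    active_sessions = set()  # concrete choice: built-in set

    def open_session(session_id: str) -> None:
        active_sessions.add(session_id)

    def is_active(session_id: str) -> bool:
        return session_id in active_sessions  # relies on set membership

    open_session("abc123")
    print(is_active("abc123"))  # True

At the level of intent, the choice of a set over, say, a sorted list is largely arbitrary, but committing to it is what allows the surrounding tasks to be divided up, implemented, and integrated.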
Machine common sense has long been a critical—but missing—component of AI. Its absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general, human-like AI systems we would like to build in the future. The MCS program seeks to create the computing foundations needed to develop machine commonsense services to enable AI applications to understand new situations, monitor the reasonableness of their actions, communicate more effectively with people, and transfer learning to new domains.