Defense Advanced Research Projects Agency

Artificial Intelligence and Human-Computer Symbiosis Technologies

Technology to facilitate more intuitive interactions between humans and machines

Showing 48 results for Artificial Intelligence + Programs
The U.S. Government operates globally and frequently encounters so-called “low-resource” languages for which no automated human language technology capability exists. Historically, development of technology for automated exploitation of foreign language materials has required protracted effort and a large data investment. Current methods can require multiple years and tens of millions of dollars per language—mostly to construct translated or transcribed corpora.
Machine common sense has long been a critical—but missing—component of AI. Its absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general, human-like AI systems we would like to build in the future. The MCS program seeks to create the computing foundations needed to develop machine commonsense services to enable AI applications to understand new situations, monitor the reasonableness of their actions, communicate more effectively with people, and transfer learning to new domains.
The past decade has seen explosive growth in the development and training of artificial intelligence (AI) systems. However, as AI has taken on progressively more complex problems, the amount of computation required to train the largest AI systems has been increasing ten-fold annually. While AI advances are beginning to have a deep impact on digital computing processes, trade-offs between computational capability and size, weight, and power consumption (SWaP) will become increasingly critical in the near future.
The Physics of Artificial Intelligence (PAI) program is part of a broad DARPA initiative to develop and apply "Third Wave" AI technologies that are robust to sparse data and adversarial spoofing, and that incorporate domain-relevant knowledge through generative, contextual, and explanatory models.
Machine learning – the ability of computers to understand data, manage results and infer insights from uncertain information – is the force behind many recent revolutions in computing. Email spam filters, smartphone personal assistants and self-driving vehicles are all based on research advances in machine learning. Unfortunately, even as the demand for these capabilities is accelerating, every new application requires a Herculean effort. Teams of hard-to-find experts must build expensive, custom tools that are often painfully slow and can perform unpredictably against large, complex data sets.