Defense Advanced Research Projects Agency
Tagged Content List

Technologies for Trustworthy Computing and Information

Confidence in the integrity of information and systems

Showing 35 results for Trust
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
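As a rough illustration of what explainable machine learning can look like in practice, the sketch below shows one generic technique, not a description of any DARPA program's methods: fitting a shallow, human-readable decision tree as a global surrogate for a black-box classifier. It assumes scikit-learn is installed, and the dataset and model choices are arbitrary.

```python
# A minimal global-surrogate sketch: train an opaque ensemble, then fit a
# depth-limited decision tree to mimic its predictions. The tree's rules give
# a human-inspectable approximation of what the black box is doing.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# "Black box": an ensemble whose individual decisions are hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the black box's outputs, not on y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules, plus how faithfully they track the black box.
print(export_text(surrogate, feature_names=list(data.feature_names)))
print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
```

The exported rules and the fidelity score together give a user a rough, inspectable account of the opaque model's behavior, which is one small piece of the broader explainability problem described above.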
Embedded systems form a ubiquitous, networked computing substrate that underlies much of modern technological society. They range from large supervisory control and data acquisition (SCADA) systems that manage physical infrastructure, to medical devices such as pacemakers and insulin pumps, to computer peripherals such as printers and routers, to communication devices such as cell phones and radios, to vehicles such as airplanes and satellites. These devices are networked for a variety of reasons, including the ability to conveniently access diagnostic information, perform software updates, provide innovative features, lower costs, and improve ease of use.
Historically, the U.S. Government deployed and operated a variety of collection systems that provided imagery with assured integrity. In recent years, however, consumer imaging technology (digital cameras, mobile phones, etc.) has become ubiquitous, allowing people the world over to take and share images and video instantaneously.
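One basic building block behind assured integrity is a cryptographic digest recorded when an image is captured and re-checked later. The minimal Python sketch below illustrates only that idea; the file path is hypothetical, and real provenance systems layer signatures and richer metadata on top of it.

```python
# Compute a SHA-256 digest of an image file in streaming fashion, so the
# whole file never needs to fit in memory. Any pixel-level tampering after
# capture changes the digest.
import hashlib

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At capture time (hypothetical path): store the digest with the image.
# recorded = sha256_of_file("capture.jpg")
# Later: re-hash and compare to detect modification.
# assert sha256_of_file("capture.jpg") == recorded
```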
As computing devices become more pervasive, the software systems that control them have become increasingly complex and sophisticated. Consequently, despite the tremendous resources devoted to making software more robust and resilient, ensuring that programs are correct, especially at scale, remains a difficult endeavor. Unfortunately, uncaught errors triggered during program execution can lead to potentially crippling security violations, unexpected runtime failures, or unintended behavior, all of which can have profound negative consequences for economic productivity, the reliability of mission-critical systems, and the correct operation of important and sensitive cyber infrastructure.
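One lightweight way to surface such uncaught errors before deployment is randomized differential testing. The minimal Python sketch below is hypothetical and not drawn from any DARPA program: it hammers a deliberately buggy routine with random inputs and compares it against an obviously correct reference until a counterexample appears.

```python
import random

def interval_overlap(a0, a1, b0, b1):
    """Intended: True iff the closed intervals [a0, a1] and [b0, b1] overlap.
    Deliberately buggy: strict comparisons miss intervals that only touch."""
    return a0 < b1 and b0 < a1

def interval_overlap_reference(a0, a1, b0, b1):
    # Slow-but-obviously-correct reference: is there a shared point?
    return max(a0, b0) <= min(a1, b1)

# Differential testing: compare the routine against the reference on many
# random inputs; the first disagreement is a concrete bug report that
# hand-picked example tests might never produce.
random.seed(0)
for _ in range(10_000):
    a0, b0 = random.randint(0, 50), random.randint(0, 50)
    a1, b1 = a0 + random.randint(0, 10), b0 + random.randint(0, 10)
    if interval_overlap(a0, a1, b0, b1) != interval_overlap_reference(a0, a1, b0, b1):
        print("counterexample:", (a0, a1), (b0, b1))
        break
```

Catching the touching-intervals case here, before the code ships, is exactly the kind of error that would otherwise surface only as an unexpected runtime failure.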
The February 2011 Federal Cloud Computing Strategy released by the U.S. Chief Information Officer reinforces the United States Government's plans to move information technology away from traditional workstations and toward cloud computing environments. While there are compelling incentives to make this move, the security implications of concentrating sensitive data and computation in computing clouds have yet to be fully addressed. The perimeter-defense focus of traditional security solutions is not sufficient to secure existing enclaves, and it would be further marginalized in cloud environments, where large concentrations of homogeneous hosts sit on high-speed networks without internal checks and with implicit trust among hosts inside those limited perimeter defenses.