Defense Advanced Research Projects Agency
Tagged Content List

Technologies for Trustworthy Computing and Information

Confidence in the integrity of information and systems

Showing 12 results for Trust + Automation
In the world of network cyber security, the weak link is often not the hardware or the software, but the user. Passwords are often easily guessed or written down, leaving entire networks vulnerable to attack. Mobile devices containing sensitive information are often lost or stolen, leaving a password as the single layer of defense.
June 8, 2018
Executive Conference Center
DARPA’s Defense Sciences Office (DSO) is hosting a Proposers Day to provide information to potential proposers on the objectives of the Systematizing Confidence in Open Research and Evidence (SCORE) program. SCORE aims to develop and deploy automated tools to assign "confidence scores" to different social and behavioral science (SBS) research results and claims. Confidence scores are quantitative measures that should enable a DoD consumer of SBS research to understand the degree to which a particular claim or result is likely to be reproducible or replicable. The event will be available via a live webcast for those who would like to participate remotely.
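The listing does not describe how SCORE's confidence scores are computed. Purely as an illustrative sketch of the idea of a quantitative replicability score, a toy scorer might aggregate simple evidential signals; every feature, weight, and threshold below is invented for illustration and is not the program's methodology.

```python
# Toy "confidence score" for a research claim. All features and weights
# here are invented for illustration; they are not SCORE's actual method.

def confidence_score(sample_size, p_value, preregistered, replications):
    """Return a 0-100 score; higher suggests greater expected replicability."""
    score = 0.0
    score += min(sample_size / 1000.0, 1.0) * 30       # larger samples help
    score += (1.0 - min(p_value / 0.05, 1.0)) * 30     # stronger evidence helps
    score += 20 if preregistered else 0                # preregistration helps
    score += min(replications, 2) * 10                 # independent replications
    return round(score, 1)

print(confidence_score(sample_size=500, p_value=0.01,
                       preregistered=True, replications=1))
```

The point of the sketch is only that a score of this kind gives a DoD consumer a single comparable number per claim, rather than requiring a reading of each study's methods.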
The current standard method for validating a user’s identity for authentication on an information system requires humans to do something that is inherently unnatural: create, remember, and manage long, complex passwords. Moreover, as long as the session remains active, typical systems incorporate no mechanism to verify that the user who originally authenticated is still the user in control of the keyboard. Thus, unauthorized individuals may improperly obtain extended access to information system resources if a password is compromised or if a user does not exercise adequate vigilance after initially authenticating at the console.
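One family of techniques for verifying that the authenticated user is still at the keyboard is behavioral biometrics, such as keystroke dynamics. As a minimal sketch of the concept (the statistic, threshold, and sample timings below are invented for illustration), a session monitor could compare recent inter-key delays against an enrolled baseline:

```python
# Illustrative continuous-authentication check: compare live keystroke
# inter-key timings (ms) against an enrolled baseline, instead of relying
# only on the initial password check. Statistic and threshold are invented.
from statistics import mean, stdev

def enroll(timings):
    """Build a baseline typing profile from enrollment samples."""
    return {"mean": mean(timings), "stdev": stdev(timings)}

def still_same_user(profile, recent_timings, z_limit=3.0):
    """Flag the session if recent typing rhythm drifts far from baseline."""
    z = abs(mean(recent_timings) - profile["mean"]) / profile["stdev"]
    return z <= z_limit

baseline = enroll([110, 120, 105, 130, 115, 125])
print(still_same_user(baseline, [112, 118, 124]))   # consistent rhythm -> True
print(still_same_user(baseline, [300, 340, 310]))   # very different -> False
```

A real deployment would fuse many signals (mouse movement, application usage, device sensors) and act on sustained anomalies rather than a single window, but the sketch shows the core idea: authentication becomes an ongoing check, not a one-time gate.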
To be effective, Department of Defense (DoD) cybersecurity solutions require rapid development times. The shelf life of systems and capabilities is sometimes measured in days. Thus, to a greater degree than in other areas of defense, cybersecurity requires that DoD develop the ability to build quickly, at scale, and over a broad range of capabilities.
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines’ current inability to explain their decisions and actions to human users (Figure 1). The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
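One simple, well-known explainability technique is to use an interpretable model whose decision decomposes into per-feature contributions that can be shown to the user alongside the answer. The sketch below illustrates this with a linear score; the feature names and weights are invented for the example and do not come from any DARPA program.

```python
# Illustrative explanation for a linear model: each feature's signed
# contribution (weight * value) is reported with the decision, so a user
# can see *why* the score came out as it did. Names/weights are invented.

def explain(weights, features, bias=0.0):
    """Return the decision score plus per-feature contributions, largest first."""
    contribs = {name: weights[name] * features[name] for name in weights}
    score = bias + sum(contribs.values())
    ranked = sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

weights = {"sensor_match": 2.0, "route_deviation": -1.5, "signal_strength": 0.5}
features = {"sensor_match": 0.9, "route_deviation": 0.2, "signal_strength": 0.4}

score, ranked = explain(weights, features)
print(f"score={score:.2f}, top factor: {ranked[0][0]}")
```

Deep models need heavier machinery (e.g., post-hoc attribution methods), but the goal is the same as in this toy case: pair every decision with a human-readable account of the factors behind it.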