Summary
The need for automated, scalable, machine-speed vulnerability detection and patching is large and growing fast as more and more systems—from household appliances to major military platforms—get connected to and become dependent upon the internet. Today, the process of finding and countering bugs, hacks, and other cyber infection vectors is still effectively artisanal. Professional bug hunters, security coders, and other security pros work long hours, combing through millions of lines of code to find and fix vulnerabilities before they can be exploited by malicious actors.
To help overcome these challenges, DARPA launched the Cyber Grand Challenge, a competition to create automatic defensive systems capable of reasoning about flaws, formulating patches and deploying them on a network in real time. By acting at machine speed and scale, these technologies may someday overturn today’s attacker-dominated status quo. Realizing this vision requires breakthrough approaches in a variety of disciplines, including applied computer security, program analysis, and data visualization. Anticipated future benefits include:
- Expert-level software security analysis and remediation, at machine speeds on enterprise scales
- Establishment of a lasting R&D community for automated cyber defense
- Creation of a public, high-fidelity recording of real-time competition between automated cyber defense systems
DARPA hosted the Cyber Grand Challenge Final Event—the world’s first all-machine cyber hacking tournament—on August 4, 2016 in Las Vegas. From an initial field of more than 100 teams composed of some of the top security researchers and hackers in the world, DARPA pitted seven finalist teams against each other at the final event. During the competition, each team’s Cyber Reasoning System (CRS) automatically identified software flaws and scanned a purpose-built, air-gapped network to identify affected hosts. For nearly twelve hours, teams were scored based on how capably their systems protected hosts, scanned the network for vulnerabilities, and maintained the correct function of software. Prizes of $2 million, $1 million, and $750,000 were awarded to the top three finishers.
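To make those scoring dimensions concrete, the sketch below shows how a per-round score for one challenge set might combine availability (keeping patched software functional and fast), security (resisting proofs of vulnerability), and evaluation (proving vulnerabilities in opponents’ software). The multiplicative structure loosely follows the published CGC scoring approach, but the specific function names, formulas, and constants here are simplified assumptions for illustration, not the official rules.

```python
# Illustrative, simplified sketch of a CGC-style per-round score for one
# challenge set. Formulas and constants are assumptions for illustration.

def availability(functionality: float, performance: float) -> float:
    """Take the lower of the functionality score (correct behavior retained
    after patching) and the performance score (overhead the patch adds).
    Both inputs are assumed to be fractions in [0, 1]."""
    return min(functionality, performance)

def security(proofs_against_us: int) -> int:
    """Assumed rule: a team never successfully exploited this round earns a
    security multiplier of 2; otherwise 1."""
    return 2 if proofs_against_us == 0 else 1

def evaluation(opponents_proven_vulnerable: int, total_opponents: int) -> float:
    """Assumed rule: credit for proving vulnerabilities in opponents'
    software, scaled into [1, 2]."""
    return 1 + opponents_proven_vulnerable / total_opponents

def round_score(functionality: float, performance: float,
                proofs_against_us: int, opponents_proven_vulnerable: int,
                total_opponents: int = 6) -> float:
    """Combine the three dimensions multiplicatively."""
    return (availability(functionality, performance)
            * security(proofs_against_us)
            * evaluation(opponents_proven_vulnerable, total_opponents))

# Example: a team keeps 95% functionality with a 10% slowdown, is never
# exploited this round, and proves flaws against 3 of its 6 opponents.
print(round_score(0.95, 0.90, 0, 3))  # 0.90 * 2 * 1.5 = 2.7
```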
CGC was the first head-to-head competition between some of the most sophisticated automated bug-hunting systems ever developed. These machines played the classic cybersecurity exercise of Capture the Flag in a specially created computer testbed laden with an array of bugs hidden inside custom, never-before-analyzed software. The machines were challenged to find and patch within seconds—not the usual months—flawed code that was vulnerable to being hacked, and to find their opponents’ weaknesses before the defending systems could patch them.
This program is now complete.