In an interview with DARPA Public Affairs, I2O director Dr. Kathleen Fisher shares her insights on the AI Forward workshops and thoughts about the future of AI research at DARPA.
Oct 23, 2023
Earlier this year, DARPA’s Information Innovation Office (I2O) announced plans for AI Forward – the agency’s latest initiative to reimagine the future of artificial intelligence (AI) research that will result in trustworthy systems for national security missions. Approximately 200 participants from across the commercial sector, academia, and government attended two workshops that generated ideas that will inform DARPA’s next phase of AI exploratory projects.
The following is an excerpt from an interview with Kathleen Fisher, Director of the Information Innovation Office at DARPA.
What prompted AI Forward?
DARPA’s interest in AI is quite broad. Roughly 70% of current DARPA programs involve AI in some way or another, and we want to be at the forefront of advancing AI for national security, both at the level of basic research and in applying it in specific domains. We are at an inflection point in that we have game-changing AI on the table that is delivering extraordinary new capabilities, but it's not doing everything we need. In particular, current AI struggles with being trustworthy, and we struggle with understanding why it works in some cases. We launched AI Forward to take a deep look at how we can reliably build trustworthy AI systems by examining aspects of AI theory, AI engineering, and human-AI teaming. To do that, we wanted to engage the community to help define the future of what might be possible and understand what technologies we should look at that present key threats and opportunities. It's an exciting time. We need to be very strategic about allocating our resources to things that can significantly impact national security that would not happen without DARPA's investment.
Who attended the ideation workshops, and how did you select those participants?
We wanted a broad swath of people from all sorts of different backgrounds in terms of technical expertise, organization type, application domain, and so on – because the more different perspectives you can productively engage, the deeper the resulting insights.
We selected participants through an application process where people answered questions about their background and expertise and what they thought the most important ideas were going forward. Then, a panel of government AI experts evaluated the applications for evidence of the ability to create new ideas, which let us assemble a diverse collection of 100 people at each workshop. We had a strict upper limit on attendees at the direction of our workshop facilitator, KnowInnovation, who advised that 100 was the maximum number of people they could productively wrangle in the available time. Many people approached us to ask if they could sit in on the discussions, and we had to be firm and say, nope, sorry, no one is allowed to passively observe. The activities were designed to put all the participants to work over a couple of intense days of brainstorming and creativity. Previous experience suggested passive observers would encourage workshop participants to disengage, which we couldn’t afford.
Fisher provides framing remarks at the beginning of the in-person workshop in Boston. (DARPA)
How were the workshops structured, and what resulted from them?
From the beginning, we aimed to get as many ideas “on the wall” as possible and then narrow the list for deeper discussions. The brainstorming activities started with hypotheticals like “Wouldn't it be great if,” with everybody charged with writing down thoughts on post-it notes and putting them on a wall. We organized the notes into groupings of similar ideas. Then, a later activity put people in small groups where they picked some ideas and worked through questions like “What would it take? What might make it possible? What might make it not possible?” Each group was encouraged to explore multiple different ideas, and some people were assigned to poke at whether an idea was feasible, what could make it even better, and so on. Throughout the event, groups did more and more iterative deepening on the ideas that attracted a critical mass of people passionate about continuing to work on them.
So, we collected everything from the beginning, probably 800 or so crazy ideas, and narrowed them down to 35. The 800 crazy ideas were not all orthogonal, but the winnowing meant that a lot of ideas got explored in some depth. At the end of each workshop, participants produced three-page white papers and 10-minute videos about their ideas.
It's a challenging task to see what is coming over the horizon. You're looking in the crystal ball and giving well-educated guesses about the most significant threats and opportunities. The participants at the two AI Forward workshops did a great job engaging with our questions.
Can you expand on the key themes generated by the brainstorming activities?
The three most popular recurring technical themes were (1) human-AI alignment, which included topics such as intention-aware AI systems and how AI can characterize intent formally; (2) the characterization of social norms, values, and human qualities (e.g., creativity); and (3) the detection of and defense against the weaponization of values. Another popular theme was the detection, mitigation, and anticipation of AI-enabled attacks.
Some examples of resulting ideas included introducing friction into human-machine teaming so humans carefully think through the material provided by an AI instead of being lulled by its confident tone of voice. Another recurring example was how to better evaluate AI systems to develop meaningful trust, or how to use dialog to partner with cyber-physical systems in unpredictable environments. Metacognition and composition – giving AI systems the ability to reason about their own strengths and weaknesses – was another, as was deploying AI as a federation of systems with various capabilities instead of a single monolithic system. Another was building AI systems with fewer resources, particularly enabling on-the-edge AI for national security applications and advancing the pace of science. As you can see, people had a lot of ideas!
Participants brainstormed hundreds of ideas that were narrowed down over the course of the workshop. (DARPA)
What’s next for AI Forward?
The resulting white papers helped inform DARPA funding for rapidly exploring select topics known as AI Explorations (AIEs). We’re on track to launch three AIEs, which is one more than we had initially planned, so stay tuned for those announcements later this year. DARPA is constantly launching new programs, and the materials from AI Forward are available to all existing program managers to use as raw material for new programs. So, I fully expect that DARPA program managers will use the materials we produced as part of the AI Forward to inform their thinking as they develop new program concepts.
How will DARPA stay engaged with participants and others in this field who think they can contribute?
The format of AI Forward was wildly successful. It should be something that DARPA seriously considers adding to its toolbox for thinking about ideas, particularly in areas like AI that are changing so incredibly fast. The feedback we got from participants was that they found the workshops interesting and engaging, and the relationships they built and the thought processes they heard were also valuable to them.
Another thing that people can do is sign up for our I2O mailing list for announcements whenever we hire a new program manager or announce a new program. It's a good way of staying on top of what the office is doing, so join that mailing list. In addition, DARPAConnect is a new effort to help people understand how to do business with DARPA and level the playing field to help those who have yet to work with the agency.
When I2O announces a new program, which we promote via that office mailing list, come to the associated Proposers Day (either in person or via a remote option, which is almost always available), talk to other potential performers, listen to the program manager, and write a proposal if you have a solution. New program concepts aren't produced in a vacuum. They come from program managers talking to people in the community and figuring out what's possible, what's not, and where the best return on investment lies.
To learn more about AI Forward, visit https://www.darpa.mil/work-with-us/ai-forward.