Summary
The commercial sector is incentivized to provide user-friendly experiences to people who use its artificial intelligence (AI) models, such as large language models.
However, this “frictionless” experience may lead users to accept AI outputs uncritically, without examining unintended consequences. Moreover, no system exists today that can identify when and how to add friction to a dialogue to promote accountability and to ensure that solutions satisfy implicit assumptions unknown at the start of a conversation.
The Friction for Accountability in Conversational Transactions (FACT) Artificial Intelligence Exploration (AIE) opportunity will explore human-AI dialogue-based methods that counter over-trust through reflective reasoning (“friction”), revealing implicit assumptions between dialogue partners and enabling accountable decision-making in complex environments. FACT aims to develop and evaluate human-AI conversation-shaping algorithms that:
- capture mutual assumptions, views, and intentions based on dialogue history
- auto-assess the consequences of potential actions and the level of accountability for responses
- reveal implicit costs and assumptions to the user, prompt critical analysis, and propose course changes as appropriate (a minimal sketch follows this list)
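To make these aims concrete, the following is a minimal, purely hypothetical sketch of how the three capabilities might compose into a friction pipeline. Every name, data structure, and heuristic here is an illustrative assumption, not part of the FACT announcement; a real system would replace the keyword heuristics with learned models.

```python
from dataclasses import dataclass, field


@dataclass
class DialogueState:
    """Running record of a human-AI conversation (hypothetical structure)."""
    history: list[str] = field(default_factory=list)      # prior turns, most recent last
    assumptions: list[str] = field(default_factory=list)  # mutual assumptions surfaced so far


def capture_assumptions(state: DialogueState) -> list[str]:
    """Aim 1: capture assumptions, views, and intentions from dialogue history.
    Placeholder heuristic: treat declarative turns as candidate assumptions;
    a real system would use a trained extraction model."""
    return [turn for turn in state.history if not turn.rstrip().endswith("?")]


def assess_consequences(action: str) -> float:
    """Aim 2: auto-assess the stakes of a proposed action on a 0-1 scale.
    Placeholder: keyword matching stands in for a learned consequence model."""
    high_stakes_terms = ("delete", "deploy", "purchase", "prescribe", "launch")
    return 1.0 if any(term in action.lower() for term in high_stakes_terms) else 0.2


def maybe_add_friction(action: str, state: DialogueState, threshold: float = 0.5) -> str | None:
    """Aim 3: when assessed stakes cross the threshold, reveal the captured
    assumptions to the user and prompt critical analysis; otherwise stay silent."""
    state.assumptions = capture_assumptions(state)
    if assess_consequences(action) >= threshold:
        listed = "; ".join(a.rstrip(".") for a in state.assumptions) or "none recorded"
        return (f"Before we proceed, my assumptions so far are: {listed}. "
                "Are they valid, and have you explored other options?")
    return None
```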
Resulting technology could determine when and how to slow down conversational interactions with AI agents (e.g., large pre-trained models such as GPT-4 or Llama 2) in high-stakes situations at decision time.
Examples of friction include asking the user, “Have you explored other options?”, “What about this possibility?”, or “Here are my (the machine’s) assumptions; are they valid?”
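Continuing the hypothetical sketch above, a decision-time friction gate might fire like this on a high-stakes request:

```python
state = DialogueState(history=[
    "We need to cut cloud costs this quarter.",
    "Which services are safe to remove?",
])
prompt = maybe_add_friction("Delete the backup cluster tonight.", state)
if prompt is not None:
    print(prompt)  # surfaces captured assumptions and asks the user to re-examine options
```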
Workshops held as part of DARPA’s AI Forward initiative informed the FACT AIE opportunity.