Summary
Foundation models (FMs) have transformed AI capabilities in many domains by virtue of their large architectures, internet-scale training datasets, and distinctive customization techniques ("fine-tuning").
This trend has recently brought transformational capabilities to robots as well. In particular, FMs enable robots that can parse natural-language directions for complex tasks and then contextualize and execute those tasks in unconstrained, open-world environments – including even “zero-shot” scenarios.
This is a dramatic break from existing autonomous systems, which are designed for tailored applications and narrow, precise operating conditions.
However, natural-language direction for open-world autonomy presents a critical challenge from a safety and assurance perspective: current methods for assuring learning-enabled systems are inadequate for FMs operating in this paradigm. For example, FMs are known to exhibit uniquely (semantically) errant behaviors such as hallucination, false confidence in reasoning, and manipulation via "jailbreaking."
Assurances that FM-enabled robots will not manifest these behaviors — any of which could cause the failure of a critical task — are crucial to their deployment.
Opportunity
- Publication: Oct. 2, 2024
- Deadline: Jan. 13, 2025
- ARC Exploration Announcement (2024)