ITM: In the Moment

Summary

The In the Moment (ITM) program investigates whether the alignment of artificial intelligence (AI) to individual humans affects humans' willingness to delegate decisions in high-stakes domains.

ITM differs from existing research on AI alignment, which has primarily focused on low-stakes domains like chat, where the primary alignment goal is to avoid noxious behavior. In contrast, the program considers high-stakes domains such as medicine or security, where experts disagree and decisions must be made under ambiguity.

AI systems do not naturally align with humans, nor do we have methods to quantify individual human decision-making, which raises the question of how to align AI to humans. ITM aims to answer this question and to develop technologies that enable such alignment.

ITM has four goals in investigating the impact of alignment on delegation to AI: 

  1. The program aims to develop a means of assessing and characterizing the different ways humans make decisions in high-stakes domains.
  2. In high-stakes domains, decisions are made under contextual pressures. Another program goal is to develop and use immersive virtual-reality testing environments that replicate some of these real-world contexts, increasing the fidelity of the program's assessment of human decision-making.
  3. The program aims to apply what it learns about the different ways humans make decisions to generate human-alignable algorithms. The program will align these algorithms with individual humans or groups and use them to make decisions in high-stakes domains. Human subjects research will assess whether humans will delegate decisions to the algorithms in the program's immersive testbed.
  4. Finally, the program aims to develop a theory of AI alignment that predicts decision-making and delegation across domains.

The program will also include embedded experts in AI, behavioral and cognitive science, philosophy, law, and policy.
