
May 6, 2025
Join the conversation on AI alignment at our upcoming workshop at the IEEE Conference on Artificial Intelligence this May!
We’re hosting the session, “Human Alignment in AI Decision-Making Systems: An Inter-disciplinary Approach towards Trustworthy AI.”
All who are interested are welcome to attend.
As AI plays a growing role in decision-making—both in everyday life and in high-stakes environments—ensuring that these systems remain aligned with the humans who rely on them is more critical than ever. But what does AI alignment really mean? And what would it take to confidently delegate high-stakes decisions to AI systems?
Our upcoming workshop brings together experts from computer science, AI, psychology, and the social sciences to tackle some of the biggest questions in AI alignment:
- Can AI truly align with human values? If so, which ones?
- How does AI alignment impact trust and delegation?
- What are the ethical, legal, and societal implications of alignment?
- How can we develop better models, metrics, and frameworks to measure and improve AI alignment?
We invite researchers, practitioners, and policymakers to join us in shaping the future of human-aligned AI. The workshop will feature engaging discussions, research presentations, and keynotes from leaders in the field.
More details are available at https://sites.google.com/view/ieee-cai-2025-workshop/about.
Event
May 6, 2025
Hyatt Regency Santa Clara
5101 Great America Parkway
Santa Clara, Calif.