Workshop on Models of Human Feedback for AI Alignment at ICML 2024

The Workshop on Models of Human Feedback for AI Alignment will be held on July 26, 2024, during ICML in Vienna, Austria. The workshop will focus on key open questions in AI alignment and learning from human feedback, including how to model human feedback, how to learn from diverse human feedback, and how to ensure alignment when models of human behavior are misspecified. Follow us on Twitter here for updates.

The Call for Papers can be found here. Submissions are due by May 31, 2024 (AoE, Anywhere on Earth), and acceptance notifications will be sent out on June 17, 2024. The workshop will cover topics such as Learning from Demonstrations, Reinforcement Learning with Human Feedback, Human-AI Alignment, AI Safety, Cooperative AI, Robotics, Preference Learning, Learning to Rank, Computational Social Choice, Operations Research, Behavioral Economics, and Cognitive Science.

For any questions, please feel free to reach out to the organizers: Thomas, Christos, Scott, Constantin, Harshit, Lirong, and Aadirupa.
