ICML 2024 Workshop on Models of Human Feedback for AI Alignment
The Models of Human Feedback for AI Alignment Workshop will take place at ICML 2024 in Vienna, Austria. The workshop will address key questions in AI alignment and learning from human feedback, including how to model human feedback, how to learn from diverse human feedback, and how to ensure alignment despite misspecified human models. Submissions are now open; the deadline is May 31, 2024 (Anywhere on Earth). Key topics include Learning from Demonstrations, Reinforcement Learning with Human Feedback, Human-AI Alignment, AI Safety, Robotics, Preference Learning, Computational Social Choice, Operations Research, Behavioral Economics, and Cognitive Science. Visit the Call for Papers and Submission Portal for more information.