RLEval Workshop
Deadline: May 11, 2026
A workshop on evaluating AI agents, focusing on evaluation methods and RL environments
We invite submissions of 4-page short papers for our upcoming workshop on evaluating AI agents. This workshop focuses on rigorous evaluation methods, RL environment design principles, benchmarks, and real-world case studies.
Topics of interest include:
- Evaluation Techniques
- Data and Benchmarks
- RL Environments for LLM-Based Agents
- Enterprise Case Studies
- Specific Capabilities
Submission information:
- Length: 4 pages of main text (excluding references and appendix)
- Review Process: Single-blind (non-anonymized)
- Presentation: Accepted papers will be featured in an interactive poster session
- Requirements: Submissions under review elsewhere are permitted, but previously published papers are not
Key dates:
- Submission Deadline: May 11, 2026
- Notification: May 18, 2026
- Camera-ready: May 22, 2026
- Workshop Date: May 26, 2026
Tags: AI Agents, Evaluation Methods, RL Environments, Benchmarks, Case Studies, Machine Learning, Artificial Intelligence