LLM4Eval@SIGIR 2025: Advancing Evaluation Methodologies in IR with Large Language Models
The LLM4Eval@SIGIR 2025 workshop invites submissions on LLM-based evaluation in IR, exploring new opportunities, limitations, and hybrid approaches.
Recent advances in Large Language Models (LLMs) have significantly impacted evaluation methodologies in Information Retrieval (IR), reshaping the way relevance, quality, and user satisfaction are assessed. The Third Workshop on LLM4Eval at SIGIR 2025 invites submissions exploring the new opportunities, limitations, and hybrid approaches that arise when LLMs are used for evaluation in IR.
Submissions are encouraged on topics including, but not limited to:
- LLMs for query-document relevance assessment (a minimal illustration follows this list)
- Evaluating conversational IR and recommendation systems with LLMs
- Hybrid evaluation frameworks combining LLM and human annotations
- Identifying failure modes and limitations of LLM annotations
- Prompt engineering strategies for improving LLM annotation quality
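For readers less familiar with this line of work, the sketch below shows one minimal form an LLM-based relevance assessor can take: the model is prompted to grade a query-document pair on a graded scale, and its reply is parsed into a label. The `call_llm` helper, the prompt wording, and the 0-3 scale are illustrative assumptions for this post, not a baseline or protocol prescribed by the workshop.

```python
# Minimal sketch of LLM-as-assessor for query-document relevance.
# `call_llm` is a hypothetical placeholder for whichever LLM client you use.

RELEVANCE_PROMPT = """You are a relevance assessor for a search engine.
Query: {query}
Document: {document}
Rate how well the document answers the query on a 0-3 scale:
0 = irrelevant, 1 = marginally relevant, 2 = relevant, 3 = highly relevant.
Answer with a single digit."""


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("Plug in your preferred LLM client here.")


def assess_relevance(query: str, document: str) -> int:
    """Ask the LLM for a graded relevance label and parse the first digit in the reply."""
    reply = call_llm(RELEVANCE_PROMPT.format(query=query, document=document))
    for ch in reply:
        if ch in "0123":
            return int(ch)
    return 0  # fall back to "irrelevant" if no parsable label is found
```

In practice, work in this area typically compares such LLM-produced labels against human judgments, which is exactly the kind of analysis the topics above invite.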
Important dates:
- Paper submission deadline: April 23, 2025 (AoE)
- Notification of acceptance: May 21, 2025 (AoE)
- Workshop date: July 17, 2025
Submission guidelines and publication options are available on the workshop website; papers should be submitted through EasyChair: https://easychair.org/conferences/?conf=llm4evalsigir25
Tags: LLM4Eval, SIGIR 2025, Large Language Models, Information Retrieval, Evaluation Methodologies, IR Systems, Conversational Interfaces, Recommendation Systems