Title: First Workshop on Evaluating IR Systems with Large Language Models (LLMs)
Description: The First Workshop on Evaluating IR Systems with Large Language Models (LLMs) is now open for submissions. The workshop aims to bring together researchers and practitioners to discuss the use of LLMs in evaluating IR systems. Submissions are invited on topics including LLM-based evaluation metrics; agreement between human and LLM labels; the effectiveness of LLMs in producing robust relevance labels; potential systemic biases in LLM-based relevance estimators; automated evaluation of text generation systems; end-to-end evaluation of Retrieval Augmented Generation systems; trustworthiness of LLM-based evaluation; prompt engineering for LLM-based evaluation; the effectiveness of LLMs as ranking models; LLMs in specific IR tasks; and challenges and future directions in LLM-based IR evaluation. Both previously unpublished and previously published manuscripts are welcome. The submission deadline is May 2, 2024 (AoE), and acceptance notifications will be sent on May 31, 2024 (AoE). The workshop will take place on July 18, 2024. For more information, visit the workshop website.