ML Scientist

Connecting Scholars with the Latest Academic News and Career Paths


CHOMPS Workshop @ IJCNLP-AACL 2025: Mitigating Hallucinations in LLMs


CHOMPS Workshop at IJCNLP-AACL 2025 addresses hallucination in LLMs, with a focus on multilingual and precision-critical settings. Submission deadline: September 29, 2025.

The CHOMPS workshop at IJCNLP-AACL 2025, scheduled for December 23–24, 2025, at IIT Bombay, Mumbai, India, focuses on ‘hallucination, confabulation, and overgeneration’ in Large Language Models (LLMs). These phenomena pose significant risks in precision-critical applications, including healthcare, legal systems, and education.

The workshop will explore various topics, including:

  • Metrics and tools for hallucination detection
  • Factuality challenges in mission-critical domains
  • Mitigation strategies during inference or model training
  • Hallucinatory behaviors in cross-lingual and multilingual scenarios

Invited speakers include Anna Rogers, Danish Pruthi, Abhilasha Ravichander, and Khyathi Raghavi Chandu. A panel discussion will feature Preslav Nakov, Sunayana Sitaram, and Chung-Chi Chen.

The workshop will also host a shared task, SHROOM-CAP, on cross-lingual scientific hallucination detection. More information is available at https://helsinki-nlp.github.io/shroom/2025a.

Submissions are open until September 29, 2025, via https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2025/Workshop/CHOMPS or through ARR commitment.

Tags: CHOMPS Workshop, IJCNLP-AACL 2025, Hallucination in LLMs, Multilingual Settings, Precision-critical Applications, Large Language Models, NLP Research