GenBench: The Second Workshop on Generalisation (Benchmarking) in NLP @ EMNLP 2024
The GenBench workshop, co-located with EMNLP 2024, aims to serve as a platform for discussing challenging questions related to generalisation in NLP, crowd-sourcing generalisation benchmarks for large language models (LLMs), and making progress on open questions related to generalisation. The workshop welcomes submissions on topics such as opinion papers about generalisation, analyses of model generalisation, empirical studies on generalisation, meta-analyses comparing generalisation studies, and studies on the relationship between generalisation and fairness or robustness.
The workshop calls for two types of submissions: regular workshop submissions and collaborative benchmarking task (CBT) submissions. Regular workshop submissions can be archival papers or extended abstracts, while CBT submissions consist of a data/task artefact and a companion paper. This year's CBT focuses on generating versions of existing LLM evaluation datasets that exhibit a larger distribution shift than the original test set, enabling a more rigorous evaluation of generalisation.
Archival papers may be up to 8 pages, excluding references, and should report on completed, original, and unpublished research. Extended abstracts may be up to 2 pages, excluding references, and may report on work in progress or be cross-submissions of work that has already appeared in another venue. Both regular workshop papers and collaborative benchmarking task submissions will undergo double-blind peer review and must therefore be anonymised.
Important dates:
- August 15, 2024: Paper submission deadline
- September 20, 2024: Notification deadline
- October 4, 2024: Camera-ready deadline
- November 15 or 16, 2024: Workshop
For more information, visit the workshop website or email the organisers at genbench at googlegroups.com.