Tackling Gender Bias in Natural Language Processing: Join the Fifth Workshop in Bangkok, Thailand!
The Fifth Workshop on “Gender Bias in Natural Language Processing” is set to take place in Bangkok, Thailand, on August 16, 2024. This event will bring together researchers and industry professionals to address the pressing issue of gender bias in machine-learned models, which can lead to poor user experiences and perpetuate harmful stereotypes.
The workshop aims to raise awareness and build consensus on combating bias in NLP models by developing standard tasks and metrics. Topics of interest include the detection, measurement, and mitigation of gender bias in NLP models and applications, as well as the creation of datasets, the identification and assessment of relevant biases, and fairness in NLP systems. Non-technical work addressing sociological perspectives and critical reflections on the sources and implications of bias is also encouraged.
This year, the workshop will feature a shared task on evaluating gender bias in machine translation. Submissions will be accepted as short papers (4-6 pages) or long papers (8-10 pages), with additional pages allowed for references. A non-archival option is available for those who prefer not to have their work published in the conference proceedings.
Confirmed keynote speakers for the event are Isabelle Augenstein from the University of Copenhagen and Hal Daumé III from the University of Maryland and Microsoft Research NYC. Don’t miss this opportunity to engage with leading experts in the field and contribute to the development of more equitable NLP models.
Important dates:
- May 10, 2024: Workshop paper due date
- June 5, 2024: Notification of acceptance
- June 25, 2024: Camera-ready papers due
- August 16, 2024: Workshop date
For more information, visit the workshop website: https://genderbiasnlp.talp.cat/