ML Scientist

Connecting Scholars with the Latest Academic News and Career Paths


Call for Papers: L2M2 Workshop @ ACL 2025

Submit your paper to the First Workshop on Large Language Model Memorization (L2M2) at ACL 2025, exploring the phenomenon of LLM memorization and its implications.

The First Workshop on Large Language Model Memorization (L2M2) seeks to provide a central venue for researchers studying LLM memorization from different angles.

We invite paper submissions on topics related to LLM memorization, including behavioral analyses, methods for measuring memorization, interpretability work, and more.

  • Behavioral analyses: characterizing which training data models memorize
  • Methods for measuring memorization: quantifying the extent to which models reproduce training data
  • Interpretability work: analyzing the mechanisms by which models memorize training data
  • Relationship between training data memorization and membership inference
  • Analyses of how memorization of training data is related to generalization and model capabilities
  • Methods or benchmarks for preventing models from outputting memorized data
  • Model editing techniques for modifying knowledge that models have memorized
  • Legal implications and risks of LLM memorization
  • Privacy and security risks of LLM memorization
  • Implications of memorization for benchmarking
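As a concrete illustration of the "measuring memorization" topic above, one common behavioral measure is verbatim extraction: prompt the model with a prefix from its training data and check whether its greedy continuation exactly reproduces the reference text. The sketch below is a minimal, hypothetical version of that metric; the `generate` callable stands in for a real model's decoding function, and the token-level comparison via whitespace splitting is a simplification of a proper tokenizer.

```python
from typing import Callable, List, Tuple

def memorized_fraction(
    examples: List[Tuple[str, str]],  # (prompt, reference continuation) pairs
    generate: Callable[[str], str],   # stand-in for a model's greedy continuation
    k: int = 50,                      # number of leading tokens to compare
) -> float:
    """Fraction of examples whose first k generated tokens exactly match
    the reference continuation (a simple verbatim-memorization score)."""
    if not examples:
        return 0.0
    hits = 0
    for prompt, reference in examples:
        ref_tokens = reference.split()[:k]
        gen_tokens = generate(prompt).split()[:k]
        if gen_tokens == ref_tokens:
            hits += 1
    return hits / len(examples)

# Toy usage with a lookup table in place of a model: one exact match,
# one divergent continuation, giving a score of 0.5.
corpus = [("the quick", "brown fox jumps"), ("hello", "world peace now")]
mock_model = {"the quick": "brown fox jumps", "hello": "different text here"}
score = memorized_fraction(corpus, mock_model.get)
```

Real studies typically vary the prefix length and report how the extraction rate grows with model size and data duplication; this sketch only captures the core exact-match comparison.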

Submission guidelines: Format your paper using the standard ACL template and submit via OpenReview or the L2M2 website.

Tags: Large Language Model Memorization, L2M2 Workshop, ACL 2025, Machine Learning, Natural Language Processing, AI Research, Computer Science, Robotics, Linguistics, NLP, Translation