LLMSEC 2025: Call for Papers on Large Language Model Security
LLMSEC 2025 is an academic event focused on adversarially induced failure modes of large language models, the conditions that lead to them, and their mitigations. The workshop will take place on August 1, 2025, in Vienna, Austria, co-located with ACL 2025.
The event scope includes LLM attacks, LLM defense, and the contextualization of LLM security. Topics of interest include adversarial attacks on LLMs, data poisoning, model inversion, and secure LLM use and deployment.
Keynote speakers include Johannes Bjerva (Aalborg University) and Erick Galinkin (NVIDIA Corporation). Submission formats include long and short papers, qualitative work, and "war stories".
Submissions must be anonymized and de-identified in accordance with ACL policy and formatted using the ACL template. The direct submission deadline is April 15, 2025.
Submit via Softconf: https://softconf.com/acl2025/llmsec2025/
Tags: LLMSEC 2025, Large Language Model Security, ACL 2025, Adversarial Attacks, Data Poisoning, Model Inversion, Secure LLM Use