PhD Student in AI Security at Linköping University
A PhD student position in AI Security is available at Linköping University. The project focuses on the security risks of large language model (LLM) agents, in particular memory poisoning attacks.
- Project goal: Develop a theoretical foundation for understanding and mitigating memory poisoning in LLM agents.
- Funded by the Swedish Research Council (VR).
- Opportunity to work at the forefront of AI security.
For further details and application instructions, see the official job posting.
Tags: AI Security, PhD Student, Linköping University, Large Language Models, Memory Poisoning Attacks, Swedish Research Council