ML Scientist

Connecting Scholars with the Latest Academic News and Career Paths

Conference Calls | Featured
NeSy 2025: Call for Papers on Neurosymbolic Methods for Trustworthy and Interpretable AI

We are pleased to announce the Neurosymbolic Methods for Trustworthy and Interpretable AI special track at the 19th International Conference on Neurosymbolic Learning and Reasoning (NeSy 2025).

This special track aims to bring together researchers working on neurosymbolic approaches to develop AI systems that are transparent, fair, robust, and ethically aligned.

As AI is increasingly deployed in high-stakes domains, combining symbolic reasoning with neural systems offers unique opportunities to enhance both trustworthiness and interpretability. Topics of interest include:

  • Fairness and Bias Mitigation: Leveraging symbolic reasoning and ontologies to detect, mitigate, and explain biases.
  • Explainable Decision-Making: Neurosymbolic techniques for generating interpretable AI justifications.
  • Robustness and Verifiability: Formal verification of AI systems using neurosymbolic methods.
  • Knowledge Graphs and Trustworthy AI: Enhancing AI reliability through structured knowledge representations.
  • Neurosymbolic Debugging: Identifying and explaining AI prediction failures.
  • Metrics and Benchmarks: Developing new evaluation methodologies for trustworthiness, interpretability, and robustness.

Please submit your papers via OpenReview and select the ‘Neurosymbolic Methods for Trustworthy and Interpretable AI’ track. Submissions should adhere to the conference submission guidelines available on the NeSy 2025 website.

Important Dates:

  • Submission Deadline: March 7, 2025
  • Notification of Acceptance: April 18, 2025
  • Conference Dates: September 8–10, 2025

We look forward to your contributions and hope to see you at NeSy 2025!

Learn more about NeSy 2025

View submission guidelines

Tags: NeSy 2025, Neurosymbolic Methods, Trustworthy AI, Interpretable AI, Call for Papers, AI Research, Machine Learning