Second Call for Papers: Explainable Deep Neural Networks for Responsible AI (DeepXplain 2025) at IJCNN
We invite paper submissions to the special session Explainable Deep Neural Networks for Responsible AI: Post-Hoc and Self-Explaining Approaches (DeepXplain 2025) at IJCNN 2025. This session focuses on innovative methodologies for improving the interpretability and fairness of Deep Neural Networks (DNNs), including post-hoc explanation methods and self-explaining mechanisms. The goal is to enhance human trust in these models and to mitigate the risk of negative societal impacts.

Topics of interest include:
- Theoretical advancements
- Development of inherently interpretable architectures
- Post-hoc and self-explaining methods for Large Language Models
- Application-driven explainability insights
- Methods for improving interpretability and fairness
- Ethical evaluations and discussions
- Datasets and benchmarking tools
- Explainable AI in critical applications
Submissions must be written in English, adhere to the IJCNN 2025 formatting guidelines, and be submitted as a single PDF file.

Important dates:
- Submission link: open
- Paper submission deadline: January 15, 2025
- Notification of acceptance: March 15, 2025
- Camera-ready submission: May 1, 2025
Tags: DeepXplain 2025, IJCNN 2025, Explainable Deep Neural Networks, Responsible AI, Post-Hoc Explanation Methods, Self-Explaining Mechanisms, Interpretability, Fairness, Deep Learning, IJCNN, Artificial Intelligence, Conference