IJCNN Special Session on Explainable Deep Neural Networks for Responsible AI: DeepXplain 2025
We invite you to participate in the special session "Explainable Deep Neural Networks for Responsible AI: Post-Hoc and Self-Explaining Approaches (DeepXplain 2025)" at IJCNN 2025. The session focuses on innovative methodologies for improving the interpretability of Deep Neural Networks (DNNs) while maintaining high predictive accuracy, with the goal of strengthening human trust in these models and mitigating their negative social impacts.
Topics of interest include:
- Theoretical advancements in post-hoc explanation methods
- Development of inherently interpretable architectures
- Post-hoc and self-explaining methods for Large Language Models (LLMs)
- Application-driven explainability insights
- Ethical evaluations of DNN-based AI models
- Methods for improving interpretability and fairness in DNNs
- Ethical discussions of the social impact of non-transparent AI models
- Datasets and benchmarking tools for explainability
- Explainable AI in critical applications
Submit your paper (long or short) by January 15, 2025, following the IJCNN 2025 formatting guidelines.
Organizers: Francielle Vargas (University of São Paulo, Brazil), Roseli Romero (University of São Paulo, Brazil), Jackson Trager (University of Southern California, USA), and Edson Prestes (Federal University of Rio Grande do Sul, Brazil).
Tags: IJCNN, DeepXplain, Deep Neural Networks, Explainable AI, Responsible AI, Post-Hoc Approaches, Self-Explaining Approaches, Interpretability, DNNs, IJCNN 2025