ML Scientist

Connecting Scholars with the Latest Academic News and Career Paths

Conference Calls | Featured

Explainable Deep Neural Networks for Responsible AI: Post-hoc and Self-Explaining Approaches

Join the IJCNN 2025 special session on Explainable Deep Neural Networks for Responsible AI, and submit papers on post-hoc and self-explaining approaches that address fairness and bias mitigation in deep neural networks. Collaborate across disciplines, promote ethical AI, and help develop benchmarks and datasets.

The Explainable Deep Neural Networks for Responsible AI special session at IJCNN 2025 invites paper submissions addressing fairness and bias mitigation in deep neural networks. The session aims to foster interdisciplinary collaboration, promote ethical AI systems, and encourage the development of benchmarks and datasets. Topics include theoretical advances, inherently interpretable architectures, post-hoc explanations for large language models, application-driven explainability, methods and metrics for improving interpretability and fairness, ethical evaluation and discussion, and tools for building benchmarks and datasets. Papers must be written in English and follow the IJCNN 2025 formatting guidelines; the submission deadline is January 31, 2025.

Tags: Explainable Deep Neural Networks, Responsible AI, Post-hoc explanation, Self-explaining approaches, IJCNN 2025, Deep neural networks, Fairness, Bias mitigation, Interdisciplinary collaboration, Ethical AI systems, Benchmarks, Datasets