Title: Automation of concept induction based CNN neuron interpretation using high performance computing
Author: Ereshi Akkamahadevi, Samatha
Date: 2024-11-11 (2024)
Handle: https://hdl.handle.net/2097/44713
Language: en-US
Keywords: Explainable Artificial Intelligence; Deep learning; Knowledge graph; Semantic web; Automation in AI
Type: Thesis

Abstract:
Interpreting hidden neuron activations in convolutional neural networks (CNNs) is a central objective of Explainable Artificial Intelligence (XAI), which aims to turn "black box" models into transparent systems. Our research group previously addressed this challenge using concept induction and semantic reasoning over a concept hierarchy derived from the Wikipedia knowledge graph. That process, however, was manual and took several days to complete, limiting its practicality and scalability. In this study, we introduce a fully automated pipeline that streamlines model training, data preparation, concept induction, image retrieval, classification, and statistical validation, reducing execution time from several days to approximately 1 hour and 30 minutes while ensuring consistent, reproducible results. The automation enables flexible adjustments, such as incorporating a broader range of training images and classes and examining additional concept induction results across neuron layers, with minimal code changes. It also handles large datasets efficiently by exploiting parallel processing on high-performance computing resources. Our results confirm that the automated approach bridges the gap between deep learning models and human-understandable concepts, improving model interpretability without compromising performance. This work advances XAI by offering a practical solution that balances interpretability with computational efficiency, contributing to the broader goal of making AI systems more transparent and trustworthy.
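The abstract enumerates the pipeline stages (training, data preparation, concept induction, image retrieval, classification, statistical validation) and attributes the speedup to parallel processing on HPC resources. As a reading aid only, the minimal Python sketch below shows one plausible shape for such an orchestration, fanning per-neuron work out over worker processes. Every function name and parameter here (prepare_activations, induce_and_validate, run_pipeline, num_neurons, workers) is a hypothetical placeholder, not the thesis implementation.

```python
# Hypothetical sketch of an automated neuron-interpretation pipeline of the
# kind the abstract describes. All stage functions are illustrative stubs;
# only the orchestration shape is shown:
#   train/prepare -> per-neuron concept induction + validation (in parallel).
from concurrent.futures import ProcessPoolExecutor


def prepare_activations(num_neurons: int) -> list[int]:
    """Stub for model training and data preparation. In a real pipeline this
    would yield, per hidden neuron, the images that most strongly activate it."""
    return list(range(num_neurons))


def induce_and_validate(neuron_id: int) -> tuple[int, str]:
    """Stub for concept induction, image retrieval, classification, and
    statistical validation applied to a single hidden neuron."""
    return neuron_id, f"concept-for-neuron-{neuron_id}"


def run_pipeline(num_neurons: int = 64, workers: int = 8) -> dict[int, str]:
    """Distribute the independent per-neuron work across worker processes,
    mirroring the parallel HPC execution the abstract credits for the
    days-to-90-minutes reduction."""
    neurons = prepare_activations(num_neurons)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(induce_and_validate, neurons))


if __name__ == "__main__":
    labels = run_pipeline()
    print(f"Interpreted {len(labels)} neurons, e.g. neuron 0 -> {labels[0]}")
```

Because each neuron's concept-induction run is independent of the others, this kind of embarrassingly parallel fan-out is a natural fit for HPC batch resources; the number of workers, classes, and layers examined can then be varied through parameters rather than code changes, as the abstract claims.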