Explainable deep learning: A visual analytics approach with transition matrices

dc.contributor.authorRadiuk, Pavlo
dc.contributor.authorBarmak, Olexander
dc.contributor.authorManziuk, Eduard
dc.contributor.authorKrak, Iurii
dc.date.accessioned2024-04-02T06:52:20Z
dc.date.available2024-04-02T06:52:20Z
dc.date.issued2024-03-29
dc.description.abstractThe non-transparency of artificial intelligence (AI) systems, particularly in deep learning (DL), poses significant challenges to their comprehensibility and trustworthiness. This study aims to enhance the explainability of DL models through visual analytics (VA) and human-in-the-loop (HITL) principles, making these systems more transparent and understandable to end users. In this work, we propose a novel approach that utilizes a transition matrix to interpret results from DL models through more comprehensible machine learning (ML) models. The methodology involves constructing a transition matrix between the feature spaces of DL and ML models as formal and mental models, respectively, improving explainability for classification tasks. We validated our approach with computational experiments on the MNIST, FNC-1, and Iris datasets using a qualitative and quantitative comparison criterion, namely, the degree to which the results obtained by our approach differ from the ground truth of the training and testing samples. The proposed approach significantly enhanced model clarity and understanding on the MNIST dataset, with SSIM and PSNR values of 0.697 and 17.94, respectively, showcasing high-fidelity reconstructions. Moreover, achieving an F1m score of 77.76% and a weighted accuracy of 89.38%, our approach proved its effectiveness in stance detection on the FNC-1 dataset, complemented by its ability to explain key textual nuances. For the Iris dataset, the separating hyperplane constructed based on the proposed approach improved classification accuracy. Overall, using VA, HITL principles, and a transition matrix, our approach significantly improves the explainability of DL models without compromising their performance, marking a step forward in developing more transparent and trustworthy AI systems.
dc.identifier.citationRadiuk P., Barmak O., Manziuk E., Krak Iu. Explainable deep learning: A visual analytics approach with transition matrices. Mathematics. 2024. Vol. 12. No. 7. P. 1024. DOI: https://doi.org/10.3390/math12071024
dc.identifier.issn2227-7390
dc.identifier.urihttps://elar.khmnu.edu.ua/handle/123456789/15818
dc.language.isoen
dc.publisherMultidisciplinary Digital Publishing Institute
dc.subjectexplainable artificial intelligence (XAI)
dc.subjectdeep learning
dc.subjectmachine learning
dc.subjectvisual analytics
dc.subjecthuman-in-the-loop
dc.subjectmodel explainability
dc.subjecttransition matrix
dc.titleExplainable deep learning: A visual analytics approach with transition matrices
dc.typeArticle
Files
Name: Radiuk_Explainable-Deep-Learning.pdf
Size: 5.57 MB
Format: Adobe Portable Document Format
License agreement
Name: license.txt
Size: 4.26 KB
Format: Item-specific license agreed upon at submission