
Browsing by Author "Yakovlev, Sergiy"

Now showing 1 - 2 of 2
  • Integration of Contextual Descriptors in Ontology Alignment for Enrichment of Semantic Correspondence
    (IEEE, Inc., 2024-12-02) Manziuk, Eduard; Barmak, Oleksander; Radiuk, Pavlo; Kuznetsov, Vladislav; Krak, Iurii; Yakovlev, Sergiy
    This paper proposes a novel approach to semantic ontology alignment using contextual descriptors. A formalization was developed that enables the integration of essential and contextual descriptors to create a comprehensive knowledge model. The hierarchical structure of the semantic approach and the mathematical apparatus for analyzing potential conflicts between concepts, illustrated by the example of "Transparency" and "Privacy" in the context of artificial intelligence, are demonstrated. Experimental studies showed a significant improvement in ontology alignment metrics after the implementation of contextual descriptors, especially in the areas of privacy, responsibility, and freedom & autonomy. The application of contextual descriptors achieved an average overall improvement of approximately 4.36%. The results indicate the effectiveness of the proposed approach for more accurately reflecting the complexity of knowledge and its contextual dependence.
  • Toward explainable deep learning in healthcare through transition matrix and user-friendly features
    (Frontiers Media SA, 2024-11-25) Barmak, Oleksander; Krak, Iurii; Yakovlev, Sergiy; Manziuk, Eduard; Radiuk, Pavlo; Kuznetsov, Vladislav
    Modern artificial intelligence (AI) solutions often face challenges due to the “black box” nature of deep learning (DL) models, which limits their transparency and trustworthiness in critical medical applications. In this study, we propose and evaluate a scalable approach based on a transition matrix to enhance the interpretability of DL models in medical signal and image processing by translating complex model decisions into user-friendly and justifiable features for healthcare professionals. The criteria for choosing interpretable features were clearly defined, incorporating clinical guidelines and expert rules to align model outputs with established medical standards. The proposed approach was tested on two medical datasets: electrocardiography (ECG) for arrhythmia detection and magnetic resonance imaging (MRI) for heart disease classification. The performance of the DL models was compared with expert annotations using Cohen’s Kappa coefficient to assess agreement, achieving coefficients of 0.89 for the ECG dataset and 0.80 for the MRI dataset. These results demonstrate strong agreement, underscoring the reliability of the approach in providing accurate, understandable, and justifiable explanations of DL model decisions. The scalability of the approach suggests its potential applicability across various medical domains, enhancing the generalizability and utility of DL models in healthcare while addressing practical challenges and ethical considerations.
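    The second abstract reports agreement between DL model outputs and expert annotations as Cohen's Kappa coefficients (0.89 for ECG, 0.80 for MRI). As a minimal illustration of how such a coefficient is computed, the sketch below implements the standard formula κ = (p_o − p_e) / (1 − p_e) in pure Python; the toy labels are hypothetical and not taken from the paper's datasets.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: agreement between two raters, corrected for chance."""
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        # Observed agreement: fraction of items both raters labeled identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected chance agreement from each rater's marginal label frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical toy labels (not from the paper's ECG/MRI datasets):
    model  = ["arrhythmia", "normal", "normal", "arrhythmia", "normal"]
    expert = ["arrhythmia", "normal", "arrhythmia", "arrhythmia", "normal"]
    print(round(cohens_kappa(model, expert), 2))  # → 0.62
    ```

    A kappa near 1.0 indicates near-perfect agreement beyond chance, which is why the reported values of 0.89 and 0.80 are read as strong agreement.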

DSpace software copyright © 2002-2026 LYRASIS
