Document Type
Conference Proceeding
Publication Title
Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies
Publication Date
2-16-2023
Meeting Name
16th International Joint Conference on Biomedical Engineering Systems and Technologies
Meeting Date
February 15-17, 2023
Meeting Location
Lisbon, Portugal
Abstract/Summary
Artificial intelligence (AI) systems are becoming common for decision support. However, the prevalence of black-box approaches in developing AI systems has been raised as a significant concern. Understanding how an AI system makes its decisions is especially crucial in healthcare, since those decisions directly impact human life. Clinical decision support systems (CDSS) frequently use Natural Language Processing (NLP) techniques to extract information from textual data, including Electronic Health Records (EHRs). In contrast to the prevalent black-box approaches, emerging 'explainability' research has improved our comprehension of the decision-making processes in CDSS that use EHR data. Many studies use 'attention' mechanisms and 'graph' techniques to explain the 'causability' of machine learning models for text-related problems. In this paper, we survey the latest research on explainability and its application in CDSS and healthcare AI systems using NLP. We searched medical databases for explainability components used in NLP tasks in healthcare and extracted 26 papers relevant to this review based on their main approach to developing explainable NLP models. We excluded papers whose architectures lacked components for inherent explainability, or whose explanations came directly from medical experts, leaving 16 studies in this review. We found that attention mechanisms are the dominant approach to explainability in healthcare AI and CDSS. There is an emerging trend toward graph-based and hybrid techniques, but most of the projects we studied employed attention mechanisms in various ways. The paper discusses the inner workings, merits, and issues of the underlying architectures.
To the best of our knowledge, this is among the few papers summarizing the latest explainability research in the healthcare domain, mainly to support future work on NLP-based AI models in healthcare.
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.