Interpretable AI for Intelligent Event Detection and Anomaly Classification in Healthcare Monitoring Systems


Authors: Assistant Professor Mrs. K.S.R. Manjusha, D. Ashok Kumar, M. Harish, M. Hari Sathvik, M. Vinsy, A. Sri Sai Keerthi

Abstract: Artificial intelligence (AI) is transforming healthcare by automating the detection and classification of events and anomalies, enhancing patient monitoring and intervention. In this context, events refer to abnormalities caused by medical conditions such as seizures or falls, while anomalies are erroneous data arising from sensor faults or malicious attacks. AI-based event and anomaly detection (EAD) enables early identification of critical issues, reducing false alarms and improving patient outcomes. The proliferation of Medical Internet of Things (MIoT) devices has further facilitated real-time data collection, AI-driven processing, and transmission, enabling remote monitoring and personalized healthcare. However, in medical applications the transparency and explainability of AI systems are crucial for fostering trust and understanding among healthcare professionals. This work presents an online EAD approach that runs a lightweight autoencoder (AE) on MIoT devices to detect abnormalities in real time. The detected abnormalities are then explained using Kernel SHAP, a technique from explainable AI (XAI), and subsequently classified as either events or anomalies by an artificial neural network (ANN). Extensive simulations on the Medical Information Mart for Intensive Care (MIMIC) dataset demonstrate the robustness of the proposed approach in accurately detecting and classifying events, regardless of the proportion of anomalies present.
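To make the detection stage of the pipeline concrete, the following is a minimal, illustrative sketch of autoencoder-based abnormality detection: a small linear autoencoder is trained on "normal" windows of vital-sign data, and a sample is flagged as abnormal when its reconstruction error exceeds a percentile threshold learned from normal data. All feature names, dimensions, and thresholds here are assumptions for illustration; the paper's actual lightweight AE, Kernel SHAP explanation step, and ANN event/anomaly classifier are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MIoT vital-sign windows: 4 features (e.g. heart rate,
# SpO2, respiration, blood pressure), standardized around zero.
normal = rng.normal(0.0, 0.1, size=(500, 4))

# Lightweight linear autoencoder (4 -> 2 -> 4), trained by gradient
# descent on mean squared reconstruction error.
W_enc = rng.normal(0.0, 0.1, size=(4, 2))
W_dec = rng.normal(0.0, 0.1, size=(2, 4))
lr = 0.05
for _ in range(300):
    Z = normal @ W_enc                       # encode to bottleneck
    err = Z @ W_dec - normal                 # reconstruction residual
    grad_dec = Z.T @ err / len(normal)
    grad_enc = normal.T @ (err @ W_dec.T) / len(normal)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

def recon_error(x):
    """Per-sample mean squared reconstruction error."""
    x = np.atleast_2d(x)
    return np.mean((x @ W_enc @ W_dec - x) ** 2, axis=1)

# Decision threshold: 99th percentile of error on normal training data.
threshold = np.percentile(recon_error(normal), 99)

# An out-of-distribution window (seizure-like spike or sensor fault)
# reconstructs poorly and is flagged for downstream explanation/classification.
abnormal = np.array([[1.5, -1.2, 0.9, 1.1]])
print(recon_error(abnormal)[0] > threshold)  # → True
```

In the paper's full pipeline, each flagged window would then be passed to Kernel SHAP to attribute the abnormality to specific features, and those attributions fed to an ANN that decides whether it is a medical event or a data anomaly.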

DOI: 10.61137/ijsret.vol.11.issue2.292
