Designing Explainable Large Language Models for Critical Decision-Making in Healthcare: A Review-Based XAI Perspective

Authors: Roshan Nikam, Parv Shah, Anish Shinde, Chaitanyaa Kashid

Abstract: Large Language Models (LLMs) are transforming healthcare by streamlining clinical documentation, diagnosis, and medical decision-making. Despite their promise, the black-box behavior of LLMs poses serious challenges in healthcare settings, where trust, interpretability, and accountability are paramount. This paper investigates LLM interpretability techniques in the medical domain, drawing on insights from an extensive survey of the Explainable AI (XAI) literature. It examines post-hoc explanation techniques (SHAP, LIME), collaborative human-AI decision-making frameworks, and inherently interpretable approaches such as neurosymbolic systems. Key challenges are highlighted: algorithmic bias, hallucination, healthcare regulatory compliance, and computational inefficiency. It is posited that structured prompting, particularly Chain-of-Thought (CoT) reasoning augmented with diagnostic logic, can bring LLM outputs and explanations into closer alignment with clinical reasoning, enhancing transparency while preserving performance. It is concluded that, by drawing on insights from XAI, more interpretable LLMs foster clinicians' trust in AI systems and thereby support their ethical and effective integration into healthcare. Future work should focus on balancing model accuracy against interpretability and on meeting evolving regulatory requirements.
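
As an illustration of the post-hoc explanation techniques the abstract mentions, the following minimal sketch applies LIME to a toy clinical-note classifier. The triage notes, labels, and model below are illustrative assumptions, not the paper's experimental setup.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy training data with hypothetical triage labels (0 = routine, 1 = urgent).
notes = [
    "mild seasonal allergies, no acute distress",
    "routine follow-up, blood pressure stable",
    "severe chest pain radiating to the left arm",
    "acute shortness of breath and diaphoresis",
]
labels = [0, 0, 1, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(notes, labels)

explainer = LimeTextExplainer(class_names=["routine", "urgent"])
explanation = explainer.explain_instance(
    "patient reports chest pain and shortness of breath",
    clf.predict_proba,  # LIME perturbs the note and queries this function
    num_features=5,     # top tokens driving the "urgent" prediction
)
print(explanation.as_list())  # (token, weight) pairs a clinician can inspect

The same attribution pattern carries over to transformer-based classifiers; only the prediction function passed to the explainer changes.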
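
Likewise, the structured prompting the abstract proposes can be sketched as a diagnostic Chain-of-Thought template. The step structure and wording here are illustrative assumptions, not the authors' validated prompt.

# Hypothetical CoT template: forces the model to expose its reasoning
# steps so clinicians can audit the path from findings to diagnosis.
COT_DIAGNOSTIC_PROMPT = """You are assisting a clinician. Reason step by step:
1. List the salient clinical findings.
2. Enumerate plausible differential diagnoses.
3. Explain which findings support or rule out each candidate.
4. State the most likely diagnosis and your confidence.

Case: {case}
"""

prompt = COT_DIAGNOSTIC_PROMPT.format(
    case="58-year-old male with chest pain, diaphoresis, and dyspnea."
)
print(prompt)
# `prompt` is then sent to the LLM; the numbered steps make the returned
# explanation directly comparable to a clinician's own reasoning.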

DOI: 10.61137/ijsret.vol.11.issue2.337