Authors: P. Meiyazhagan, R. Harish Muthu, M.V. Kowshika, S. Mohammed Kaif
Abstract: The rapid proliferation of fake profiles across heterogeneous social media platforms poses a significant challenge to online security, misinformation control, and digital trust. Traditional machine learning models for fake profile detection often operate as black-box systems, making their decisions difficult to interpret. To address this, we propose a novel Explainable AI (XAI)-driven framework that enhances transparency and accountability in fake profile identification. Our approach integrates ensemble machine learning models (Random Forest, XGBoost, and Support Vector Machines) with SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to provide interpretable feature-importance insights. By analyzing user metadata, behavioral patterns, and social network interactions, our system detects fake profiles while justifying its predictions in a human-understandable manner. Furthermore, an interactive XAI dashboard enables users and platform moderators to visualize decision factors, improving trust and ethical AI adoption. Experimental results demonstrate high detection accuracy and explainability, making this framework a promising solution for combating fake identities across diverse social media ecosystems.
DOI:
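To illustrate the kind of model-agnostic feature-importance explanation the abstract describes, the sketch below trains a Random Forest on synthetic profile-metadata features and ranks them by permutation importance (used here as a simpler stand-in for SHAP/LIME). This is not the authors' implementation; the feature names, data, and labeling rule are invented for illustration only.

```python
# Hypothetical sketch: explaining a fake-profile classifier via
# permutation importance (a model-agnostic stand-in for SHAP/LIME).
# Feature names and data are synthetic, not from the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Assumed user-metadata features for a profile
X = np.column_stack([
    rng.poisson(200, n),      # followers_count
    rng.poisson(300, n),      # friends_count
    rng.integers(0, 2, n),    # has_profile_picture (0/1)
    rng.uniform(0, 50, n),    # posts_per_day
])
# Toy ground truth: fake profiles post heavily and lack a picture
y = ((X[:, 3] > 30) & (X[:, 2] == 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)

# Shuffle each feature on held-out data and measure the accuracy drop;
# a large drop means the model relies on that feature.
result = permutation_importance(clf, X_te, y_te,
                                n_repeats=10, random_state=0)
names = ["followers_count", "friends_count",
         "has_profile_picture", "posts_per_day"]
ranked = sorted(zip(names, result.importances_mean),
                key=lambda t: -t[1])
for name, imp in ranked:
    print(f"{name}: {imp:.3f}")
```

On this toy data, the two features that actually determine the label (`posts_per_day`, `has_profile_picture`) dominate the ranking, which is the human-readable justification the framework aims to surface to moderators.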