Authors: Mrs. M. Uma Devi, Togaru Reshma Sri, Katikidala Satya Ratna Naveen, Gubbala Leela Madhavi, Kalvakolanu Venkata Pavan Chaitanya, Velduti Srivenkata Surya Sai Kumar
Abstract: The rapid advancement of generative artificial intelligence has made it increasingly difficult to distinguish between real images and AI-generated synthetic images. Modern diffusion models can produce highly realistic visuals that closely resemble authentic photographs, raising serious concerns about misinformation, digital fraud, and media manipulation. As synthetic image generation becomes more accessible, reliable detection mechanisms are essential to maintain digital trust and security.

This project presents an image classification framework for identifying AI-generated synthetic images using deep learning techniques. A balanced dataset is constructed by combining real images from the CIFAR-10 dataset with synthetic images generated using Stable Diffusion. A Convolutional Neural Network (CNN) model is trained to perform binary classification, distinguishing between real and fake images. In addition to classification, Explainable Artificial Intelligence (XAI) techniques such as Grad-CAM are applied to interpret model decisions and visualize the regions that influence predictions.

Experimental results demonstrate that the proposed model achieves high accuracy in detecting synthetic images while maintaining reliable generalization performance. The explainability component further enhances transparency by revealing distinctive patterns and artifacts present in AI-generated images. The proposed system contributes to improving digital image forensics and strengthening defences against AI-driven visual misinformation.
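The pipeline the abstract outlines (a CNN trained to separate CIFAR-10 photographs from Stable Diffusion outputs) can be sketched as follows. The layer sizes and depth here are illustrative assumptions, not the authors' reported architecture; only the 32×32×3 input size is fixed by the choice of CIFAR-10.

```python
# Minimal sketch of a real-vs-fake binary CNN, assuming PyTorch.
# Filter counts and layer depth are illustrative, not the paper's exact model.
import torch
import torch.nn as nn

class RealFakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 1),  # single logit: real (0) vs AI-generated (1)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RealFakeCNN()
logits = model(torch.randn(4, 3, 32, 32))  # batch of 4 CIFAR-sized images
print(logits.shape)  # one logit per image
```

Training would pair this with a sigmoid/binary-cross-entropy loss; the final convolutional layer's activations are also the natural target for the Grad-CAM visualizations the abstract mentions.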