Authors: Praveen B, Sripadma R
Abstract: Social media platforms such as Facebook, Twitter, Instagram, and WhatsApp have emerged as primary channels for public communication, information sharing, and social interaction. However, the same platforms also serve as spaces where abusive expressions, offensive remarks, and hate speech are increasingly common. Hate speech may target individuals or groups based on factors such as religion, nationality, gender, ethnicity, or other identity characteristics, and can result in psychological harm, discrimination, and real-world conflict. Manual moderation of this continuously growing volume of online content is challenging, inconsistent, and time-consuming, so automated systems are needed to analyze and classify harmful language. This research proposes a Natural Language Processing (NLP)-based system that preprocesses text, extracts features using TF-IDF, and classifies content with a Support Vector Machine (SVM). The results show that this approach effectively distinguishes between normal, abusive, and hate speech, making it suitable for real-time moderation on social media platforms.
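The preprocess / TF-IDF / SVM pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: scikit-learn is assumed as the library, and the six example texts and their labels are hypothetical placeholders standing in for a real annotated corpus.

```python
# Hypothetical sketch of a TF-IDF + SVM hate-speech classifier.
# scikit-learn and the toy training data are assumptions, not the
# authors' actual dataset or code.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Placeholder training corpus with the three classes from the abstract.
texts = [
    "have a great day everyone",
    "thanks for sharing this article",
    "you are such an idiot",
    "shut up nobody asked you",
    "people of that religion should be banned",
    "that nationality does not belong here",
]
labels = ["normal", "normal", "abusive", "abusive", "hate", "hate"]

# TfidfVectorizer handles basic preprocessing (lowercasing, stop-word
# removal) and maps each text to a weighted term-frequency vector;
# a linear SVM then learns to separate the three classes.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("svm", LinearSVC()),
])
clf.fit(texts, labels)

print(clf.predict(["wishing you a great day"]))
```

A linear kernel is a common choice for high-dimensional sparse TF-IDF features, where classes are often close to linearly separable.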