Authors: Dr. Brindha S, Ms. P. Abirami, Mr. Ajay R, Mr. Anbarasan R, Mr. Rishihesh M.M, Mr. Safwan S, Mr. Sriram V
Abstract: This paper presents the design and implementation of a Smart Playlist Generator using Affective Computing — a real-time, AI-driven music recommendation system that personalizes playlists based on the user's emotional state. The system integrates three core components: (1) a Facial Emotion Recognition (FER) module built on OpenCV and Convolutional Neural Networks (CNNs) that classifies emotions in real time from webcam input, (2) a Natural Language Processing (NLP) module that supports Thanglish (Tamil-English transliterated) text commands for conversational interaction, and (3) a Spotify Web API integration that maps detected emotions to audio features such as valence, energy, and tempo to generate context-aware playlists. The system achieves an emotion recognition accuracy of 87–90%, Thanglish command interpretation accuracy exceeding 90%, and a playlist-mood alignment rate of 85–90%, with an end-to-end latency of approximately 3 seconds. The architecture leverages HTML/CSS/JavaScript for the frontend, Node.js with Express for the backend, Firebase for data persistence, and Python-based AI modules for emotion and language processing. Experimental results confirm the viability of affective computing for dynamic, personalized music delivery, and the system demonstrates significant potential for next-generation human-computer interaction in multimedia platforms.
DOI: https://doi.org/10.5281/zenodo.19659822
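The emotion-to-audio-feature mapping summarized in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the emotion label set and the numeric valence/energy/tempo targets are hypothetical, though the `target_*`, `seed_genres`, and `limit` parameter names match Spotify's `/v1/recommendations` endpoint.

```python
# Illustrative sketch (assumed values, not from the paper): translating a
# detected emotion label into query parameters for Spotify's
# /v1/recommendations endpoint.

# Hypothetical mapping from FER output labels to audio-feature targets.
EMOTION_TO_FEATURES = {
    "happy":   {"target_valence": 0.9, "target_energy": 0.8, "target_tempo": 120},
    "sad":     {"target_valence": 0.2, "target_energy": 0.3, "target_tempo": 70},
    "angry":   {"target_valence": 0.3, "target_energy": 0.9, "target_tempo": 140},
    "neutral": {"target_valence": 0.5, "target_energy": 0.5, "target_tempo": 100},
}

def recommendation_params(emotion: str, seed_genre: str = "pop") -> dict:
    """Build query parameters for the Spotify recommendations endpoint.

    Unknown emotions fall back to the neutral profile.
    """
    features = EMOTION_TO_FEATURES.get(emotion, EMOTION_TO_FEATURES["neutral"])
    return {"seed_genres": seed_genre, "limit": 20, **features}

if __name__ == "__main__":
    print(recommendation_params("happy"))
```

In a full pipeline, the returned dictionary would be sent (with an OAuth bearer token) as the query string of a GET request to `https://api.spotify.com/v1/recommendations`; the fallback-to-neutral behavior keeps the system responsive when the FER module emits a label outside the mapped set.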