IJSRET

Daily Archives: March 30, 2026

Real Time Automatic Phishing Detector

Authors: Bhakti Pokale

Abstract: Phishing attacks have become one of the most serious cybersecurity threats worldwide, causing identity theft, financial loss, and data breaches. Attackers use fake websites, emails, and malicious links to trick users into revealing sensitive information. Traditional security mechanisms such as antivirus software and browser filters are often unable to detect newly generated phishing URLs, making users vulnerable to attacks. To address this issue, this project proposes a Real-Time Automatic Phishing Detection System that identifies and blocks phishing links instantly. The system uses Machine Learning techniques, specifically the Random Forest Classifier, to analyze URL features such as length, domain age, and special characters. It operates silently in the background without requiring user intervention, ensuring continuous and seamless protection. The system is developed using Python, Java, JavaScript, Node.js, MongoDB, HTML, and CSS to support multi-platform functionality. It provides real-time alerts and maintains logs of detected threats for further analysis. The proposed solution aims to enhance cybersecurity by offering proactive protection and ensuring a safer digital environment for individuals and organizations.
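The URL-feature pipeline described above can be sketched as follows. This is a minimal illustration only, assuming scikit-learn is available: the feature set, sample URLs, and labels below are invented for demonstration and are not the paper's actual dataset or feature list (for example, domain age, which the abstract mentions, would require a WHOIS lookup and is omitted here).

```python
# Illustrative sketch: feature names and training data are invented for
# demonstration; the paper's real feature set and dataset may differ.
import re
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def extract_features(url):
    """Turn a URL into numeric features of the kind the abstract mentions:
    length, special-character counts, and structural hints."""
    parsed = urlparse(url)
    return [
        len(url),                          # total URL length
        url.count('-') + url.count('@'),   # suspicious special characters
        url.count('.'),                    # subdomain-depth proxy
        1 if re.match(r'^\d+\.\d+\.\d+\.\d+$', parsed.netloc) else 0,  # raw IP host
    ]

# Toy labelled examples: 1 = phishing, 0 = legitimate (hypothetical data).
urls = [
    ("http://paypa1-secure-login.example-verify.com/@update", 1),
    ("http://192.168.10.5/banking/confirm", 1),
    ("https://www.wikipedia.org/", 0),
    ("https://github.com/", 0),
]
X = [extract_features(u) for u, _ in urls]
y = [label for _, label in urls]

# Train the Random Forest Classifier named in the abstract.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X, y)

# Score a previously unseen URL.
print(clf.predict([extract_features("http://login-verify.example@bad.com")]))
```

In a real-time deployment, `extract_features` would run on every URL the background service intercepts, with the trained model returning a verdict before the page loads.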

DOI:

Published by:

Sign-Voice Bidirectional Communication System For Normal, Deaf/Dumb And Blind People Based On Machine Learning

Authors: Jyothsna M, Srinika Kontham, Sharanya Balachandran, Meghana Danta, Sindhu Naine

Abstract: The SignVoice system is an artificial-intelligence-based bidirectional communication system that enables deaf, mute, and visually impaired persons to communicate smoothly with hearing persons. It combines machine learning, deep learning, and computer vision to support multiple communication modes: sign language, speech, text, and image-based communication. Hand gestures are captured through the webcam and processed with MediaPipe to identify hand landmarks, which a machine-learning classifier maps to text output. Speech input is transcribed to text through Whisper; responses are generated by an artificial-intelligence-based chatbot and converted to audio through text-to-speech technology. SignVoice follows a hybrid approach: gestures are processed on the client side, while computationally intensive operations such as speech recognition run on cloud-based services. In addition, the chatbot accepts image input and produces text and speech output, which is helpful for visually impaired persons. The proposed SignVoice system enables efficient and accurate communication for impaired persons through gestures, speech, and intelligent text-based responses.
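The landmark-to-text step described above can be sketched as a simple nearest-template classifier over landmark coordinates. This is a hypothetical illustration only: MediaPipe itself is omitted, the gesture vocabulary and landmark templates below are invented, and the three-point "hands" stand in for the 21 landmarks per hand that MediaPipe Hands actually produces.

```python
# Minimal sketch of mapping hand landmarks to a text label.
# Templates and labels are invented; a real system would use MediaPipe
# landmarks and a trained classifier instead of fixed templates.
import math

def flatten(landmarks):
    """Flatten [(x, y), ...] landmark pairs into one feature vector."""
    return [coord for point in landmarks for coord in point]

def classify_gesture(landmarks, templates):
    """Nearest-template classifier: return the label whose stored
    landmark template is closest (Euclidean distance) to the input."""
    vec = flatten(landmarks)
    best_label, best_dist = None, math.inf
    for label, template in templates.items():
        dist = math.dist(vec, flatten(template))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Two hypothetical 3-landmark gesture templates.
templates = {
    "hello":  [(0.1, 0.2), (0.3, 0.4), (0.5, 0.6)],
    "thanks": [(0.9, 0.8), (0.7, 0.6), (0.5, 0.4)],
}

# Landmarks observed from one webcam frame (hypothetical values).
observed = [(0.12, 0.21), (0.31, 0.39), (0.49, 0.61)]
print(classify_gesture(observed, templates))  # closest template wins
```

In the full system, the classifier's text output would then be handed to the chatbot and the text-to-speech stage, closing the gesture-to-audio loop the abstract describes.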

DOI: https://doi.org/10.5281/zenodo.19326545

Published by: