IJSRET » Blog Archives

Author Archives: vikaspatanker

Using Ensemble Of Multiple Fine-Tuned EfficientNet Models For Skin Cancer Classification

Uncategorized

Authors: Mr. Rohit Daundkar, Mr. Kaustubh Shirke, Dr. Jasbir Kaur, Assistant Professor Mr. Suraj Kanal

Abstract: Skin cancer is a prevalent form of cancer, and its early and accurate identification is critical for effective treatment. In this research paper, we propose an improved approach to skin cancer classification using an ensemble of fine-tuned EfficientNet models. Our methodology incorporates data augmentation techniques to enlarge the dataset, fine-tuning of the EfficientNet models by unfreezing their last few blocks, and an average ensemble for enhanced classification accuracy. When compared with related work, the proposed approach proved its effectiveness by outperforming it, achieving a precision of 0.990 and an accuracy of 0.988. Our findings demonstrate the effectiveness of the proposed methodology and its potential to significantly improve the diagnosis and treatment of skin cancer.
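The average-ensemble step the abstract describes can be sketched as follows. The per-model probabilities, the number of models, and the two-class (benign/malignant) setup are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical per-model class probabilities for 4 test images over
# 2 classes (benign, malignant); in practice these would come from
# several independently fine-tuned EfficientNet variants.
model_a = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
model_b = np.array([[0.8, 0.2], [0.1, 0.9], [0.4, 0.6], [0.2, 0.8]])
model_c = np.array([[0.7, 0.3], [0.3, 0.7], [0.4, 0.6], [0.1, 0.9]])

# Average ensemble: take the mean of the probability vectors,
# then pick the class with the highest averaged probability.
ensemble_probs = np.mean([model_a, model_b, model_c], axis=0)
predictions = ensemble_probs.argmax(axis=1)
print(predictions.tolist())  # -> [0, 1, 1, 1]
```

Averaging softmax outputs in this way tends to smooth out the idiosyncratic errors of any single fine-tuned model, which is the intuition behind the ensemble's improved accuracy.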

DOI:

 

Published by:

HATE SPEECH DETECTION USING MACHINE LEARNING

Uncategorized

Authors: Dr. Mainka Saharan, Prince Kumar, Anuj Sharma, Sonu Kashyap, Yash Saxsenad

Abstract: Hate speech on social media has become a critical issue, posing a threat to societal harmony and individual well-being. As online platforms have become integral to communication, the dissemination of hateful and offensive language is increasingly unchecked, necessitating automated systems to detect and mitigate its impact [1][3]. This project aims to develop an automated hate speech detection system using advanced deep learning techniques, specifically the DistilBERT model, a lightweight transformer architecture known for its efficiency and accuracy [2][9]. The system categorizes textual content into three distinct classes: hate speech, offensive language, and neutral speech [1][4]. By employing comprehensive preprocessing methods to clean the text and leveraging tokenization to capture semantic meaning [1][6], the model is fine-tuned on a labeled dataset and achieves a test accuracy of 90.5%. The proposed system is designed for scalability and real-time deployment, addressing the challenge of moderating the vast amount of user-generated content on social media [5]. This study highlights the importance of using robust transformer models to analyze linguistic nuances, ensuring accurate classification even in complex and implicit cases of hate speech [9][2]. The project’s contributions include the development of a deployable application, introduction of data balancing techniques, and an evaluation of various preprocessing and modeling approaches [1][4].
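A minimal sketch of the kind of text cleaning the abstract refers to before tokenization; the exact rules below are assumptions for illustration, not the paper's actual pipeline:

```python
import re

def clean_tweet(text: str) -> str:
    """Minimal preprocessing sketch: lowercase, strip URLs, user
    mentions, and non-alphanumeric noise before tokenization."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs
    text = re.sub(r"@\w+", " ", text)           # remove @mentions
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # drop punctuation/emoji
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

print(clean_tweet("Check this @user!! https://t.co/xyz So OFFENSIVE..."))
# -> "check this so offensive"
```

The cleaned text would then be passed to the DistilBERT tokenizer, which handles subword segmentation and semantic encoding.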

DOI: http://doi.org/

 

Published by:

The Technical Efficiency On Zero Effect And Zero-Defect In Chennai Automotive Components Cluster

Uncategorized

Authors: E.Bhaskaran, S.Baskara Sethupathy, Harikumar Pallathadka

Abstract: The Zero Defect and Zero Effect (ZED) scheme for MSMEs leads to bronze, silver, and gold certification. The objective is to study the performance of 40 automotive components manufacturing enterprises at Tirumudivakkam on 20 ZED parameters. The methodology adopted is to collect 5-point-scale data and analyse it using business analytics and artificial intelligence techniques (descriptive analysis, correlation analysis, predictive analysis, and decision analysis) with the Difference-in-Difference (DiD) method and Technical Efficiency, where traditional operation is taken as the control variable and AI + Robotics implementation as the treated variable. The technical efficiencies of the traditional and AI + Robotics enterprises were calculated, and the Technical Efficiency of the AI + Robotics adopters was found to be greater than that of the traditional enterprises. Among the ZED parameters, Measurement and Analysis ranked No. 1; Risk Management No. 2; Human Resource Management No. 3; Product Quality & Safety No. 4; Quality Management No. 5; Waste Management and Technology Upgradation No. 6; Occupational Safety No. 7; Timely Delivery, Daily Works Management, and Material Management No. 8; Natural Resource Conservation No. 9; and Leadership, Planned Maintenance & Calibration, Environment Management, and Supply Chain Management No. 10. The remaining three parameters, Swachh Workplace (No. 11), Process Control (No. 12), and Energy Management (No. 13), need improvement in their DiD scores so that the overall performance of the automotive components cluster improves and all enterprises obtain the bronze, silver, and gold certifications.
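The Difference-in-Difference comparison described above can be sketched numerically: the treatment effect is the change in the treated group minus the change in the control group. The scores below are hypothetical, not the study's data:

```python
def did_effect(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-Difference estimator: the change in the treated
    group minus the change in the control group over the same period."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical 5-point ZED scores: AI + Robotics adopters (treated)
# vs. traditional enterprises (control), before and after adoption.
effect = did_effect(treated_pre=3.1, treated_post=4.2,
                    control_pre=3.0, control_post=3.3)
print(round(effect, 2))  # -> 0.8
```

Subtracting the control group's change removes trends common to both groups, isolating the improvement attributable to AI + Robotics adoption.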

DOI: https://doi.org/10.5281/zenodo.15813467

Published by:

Determinants Of Repurchase Intention For Skincare Serums Among Young Women: A Quantitative Study Of Consumer Behaviour On Nykaa

Uncategorized

Authors: Shreya Dabral

Abstract: The growing market for beauty serums, especially on online shopping platforms such as Nykaa, offers a strong research case for studying consumer repurchase behaviour. This research examines the drivers of consumers' loyalty towards particular serum brands, with specific emphasis on the perceived efficacy of Korean beauty serums and the importance of ingredient transparency. The study takes a quantitative approach, surveying 100 participants, mainly young women, a group that accounts for a large share of the serum market. Key findings indicate that 67% of consumers focus on product efficacy when making purchasing decisions, while 51% consider ingredients, showing a shift towards ingredient-driven purchasing habits. The findings also indicate that 37% of respondents buy serums every 2-3 months, reflecting moderate consumption and suggesting the development of brand loyalty. Notably, 55% of consumers prefer buying online, underscoring the vital role of e-commerce in the beauty sector. Although brands such as Dot & Key and L'Oréal enjoy popularity, the research shows that 60% of respondents have never used Korean serums and are indifferent to their effectiveness, with 36.7% viewing them as a fleeting trend. This lack of trust challenges K-beauty brands, which need to present strong evidence of their products' effectiveness to change consumer attitudes. In addition, the importance placed on before-and-after results, rated highly by 57% of participants, shows the need to build consumer trust through more transparent and open strategies. Addressing this research problem matters because it not only enriches knowledge about consumer behaviour within the beauty segment but also offers practical guidance for brands seeking to strengthen their market presence and cultivate consumer loyalty. Through its analysis of repurchase intentions, the study aims to inform marketing strategies that appeal to changing consumer preferences within the serum sector.

DOI: http://doi.org/10.5281/zenodo.15780127

Published by:

The Brain Behind The Map: AI And Traffic Prediction In Google Maps

Uncategorized

Authors: Ayush Vishwakarma, Yashi Verma

 

 

Abstract: Accurate estimation of travel time is no longer a luxury but a necessity in modern navigation systems, directly impacting user trust and urban transportation efficiency. As cities grow more complex and dynamic, conventional prediction models struggle to adapt to real-time changes. This paper explores the transformative role of big data and artificial intelligence (AI) in refining Estimated Time of Arrival (ETA) predictions, with a focus on Google Maps. Leveraging massive datasets, including GPS trajectories, historical travel data, real-time traffic flows, and user-reported incidents, Google Maps employs advanced machine learning algorithms to make adaptive and reliable ETA forecasts [3][4][8][9]. This study investigates how these AI models interpret multilayered traffic data to generate predictions, even under volatile traffic conditions. It further examines how deep learning architectures and neural networks detect patterns, anomalies, and geographic variations in travel behaviours [1][2][19]. A time-based graphical analysis illustrates the improvements in ETA prediction accuracy from 2017 to 2025, emphasizing the system's continual evolution. Additionally, the paper breaks down the core data sources that fuel this predictive engine, offering insights into the structure and effectiveness of Google Maps' data pipeline [5][6][7]. As part of this research, we also propose a novel real-time user feedback mechanism designed to enhance live traffic prediction by incorporating human intelligence in the loop. The system enables commuters to quickly report congestion, blockages, or discrepancies, providing hyper-local input that can improve ETA accuracy, especially in under-reported areas.
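The proposed user-feedback mechanism could, for example, blend a model's ETA with live commuter reports. The weighting scheme below is purely illustrative; it is not Google's algorithm, and the weight value is an assumption:

```python
def update_eta(model_eta_min, reports, weight=0.3):
    """Blend a model's ETA (minutes) with the mean of recent
    user-reported travel times for the same segment.
    `weight` controls how much the live reports pull the estimate."""
    if not reports:
        return model_eta_min
    reported = sum(reports) / len(reports)
    return (1 - weight) * model_eta_min + weight * reported

# Model predicts 20 min; three commuters report ~30 min of actual travel.
print(update_eta(20.0, [28.0, 31.0, 31.0]))  # -> 23.0
```

In practice such a scheme would also need report deduplication and trust scoring, but it shows how hyper-local human input can nudge a prediction that the model alone would get wrong.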

DOI: http://doi.org/

 

 

Published by:

Car Price Prediction

Uncategorized

Authors: Mr. Muskan Aherwar, Tushar Ahirwar, Dr. Jasbir Kaur, Ms. Ifrah Kampoo

Abstract: This paper aims to build a model to predict reasonable prices for used cars based on multiple aspects, including vehicle mileage, year of manufacture, fuel consumption, transmission, fuel type, and engine size. This model can benefit sellers, buyers, and car manufacturers in the used-car market. Upon completion, it can output a relatively accurate price prediction based on the information that users input. The model-building process involves machine learning and data science. The dataset used was scraped from listings of used cars. Various regression methods, including linear regression, decision tree regression, and random forest regression, were applied in the research to achieve the highest accuracy. Before model-building began, this project visualized the data to understand the dataset better. The dataset was divided and modified to fit the regressions, thus ensuring their performance. To evaluate the performance of each regression, R-square was calculated. Among all regressions in this project, random forest achieved the highest R-square of 0.90416. Compared to previous research, the resulting model includes more aspects of used cars while also having a higher prediction accuracy.
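The R-square metric used to compare the regressions is one minus the ratio of residual to total variance. The prices and predictions below are made up for illustration; they are not the paper's data:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical car prices (in thousands) vs. a model's predictions.
actual = [5.0, 7.5, 12.0, 20.0]
predicted = [5.5, 7.0, 11.0, 21.0]
print(round(r_squared(actual, predicted), 3))  # -> 0.981
```

An R-square of 0.90416, as random forest achieved here, means the model explains about 90% of the variance in listed prices.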

Published by:

Enhancing Fake Profile Detection in Social Media Using Explainable AI for Cybersecurity in Machine Learning

Uncategorized

Authors: P. Meiyazhagan, R. Harish Muthu, M.V. Kowshika, S. Mohammed Kaif

Abstract: The rapid proliferation of fake profiles across heterogeneous social media platforms presents a significant challenge to online security, misinformation control, and digital trust. Traditional machine learning models for fake profile detection often operate as black-box systems, making it difficult to interpret their decisions. To address this, we propose a novel Explainable AI (XAI)-driven framework that enhances transparency and accountability in fake profile identification. Our approach integrates ensemble machine learning models (Random Forest, XGBoost, and Support Vector Machines) with SHapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to provide interpretable feature importance insights. By analyzing user metadata, behavioral patterns, and social network interactions, our system detects fake profiles while justifying its predictions in a human-understandable manner. Furthermore, an interactive XAI dashboard enables users and platform moderators to visualize decision factors, improving trust and ethical AI adoption. Experimental results demonstrate high detection accuracy and explainability, making this framework a promising solution for combating fake identities across diverse social media ecosystems.
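SHAP's feature attributions approximate exact Shapley values, which can be computed by brute force for a tiny model. The sketch below uses an additive "fake-profile score" with three hypothetical risk features; the weights are invented and are not from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all feature coalitions.
    SHAP approximates this efficiently; brute force suffices for
    the 3-feature toy model below."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Standard Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
        phi[f] = total
    return phi

# Toy score: each present risk signal adds a fixed weight (hypothetical).
weights = {"no_avatar": 0.2, "burst_posting": 0.5, "few_followers": 0.3}

def score(coalition):
    return sum(weights[f] for f in coalition)

phi = shapley_values(list(weights), score)
print({f: round(v, 2) for f, v in phi.items()})
```

For an additive model each feature's Shapley value equals its weight, and the values always sum to the difference between the full-model score and the empty-model score, which is what makes them trustworthy explanations.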

DOI:

 

Published by:

Bridging Global Cybersecurity Governance Gaps: A Comparative Legal Analysis of the European Union and Emerging Frameworks in South Asia and Latin America

Uncategorized

Authors: Mr. Shantanu Gamre, Dr. Jasbir Kaur, Assistant Professor Mr. Suraj Kanal

Abstract: This research takes a close look at how different parts of the world are tackling cybersecurity, comparing the well-established approach of the European Union with the evolving systems in South Asia and Latin America. We found that there are still big gaps in global cybersecurity efforts. These gaps exist because cyber threats don't respect borders, countries have very different levels of readiness and resources, and cybersecurity laws often clash or aren't consistent worldwide. The European Union stands out with its strong, rights-focused legal framework, including key laws like GDPR and NIS2, and powerful agencies like ENISA. In contrast, South Asian countries are rapidly embracing digital technology but often struggle with outdated or inconsistent laws, political challenges, and a tricky balance between national security and individual online freedoms. Meanwhile, Latin American nations face advanced cybercrime and even attacks from other governments. While some, like Brazil with its LGPD, have made good progress in data protection, the region generally suffers from a shortage of skilled cybersecurity professionals and difficulties in putting plans into action. Ultimately, this research concludes that to truly make our digital world safer and more fair, we need a global shift towards shared responsibility, focused efforts to build up cybersecurity capabilities where they're weakest, and much stronger international cooperation.

DOI: https://doi.org/10.5281/zenodo.15773377

Published by:

Beyond The Lie: AI-Powered Deep Detection Of Deception In The Social Media Era

Uncategorized

Authors: Aryan Bhatt, Aryan Verma, Syed Qayam

Abstract: Social media has transformed the way people consume news by providing affordable, readily available, and quick information communication. Unfortunately, it has also provided a breeding ground for fake news—intentionally misleading or untruthful information—that can have serious consequences for individuals and society. Identification of fake news on social media has therefore become an essential research area. This problem differs from conventional news media, since false news is deliberately designed to mislead and thus cannot be easily detected by content alone. To overcome this, auxiliary data, including user behavior and social interactions, are usually needed. But it is hard to make use of this information because it is large-scale, incomplete, unstructured, and noisy. Considering the significance and complexity of the issue, the paper discusses Artificial Intelligence (AI) in the detection of false news on social media. We present an overview of the fake news features according to psychological theory and social theory, discuss previous AI-based detection algorithms, as well as assessment metrics and corpora. Further, we stress major challenges, current research work, and the possible future areas of AI use in detecting fake news.
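The assessment metrics such surveys discuss typically include precision, recall, and F1, computed from a classifier's confusion counts. The counts below are invented for illustration:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard metrics for evaluating a fake-news classifier:
    tp/fp/fn are true-positive, false-positive, false-negative counts
    for the "fake" class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical confusion counts for the "fake" class.
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=10)
print(round(p, 2), round(r, 2), round(f1, 3))  # -> 0.8 0.89 0.842
```

F1 matters here because fake-news corpora are usually imbalanced, so raw accuracy alone can overstate a detector's usefulness.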

DOI:

 

Published by:

Disaster Recovery Planning For Hybrid Solaris And Linux Infrastructures

Uncategorized

Authors: Sambasiva Rao Madamanchi

Abstract: Disaster recovery (DR) planning for hybrid infrastructures that combine Solaris and Linux poses unique challenges due to differences in tooling, system architecture, and operational practices. Solaris often supports legacy, mission-critical applications, while Linux drives modern, scalable workloads. This document provides a comprehensive guide to building a resilient DR strategy across both platforms. Key areas include risk assessment, backup and recovery tooling, system state preservation, and application/database restoration. Emphasis is placed on automation through Ansible, shell, and Python scripts, as well as configuration management and monitoring integration. The guide also highlights best practices such as maintaining consistent time and user IDs, isolating recovery zones, and leveraging enterprise backup solutions. Through clear documentation, defined team roles, and routine testing, organizations can achieve a DR framework that is platform-aware, repeatable, and aligned with evolving operational and compliance requirements.
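A routine backup check of the kind such a DR framework automates might look like the Python sketch below; the path, age threshold, and return shape are hypothetical, and a real deployment would run this from cron or Ansible on both Solaris and Linux hosts:

```python
import hashlib
import os
import tempfile
import time

def verify_backup(path, max_age_hours=24):
    """Confirm a backup artifact exists, is recent, and record its
    SHA-256 checksum for restore-time validation."""
    if not os.path.exists(path):
        return {"ok": False, "reason": "missing"}
    age_h = (time.time() - os.path.getmtime(path)) / 3600
    if age_h > max_age_hours:
        return {"ok": False, "reason": "stale (%.1fh old)" % age_h}
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return {"ok": True, "sha256": digest}

# Demo against a freshly written temporary file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".tar") as tmp:
    tmp.write(b"backup payload")
print(verify_backup(tmp.name)["ok"])              # -> True
print(verify_backup("/no/such/backup.tar")["reason"])  # -> missing
```

Storing the checksum alongside the backup lets the restore procedure detect silent corruption before it is replayed onto a recovered system.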

DOI: https://doi.org/10.5281/zenodo.15771601

 

Published by: