IJSRET » Blog Archives

Author Archives: vikaspatanker

Morph Detect

Uncategorized

Authors: K. Sai Teja, M. Surya Teja, S. Bharath Simha Rao, Y. Hemanth Kumar

Abstract: Face morphing attacks represent a critical vulnerability in biometric authentication systems, where two or more facial images are digitally blended to create a forged identity. Such morphed images can successfully deceive automated face verification systems, leading to severe risks in applications like passport issuance, border control, and identity management. Traditional detection techniques, relying on handcrafted features or differential methods, often fail to generalize across diverse morphing techniques and image qualities. To overcome these limitations, we propose MorphDetect, a deep learning-based Single-Image Morphing Attack Detection (S-MAD) system powered by the EfficientNet-B7 model. The system first preprocesses face images for normalization and then extracts high-dimensional features using EfficientNet-B7’s advanced convolutional blocks. These features are passed through a classification layer that determines whether an input is genuine or morphed, producing a reliable confidence score for decision-making. MorphDetect eliminates the need for a trusted reference image and provides a scalable, real-time solution for morph detection. By leveraging a strong pretrained backbone, it ensures robustness against unseen morphing techniques and diverse imaging conditions. This makes the system well-suited for deployment in high-security applications such as e-passport verification, financial KYC procedures, and secure access systems.
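The preprocessing step the abstract describes (normalizing face images before the EfficientNet-B7 backbone) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the [0, 1] scaling and the ImageNet mean/std constants are assumptions, since the abstract does not state which normalization MorphDetect uses.

```python
# Scale raw 8-bit pixel values to [0, 1], then standardize per channel.
# The ImageNet statistics below are an assumption, commonly used with
# pretrained EfficientNet backbones.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_image(pixels, mean=IMAGENET_MEAN, std=IMAGENET_STD):
    """pixels: nested list [H][W][3] of 0-255 ints. Returns floats."""
    out = []
    for row in pixels:
        out_row = []
        for px in row:
            out_row.append(tuple(
                (channel / 255.0 - mean[c]) / std[c]
                for c, channel in enumerate(px)
            ))
        out.append(out_row)
    return out
```

The normalized tensor would then be fed to the convolutional backbone, whose feature vector a final classification layer maps to a genuine/morphed confidence score.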

Published by:

To Find Material Performance Assessment For Efficient Leachate Filtration Bed

Authors: Tushar Kadam, Dhiraj Gadhave, Nirzara Sarole, Shital Shinde

Abstract: Landfills pose a potential threat to human health and the environment, particularly through detrimental and toxic heavy metals. This study assesses heavy metal contamination in leachate and surface soils from different landfills in Pune. The impacted soils showed high heavy metal concentrations, especially at non-sanitary unlined landfills, compared to background values and natural soils near the landfills. Leachate poses a potential risk to surface water and groundwater aquifers in the area surrounding a landfill site. The aim of this study is to assess the physical parameters and heavy metal levels in leachate. Heavy metals are among the most important pollutants in landfill leachate, and plants and soil near a landfill may be contaminated by it. By evaluating the heavy metals in the leachate of three landfills, this study investigates the extent of pollution caused by leachate in the environment around the landfills in Pune.

A Low-Cost Self-Healing Smart Grid Prototype Using Embedded Random Forest Classification And ESP-NOW Wireless Coordination

Authors: Angel Lalu, Dr Prakash R, Shreyas Sunil, Nandhakumar S, Divya Bharti

Abstract: Self-healing distribution systems are a foundational requirement for future smart grids built to withstand disturbances, accommodate bidirectional power flow, and maintain reliability despite increasing renewable penetration. Traditional FLISR (Fault Location, Isolation, and Service Restoration) solutions depend mostly on SCADA, PMUs, and other high-cost protection relays. This infrastructure is usually unavailable in low-voltage networks, microgrids, and academic teaching environments. Our work proposes a novel low-cost, microcontroller-based self-healing grid prototype that uses ACS712 current sensors, ESP32/ESP8266 wireless sensing nodes communicating via ESP-NOW, and an STM32 Nucleo-64 (F446RE) microcontroller executing an embedded Random Forest classifier through the EloquentTinyML library. The system autonomously detects, classifies, and isolates faults based on a real-time multi-feature current signature. Our experimental setup and validation show an overall classification accuracy of 92.76%, ESP-NOW latency of 12 to 18 ms over 22 metres, and a protection response time under 200 ms. Compared to conventional schemes, the proposed architecture provides an inexpensive yet robust platform with SCADA-like self-healing behaviour.
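The "multi-feature current signature" fed to the embedded classifier can be sketched as simple window statistics over ACS712 current samples. The feature choices (RMS, peak, crest factor) and the threshold rule standing in for the Random Forest are illustrative assumptions; the abstract does not enumerate the actual features or model parameters.

```python
import math

def current_features(samples):
    """Extract a simple multi-feature signature from one window of
    current samples (amperes). RMS, peak, and crest factor are
    illustrative choices, not the paper's exact feature set."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    peak = max(abs(s) for s in samples)
    crest = peak / rms if rms > 0 else 0.0
    return [rms, peak, crest]

def classify(features, overload_rms=5.0):
    """Stand-in for the embedded Random Forest: flag an overload fault
    when RMS current exceeds a threshold. Purely illustrative."""
    return "fault" if features[0] > overload_rms else "normal"
```

On the real prototype, the feature vector would be passed to the Random Forest deployed on the STM32, and a "fault" verdict would trigger isolation and an ESP-NOW notification to neighbouring nodes.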

Design And Development Of An AI–ML Framework For Higher Education: An Education 5.0 Perspective

Authors: Mrs. Seema Amol More, Professor Dr. Swati Nitin Sayankar

Abstract: Education 5.0 represents a paradigm shift toward human-centric, ethical, and sustainable learning ecosystems by synergizing advanced digital technologies with societal needs. Artificial Intelligence (AI) and Machine Learning (ML) have emerged as key enablers in transforming higher education through personalized learning, predictive analytics, and intelligent decision support. However, the absence of a unified and scalable framework often leads to fragmented adoption and ethical concerns. This paper proposes a comprehensive AI–ML framework tailored for higher education institutions from an Education 5.0 perspective. The framework integrates data-driven learning analytics, adaptive instructional systems, student performance prediction, and automated academic administration while emphasizing transparency, inclusivity, and data privacy. The proposed architecture consists of layered modules encompassing data acquisition, intelligent processing, decision intelligence, and stakeholder interaction. A conceptual case study demonstrates the applicability of the framework in a university environment. Comparative analysis highlights improvements in academic outcomes, operational efficiency, and learner engagement. The proposed framework provides a structured pathway for institutions seeking sustainable and ethical AI adoption, contributing to the evolving discourse on next-generation higher education systems.
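The layered modules the abstract names (data acquisition, intelligent processing, decision intelligence, stakeholder interaction) could be sketched as a composed pipeline. Every function name and the pass-rate rule below are hypothetical illustrations of the layering idea, not the paper's implementation.

```python
def acquire(records):
    # Data acquisition layer: keep only complete records.
    return [r for r in records if "score" in r]

def process(records):
    # Intelligent processing layer: derive a pass/fail label
    # (the 40-mark cutoff is an arbitrary illustrative rule).
    return [dict(r, passed=r["score"] >= 40) for r in records]

def decide(records):
    # Decision intelligence layer: aggregate for administrators.
    passed = sum(r["passed"] for r in records)
    return {"pass_rate": passed / len(records)} if records else {}

def pipeline(records):
    # Stakeholder interaction would present this result to staff.
    return decide(process(acquire(records)))
```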

DOI: https://doi.org/10.5281/zenodo.18204804

Assessing The Capabilities Of AI In Private Real Estate Development Within The Construction Sector

Authors: Ms Ruchi Natekar

Abstract: In Mumbai’s fast-growing private real estate construction sector, persistent challenges—cost overruns, schedule delays, and inconsistent quality—continue to limit project performance despite rising demand and increasing urban pressures. Artificial Intelligence (AI) has emerged globally as a transformative tool capable of reshaping construction planning, execution, and monitoring. Yet, in Mumbai, AI adoption remains at a formative stage, shaped by a complex interplay of technological limitations, cultural resistance, and organisational readiness. This study explores how AI is currently being used, where it creates value, and what barriers must be overcome for meaningful transformation. A mixed-methods research design was employed to capture both the breadth and depth of AI adoption. Quantitative insights were gathered through a structured survey of 99 construction professionals, spanning developers, engineers, consultants, and project managers. To complement this, qualitative interviews and focus group discussions were conducted with industry experts to understand their lived experiences, perceptions, and concerns regarding AI-enabled practices. Data were analysed using descriptive statistics, factor analysis, and thematic coding to produce an integrated, evidence-based understanding of AI’s real-world impact within Mumbai’s construction environment. Findings reveal that while AI adoption is still emerging, its footprint is steadily expanding. The most recognised and frequently applied AI tools include predictive analytics for cost estimation, automated scheduling systems, and computer-vision-based quality inspections. Respondents involved in AI-enabled projects reported heightened confidence in the technology’s potential to enhance efficiency, reduce rework, and improve decision-making. However, this optimism exists alongside significant obstacles. 
The study identifies notable barriers such as low digital literacy, fragmented data systems, regulatory ambiguity, and organisational cultural resistance. Many firms struggle to integrate AI into legacy workflows, and small and medium-sized enterprises face higher financial and technical hurdles. The discussion highlights that successful AI-enabled transformation requires more than just technological investment—it demands structural, cultural, and behavioural shifts within organisations. AI’s impact is therefore as socio-technical as it is operational, requiring alignment across people, processes, and platforms. This research confirms that AI holds strong promise for reducing chronic inefficiencies in Mumbai’s real estate construction sector. Yet, the gap between theoretical potential and on-ground performance remains wide. To bridge this divide, organisations must adopt a phased, context-appropriate strategy that prioritises digital literacy, data standardisation, regulatory clarity, and targeted workforce upskilling. The study offers a practical implementation roadmap tailored to Mumbai’s unique ecosystem, serving as a valuable resource for developers, project managers, policymakers, and technology providers. Ultimately, AI is positioned not as a replacement for human expertise, but as a powerful enabler of smarter, safer, and more resilient urban development.

DOI: https://doi.org/10.5281/zenodo.18204579


Online Parking Management System

Authors: Ragini Shivashetti, Nikita Waghamare, Pranita Bhosale, Namrata Shinde, Pranoti Hukkire, Professor Ms. Savita Kadam

Abstract: An online booking system is a web-based platform that automates scheduling and reservations. It allows customers to book services or events (such as movies, appointments, or travel) 24/7, while giving administrators tools to manage availability, bookings, and payments efficiently. Features such as user registration, seat selection, payment integration, and real-time confirmations reduce manual work and improve the customer experience.
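The core booking flow (check availability, reserve a slot, return a confirmation) can be sketched as below. The slot identifiers and confirmation format are hypothetical; a real deployment would add persistence, payments, and concurrency control.

```python
class ParkingLot:
    """Toy model of the booking flow: check availability, reserve a
    slot, and return a real-time confirmation."""

    def __init__(self, slots):
        self.free = set(slots)
        self.bookings = {}  # slot -> customer

    def available(self):
        # Availability view shown to the customer before selection.
        return sorted(self.free)

    def book(self, slot, customer):
        # Reject double bookings; a real system would offer alternatives.
        if slot not in self.free:
            return None
        self.free.remove(slot)
        self.bookings[slot] = customer
        return {"slot": slot, "customer": customer, "status": "confirmed"}
```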

Operationalizing Regulatory Governance Through Enterprise Master Data Design: A Practical Examination of OFAC, KYC, and GDPR Controls

Authors: Nagender Yamsani

Abstract: This study examines how enterprise master data design can be operationalized as a primary mechanism for regulatory governance within highly regulated financial environments. The research addresses a persistent industry challenge where regulatory obligations such as OFAC screening, customer due diligence, and personal data protection are often implemented as isolated compliance processes rather than embedded into core data architectures. The purpose of this work is to demonstrate how governance-first master data management can translate regulatory intent into enforceable, auditable, and scalable enterprise controls. Using a qualitative case-based methodology grounded in architectural analysis, control mapping, and operating model assessment, the study evaluates how regulatory requirements are structurally realized through master data domains, stewardship workflows, validation checkpoints, and exception handling mechanisms. The findings show that treating master data as a governed control layer enables consistent regulatory enforcement across operational systems, reduces manual remediation cycles, and strengthens audit readiness. The study further highlights how clear ownership models, policy-driven data validation, and controlled synchronization patterns contribute to sustained compliance without constraining business operations. From an academic perspective, the research extends governance and information systems literature by positioning master data architecture as a regulatory execution instrument rather than a purely technical capability. From an industry standpoint, the study provides practical guidance for financial institutions seeking to embed compliance obligations directly into enterprise data foundations, reinforcing trust, transparency, and operational resilience.
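A "validation checkpoint" embedded at the master data layer, as the abstract describes, can be sketched as below. The required KYC fields and the toy watchlist are loudly hypothetical: real OFAC screening uses fuzzy name matching against official sanctions lists, not exact string comparison.

```python
# Hypothetical policy data for illustration only.
WATCHLIST = {"ACME SHELL CO"}
REQUIRED_KYC_FIELDS = ("name", "dob", "address", "id_number")

def validate_customer(record):
    """Policy-driven checkpoint: a customer record must pass KYC field
    checks and a watchlist screen before entering the golden record.
    Returns (accepted, reasons); rejections would be routed to the
    exception-handling workflow rather than silently dropped."""
    reasons = []
    for field in REQUIRED_KYC_FIELDS:
        if not record.get(field):
            reasons.append(f"missing KYC field: {field}")
    if record.get("name", "").upper() in WATCHLIST:
        reasons.append("OFAC watchlist match")
    return (not reasons, reasons)
```

Keeping such rules at the master data layer, rather than in each consuming application, is what makes enforcement consistent and auditable across operational systems.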

DOI: http://doi.org/10.5281/zenodo.19019592

RoadGuardian: A Multi-Modal AI Framework for Enhanced Road Safety through Real-Time Drowsiness, Pothole, and Vehicle Detection

Authors: Sri Raghuvardhan B, Srujan A U, Vinay Shankar H V, Willson Kumar, Dr. T N Anitha

Abstract: Road accidents remain a global concern, with human error, road infrastructure defects, and environmental factors contributing to millions of fatalities annually. This paper presents RoadGuardian, an integrated multi-modal AI framework designed to enhance road safety through real-time detection of three critical risk factors: driver drowsiness, road potholes, and surrounding vehicles. The system employs computer vision techniques with specialized architectures for each detection module. Drowsiness detection utilizes facial landmark analysis with EAR (Eye Aspect Ratio) and MAR (Mouth Aspect Ratio) metrics. Pothole detection implements a custom YOLO architecture trained on augmented road datasets. Vehicle detection leverages YOLOv8 for robust object recognition. These modules are integrated into a unified dashboard that provides real-time alerts, risk assessment scoring, and situational awareness visualization. Experimental results demonstrate high accuracy rates: 96.8% for drowsiness detection, 94.2% for pothole detection, and 97.5% for vehicle detection, with an average inference time of 45 ms per frame on standard hardware. The framework represents a significant advancement in proactive road safety systems, offering a comprehensive solution to mitigate multiple accident risk factors simultaneously.
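The EAR metric mentioned in the abstract is the standard Eye Aspect Ratio computed over six eye landmarks: the two vertical eyelid distances divided by twice the horizontal eye width. The drowsiness threshold below is an assumption; the paper's exact value is not stated.

```python
import math

def dist(a, b):
    # Euclidean distance between two (x, y) landmark points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Standard EAR: p1/p4 are the eye corners, p2/p6 and p3/p5 the
    upper/lower eyelid landmarks. EAR drops toward 0 as the eye closes."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# A frame is commonly flagged as "eyes closed" when EAR stays below a
# threshold (roughly 0.2-0.25) for several consecutive frames; 0.25
# here is an assumed value, not RoadGuardian's published setting.
DROWSY_EAR_THRESHOLD = 0.25
```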

DOI: https://doi.org/10.5281/zenodo.18195863


Explainable AI for Medical or Financial Predictions

Authors: Pradhebaa S

Abstract: Artificial Intelligence (AI) and Machine Learning (ML) models have become powerful tools for predictive analytics in medical and financial domains, enabling early diagnosis of disease, fraud detection, and risk forecasting with remarkable accuracy. Despite these advancements, most state-of-the-art models operate as complex black-box systems, offering minimal transparency into how predictions are formed. In healthcare, where predictions influence clinical decisions, lack of interpretability reduces clinician trust, raises ethical concerns, and limits real-world deployment. Similarly, in finance, opaque ML systems create challenges in regulatory audits, accountability, and fairness in automated risk scoring. These limitations motivate the need for Explainable AI (XAI) frameworks that provide human-interpretable reasoning without sacrificing predictive performance. This paper proposes a unified, model-agnostic explainable machine learning framework tailored for high-stakes prediction tasks. The system employs predictive models such as Random Forest, XGBoost, and LSTM for structured and longitudinal clinical data, integrated with XAI methods including SHAP, LIME, attention visualization, and counterfactual reasoning to generate both global and instance-level explanations. To enhance explanation reliability, the framework incorporates stability analysis, imbalance-aware training, and a composite trust scoring mechanism validated by domain experts. The approach aims to improve transparency, support clinician and analyst decision-making, and enable safer, auditable deployment of AI in medical prediction pipelines. Experimental results from existing research demonstrate that combining high-accuracy ML with robust explanation layers significantly improves stakeholder trust and practical adoption, positioning the framework as a step toward responsible and interpretable predictive intelligence in real-world applications.
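The instance-level, model-agnostic attribution the abstract describes can be illustrated with a minimal occlusion-style sketch: score each feature by how much the model's output changes when that feature is replaced by a baseline value. This is far simpler than SHAP or LIME and is only meant to convey the idea of perturbation-based explanation.

```python
def feature_importance(model, x, baseline=0.0):
    """model: callable taking a feature list and returning a float.
    Returns |f(x) - f(x with feature i set to baseline)| per feature,
    an occlusion-style stand-in for SHAP/LIME attributions."""
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # occlude one feature at a time
        scores.append(abs(base_pred - model(perturbed)))
    return scores
```

For a linear model the scores recover each feature's contribution exactly; for the nonlinear models named above (Random Forest, XGBoost, LSTM), proper SHAP values account for feature interactions that single-feature occlusion misses.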
