IJSRET » Blog Archives

Author Archives: vikaspatanker

Evaluating The Performance Of Supervised Multiple Linear Regression Machine Learning Algorithm In Predicting The Ampacity Of Overhead Transmission Lines

Uncategorized

Authors: Kemudeme Sunday Effiong, Hachimenum Nyebuchi Amadi, Biobele A. Wokoma, Richeal Chinaeche Ijeoma

Abstract: This study examines the overhead transmission line ampacity prediction performance of a supervised multiple linear regression machine learning algorithm integrated with the IEEE-738 heat balance equation, using ten years of historical data from the Nigerian Meteorological Agency (NiMet) and operational data from the Transmission Company of Nigeria (TCN) Afam network in a Python environment. Key meteorological factors included ambient temperature, wind velocity, solar radiation, and air pressure, while conductor properties such as emissivity and age were also considered. The aim was to evaluate the performance of the supervised multiple linear regression algorithm in predicting the dynamic ampacity of overhead transmission lines. This was achieved by first deriving the ampacity under different weather and line conditions, then deploying the algorithm for real-time dynamic line rating (DLR) prediction to determine its accuracy and speed based on the performance metrics. The IEEE-738 heat balance ampacity derivation results showed that the 450A-rated conductors had ampacities between 309A and 1406A (62% to 312% of the rated value) while the 630A-rated lines ranged from 380A to 1897A (60% to 301%), implying that, depending on the weather conditions and other parameters, the dynamic ampacity of overhead transmission lines can rise to as much as 212% above the rated values of the lines’ conductors or fall to about 40% below them. On the other hand, the prediction results of the multiple linear regression machine learning algorithm showed a coefficient of determination of 0.8912, a standard deviation of 0.0021, a Root Mean Squared Error (RMSE) of 56.03, a Mean Squared Error (MSE) of 3139.32, and a Mean Absolute Error (MAE) of 39.64 within a computing time of 0.9 seconds. While the prediction speed is very good, it is recommended that other supervised machine learning algorithms be deployed with the same data to compare their prediction accuracy.
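The evaluation pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic features (ambient temperature, wind velocity, solar radiation, air pressure) and the invented target relationship stand in for the NiMet/TCN data, which is not public, and the fit uses ordinary least squares with the same metrics the study reports (R², MSE, RMSE, MAE).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(20, 40, n),     # ambient temperature (deg C)
    rng.uniform(0, 10, n),      # wind velocity (m/s)
    rng.uniform(0, 1000, n),    # solar radiation (W/m^2)
    rng.uniform(990, 1020, n),  # air pressure (hPa)
])
# Synthetic ampacity target: wind cooling raises it, heat inputs lower it.
y = 900 - 12 * X[:, 0] + 60 * X[:, 1] - 0.15 * X[:, 2] + rng.normal(0, 40, n)

# Ordinary least squares fit of the multiple linear regression model.
A = np.column_stack([np.ones(n), X])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

resid = y - pred
mse = float(np.mean(resid ** 2))
rmse = mse ** 0.5
mae = float(np.mean(np.abs(resid)))
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

print(f"R^2={r2:.4f}  MSE={mse:.2f}  RMSE={rmse:.2f}  MAE={mae:.2f}")
```

On real data the same metrics would be computed on a held-out test split rather than on the training set.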

DOI: https://doi.org/10.5281/zenodo.19109133

 

Published by:

AI-Based Voting System Using Face Recognition

Uncategorized

Authors: S. Vimala, Dr. M. Senthilkumar, Abishek Winston I, Santhosh Kumar T, Sivakumar P

Abstract: An AI-based Online E-Voting System is developed to provide a secure, transparent, and reliable digital voting mechanism by integrating face recognition techniques with Java and SQL-based processing. The system authenticates voters by capturing live facial images and comparing them with registered facial data using machine learning and computer vision methods to prevent impersonation and duplicate voting. It validates voter eligibility, enforces one-time voting through database constraints, and securely records votes to ensure data integrity and accuracy. Users interact with the system through a user-friendly interface where voter registration, authentication, and vote casting are performed seamlessly. The backend application processes voting requests, manages election data, and automates vote counting and result generation. By leveraging AI-driven facial authentication instead of traditional credential-based verification, the system enhances election security and minimizes manual intervention. The proposed framework improves the efficiency, trustworthiness, and scalability of online voting systems and supports fair and reliable elections in institutional and organizational environments.
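One concrete mechanism the abstract mentions, enforcing one-time voting through database constraints, can be illustrated in a few lines. This is a hypothetical sketch (the table and column names are invented, and SQLite via Python's standard library stands in for the system's SQL backend): a UNIQUE constraint on the voter identifier makes the database itself reject a second vote.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE votes (
        voter_id  TEXT NOT NULL UNIQUE,   -- one row per voter: blocks duplicates
        candidate TEXT NOT NULL,
        cast_at   TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def cast_vote(voter_id, candidate):
    """Record a vote; return False if this voter has already voted."""
    try:
        with conn:
            conn.execute(
                "INSERT INTO votes (voter_id, candidate) VALUES (?, ?)",
                (voter_id, candidate),
            )
        return True
    except sqlite3.IntegrityError:        # UNIQUE violation: already voted
        return False

print(cast_vote("V-001", "Candidate A"))  # first vote accepted
print(cast_vote("V-001", "Candidate B"))  # duplicate rejected by the constraint
```

Putting the rule in the schema rather than the application layer means duplicate voting stays impossible even if several application servers handle requests concurrently.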

DOI: https://doi.org/10.5281/zenodo.19107029

 

Published by:

A Systematic Review Of Explainable Artificial Intelligence Techniques For Trustworthy Machine Learning Systems

Uncategorized

Authors: Dr M. Lavanya, Monisha B, Monika. G

Abstract: As machine learning models become increasingly predictive, their lack of transparency threatens trust in high-risk domains like healthcare, finance, and civil infrastructure. Explainable AI research therefore mainly deals with the challenges of making model behaviors and decision processes interpretable. This systematic review, carried out using the PRISMA 2020 statement, examines 89 peer-reviewed Q1 and Q2 journal articles published from 2018 to 2025 and identifies fourteen different XAI techniques. The leading approach in the literature is post-hoc explainability (82%), while SHAP and LIME are the most widely adopted XAI techniques, especially in healthcare applications at 28%. Other model-specific techniques include the Grad-CAM method and attention mechanisms, which find wide application in computer vision and natural language processing tasks. Going beyond descriptive synthesis, this review proposes an integrated hybrid framework for explainability that combines SHAP with counterfactual explanations, enhancing interpretability, actionability, and user trust. The review further identifies key gaps in current research: (i) absence of causal reasoning mechanisms, (ii) lack of uniform evaluation metrics, and (iii) limited human-centered validation. Directions for further study are discussed, oriented toward causal XAI, federated and privacy-preserving explainability, and neurosymbolic hybrid models.
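For readers unfamiliar with what SHAP actually computes, the quantity it approximates can be shown exactly on a toy model. This sketch is illustrative only (the three-feature linear model, its weights, and the baseline input are invented, and it is not the review's proposed framework): it brute-forces the Shapley value of each feature over all feature coalitions, which is feasible here because there are only three features.

```python
from itertools import combinations
from math import factorial

weights = [2.0, -1.0, 0.5]   # hypothetical linear model f(x) = w . x
baseline = [1.0, 1.0, 1.0]   # background/reference input
x = [3.0, 0.0, 5.0]          # instance being explained

def f(values):
    return sum(w * v for w, v in zip(weights, values))

def value(coalition):
    # Features in the coalition take the instance's value, others the baseline.
    mixed = [x[i] if i in coalition else baseline[i] for i in range(len(x))]
    return f(mixed)

def shapley(i, n=3):
    """Exact Shapley value of feature i: weighted marginal contributions."""
    total = 0.0
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (value(set(S) | {i}) - value(set(S)))
    return total

phi = [shapley(i) for i in range(3)]
print(phi)                              # per-feature contributions
print(sum(phi), f(x) - f(baseline))     # they sum to f(x) - f(baseline)
```

The brute-force sum is exponential in the number of features, which is why practical SHAP implementations rely on sampling or model-specific shortcuts.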

DOI: https://doi.org/10.5281/zenodo.19106811


Exploring The Impacts Of Artificial Intelligence On Urban Sustainability And Efficiency In Smart Cities

Uncategorized

Authors: Mr. Piyush Mohan, Ms. Reshu Bhardwaj

Abstract: Cities all over the world are under pressure as they rapidly urbanize, which raises many issues in transportation, energy consumption, waste management, and sustainability. Smart cities have become a new frontier for urban infrastructure modernization and technology integration. At the core of this vision is artificial intelligence (AI), which supports increased efficiency and sustainability in these cities. In this paper, we study the potential advantages AI tools can bring to sustainable urban progress and operational efficiency in smart cities, across their transportation, energy, waste, city planning, and general industries. AI can enhance economic development and citizens’ quality of life, and take on roles that governments may otherwise perform, to improve safety and ease of living in cities. It enables residents to take an active part in controlling their own homes, managing their waste collection and disposal, and monitoring traffic flow. This research examines AI’s influence on sustainable development in areas such as smart transport infrastructure, healthcare services, residential management, industry, energy use, agriculture, urban governance arrangements, and education. It also touches on the benefits and drawbacks of AI’s role in urban governance and where it is headed. The findings demonstrate the possibilities this creates, such as optimized use of urban resources, reduced environmental footprints, and effective service enhancement through AI applications; however, these effects must be weighed against data privacy rights, infrastructure investment, and ethical considerations if AI is to be integrated successfully into such a high-value environment.

DOI: https://doi.org/10.5281/zenodo.19105176


A Resilient Multi-Cloud Intelligence Layer For Modern Enterprises: Coordinating AI, Microservices, And ERP-Based Workforce Platforms At Scale

Uncategorized

Authors: Kai Lorenz, Elena Kovarik, Mateo Serrano, Tariq Al-Nadim, Ananya Kulkarni

Abstract: Distributed enterprise infrastructures increasingly connect operational applications, workforce management platforms, and analytical services across multiple cloud environments. Coordinating these interconnected systems while maintaining reliability, scalability, and intelligent decision support presents significant engineering challenges for large organizations. Conventional enterprise architectures frequently depend on tightly coupled systems and centralized analytical platforms that struggle to manage rapidly evolving services deployed across hybrid and multi cloud infrastructures. This study introduces a resilient multi cloud intelligence layer designed to coordinate artificial intelligence services, microservice based applications, and ERP driven workforce platforms within large scale enterprise ecosystems. The proposed architecture establishes an intermediary intelligence layer that aggregates operational data streams, orchestrates service communication across distributed cloud environments, and enables predictive analytics capabilities to operate directly alongside operational systems. Microservices provide modular and scalable service components that support flexible integration between enterprise applications, while containerized deployment models ensure portability across cloud infrastructures. Artificial intelligence models integrated within the intelligence layer analyze operational signals to support workforce optimization, operational forecasting, and anomaly detection across enterprise processes. The framework also incorporates resilience mechanisms such as distributed service orchestration, automated scaling, and cross cloud workload coordination to maintain operational continuity under dynamic workloads. 
By integrating microservices architecture, machine learning driven analytics, and ERP based workforce management platforms within a unified multi cloud intelligence framework, the proposed approach enables organizations to transform fragmented enterprise infrastructures into coordinated intelligent ecosystems capable of supporting scalable operations and continuous analytical insight.


From Transactions To Intelligence: Engineering Data-Centric ERP Ecosystems With Streaming Analytics, DevOps Automation, And Predictive Modeling

Uncategorized

Authors: Daniel Sørensen, Hiroshi Nakamura, Dr. Matteo Rinaldi, Elena Petrova, Ananya Kulkarni

Abstract: Large organizational information systems were historically designed to record and process structured transactions across business functions such as finance, supply chain, and human resources. While these systems ensured operational consistency and data reliability, their architecture primarily focused on transaction processing rather than continuous intelligence generation. Growing data volumes, distributed digital infrastructures, and the need for rapid decision making now require ERP environments to evolve beyond batch reporting and static analytics. This research presents an engineering framework for transforming transaction oriented ERP systems into data centric intelligence ecosystems through the integration of streaming analytics, DevOps automation, and predictive modeling. The proposed architecture enables continuous data ingestion, real time analytics pipelines, automated deployment of analytical services, and embedded predictive intelligence capable of supporting proactive operational decisions. By integrating data engineering principles with scalable analytics infrastructures, the framework demonstrates how operational data streams can be converted into actionable insights that improve forecasting accuracy, operational efficiency, and organizational responsiveness. The study contributes a unified approach for designing ERP ecosystems that support both reliable transaction processing and continuous intelligence generation within complex digital enterprises.
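The shift from batch reporting to continuous intelligence that the abstract describes can be made concrete with a tiny streaming aggregate. This is a hypothetical sketch (the transaction amounts, window size, and threshold are invented, and a Python generator stands in for a real streaming pipeline): each incoming ERP transaction updates a sliding-window total that is checked against a threshold as it arrives, rather than in an end-of-day batch.

```python
from collections import deque

def windowed_anomalies(stream, window=5, threshold=2000.0):
    """Yield (index, window_sum, is_anomaly) for each incoming transaction."""
    buf = deque(maxlen=window)        # retains only the last `window` amounts
    for i, amount in enumerate(stream):
        buf.append(amount)
        total = sum(buf)
        yield i, total, total > threshold

# Simulated stream of transaction amounts arriving one at a time.
transactions = [120.0, 450.0, 300.0, 980.0, 210.0, 1500.0, 90.0]
flags = [flag for _, _, flag in windowed_anomalies(transactions)]
print(flags)
```

Because the generator consumes one record at a time and keeps only a bounded window, the same pattern scales to unbounded streams, which is the core property batch reporting lacks.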

DOI: https://doi.org/10.5281/zenodo.19105020


Adaptive Query Intelligence: AI-Enabled Optimization Strategies For High-Volume SQL And NoSQL Processing In Regulated Industries

Uncategorized

Authors: Dr. Matteo Rinaldi, Hiroshi Nakamura, Elena Petrova, Daniel Sørensen, Ananya Kulkarni

Abstract: This paper explores how machine learning–driven query optimization can elevate the performance, scalability, and operational resilience of SQL and NoSQL database systems deployed in high-volume financial and healthcare environments. Conventional rule-based and cost-based optimizers frequently encounter limitations when confronted with volatile workloads, uneven data distributions, and rapidly shifting access behaviors that define contemporary transaction processing and clinical data infrastructures. The central inquiry of this study examines whether adaptive, data-aware optimization models—trained on historical execution traces, telemetry signals, and workload metadata—can deliver superior efficiency and stability in such dynamic contexts. The research employs a blended methodological approach that integrates architectural framework design, algorithmic prototyping, and comparative benchmarking across representative relational and non-relational database platforms operating under large-scale transactional and analytical loads. Empirical evaluation indicates that learning-enabled optimizers meaningfully lower query response times, improve compute and memory utilization, and enhance predictability during peak data surges when compared to traditional strategies. Core contributions include the development of predictive cost estimation models, context-aware index adaptation mechanisms, and real-time execution plan adjustments powered by supervised and reinforcement learning paradigms. Collectively, the study advances the theoretical foundations of intelligent data management by embedding adaptive learning into optimization workflows, while offering practical guidance for engineering robust, high-throughput database infrastructures capable of sustaining accuracy, compliance, and responsiveness in mission-critical financial and healthcare systems.
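The reinforcement-style plan adjustment the abstract mentions can be illustrated with the simplest possible learner. This sketch is invented for illustration (the plan names, latency distributions, and epsilon value are assumptions, not the paper's method): an epsilon-greedy bandit observes the latency of each executed plan and gradually routes most queries to the plan that has proven faster on the current workload.

```python
import random

random.seed(0)

def run_plan(plan):
    # Simulated latency (ms): the "index_scan" plan is faster on this workload.
    return random.gauss(12, 2) if plan == "index_scan" else random.gauss(30, 5)

plans = ["index_scan", "full_scan"]
avg = {p: 0.0 for p in plans}     # running mean latency per plan
count = {p: 0 for p in plans}     # how often each plan was chosen

for step in range(500):
    if random.random() < 0.1:                      # explore occasionally
        plan = random.choice(plans)
    else:                                          # exploit the fastest so far
        plan = min(plans, key=lambda p: avg[p] if count[p] else 0.0)
    latency = run_plan(plan)
    count[plan] += 1
    avg[plan] += (latency - avg[plan]) / count[plan]   # incremental mean update

print({p: round(avg[p], 1) for p in plans}, count)
```

The residual exploration matters: if the workload shifts and the neglected plan becomes faster, the occasional random trials let the learner detect and adapt to the change.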

DOI: https://doi.org/10.5281/zenodo.19104981


Digital Nervous Systems For Enterprises: Integrating IoT, Big Data, And Artificial Intelligence Across SAP SuccessFactors And Cloud HCM Landscapes

Uncategorized

Authors: Sebastian Moreau, Yuki Matsumoto, Adrian Kovalenko, Matteo Ricci, Ananya Kulkarni

Abstract: Digital transformation in human capital management has created complex, distributed ecosystems in which employee data originates from connected devices, cloud platforms, transactional systems, and external intelligence services. Fragmented architectures limit the ability to sense patterns, contextualize signals, and coordinate timely action across SAP SuccessFactors and heterogeneous cloud HCM landscapes. This study introduces a digital nervous system architecture that integrates Internet of Things telemetry, scalable big data infrastructures, and artificial intelligence driven cognition into a unified sensing and response framework. The proposed model organizes system design into sensing layers for real time signal acquisition, transmission layers for streaming and synchronization, cognitive layers for predictive and prescriptive analytics, and response layers for coordinated orchestration across talent, payroll, performance, and compliance domains. A formal Enterprise Signal Latency Index is developed to quantify responsiveness across distributed platforms, alongside a Neural Stability Metric that measures adaptive coherence within the integrated HCM ecosystem. Through architectural modeling and scenario based evaluation, the research demonstrates reductions in signal propagation delay, improved anomaly detection accuracy, enhanced decision synchronization across platforms, and strengthened systemic resilience. The findings establish a scalable blueprint for constructing intelligent, continuously learning digital infrastructures that unify IoT, big data, and artificial intelligence within multi cloud human capital environments.

DOI: https://doi.org/10.5281/zenodo.19104930


Human Capital Systems In Motion: Designing Event-Driven, Machine Learning–Enabled ERP Architectures For Continuous Workforce Optimization

Uncategorized

Authors: Camila Duarte, Romain Delacroix, Arjun Mehta, Tobias Lindgren, Ananya Kulkarni

Abstract: Workforce systems are increasingly expected to operate as adaptive intelligence infrastructures rather than static repositories of employee transactions. Conventional ERP based human capital platforms were engineered to ensure data accuracy, compliance integrity, and standardized administrative workflows; however, their batch oriented processing logic and retrospective analytics limit organizational capacity to detect emerging workforce risks and coordinate timely interventions. This study proposes an event-driven, machine learning enabled ERP architecture that transforms human capital systems into continuously responsive ecosystems capable of real time sensing, predictive modeling, and automated orchestration. The framework integrates streaming event pipelines, dynamic feature engineering, embedded predictive and prescriptive models, and governance aligned control mechanisms within a unified enterprise environment. A novel Continuous Workforce Optimization Index is introduced to quantify adaptive stability across engagement momentum, performance variability, capacity distribution, and compliance consistency. Through architectural modeling and scenario based simulation, the research demonstrates measurable reductions in decision latency, improvements in prediction accuracy, enhanced intervention precision, and strengthened systemic resilience when compared with traditional batch driven ERP configurations. The findings establish a scalable blueprint for designing next generation human capital architectures that enable sustained, continuous workforce optimization in complex and rapidly evolving enterprise contexts.

DOI: https://doi.org/10.5281/zenodo.19104880


Robust Digital Foundation Architectures Supporting Enterprise Java And Spring Engineering

Uncategorized

Authors: Ramani Teegala

Abstract: This research examines the growing need for structured, reliable, and productivity focused engineering environments during a period when organizations were transitioning from traditional development models to more collaborative and tool integrated ecosystems. It investigates the fragmentation of workflows, uneven tool adoption, and the absence of standardized development practices that often constrained Java and Spring engineers, resulting in delivery delays, quality inconsistencies, and extended onboarding cycles. The research aims to establish a systematic approach for designing and implementing developer experience platforms that unify development workflows, automate repetitive tasks, and promote consistency across large engineering teams. Using a mixed methods approach that integrates qualitative interviews with engineering leaders and quantitative analysis of productivity indicators, the study identifies platform capabilities that most effectively enhance developer outcomes, including integrated build automation, standardized project templates, curated toolchains, and centralized knowledge resources. The findings show that well structured developer experience platforms reduce cognitive load, improve code quality, and accelerate delivery cycles while strengthening collaboration and engineering confidence. Academically, the work contributes a structured model for evaluating developer experience within Java and Spring ecosystems and provides a foundational basis for future research in platform centric software engineering. Strategically, it demonstrates how organizations can leverage such platforms to modernize engineering culture, reinforce architectural alignment, and achieve sustainable delivery performance. The study concludes that formalizing developer experience as an operational discipline holds long term significance for both industrial practice and academic advancement.

DOI: https://doi.org/10.5281/zenodo.19100556
