IJSRET Blog Archives

Active Cell Balancing For Efficient Battery Management System

Authors: Ms. Nirmala R G, Pratap K V, Nithilan I

Abstract: The growing adoption of electric vehicles (EVs), renewable-energy microgrids, and portable power systems has intensified the need for efficient and reliable battery management strategies. Conventional passive balancing circuits in lithium-ion battery packs dissipate excess energy as heat, resulting in low efficiency, poor scalability, and thermal stress. This paper presents an Active Cell Balancing Battery Management System (ACB-BMS) employing a bidirectional buck–boost converter topology integrated with an Extended Kalman Filter (EKF)-based state-of-charge (SOC) estimation algorithm. The system dynamically redistributes charge between cells, achieving faster equalization and significantly reduced energy loss compared with resistor-based methods. The EKF enables accurate real-time tracking of each cell’s SOC, improving safety and charge control under varying load and temperature conditions. A complete MATLAB/Simulink simulation model of the proposed system has been developed and validated, demonstrating superior voltage uniformity, faster balancing response, and enhanced energy efficiency. The proposed approach forms a practical foundation for next-generation intelligent BMS architectures suitable for electric vehicles and hybrid renewable-energy storage. Future hardware implementation is planned to extend the technology toward commercial-grade embedded platforms.
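
The abstract pairs active balancing with EKF-based SOC tracking; the sketch below shows the single-cell core of such an estimator, with Coulomb counting as the process model and a terminal-voltage reading as the measurement. The linear OCV curve, capacity, resistance, and noise covariances are illustrative assumptions, not parameters from the paper (whose model is built in MATLAB/Simulink).

```python
# Minimal single-state EKF sketch for SOC tracking. All parameters below are
# illustrative assumptions, not values from the paper.
import numpy as np

Q_AH = 2.5          # assumed cell capacity [Ah]
R0   = 0.05         # assumed ohmic resistance [ohm]
DT   = 1.0          # sample time [s]
Qn, Rn = 1e-7, 1e-3 # assumed process / measurement noise covariances

def ocv(soc):                 # toy linear open-circuit-voltage curve
    return 3.0 + 1.2 * soc

def docv_dsoc(soc):           # measurement Jacobian dOCV/dSOC
    return 1.2

def ekf_step(soc, P, current, v_meas):
    # Predict: Coulomb counting (discharge current is positive)
    soc_pred = soc - DT * current / (3600.0 * Q_AH)
    P_pred = P + Qn
    # Update: compare predicted terminal voltage with the measurement
    H = docv_dsoc(soc_pred)
    v_pred = ocv(soc_pred) - current * R0
    K = P_pred * H / (H * P_pred * H + Rn)   # Kalman gain (scalar state)
    soc_new = soc_pred + K * (v_meas - v_pred)
    P_new = (1.0 - K * H) * P_pred
    return float(np.clip(soc_new, 0.0, 1.0)), P_new

# Example: track SOC during a constant 1 A discharge with noisy voltage readings
soc_est, P, true_soc = 0.9, 1e-4, 0.8
for _ in range(100):
    v = ocv(true_soc) - 1.0 * R0 + np.random.normal(0, 0.005)
    soc_est, P = ekf_step(soc_est, P, 1.0, v)
    true_soc -= DT * 1.0 / (3600.0 * Q_AH)
print(f"estimated SOC = {soc_est:.3f}")
```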

Percolation Threshold Estimation Via Probabilistic Bounds And Simulation

Authors: Hanumesha S T

Abstract: Percolation theory provides a mathematically elegant and practically powerful framework for modeling connectivity transitions in random media, with applications ranging from porous materials and composite conductivity to epidemics, network robustness, and transport in disordered systems. A central quantity is the percolation threshold p_c, the critical occupation probability at which macroscopic connectivity emerges with nontrivial scaling. Although p_c is known exactly for a few planar cases and lattices, many practical scenarios require estimation under finite-size, boundary, and uncertainty constraints. This paper develops a rigorous and computation-oriented methodology for percolation threshold estimation that couples (i) probabilistic inequalities and bracketing arguments (crossing probabilities, monotonicity, sharp-threshold heuristics, and finite-size scaling), with (ii) simulation-based estimators (spanning probability curves, union-find connectivity, confidence intervals, and extrapolation). We emphasize a "two-engine" approach: bounds that constrain plausible threshold locations and simulation that refines the estimate while quantifying uncertainty. We also introduce an uncertainty-aware parameterization using intuitionistic fuzzy sets and (hyper)graph abstractions to represent ambiguous occupancy mechanisms and heterogeneous coupling patterns; this is motivated by real settings where the effective "open probability" is not a crisp scalar but a range informed by measurement noise or multi-factor criteria. The final manuscript provides a Word-ready, mathematics-forward exposition, with figures and tables embedded to illustrate lattice configurations, spanning curves, scaling collapse, and probabilistic bracketing.

DOI: https://doi.org/10.5281/zenodo.18092138
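
As a rough illustration of the simulation engine described in the abstract, here is a minimal site-percolation Monte Carlo on an L x L square lattice: union-find tracks connectivity, virtual top and bottom nodes detect spanning, and sweeping p traces the spanning-probability curve whose crossing brackets p_c. Lattice size, trial count, and the p-grid are arbitrary choices for this sketch.

```python
# Site percolation with union-find connectivity and a spanning-probability
# estimate at each occupation probability p.
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def spans(L, p):
    """One trial: does an open cluster connect the top row to the bottom row?"""
    open_site = [random.random() < p for _ in range(L * L)]
    top, bottom = L * L, L * L + 1          # virtual source / sink nodes
    parent = list(range(L * L + 2))
    for i in range(L * L):
        if not open_site[i]:
            continue
        r, c = divmod(i, L)
        if r == 0: union(parent, top, i)
        if r == L - 1: union(parent, bottom, i)
        for dr, dc in ((1, 0), (0, 1)):      # link open right/down neighbours
            nr, nc = r + dr, c + dc
            if nr < L and nc < L and open_site[nr * L + nc]:
                union(parent, i, nr * L + nc)
    return find(parent, top) == find(parent, bottom)

def spanning_probability(L, p, trials=200):
    return sum(spans(L, p) for _ in range(trials)) / trials

# Scan p and bracket where the spanning curve crosses 1/2; for 2D site
# percolation on the square lattice this should land near p_c = 0.5927.
for p in (0.55, 0.57, 0.59, 0.61, 0.63):
    print(p, spanning_probability(L=64, p=p))
```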

Analysis and Design of Structures with High Performance Concrete

Authors: Vishal Ranjan, Dr. Jyoti Yadav

Abstract: High-Performance Concrete (HPC) is an advanced form of cement concrete where ingredients are selected and proportioned to enhance various properties of the concrete in both fresh and hardened states. One key feature of HPC is its higher strength, which offers significant structural advantages. The primary components contributing to the cost of a structural member are concrete, steel reinforcement, and formwork. This paper compares these components when higher-grade concrete, specifically HPC, is used, and highlights how high-strength concrete provides the most economical solution for designing load-bearing members, particularly in carrying vertical loads to the building foundation through columns. The mix design variables critical to concrete strength include the water-cementitious material ratio, total cementitious material, cement-admixture ratio, and superplasticizer dosage, which are analyzed to achieve the desired high-grade concrete mix.

DOI: https://doi.org/10.5281/zenodo.18092124
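
The abstract lists the mix-design ratios that drive HPC strength; this small sketch simply computes them for an assumed trial batch. All quantities (in kg per cubic metre) are invented for illustration, not a mix from the paper.

```python
# Trial-batch quantities [kg/m^3]; values are illustrative assumptions.
batch = {"cement": 450.0, "fly_ash": 50.0, "water": 140.0, "superplasticizer": 5.0}

cementitious = batch["cement"] + batch["fly_ash"]           # total cementitious material
w_cm = batch["water"] / cementitious                        # water-cementitious ratio
sp_dosage = 100.0 * batch["superplasticizer"] / cementitious  # SP dosage, % by mass

print(f"w/cm = {w_cm:.3f}, SP dosage = {sp_dosage:.2f}% of cementitious")
# -> w/cm = 0.280, typical of the low ratios used for high-grade mixes
```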


IJSRET Volume 5 Issue 1, Jan-Feb-2019

Graph Analytics For Network Topology Optimization

Authors: Muhammad Hakim

Abstract: The escalating complexity of global digital infrastructures, characterized by the convergence of 5G, massive IoT deployments, and hyperscale cloud-to-edge continuums, has rendered traditional linear network management models obsolete. At the heart of this complexity lies the network topology—the intricate map of nodes and interconnections that dictates the flow, latency, and resilience of data. This review article explores the paradigm shift toward Graph Analytics for Network Topology Optimization. Unlike traditional tabular data analysis, graph analytics treats the network as a native mathematical graph, where routers, switches, and endpoints are vertices, and the communication links are edges. This relational perspective allows for the discovery of structural properties—such as centrality, community clusters, and bottlenecks—that are invisible to classical monitoring. We categorize the core methodologies of graph-driven optimization, including the use of Graph Neural Networks (GNNs) for predictive traffic steering and PageRank-inspired algorithms for identifying critical infrastructure vulnerabilities. The article examines how graph analytics enables "Topological Resilience," allowing networks to autonomously reconfigure their structure in response to failures or shifting demand. Furthermore, the review addresses the critical challenges of processing massive-scale dynamic graphs in real time, the computational overhead of graph embeddings, and the necessity for explainable graph models in network operations. By synthesizing recent breakthroughs in spectral graph theory and combinatorial optimization, this paper provides a strategic roadmap for building "Self-Optimizing Topologies." The findings suggest that graph analytics is the foundational intelligence required to manage the "Relational Complexity" of the 6G era, ensuring that global networks are not just faster, but fundamentally more robust, efficient, and adaptive.

DOI: https://doi.org/10.5281/zenodo.19491714
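
As a toy version of the "PageRank-inspired" criticality ranking the abstract mentions, the sketch below runs plain power iteration over a small invented router topology; high-scoring nodes are the structurally central, failure-critical vertices. It assumes an undirected graph with no sink nodes and is a sketch of the idea, not the paper's algorithm.

```python
# Power-iteration PageRank over an adjacency-list graph.
def pagerank(adj, damping=0.85, iters=100):
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in adj}
        for v, neighbours in adj.items():
            share = damping * rank[v] / len(neighbours)  # assumes no sink nodes
            for u in neighbours:
                new[u] += share
        rank = new
    return rank

# Invented topology: R1 is a hub bridging two sub-networks.
topology = {
    "R1": ["R2", "R3", "R4"],
    "R2": ["R1", "R3"],
    "R3": ["R1", "R2"],
    "R4": ["R1", "R5"],
    "R5": ["R4"],
}
for node, score in sorted(pagerank(topology).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")   # R1 ranks highest: the critical vertex
```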

Behavioural Analytics For Insider Threat Detection Using Machine Learning

Authors: Ahmad Rizal

Abstract: Insider threats represent one of the most challenging cybersecurity risks, as they originate from individuals with legitimate access to organizational systems and data. Traditional security mechanisms often fail to detect such threats due to their reliance on signature-based or rule-based approaches that lack contextual awareness. Behavioral analytics, powered by machine learning (ML), has emerged as a transformative approach for identifying anomalous patterns indicative of insider misuse, fraud, or sabotage. This review explores the integration of behavioral analytics and ML techniques to enhance insider threat detection capabilities. By leveraging user activity logs, network traffic data, and system interactions, ML models can establish baseline behavioral profiles and identify deviations in real time. The study examines supervised, unsupervised, and hybrid learning approaches, highlighting their effectiveness in detecting both known and unknown threats. Additionally, it discusses feature engineering, data preprocessing, and the role of contextual information in improving detection accuracy. Challenges such as data imbalance, privacy concerns, adversarial behavior, and model interpretability are also critically analyzed. The review further explores emerging trends, including deep learning, graph-based analytics, and explainable AI, which are shaping next-generation insider threat detection systems. Ultimately, behavioral analytics powered by machine learning offers a proactive, context-aware foundation for defending organizations against insider threats.

DOI: https://doi.org/10.5281/zenodo.19491716
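
A minimal unsupervised example of the baseline-and-deviation idea the abstract describes, using scikit-learn's Isolation Forest (one of many possible choices; the paper surveys a much broader range of supervised, unsupervised, and hybrid methods). The session features and synthetic numbers are invented placeholders.

```python
# Fit an anomaly detector on per-session activity features and flag deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per session: [logins_per_day, MB_downloaded, after_hours_accesses]
baseline = rng.normal(loc=[5, 200, 1], scale=[1, 40, 0.5], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

sessions = np.array([
    [5,  210, 1],   # ordinary behaviour
    [6, 5000, 9],   # bulk download at night -> candidate insider event
])
for s, label in zip(sessions, model.predict(sessions)):
    print(s, "ANOMALY" if label == -1 else "normal")
```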

An Analysis of the Application of High-Performance Concrete in Building Structures

Authors: Vishal Ranjan, Dr. Jyoti Yadav

Abstract: High-Performance Concrete (HPC) has become an essential material in modern construction due to its superior mechanical properties, durability, and environmental benefits. This paper explores the use of HPC in building structures within India, with a focus on its performance, advantages, and the impact on the construction industry. By reviewing recent studies, case studies, and performance data, this research demonstrates the role of HPC in enhancing structural integrity, reducing maintenance costs, and contributing to sustainability. The paper also discusses the challenges and potential future directions for the use of HPC in India’s infrastructure development.

DOI: https://doi.org/10.5281/zenodo.18091902

Early Alzheimer’s Disease Prediction Using Machine Learning And Deep Learning Algorithms

Authors: Ms. Dhanushni N, Ms. Vivisha Catherin P

Abstract: Alzheimer’s disease (AD) is a pressing global health issue and a severe neurodegenerative disorder. It progressively damages brain cells, leading to the permanent memory loss known as dementia. Many people die from the disease every year; it is not curable, but early detection can slow its progression. Alzheimer’s is most commonly found in the elderly (aged 60 and above). This motivates an efficient, automated system that can detect the disease and classify it by stage: Mild Demented (MD), Moderate Demented (MOD), Non Demented (ND), and Very Mild Demented (VMD). For prediction, we consider machine learning and deep learning algorithms such as convolutional neural networks (CNNs) for imaging data, Random Forest, gradient boosting (XGBoost/LightGBM), and Support Vector Machines (SVM), which are more efficient than pre-existing Alzheimer’s detection models. Rather than relying on models such as Random Forest and XGBoost, which typically use fixed structures and manual feature selection, we take a more advanced approach: transfer learning through the InceptionV3 network, pre-trained on ImageNet for its robust feature-extraction abilities. To help the model cope with limited datasets, we integrate data augmentation methods such as adjusting image angles and proportions along with mirroring. We address class imbalance by adjusting class weights so the model focuses on identifying Alzheimer’s cases accurately. In addition, we apply dropout regularization, early stopping, and model checkpointing to prevent the model from learning noise and to improve generalization. This holistic strategy yields a model that reduces both false positives and false negatives, which is crucial for accurate medical diagnosis.
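
To make the training recipe concrete, here is a hedged Keras sketch of the setup the abstract outlines: a frozen ImageNet-pretrained InceptionV3 backbone, a four-class head, class weighting, dropout, early stopping, and checkpointing. All hyperparameters and the class-weight values are illustrative assumptions, and the fit call is commented out because it needs the MRI dataset.

```python
import tensorflow as tf

NUM_CLASSES = 4   # MD, MOD, ND, VMD

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False            # transfer learning: freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),             # regularization per the abstract
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
]
# Class weights counteract stage imbalance; the numbers below are placeholders
# to be computed from the actual class frequencies in the dataset.
class_weight = {0: 1.0, 1: 4.0, 2: 0.5, 3: 1.5}

# model.fit(train_ds, validation_data=val_ds, epochs=30,
#           class_weight=class_weight, callbacks=callbacks)
```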

Operational Graph Patterns For Continuity And Fulfillment In Large Enterprises: A Field-Based Reference Architecture

Authors: Mallesh Miryala

Abstract: Large organizations run on operational data that changes every hour: people join and leave, locations are renamed, incidents unfold, and responsibilities shift. In practice, the hardest part is not storing records; it is keeping the records consistent enough that policy decisions and workflows remain trustworthy. This paper proposes a practical design pattern called the policy-aware operational graph. The pattern treats people, organizational units, locations, requests, and tasks as a connected graph with explicit ownership and audit history. It combines three ideas that are often built separately: identity lifecycle management, rule-driven routing, and cross-system transaction safety. The design is informed by field experience maintaining a continuity platform at a large public university and building high-volume fulfillment workflows at a national telecom. The paper contributes a reference architecture, a repeatable identity hygiene loop for key contacts, and an efficient duplicate-detection method that routes only uncertain cases to human review. A small reference implementation is provided to demonstrate how blocking keys and union-find can scale to large datasets without excessive memory or quadratic comparisons.
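
The blocking-plus-union-find method gets a compact illustration below: a blocking key (the first letters of the surname) limits comparisons to within-block pairs, a string-similarity score merges confident matches via union-find, and mid-range scores are queued for human review. Records, keys, and thresholds are invented; this is a sketch of the same idea, not the paper's reference implementation.

```python
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations

people = [
    {"id": 1, "name": "Jane Doe",   "email": "jdoe@example.edu"},
    {"id": 2, "name": "Jane  Doe",  "email": "jane.doe@example.edu"},
    {"id": 3, "name": "John Smith", "email": "jsmith@example.edu"},
]

def block_key(rec):
    """Blocking key: first 4 letters of the normalized surname."""
    return rec["name"].split()[-1].lower()[:4]

parent = {r["id"]: r["id"] for r in people}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

blocks = defaultdict(list)
for rec in people:
    blocks[block_key(rec)].append(rec)

review_queue = []
for recs in blocks.values():                 # compare only within a block
    for a, b in combinations(recs, 2):
        score = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
        if score > 0.9:                      # confident match: merge clusters
            parent[find(a["id"])] = find(b["id"])
        elif score > 0.7:                    # uncertain: route to human review
            review_queue.append((a["id"], b["id"], round(score, 2)))

print({r["id"]: find(r["id"]) for r in people})   # cluster labels
print("needs review:", review_queue)
```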

Face Recognition Voting System

Authors: Roshni S, Sudarshana K

Abstract: Face recognition technology has emerged as a powerful biometric solution capable of enhancing the security and efficiency of modern voting systems. Traditional voting mechanisms, including paper ballots and manual electronic verification, face numerous challenges such as voter impersonation, multiple voting, long verification times, and susceptibility to human error. In recent years, the rapid advancement of artificial intelligence and machine learning has enabled more accurate and scalable facial recognition systems, making them suitable for large-scale applications such as elections. This paper presents an in-depth study of a face recognition–based voting system, discussing its conceptual design, system architecture, methodology, security mechanisms, performance considerations, advantages, limitations, ethical implications, and future scope. The study concludes that while face recognition technology has significant potential to improve election integrity and voter convenience, successful implementation requires robust privacy protection, legal frameworks, and public trust.
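
The verification step at the heart of such a system typically reduces to comparing a live face embedding against the enrolled one; this sketch shows that comparison with cosine similarity. The embedding vectors and the threshold are made-up placeholders; a real deployment would obtain embeddings from a trained face-recognition model.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = {"voter_042": np.array([0.12, 0.88, 0.45, 0.31])}  # stored at registration
live = np.array([0.10, 0.90, 0.43, 0.30])                     # captured at the booth

THRESHOLD = 0.6   # placeholder decision threshold
sim = cosine_similarity(enrolled["voter_042"], live)
print("identity verified, ballot unlocked" if sim > THRESHOLD
      else "route to manual verification")
# A real system would also mark the voter as having voted, blocking the
# duplicate-ballot failure mode the abstract highlights.
```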

Self-Healing Cloud Infrastructure Using Digital Immune Systems

Authors: Shrihari.G, Abilash.R

Abstract: Modern cloud infrastructures host large numbers of distributed services and microservices, where failures and attacks can propagate rapidly across virtual machines, containers, and orchestration layers. In this setting, static, signature-driven defenses are insufficient to maintain availability and resilience. Inspired by the biological immune system (BIS), this paper presents a self-healing cloud infrastructure framework that applies second-generation Digital Immune System (DIS) principles to detect, contain, and recover from process-level anomalies in real time. The approach treats cloud nodes and services as components of a larger artificial organism, embedding immune-like agents throughout the stack rather than relying solely on perimeter defense. At the core of the framework is a biologically plausible, multi-layered cellular signalling architecture for process anomaly detection. Building on Matzinger’s Danger Theory, the system moves beyond simple self/non-self discrimination by combining “danger signals” such as abnormal syscall patterns, privilege escalation attempts, and volatile resource usage with “safe signals” derived from stable workload and performance baselines. Specialized artificial cell populations—Dendritic Cells (aDCs), T-Helper Cells (T_H), and B-Cells—are instantiated as distributed agents within a cloud-aware middleware. aDCs aggregate local evidence on each node, T_H cells perform distributed consensus across nodes and services, and B-Cells maintain memory detectors that rapidly recognize previously observed attack strategies. These immune agents communicate over a virtual cytokine bus, enabling spatial-temporal correlation of signals across containers, virtual machines, and availability zones. When coordinated danger levels exceed adaptive thresholds, the framework triggers self-healing actions such as throttling or isolating compromised containers, rolling back affected service instances, or re-provisioning clean replicas through the underlying orchestration platform. Evaluation on syscall-level datasets and realistic exploit scenarios indicates that the proposed DIS-based controller can distinguish normal from attack behaviour with high accuracy while imposing minimal overhead, and that its coordinated responses significantly reduce both time-to-detection and time-to-recovery compared to baseline policies. The work demonstrates that biologically inspired, multi-agent immunity can provide a practical foundation for self-healing cloud infrastructure capable of adapting alongside evolving threats.
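
As a pocket-sized illustration of the danger-theory fusion step described above, the sketch below has a single node agent (standing in for an aDC) combine weighted "danger" and "safe" signals into a bounded danger level and trigger a containment action past a threshold. Signal names, weights, and the fixed threshold are invented; the paper's system uses adaptive thresholds and distributed consensus across agents.

```python
def danger_level(signals, weights):
    """Weighted danger evidence minus safe evidence, clamped to [0, 1]."""
    danger = sum(weights[k] * v for k, v in signals["danger"].items())
    safe = sum(weights[k] * v for k, v in signals["safe"].items())
    return max(0.0, min(1.0, danger - safe))

weights = {"abnormal_syscalls": 0.5, "priv_escalation": 0.4, "resource_spike": 0.2,
           "stable_baseline": 0.4, "healthy_latency": 0.2}

node_signals = {
    "danger": {"abnormal_syscalls": 0.9, "priv_escalation": 1.0, "resource_spike": 0.6},
    "safe":   {"stable_baseline": 0.1, "healthy_latency": 0.2},
}

THRESHOLD = 0.6   # in the full system this adapts to workload history
level = danger_level(node_signals, weights)
action = "isolate container + re-provision clean replica" if level > THRESHOLD else "observe"
print(f"danger level = {level:.2f} -> {action}")
```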

Cloud-Native Intelligent Healthcare Data Management Framework

Authors: Deeksha M, Subhiksha N

Abstract: The rapid growth of healthcare data generated by electronic health records, medical imaging systems, wearable sensors, and telemedicine platforms has created unprecedented challenges for healthcare data management. Conventional on-premise infrastructures are increasingly unable to support the scalability, interoperability, and analytical intelligence required by modern healthcare ecosystems. Cloud computing has emerged as a promising alternative; however, its adoption in healthcare remains limited due to concerns regarding data security, regulatory compliance, interoperability, and performance reliability. This paper proposes a cloud-native intelligent healthcare data management framework that integrates secure data ingestion, standards-based interoperability, artificial intelligence–driven analytics, and automated compliance governance within a hybrid or multi-cloud environment. The framework is designed to support heterogeneous healthcare data sources while maintaining privacy, regulatory adherence, and real-time responsiveness. A detailed architectural design, data flow model, security mechanisms, and use-case-driven analysis are presented. The proposed solution demonstrates how cloud-native principles can enable scalable, secure, and intelligent healthcare data management suitable for next-generation digital health systems.
