IJSRET » July 9, 2025

Daily Archives: July 9, 2025


Seismic Performance Analysis Of A G+11 Building Based On Live Earthquake Load Using ETABS Software

Authors: Keshav Kumar Ahirwar, Assistant Professor Mrs. Ankita Singhai, Mr. Rahul Satbhaiya

Abstract: A building must be able to withstand considerable ground vibration during construction and operation to be deemed earthquake-resistant, yet individual ground motions affect structural response in distinct ways. Time-history analysis is effective for buildings subjected to large ground vibrations: stepwise integration of a multi-degree-of-freedom (MDOF) system in the time domain captures the structure's response, a method that is advantageous even though it is time-consuming. Pushover analysis was developed to speed up the design and assessment of seismic structures. Pushover studies indicate that during seismic events structures vibrate mostly in the lower (early) modes. The MDOF system is therefore reduced to an equivalent single-degree-of-freedom (ESDOF) system using the characteristics revealed by nonlinear static analysis. A response spectrum analysis is then performed on the ESDOF system using nonlinear time-history analysis, damped analysis, or constant-ductility analysis, and modal relationships are used to translate ESDOF seismic demands into MDOF seismic demands. A model was created to show the overall behaviour of the RCC frame building. In seismic zone II, four G+11 concrete planar frames with four bays oriented in X and Y were designed in compliance with Indian codes, with five loading scenarios for each frame. Pushover analysis evaluates frames with different elevational irregularities under the same loading conditions, and the results differ from frame to frame. Each frame's capacity spectrum and pushover curve (base shear versus displacement) are calculated and evaluated. STAAD.Pro V8i was used to analyse the non-linear response of the RCC frames under seismic zone II loading, and infill-wall frames were compared with bare frames.

DOI: https://doi.org/10.5281/zenodo.15846885
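The MDOF-to-ESDOF reduction the abstract summarizes can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the masses, mode shape, and curve values are hypothetical, and the formulas are the standard first-mode capacity-spectrum conversion (Sa = V / (α1·W), Sd = Δroof / (Γ1·φroof)).

```python
# Hypothetical sketch: converting an MDOF pushover curve (base shear vs.
# roof displacement) to an equivalent SDOF capacity spectrum.

def modal_properties(masses, phi):
    """First-mode participation factor and modal mass coefficient."""
    num = sum(m * p for m, p in zip(masses, phi))
    den = sum(m * p * p for m, p in zip(masses, phi))
    gamma = num / den                 # participation factor Gamma_1
    m_eff = num * num / den          # effective modal mass
    alpha = m_eff / sum(masses)      # modal mass coefficient alpha_1
    return gamma, alpha

def capacity_spectrum(base_shear, roof_disp, masses, phi, g=9.81):
    """Map (V, roof displacement) pairs to (Sa, Sd) pairs."""
    gamma, alpha = modal_properties(masses, phi)
    w_total = sum(masses) * g                          # total seismic weight
    sa = [v / (alpha * w_total) for v in base_shear]   # spectral accel. (in g)
    sd = [d / (gamma * phi[-1]) for d in roof_disp]    # spectral displacement
    return sa, sd
```

For a uniform-mass frame with a linear first mode, `modal_properties` gives Γ1 ≈ 1.33, the familiar value for shear buildings.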


A Review Article on Auto-Categorization of Syslogs Using NLP and Deep Learning

Authors: Nisha Verma, Gaurav Nair, Swathi Reddy, Tarun Bhatia

Abstract: In modern IT ecosystems, syslogs serve as the primary diagnostic and auditing trail, capturing granular system-level, application, and security events. As infrastructures grow in scale and complexity, spanning cloud-native applications, hybrid UNIX environments, and distributed edge deployments, the volume of syslog data has become overwhelming. Traditional rule-based parsing methods and regex-driven filters struggle to scale across heterogeneous logs, leading to missed alerts, alert fatigue, and significant operational overhead. This review explores the transformative role of Natural Language Processing (NLP) and deep learning techniques in auto-categorizing syslogs with accuracy, adaptability, and semantic understanding. The paper begins with an overview of syslog formats, protocols, and the inherent variability in message content and structure. It then introduces modern NLP preprocessing techniques such as tokenization, entity masking, embedding strategies, and contextual vectorization. A detailed examination of deep learning architectures, including CNNs, RNNs, LSTMs, and Transformer-based models like BERT, demonstrates their effectiveness in capturing syntactic and contextual nuances. The review also presents methodologies for supervised, semi-supervised, and weakly supervised learning, with practical tools for building ground-truth corpora. Operational pipeline considerations such as real-time streaming ingestion, model deployment, latency optimization, and SIEM integration are addressed. Use cases spanning data centers, telecom networks, and security monitoring highlight the practical impact of AI-based syslog categorization. Additionally, the article explores key challenges, including model interpretability, data privacy, false positives, and compliance risks. Future trends such as domain-specific Transformers, self-supervised log learning, federated training, and multi-modal observability are discussed as avenues for further innovation. Ultimately, this review positions NLP and AI as foundational to building scalable, intelligent, and proactive log management systems, paving the way for predictive operations and automated root cause analysis in complex enterprise environments.

DOI: https://doi.org/10.5281/zenodo.15846838
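The preprocessing stage the review describes (tokenization plus entity masking, so a downstream model sees stable message templates) can be sketched as follows. The regex patterns and placeholder tokens are illustrative assumptions, not taken from the paper:

```python
# Minimal syslog preprocessing sketch: mask volatile entities (IP
# addresses, hex literals, numbers), then tokenize the stabilized text.
import re

def mask_entities(msg):
    msg = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", msg)   # IPv4 addresses
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)             # hex literals
    msg = re.sub(r"\b\d+\b", "<NUM>", msg)                    # bare numbers
    return msg

def tokenize(msg):
    """Lowercased whitespace tokens over the entity-masked message."""
    return mask_entities(msg).lower().split()
```

Two messages that differ only in IP or port then map to the same token sequence, which is what makes template-level categorization tractable for a classifier.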


Adaptive Load Balancing in LDOMs Using Edge AI Models

Authors: Komal Jain, Ajeet Kumar, Shravanthi R, Ritu Chauhan

Abstract: Oracle Solaris Logical Domains (LDOMs) offer flexible, high-performance virtualization at the hardware layer, enabling fine-grained resource allocation across critical workloads. However, as enterprise infrastructures grow in complexity and scale, particularly in edge and hybrid environments, the need for dynamic and intelligent load balancing becomes paramount. Traditional static and reactive policies fall short of modern demands marked by workload volatility, bursty usage patterns, and constrained physical resources. In this context, Edge AI models present a transformative approach to adaptive load management. This review explores how AI, particularly edge-deployed supervised, unsupervised, time-series, and reinforcement learning models, can be leveraged to predict resource saturation, detect faults, and proactively manage LDOM reallocation and live migrations. Emphasis is placed on integrating AI pipelines with Solaris-native telemetry tools (kstat, vmstat, prstat) and automating control actions using the ldm command suite. Real-world case studies across the telecom, financial, and healthcare sectors are analyzed to demonstrate improvements in SLA compliance, resource efficiency, and fault avoidance through AI-assisted decisions. We further address system-level integration with Oracle Ops Center, highlight governance concerns such as model explainability and override control, and explore lightweight inference frameworks suitable for constrained control domains. Challenges in data quality, model trust, and automation safety are also discussed. The review concludes by outlining future directions, including federated learning, policy-aware AI agents, cross-domain telemetry fusion, and convergence with AIOps ecosystems. By embedding intelligence directly into the LDOM infrastructure, organizations can evolve from static resource provisioning to a self-optimizing virtualization platform capable of continuous learning, rapid adaptation, and resilience at the edge. This shift is vital to meet the performance and operational demands of modern digital infrastructure.

DOI: https://doi.org/10.5281/zenodo.15846618
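A lightweight, edge-deployable predictor of the kind the review discusses can be as simple as an exponentially weighted moving average over utilisation samples (as might be collected via vmstat/kstat), flagging a domain for rebalancing when the smoothed value crosses a threshold. The smoothing factor and the 0.85 threshold below are illustrative assumptions, not values from the paper:

```python
# EWMA-based saturation flag over a series of utilisation ratios (0..1).
def ewma_saturation(samples, alpha=0.3, threshold=0.85):
    """Return (smoothed_series, saturated_flag) for utilisation samples."""
    smoothed, s = [], None
    for x in samples:
        s = x if s is None else alpha * x + (1 - alpha) * s  # exponential smoothing
        smoothed.append(s)
    return smoothed, smoothed[-1] >= threshold
```

In a real control loop, a sustained flag would trigger an `ldm`-driven reallocation; the smoothing keeps one-off spikes from causing churn.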


Leveraging AI to Optimize Oracle EM Ops Center Operations

Authors: Lakshmi Menon, Aravind Krishnan, Ramya K, Vineeth Das

Abstract: Modern IT environments, characterized by hybrid infrastructure, rapid virtualization, and regulatory constraints, demand sophisticated systems management platforms that go beyond manual operations. Oracle Enterprise Manager Ops Center (OEMOC) has long served as a unified platform for provisioning, patching, asset discovery, and monitoring in Oracle Solaris and Linux-based data centers. However, as operational complexity scales, traditional rules-based workflows face limitations in managing configuration drift, correlating events, and predicting performance degradation. This has prompted a shift toward integrating artificial intelligence into Ops Center’s telemetry and operational lifecycle. This review explores the application of AI and machine learning techniques to optimize various facets of OEMOC. From predictive asset discovery and patch prioritization to real-time anomaly detection and resource planning, AI offers the potential to transform the platform into a proactive, self-optimizing system. The review evaluates supervised, unsupervised, and reinforcement learning models that can be trained on logs, asset data, and historical events collected across Enterprise Controllers and Agent Controllers. Specific emphasis is placed on using time series forecasting for utilization prediction, clustering techniques for configuration drift detection, and NLP algorithms for intelligent alert triage. Additionally, the review delves into the architectural integration of AI pipelines with OEMOC components, the use of SNMP, syslog, and ITSM APIs for external telemetry fusion, and case studies from financial, government, and telecom deployments. The article also addresses challenges related to model explainability, data governance, and integration within legacy environments. 
In doing so, it outlines a roadmap for enhancing Ops Center with intelligent automation, turning it from a monitoring tool into a closed-loop operations platform capable of dynamic remediation and resource optimization.

DOI: https://doi.org/10.5281/zenodo.15846496
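The configuration-drift detection the review mentions can be illustrated with a consensus baseline: compute the most common value per key across a fleet, then flag assets that diverge. The field names below are hypothetical examples, not an OEMOC API:

```python
# Fleet-consensus drift detection over per-asset config dicts.
from collections import Counter

def consensus_baseline(configs):
    """Most common value per key across a fleet of config dicts."""
    keys = set().union(*configs)
    return {k: Counter(c.get(k) for c in configs).most_common(1)[0][0] for k in keys}

def drift(config, baseline):
    """Map each drifted key to (actual_value, expected_value)."""
    return {k: (config.get(k), v) for k, v in baseline.items() if config.get(k) != v}
```

Clustering techniques, as the abstract notes, generalize this idea when configurations vary legitimately by role rather than converging on one fleet-wide consensus.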


Hybrid AI Models for ZFS Usage Forecasting

Authors: Ritika Ghosh, Abhishek Dey, Sonali Mondal, Arjun Sen

Abstract: In today's data-intensive environments, the Zettabyte File System (ZFS) plays a central role in ensuring reliable and high-performance storage for applications ranging from databases to high-performance computing and cloud workloads. However, predicting future storage consumption, ARC/L2ARC cache pressure, and snapshot bloat has become increasingly complex due to the dynamic and non-linear nature of modern workload behaviors. Traditional statistical approaches often fall short in capturing these complexities, necessitating the adoption of hybrid AI models that blend statistical, machine learning (ML), and deep learning techniques. These hybrid systems can more accurately model usage trends, recognize anomalous patterns, and respond to previously unseen behaviors, especially when trained on detailed ZFS telemetry. This review article explores the use of hybrid AI techniques for ZFS usage forecasting, focusing on time series modeling, anomaly detection, snapshot growth prediction, and proactive capacity management. It begins with a foundational overview of ZFS architecture, highlighting the importance of ARC, L2ARC, ZIL, and snapshot layers in the overall usage landscape. It then discusses the specific forecasting challenges that arise in ZFS due to caching hierarchies, concurrent access patterns, and latency-sensitive applications. We examine a taxonomy of AI models used in the domain and analyze how hybrid designs can improve accuracy and adaptability. The review further details the construction of end-to-end pipelines for training, evaluating, and deploying predictive models based on ZFS metrics. Case studies from healthcare, research clusters, and enterprise NAS environments are presented to demonstrate the operational impact of intelligent forecasting. Finally, the article outlines future directions including federated learning, online retraining, and integration with AIOps platforms to support self-optimizing storage infrastructures.

DOI: https://doi.org/10.5281/zenodo.15845749
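The "hybrid" idea in the abstract (a statistical component for long-run growth plus a learned correction for short-term noise) can be sketched in miniature: a least-squares trend models pool-usage growth, and a recent-residual bias term stands in for the ML component that would absorb cache and snapshot noise. The blend and window size are assumptions for illustration:

```python
# Toy hybrid forecaster: linear trend + recent-residual correction.
def fit_trend(series):
    """Least-squares line through (i, y_i); returns (slope, intercept)."""
    n = len(series)
    xs = range(n)
    mx, my = sum(xs) / n, sum(series) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, series))
    slope = sxy / sxx
    return slope, my - slope * mx

def hybrid_forecast(series, horizon):
    slope, intercept = fit_trend(series)
    resid = [y - (slope * i + intercept) for i, y in enumerate(series)]
    bias = sum(resid[-3:]) / min(3, len(resid))   # recent-residual correction
    n = len(series)
    return [slope * (n + h) + intercept + bias for h in range(horizon)]
```

On a perfectly linear usage history the residual term vanishes and the forecast is the pure trend; on bursty ZFS telemetry the bias term shifts the line toward recent behaviour.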


Predictive Backup Failure Analytics in Commvault Environments

Authors: Pratibha Kumari, Rajiv Tripathi, Snehal Ramesh, Tanvi Kapoor

Abstract: In modern enterprise IT, data protection is not merely a compliance requirement but a critical operational pillar. Backup systems such as Commvault are expected to perform with high reliability to meet recovery point objectives (RPO) and recovery time objectives (RTO). However, unpredictable backup failures continue to challenge IT operations, causing delays in restoration, potential data loss, and non-compliance with service level agreements (SLAs). Traditional monitoring and alerting are often reactive, which leaves little time for administrators to respond before a backup job fails. Predictive analytics offers a paradigm shift by enabling preemptive identification of failure patterns based on historical and real-time data. In Commvault environments, telemetry from CommServe, MediaAgents, and job logs offers rich sources for modeling and failure prediction. This review investigates how predictive analytics, particularly machine learning (ML), can be applied within Commvault environments to anticipate and mitigate backup failures. We discuss key failure types, log analysis strategies, anomaly detection, model training pipelines, and real-time visualization techniques. The integration of supervised and unsupervised ML algorithms, including regression models, clustering, and sequence prediction (e.g., LSTM), is explored for their applicability in Commvault's operational workflows. In addition, we evaluate the advantages of integrating predictive outputs into SLA-aware orchestration and automated remediation workflows. The article further contrasts Commvault's predictive capabilities with other backup platforms and explores future research avenues in deep learning, AIOps, and federated modeling for distributed environments. By aligning predictive insights with operational pipelines, enterprises can achieve proactive data protection, reduce downtime, and improve compliance readiness in a cost-effective manner.

DOI: https://doi.org/10.5281/zenodo.15845638
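The feature-engineering step behind such failure prediction can be sketched as follows. The record fields and linear weights are stand-ins for what a trained model would learn; they are not Commvault APIs or values from the paper:

```python
# Derive numeric features from one backup-job-history record, then score
# failure risk with a toy linear model (weights are illustrative).
def job_features(job):
    """Map a job-history dict to ratio/count features."""
    return {
        "duration_ratio": job["duration_min"] / max(job["avg_duration_min"], 1),
        "recent_failures": job["failures_last_7d"],
        "throughput_drop": 1.0 - job["throughput_mbps"] / max(job["baseline_mbps"], 1),
    }

def risk_score(f):
    """Higher score = higher predicted failure risk."""
    return (0.4 * min(f["duration_ratio"], 3)
            + 0.3 * f["recent_failures"]
            + 0.3 * max(f["throughput_drop"], 0.0))
```

In practice the hand-set weights would be replaced by a fitted regression or sequence model (e.g., LSTM over successive job records), but the feature shapes stay similar.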


Integrating Time Series Forecasting And Business Intelligence: A Power BI Dashboard Approach For Sales Prediction

Authors: Vaishnavi Kane, Assistant Professor Dr. Suhas Mache, Assistant Professor Dr. Arshiya Khan

Abstract: In the modern business environment, accurate sales forecasting is essential for effective decision-making. This paper explores the integration of time series forecasting techniques with Business Intelligence (BI) tools, specifically Microsoft Power BI, to build an interactive dashboard for sales prediction. We present a model that combines statistical forecasting methods, such as ARIMA and Prophet, with data visualization to enhance decision-making in sales management. The system is designed to provide real-time insights, support strategic planning, and identify sales trends through dynamic dashboards: users can visualize historical trends and forecasted sales data in real time. Our case study demonstrates that integrating forecasting models within BI platforms significantly improves sales predictability and operational efficiency.
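The forecasting step that feeds such a dashboard can be illustrated with a seasonal-naive baseline (repeat the last full season of observations), the standard reference against which ARIMA or Prophet models, as named in the abstract, would be judged. This stdlib-only sketch is an assumption for illustration, not the paper's model:

```python
# Seasonal-naive forecast: repeat the most recent full season.
def seasonal_naive(series, season, horizon):
    """Forecast `horizon` steps by cycling the last `season` observations."""
    last = series[-season:]
    return [last[h % season] for h in range(horizon)]
```

The forecast rows can then be exported (e.g., to CSV) alongside the historical series so Power BI can plot actuals and predictions on one visual.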


A Word Embedding Approach To Analyzing CEO Earnings Call Transcripts And Stock Market Reactions

Authors: Harsha Sammangi, Aditya Jagatha, Hari Gopal Maddireddy

Abstract: This study presents a sentiment-driven Decision Support System (DSS) that leverages advanced word embedding techniques (Word2Vec, GloVe, and BERT) to analyze CEO earnings call transcripts and predict stock market reactions. Traditional lexicon-based sentiment models fail to capture the nuanced, contextual language used by executives. By employing pre-trained embeddings and machine learning classifiers, the study enhances the accuracy of sentiment classification. The proposed system integrates quantitative sentiment scores with event study methodology to assess the impact of CEO tone on stock performance. Thematic analysis further enriches interpretability by identifying recurring patterns in executive communication. Results demonstrate that positive CEO sentiment generally correlates with stock appreciation, while negative sentiment aligns with declines. Among models tested, BERT outperformed others in classification accuracy. This research contributes to real-time financial analytics by embedding sentiment intelligence into DSS frameworks, supporting investors, analysts, and automated trading systems with improved decision-making capabilities grounded in contextual linguistic analysis.

DOI: https://doi.org/10.5281/zenodo.15845150
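The embedding-similarity idea behind the study can be shown in miniature: score a transcript vector by its cosine similarity to "positive" versus "negative" anchor vectors. The 3-dimensional vectors below are made up for illustration; real use would load Word2Vec/GloVe/BERT embeddings as the abstract describes:

```python
# Cosine-similarity sentiment lean against positive/negative anchors.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sentiment_lean(vec, pos_anchor, neg_anchor):
    """Positive result means vec sits closer to the positive anchor."""
    return cosine(vec, pos_anchor) - cosine(vec, neg_anchor)
```

A classifier trained on labeled transcript segments replaces the fixed anchors in practice, but the geometric intuition (tone as direction in embedding space) is the same.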

 


Autopapermine: Research Paper Information Extractor

Authors: Jinta Johnson, Assistant Professor Athira B, Professor Dr. Shine Raj G

Abstract: This paper presents a lightweight and intelligent system for the automatic extraction of structured information from academic research papers in PDF format. The proposed system leverages Natural Language Processing (NLP) techniques, TF-IDF-based summarization, and Sentence-BERT semantic similarity to extract and analyze metadata such as title, authors, organizations, keywords, and references. Built using Python and Streamlit, the tool allows users to upload PDF documents, parse academic content, and interactively review summarized metadata, references, and semantic relevance, all in real time. This paper details the system architecture, implementation pipeline, challenges, and experimental results, demonstrating its effectiveness and scope for future enhancements.
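The TF-IDF summarization step the abstract mentions can be sketched as sentence scoring: weight each sentence by the average tf-idf of its terms and keep the top-k. Tokenization here is deliberately naive and the scoring formula is a common textbook variant, an assumption rather than the system's exact pipeline:

```python
# TF-IDF sentence scoring: rank sentences by average tf * idf of terms.
import math
from collections import Counter

def tfidf_summary(sentences, k=1):
    """Return the top-k sentences (in original order) by tf-idf score."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))       # document frequency
    def score(d):
        tf = Counter(d)
        return sum(tf[t] * math.log(n / df[t]) for t in tf) / max(len(d), 1)
    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]
```

Sentences full of terms rare in the rest of the document score highest, which is why this simple scheme tends to surface the distinctive, content-bearing sentences.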

 

 


Designing Multi-Speciality Hospitals: Architectural Integration of Healing, Functionality, and Technology in Rural India

Authors: Anant Kumar, Professor Gulfam B. Shaikh, Professor Dilip L. Jade

Abstract: With the rapid urbanization of rural regions in India, the demand for healthcare infrastructure has surged dramatically. This research paper explores the architectural planning and design of a Multi-Speciality Hospital in Sikandarpur, Bihta, Patna (Bihar), a region currently underserved in medical facilities. Emphasizing healing environments, sustainable strategies, patient-centered design, and technological integration, the paper outlines the necessity, process, and architectural responses to contemporary hospital design. A case study of Tata Medical Centre, Kolkata, informs the practical and structural feasibility of the proposed design.
