IJSRET » Blog Archives


An Introduction to Multi-Tenant Solaris Environments for Research Institutions


Authors: Harini Samarasinghe, Dilan Madushanka, Ruwani Gamage, Amila Wickramasinghe

Abstract: Research institutions are increasingly challenged to support a diverse array of computing workloads, ranging from high-throughput bioinformatics to high-performance simulations, within constrained physical infrastructure. Multi-tenant architectures offer a cost-effective and scalable solution, enabling multiple research groups to securely share resources while maintaining strong isolation, performance, and compliance boundaries. This review explores the architectural, operational, and security dimensions of building multi-tenant environments using Oracle Solaris. It covers foundational technologies such as zones and Logical Domains (LDOMs), details approaches to resource allocation, identity management, and audit logging, and addresses the specific needs of research computing environments, including regulatory compliance (HIPAA, GDPR, FERPA), data reproducibility, and access governance. The article further discusses automation, orchestration, and monitoring strategies, including integration with DevOps tools and SIEM platforms. Real-world use cases from genomics labs, physics departments, and engineering faculties illustrate the practical applications of Solaris-based tenancy. Challenges such as kernel-sharing risks, resource contention, and cloud scalability limitations are critically examined. Finally, the paper outlines future directions, including hybrid cloud integration, AI-optimized zone support, and policy-as-code templates for rapid, compliant deployments. This comprehensive review serves as a technical and strategic guide for research institutions seeking to modernize and secure their multi-tenant UNIX infrastructure using Solaris.
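As an illustrative aside on the zone-based isolation mentioned above, the sketch below renders a zonecfg(1M) command file for a resource-capped tenant zone. The zone name, cap values, and zonepath are hypothetical placeholders, not configurations from the article.

```python
def zonecfg_script(ncpus: int, mem_cap: str, zonepath: str) -> str:
    """Render a zonecfg(1M) command file for a resource-capped tenant zone.

    All values here are illustrative placeholders.
    """
    return "\n".join([
        "create -b",                 # start from a blank configuration
        f"set zonepath={zonepath}",
        "set autoboot=true",
        "add capped-cpu",
        f"set ncpus={ncpus}",        # hard CPU cap for this tenant
        "end",
        "add capped-memory",
        f"set physical={mem_cap}",   # physical memory cap
        "end",
    ])

# Hypothetical tenant zone for a genomics group:
print(zonecfg_script(4, "16G", "/zones/genomics01"))
```

Such a file would then be applied with `zonecfg -z <zonename> -f <file>`, the standard zonecfg batch usage.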

DOI: https://doi.org/10.5281/zenodo.15847545


Privacy-Preserving Collaborative Searchable Encryption Using Blake3 for Cloud-Based Group Data Sharing


Authors: Aatheni U, Dr. M. M. Janeela Theresa

Abstract: Collaborative searchable encryption for group data sharing enables authorized users to jointly generate trapdoors and retrieve encrypted data without compromising privacy. However, existing solutions remain vulnerable to keyword guessing attacks (KGAs) by malicious insiders and subversion threats such as backdoors from untrusted hardware or software vendors. To overcome these security challenges, we propose a Privacy-Preserving Collaborative Searchable Encryption (PCSE) scheme using the BLAKE3 hash function. PCSE introduces a dedicated keyword server to enable server-derived keywords that resist insider KGAs, and employs cryptographic reverse firewalls to mitigate subversion risks. A distributed, multi-server keyword architecture is adopted to prevent single-point failures. The system also supports multi-keyword search and result verification, and includes a rate-limiting mechanism to restrict brute-force attempts. Formal analysis confirms resistance against KGAs and subversion attacks. Empirical evaluations demonstrate that PCSE achieves strong privacy, scalability, and efficient keyword-based search, making it suitable for secure cloud-based group data sharing.
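To illustrate the server-derived-keyword idea in general terms (the abstract does not publish the scheme's algorithms), the sketch below uses keyed BLAKE2b from Python's standard library as a stand-in for BLAKE3, which requires the third-party blake3 package. The two-layer construction and all key names are assumptions for illustration only, not the PCSE design.

```python
import hashlib
import hmac

def server_derived_keyword(keyword: str, server_key: bytes) -> str:
    # Keyed hash (BLAKE2b standing in for BLAKE3): without server_key an
    # insider cannot enumerate candidate keywords offline, which is what
    # blunts a keyword guessing attack.
    return hashlib.blake2b(keyword.encode(), key=server_key,
                           digest_size=32).hexdigest()

def trapdoor(derived_keyword: str, group_secret: bytes) -> str:
    # Hypothetical second keyed layer binding the search token to the
    # authorized group, so a derived keyword alone is not searchable.
    return hmac.new(group_secret, derived_keyword.encode(),
                    hashlib.blake2b).hexdigest()

token = trapdoor(server_derived_keyword("oncology", b"server-key"),
                 b"group-secret")
```

The key observation is that both layers are deterministic for a fixed key, so authorized parties can reproduce tokens for matching, while outsiders lacking the keys cannot.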

DOI: https://doi.org/10.5281/zenodo.15847524



A Review of Patching Strategies for Always-On Biomedical Data Systems


Authors: Dilani Jayawardena, Kasun Rathnayake, Nimali Dissanayake, Sahan Abeysekera

Abstract: Biomedical data systems operate under stringent uptime requirements, complex regulatory constraints, and increasingly sophisticated cyber threats. Ensuring the security and reliability of these systems through regular patching presents a significant operational challenge, particularly in environments where downtime is unacceptable. This review examines state-of-the-art patching strategies tailored for always-on biomedical infrastructures, including electronic health records (EHR), PACS, LIMS, and real-time monitoring platforms. Key considerations such as risk-based patch prioritization, live kernel patching, failover strategies, and automation via CI/CD pipelines are discussed in detail. Emphasis is placed on regulatory compliance with HIPAA, FDA 21 CFR Part 11, and ISO 27001, as well as alignment with industry standards such as NIST SP 800-40 and CIS benchmarks. The review also explores governance mechanisms, stakeholder coordination, and validation processes essential for maintaining both uptime and auditability. Through real-world case studies and analysis of common pitfalls, the paper provides actionable insights into achieving secure, reliable, and regulation-ready patch deployment in biomedical environments. Future directions highlight the convergence of artificial intelligence, continuous compliance validation, and threat-informed patch orchestration as the next evolution in patch management for mission-critical healthcare systems.
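Risk-based patch prioritization, mentioned above, can be sketched as a weighted scoring over pending patches. The patch IDs, factors, and weights below are invented for illustration and are not drawn from NIST SP 800-40 or the reviewed paper.

```python
patches = [  # (patch id, CVSS base score, internet-exposed?, rollback-safe?)
    ("KRN-2024-17", 9.8, True, False),
    ("LIB-2024-03", 5.4, False, True),
    ("EHR-2024-88", 7.1, True, True),
]

def priority(cvss: float, exposed: bool, rollback_safe: bool) -> float:
    # Weighted blend: severity dominates, exposure adds urgency, and an
    # easy rollback lowers the deployment-risk penalty. Weights invented.
    return 0.6 * cvss + (3.0 if exposed else 0.0) + (1.0 if rollback_safe else -1.0)

ranked = sorted(patches, key=lambda p: priority(*p[1:]), reverse=True)
for pid, *factors in ranked:
    print(pid, round(priority(*factors), 2))
```

Note that the exposed, rollback-safe EHR patch outranks the higher-CVSS kernel patch here, which is precisely the point of risk-based rather than severity-only ordering in always-on environments.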

DOI: https://doi.org/10.5281/zenodo.15847374


Secure Data Storage Design for Biomedical Compliance Environments


Authors: Nadeesha Perera, Tharindu Silva, Ishara Fernando, Chamika Weerasinghe

Abstract: Secure data storage in biomedical environments is a foundational requirement for maintaining regulatory compliance, safeguarding patient privacy, and enabling ethical scientific research. As healthcare and life sciences organizations generate and manage vast amounts of sensitive information, ranging from electronic health records to genomic sequences, the need for secure, resilient, and policy-driven storage architectures has become increasingly urgent. This review examines the technical, regulatory, and operational considerations involved in designing storage systems that align with frameworks such as HIPAA, GDPR, and FDA 21 CFR Part 11. The paper begins by analyzing the classification of protected health information (PHI) and the importance of data sensitivity in biomedical workflows. It explores regulatory mandates related to auditability, legal retention, and chain-of-custody, followed by a detailed examination of the evolving threat landscape, including ransomware and insider attacks. The review compares traditional SAN/NAS models, object-based architectures, and software-defined storage solutions, highlighting their respective roles in compliance-driven deployments. Further sections address critical security practices such as encryption, key management, access control, and data lifecycle enforcement. The integration of secure storage with biomedical systems like PACS, LIMS, and EHRs is evaluated, with attention to secure APIs and auditability. Emerging technologies, including confidential computing, blockchain-based integrity tracking, and AI-driven anomaly detection, are also explored for their future impact. Through real-world case studies, the review illustrates successful implementations in hospitals, research institutions, and hybrid infrastructures. It concludes with an analysis of common challenges such as vendor lock-in and the trade-offs between compliance and usability. Looking ahead, the paper advocates for zero-trust-aligned architectures and adaptive compliance automation as guiding principles for next-generation biomedical storage design.
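Data-lifecycle enforcement of the kind discussed above can be sketched as a retention-window check. The retention periods and record kinds below are illustrative placeholders, not legal guidance; real mandates vary by jurisdiction and must be confirmed independently.

```python
from datetime import date

# Illustrative retention windows in years; real mandates vary by
# jurisdiction and record type.
RETENTION_YEARS = {"phi_record": 6, "imaging_study": 7, "audit_log": 6}

def eligible_for_disposal(kind: str, created: date, today: date) -> bool:
    # Destroy only after the mandated window has fully elapsed; records
    # of unknown kinds are retained by default (fail closed).
    years = RETENTION_YEARS.get(kind)
    if years is None:
        return False
    # Naive year arithmetic; a production version must handle Feb 29.
    return today >= created.replace(year=created.year + years)
```

The fail-closed default matters: an unclassified record is the one most likely to carry unmet obligations, so it should never be the easiest to delete.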

DOI: https://doi.org/10.5281/zenodo.15847131


The Concept of UNIX Infrastructure Optimization for Genomic Data Processing


Authors: Faria Mahmud, Khaled Noor, Sabrina Yasmin, Tanmoy Hossain

Abstract: The unprecedented growth of genomic data driven by next-generation sequencing technologies has imposed complex computational demands on bioinformatics infrastructure. UNIX-based systems, comprising Solaris, AIX, and Linux, form the backbone of genomic data processing environments due to their reliability, performance, and rich toolchain support. However, their default configurations are seldom tuned for the high-throughput, memory-intensive, and I/O-sensitive nature of genomic workloads. This review explores the critical need for infrastructure-level optimization in UNIX environments to support workflows such as sequence alignment, variant calling, and RNA-Seq analysis. It presents a detailed examination of system-level strategies including NUMA-aware CPU allocation, memory page tuning, ZFS and GPFS storage optimization, network throughput enhancement, and scheduler configuration using SLURM and PBS. Case studies from academic and clinical domains highlight the real-world impact of these optimizations on pipeline performance and resource efficiency. The article also addresses compliance considerations under HIPAA and GDPR, demonstrating how audit controls and data encryption can be embedded into UNIX configurations. Looking forward, the review outlines emerging trends such as AI-assisted infrastructure tuning, containerization of genomic workflows, and the integration of persistent memory and cloud bursting strategies. Collectively, this review provides system administrators, bioinformatics engineers, and IT architects with a comprehensive blueprint for transforming UNIX platforms into high-performance, secure, and scalable environments tailored for genomics.
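Scheduler configuration with SLURM, mentioned above, can be sketched as a rendered batch script for an alignment step. The partition name, tool invocation, file paths, and resource figures below are hypothetical, chosen only to show where NUMA-aware binding and memory sizing enter the picture.

```python
def slurm_alignment_job(sample: str, threads: int = 16, mem_gb: int = 64) -> str:
    """Render a SLURM batch script for a BWA-style alignment step.

    Partition name, tool invocation, and file paths are hypothetical.
    """
    return f"""#!/bin/bash
#SBATCH --job-name=align_{sample}
#SBATCH --cpus-per-task={threads}
#SBATCH --mem={mem_gb}G
#SBATCH --partition=genomics

# Bind the task to sockets so alignment buffers stay NUMA-local.
srun --cpu-bind=sockets bwa mem -t {threads} ref.fa {sample}_R1.fq {sample}_R2.fq > {sample}.sam
"""

print(slurm_alignment_job("NA12878", threads=8, mem_gb=32))
```

Generating scripts programmatically, rather than copying them by hand, keeps per-sample resource requests consistent across a cohort-scale pipeline.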

DOI: https://doi.org/10.5281/zenodo.15846976


Seismic Performance Analysis Of A G+11 Building Based On Live Earthquake Load Using ETABS Software


Authors: Keshav Kumar Ahirwar, Assistant Professor Mrs. Ankita Singhai, Mr. Rahul Satbhaiya

Abstract: A building needs to be able to withstand considerable ground vibrations during construction or operation in order to be deemed earthquake-resistant. However, ground motions have a particular effect on structural response. Time-history analysis is effective for buildings subjected to large ground vibrations: stepwise integration of a multi-degree-of-freedom (MDOF) system in the time domain is used to trace a structure's response. This method is accurate but time-consuming. Pushover analysis was developed to speed up the design and assessment of seismic structures. Pushover studies indicate that during seismic events, structures vibrate mostly in the lower or early modes. The multi-DOF system is therefore reduced to an equivalent single-DOF (ESDOF) system using the characteristics revealed by its nonlinear static analysis. A response spectrum analysis is then performed on the ESDOF system using nonlinear time-history analysis, damped analysis, or constant-ductility analysis. Modal relationships are used to translate ESDOF seismic demands into MDOF seismic demands. A model was created to show the overall behaviour of the RCC frame building. In seismic zone II, four G+11 concrete planar frames with four bays oriented in X and Y were modeled in compliance with Indian codes. There are five loading scenarios for every frame. Pushover analysis evaluates frames with different elevational irregularities under the same loading conditions. The results differ from frame to frame. Each frame's capacity spectrum and pushover curve relating base shear to displacement are calculated and evaluated. STAAD.Pro V8i was utilized to analyze the non-linear response of the RCC frames under seismic zone II loading. Infill walls and bare frames were also compared.
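The pushover curve relating base shear to roof displacement can be illustrated with an idealized bilinear (elastic, then hardening) model. The stiffness, yield displacement, and hardening ratio below are arbitrary teaching values, not results from the study.

```python
def bilinear_base_shear(d: float, d_yield: float = 0.05,
                        k_elastic: float = 40000.0,
                        hardening: float = 0.05) -> float:
    """Base shear (kN) at roof displacement d (m) on an idealized
    bilinear pushover curve. All parameter values are arbitrary."""
    v_yield = k_elastic * d_yield           # shear at first yield
    if d <= d_yield:
        return k_elastic * d                # elastic branch
    # Post-yield branch with a small hardening slope.
    return v_yield + hardening * k_elastic * (d - d_yield)

# Sample the capacity curve at 1 cm increments of roof displacement:
curve = [(d_cm / 100.0, bilinear_base_shear(d_cm / 100.0)) for d_cm in range(16)]
```

A real capacity spectrum is obtained from nonlinear static analysis of the frame model; the bilinear idealization above is just the standard simplification used when reducing the MDOF system to an ESDOF one.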

DOI: https://doi.org/10.5281/zenodo.15846885


A Review Article on Auto-Categorization of Syslogs Using NLP and Deep Learning


Authors: Nisha Verma, Gaurav Nair, Swathi Reddy, Tarun Bhatia

Abstract: In modern IT ecosystems, syslogs serve as the primary diagnostic and auditing trail, capturing granular system-level, application, and security events. As infrastructures grow in scale and complexity, spanning cloud-native applications, hybrid UNIX environments, and distributed edge deployments, the volume of syslog data has become overwhelming. Traditional rule-based parsing methods and regex-driven filters struggle to scale across heterogeneous logs, leading to missed alerts, alert fatigue, and significant operational overhead. This review explores the transformative role of Natural Language Processing (NLP) and deep learning techniques in auto-categorizing syslogs with accuracy, adaptability, and semantic understanding. The paper begins with an overview of syslog formats, protocols, and the inherent variability in message content and structure. It then introduces modern NLP preprocessing techniques such as tokenization, entity masking, embedding strategies, and contextual vectorization. A detailed examination of deep learning architectures including CNNs, RNNs, LSTMs, and Transformer-based models like BERT is provided to demonstrate their effectiveness in capturing syntactic and contextual nuances. The review also presents methodologies for supervised, semi-supervised, and weakly supervised learning, with practical tools for building ground truth corpora. Operational pipeline considerations such as real-time streaming ingestion, model deployment, latency optimization, and SIEM integration are addressed. Use cases spanning data centers, telecom networks, and security monitoring highlight the practical impact of AI-based syslog categorization. Additionally, the article explores key challenges, including model interpretability, data privacy, false positives, and compliance risks. Future trends such as domain-specific Transformers, self-supervised log learning, federated training, and multi-modal observability are discussed as avenues for further innovation. Ultimately, this review positions NLP and AI as foundational to building scalable, intelligent, and proactive log management systems, paving the way for predictive operations and automated root cause analysis in complex enterprise environments.
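Entity masking, one of the preprocessing steps named above, can be sketched with a few regex substitutions that collapse volatile fields (IPs, PIDs, counters) into placeholder tokens before classification. The patterns and token names are illustrative choices, not a published scheme.

```python
import re

# Masking rules applied in order: IPs first, so their digits are not
# consumed by the generic number rule. Tokens are illustrative choices.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def normalize(msg: str) -> str:
    # Collapse volatile fields so messages differing only in IPs, PIDs,
    # or counters map to one template for the downstream classifier.
    for pattern, token in MASKS:
        msg = pattern.sub(token, msg)
    return msg

print(normalize("sshd[2114]: Failed password for root from 10.0.3.7 port 52114"))
# -> sshd[<NUM>]: Failed password for root from <IP> port <NUM>
```

Collapsing distinct raw messages into shared templates this way both shrinks the label space and keeps a classifier from memorizing host-specific values.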

DOI: https://doi.org/10.5281/zenodo.15846838


Adaptive Load Balancing in LDOMs Using Edge AI Models


Authors: Komal Jain, Ajeet Kumar, Shravanthi R, Ritu Chauhan

Abstract: Oracle Solaris Logical Domains (LDOMs) offer flexible, high-performance virtualization at the hardware layer, enabling fine-grained resource allocation across critical workloads. However, as enterprise infrastructures grow in complexity and scale, particularly in edge and hybrid environments, the need for dynamic and intelligent load balancing becomes paramount. Traditional static and reactive policies fall short in addressing modern demands marked by workload volatility, bursty usage patterns, and constrained physical resources. In this context, Edge AI models present a transformative approach to adaptive load management. This review explores how AI, particularly Edge-deployed supervised, unsupervised, time-series, and reinforcement learning models, can be leveraged to predict resource saturation, detect faults, and proactively manage LDOM reallocation and live migrations. Emphasis is placed on integrating AI pipelines with Solaris-native telemetry tools (kstat, vmstat, prstat) and automating control actions using the ldm command suite. Real-world case studies across the telecom, financial, and healthcare sectors are analyzed to demonstrate improvements in SLA compliance, resource efficiency, and fault avoidance through AI-assisted decisions. We further address system-level integration with Oracle Ops Center, highlight governance concerns such as model explainability and override control, and explore lightweight inference frameworks suitable for constrained control domains. Challenges in data quality, model trust, and automation safety are also discussed. The review concludes by outlining future directions, including federated learning, policy-aware AI agents, cross-domain telemetry fusion, and convergence with AI-Ops ecosystems. By embedding intelligence directly into the LDOM infrastructure, organizations can evolve from static resource provisioning to a self-optimizing virtualization platform capable of continuous learning, rapid adaptation, and resilience at the edge. This shift is vital to meet the performance and operational demands of modern digital infrastructure.
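Threshold-driven selection of an overloaded domain, one building block beneath the AI-assisted migrations discussed above, can be sketched as follows. Domain names, readings, and the threshold are hypothetical; a real deployment would feed this from kstat/prstat telemetry rather than literals, and a learned model would replace the moving average.

```python
def migration_candidate(samples: dict, busy_threshold: float = 85.0,
                        window: int = 3):
    """Return the domain whose recent mean CPU% is highest above the
    threshold, or None if every domain is below it.

    `samples` maps domain name -> list of utilisation readings, e.g.
    scraped from kstat or prstat; names and numbers are hypothetical.
    """
    worst, worst_load = None, busy_threshold
    for domain, history in samples.items():
        recent = sum(history[-window:]) / min(window, len(history))
        if recent > worst_load:
            worst, worst_load = domain, recent
    return worst

telemetry = {"ldom1": [40, 50, 60], "ldom2": [90, 92, 95]}
print(migration_candidate(telemetry))  # -> ldom2
```

The candidate would then be acted on via `ldm` (for example, reassigning CPUs or initiating a live migration), with the override controls the review emphasizes kept in the loop.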

DOI: https://doi.org/10.5281/zenodo.15846618


Leveraging AI to Optimize Oracle EM Ops Center Operations


Authors: Lakshmi Menon, Aravind Krishnan, Ramya K, Vineeth Das

Abstract: Modern IT environments, characterized by hybrid infrastructure, rapid virtualization, and regulatory constraints, demand sophisticated systems management platforms that go beyond manual operations. Oracle Enterprise Manager Ops Center (OEMOC) has long served as a unified platform for provisioning, patching, asset discovery, and monitoring in Oracle Solaris and Linux-based data centers. However, as operational complexity scales, traditional rules-based workflows face limitations in managing configuration drift, correlating events, and predicting performance degradation. This has prompted a shift toward integrating artificial intelligence into Ops Center’s telemetry and operational lifecycle. This review explores the application of AI and machine learning techniques to optimize various facets of OEMOC. From predictive asset discovery and patch prioritization to real-time anomaly detection and resource planning, AI offers the potential to transform the platform into a proactive, self-optimizing system. The review evaluates supervised, unsupervised, and reinforcement learning models that can be trained on logs, asset data, and historical events collected across Enterprise Controllers and Agent Controllers. Specific emphasis is placed on using time series forecasting for utilization prediction, clustering techniques for configuration drift detection, and NLP algorithms for intelligent alert triage. Additionally, the review delves into the architectural integration of AI pipelines with OEMOC components, the use of SNMP, syslog, and ITSM APIs for external telemetry fusion, and case studies from financial, government, and telecom deployments. The article also addresses challenges related to model explainability, data governance, and integration within legacy environments. In doing so, it outlines a roadmap for enhancing Ops Center with intelligent automation, turning it from a monitoring tool into a closed-loop operations platform capable of dynamic remediation and resource optimization.
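Time-series forecasting for utilization prediction, mentioned above, can be sketched with single exponential smoothing, one of the simplest statistical baselines. The smoothing factor is a tuning choice, not a value prescribed by Ops Center, and the sample series is made up.

```python
def ses_forecast(series, alpha: float = 0.3) -> float:
    """One-step-ahead forecast by single exponential smoothing.

    alpha is a tuning choice; higher values track recent samples faster.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Forecast next-interval CPU utilisation from (made-up) recent samples:
print(round(ses_forecast([40, 45, 50, 55, 60]), 2))
```

In a closed-loop setup, a forecast crossing a capacity line would raise a planning event well before the threshold alert that a reactive rule would emit.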

DOI: https://doi.org/10.5281/zenodo.15846496


Hybrid AI Models for ZFS Usage Forecasting


Authors: Ritika Ghosh, Abhishek Dey, Sonali Mondal, Arjun Sen

Abstract: In today's data-intensive environments, the Zettabyte File System (ZFS) plays a central role in ensuring reliable and high-performance storage for applications ranging from databases to high-performance computing and cloud workloads. However, predicting future storage consumption, ARC/L2ARC cache pressure, and snapshot bloat has become increasingly complex due to the dynamic and non-linear nature of modern workload behaviors. Traditional statistical approaches often fall short in capturing these complexities, necessitating the adoption of hybrid AI models that blend statistical, machine learning (ML), and deep learning techniques. These hybrid systems can more accurately model usage trends, recognize anomalous patterns, and respond to previously unseen behaviors, especially when trained on detailed ZFS telemetry. This review article explores the use of hybrid AI techniques for ZFS usage forecasting, focusing on time series modeling, anomaly detection, snapshot growth prediction, and proactive capacity management. It begins with a foundational overview of ZFS architecture, highlighting the importance of ARC, L2ARC, ZIL, and snapshot layers in the overall usage landscape. It then discusses the specific forecasting challenges that arise in ZFS due to caching hierarchies, concurrent access patterns, and latency-sensitive applications. We examine a taxonomy of AI models used in the domain and analyze how hybrid designs can improve accuracy and adaptability. The review further details the construction of end-to-end pipelines for training, evaluating, and deploying predictive models based on ZFS metrics. Case studies from healthcare, research clusters, and enterprise NAS environments are presented to demonstrate the operational impact of intelligent forecasting. Finally, the article outlines future directions including federated learning, online retraining, and integration with AIOps platforms to support self-optimizing storage infrastructures.
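A hybrid design of the kind surveyed above can be sketched as a statistical trend model whose residuals are corrected by a second stage; here the second stage is trivially an exponential smoother standing in for an ML residual model. All numbers and the smoothing factor are illustrative, not taken from any evaluated system.

```python
def linear_trend(series):
    # Ordinary least-squares slope and intercept over time index 0..n-1.
    n = len(series)
    mean_x, mean_y = (n - 1) / 2.0, sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var
    return slope, mean_y - slope * mean_x

def hybrid_forecast(series, alpha: float = 0.5) -> float:
    # Statistical stage: extrapolate the fitted trend one step ahead.
    slope, intercept = linear_trend(series)
    trend_next = intercept + slope * len(series)
    # "Learning" stage, trivially sketched: exponentially smooth the
    # residuals and add the level back, standing in for an ML model.
    residuals = [y - (intercept + slope * x) for x, y in enumerate(series)]
    level = residuals[0]
    for r in residuals[1:]:
        level = alpha * r + (1 - alpha) * level
    return trend_next + level
```

Applied to pool-capacity or ARC-hit-rate telemetry, the trend stage captures steady growth while the residual stage absorbs the bursty, non-linear behaviour the abstract identifies as the hard part.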

DOI: https://doi.org/10.5281/zenodo.15845749
