IJSRET » Blog Archives


Developing a Multi-Modal Edge-AI Framework for Continuous Infant Monitoring: Predicting Mental Health Outcomes


Authors: Research Scholar Sandeep Keshav, Professor Dr. Sanjeev Puri

Abstract: The evolution of Edge-AI technologies has created new opportunities in pediatric healthcare, allowing for real-time monitoring of infants while maintaining privacy. This research introduces an innovative multi-modal Edge-AI framework that combines video, audio, and physiological data to anticipate potential mental health issues in infants. The proposed system processes information locally on edge devices, minimizing latency, enhancing privacy, and enabling continuous monitoring in both clinical and home settings. By employing lightweight AI models for on-device processing, the system promotes early identification of neurodevelopmental challenges and encourages timely interventions. This approach aims to shift healthcare from a reactive stance to a preventive one, ultimately aiming to foster long-term enhancements in mental health. The paper outlines the system's architecture, techniques for optimizing AI models, and prospective applications in pediatric healthcare environments.
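The kind of on-device fusion the abstract describes can be illustrated with a minimal sketch. The modality names, weights, and late-fusion strategy below are assumptions for illustration only, not details taken from the paper, which may well use a learned fusion model instead:

```python
# Late-fusion sketch: each lightweight on-device model emits a risk score in
# [0, 1] for its modality; a weighted average gives the combined screening
# score. Weights and modality names here are purely illustrative.

def fuse_risk_scores(scores: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted average of per-modality risk scores (late fusion)."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

weights = {"video": 0.4, "audio": 0.3, "physiology": 0.3}
scores = {"video": 0.2, "audio": 0.6, "physiology": 0.5}
combined = fuse_risk_scores(scores, weights)  # 0.41
```

Because all arithmetic runs locally, no raw video or audio ever has to leave the edge device, which is the privacy property the framework aims for.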


Enhancement of Load Bearing Capacity in Diagrid Multistory Building with Observed Torsion


Authors: Research Scholar Yawar Khan, Professor Sachin Sironiya

Abstract: The key difference between conventional braced-frame structures and modern diagrid structures is that in a diagrid almost all of the conventional vertical columns are eliminated. This elimination is possible because the diagonal members of a diagrid structural system carry gravity loads as well as lateral forces, whereas the diagonals in a conventional braced frame carry only lateral loads. Steel is the most common and popular material for constructing diagrids. The sections typically used are rectangular, circular, and wide-flange, with their size and weight chosen to withstand high bending loads.
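The dual load-carrying role of the diagonals can be sketched with the standard preliminary-design relations, in which gravity load flows through the vertical (sine) component of an inclined member and storey shear through its horizontal (cosine) component. The uniform-sharing assumption and the numbers below are illustrative simplifications, not results from this study:

```python
import math

def diagonal_axial_forces(gravity_load: float, storey_shear: float,
                          n_diagonals: int, angle_deg: float):
    """Approximate axial force per diagonal from gravity and lateral demand,
    assuming the load is shared uniformly among n_diagonals members inclined
    at angle_deg from the horizontal."""
    theta = math.radians(angle_deg)
    from_gravity = gravity_load / (n_diagonals * math.sin(theta))
    from_shear = storey_shear / (n_diagonals * math.cos(theta))
    return from_gravity, from_shear

# e.g. 8000 kN gravity and 1200 kN storey shear on 16 diagonals at 68 degrees
n_g, n_v = diagonal_axial_forces(8000.0, 1200.0, 16, 68.0)
```

Steeper diagonals are more efficient for gravity, shallower ones for shear, which is why diagrid designs tune the module angle to the building's load mix.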



Novel Approach to Load Analysis of Multistory Building with Its Bending


Authors: Research Scholar Aman Singh Bais, Professor Rajesh Chouhan

Abstract: A multi-storey building is a building with multiple floors above ground level; it may be residential or commercial. This project deals with the analysis and design of such a multi-storey building. In general, the analysis of multi-storey buildings is elaborate and rigorous because they are statically indeterminate structures. Shears and moments due to different loading conditions are determined by methods such as the portal method, the moment distribution method, and the matrix method. In the present project the dead and live loads are applied, and the design of the beams, columns, and footings is carried out manually.
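One of the approximate methods mentioned above, the portal method, distributes each storey's lateral shear among the columns by assuming interior columns resist twice the shear of exterior ones. A minimal sketch of that distribution (illustrative, not the project's own calculations):

```python
def portal_column_shears(storey_shear: float, n_bays: int) -> list[float]:
    """Portal-method column shears for one storey: each exterior column takes
    one unit share, each interior column two, so the shares sum to the
    applied storey shear."""
    unit = storey_shear / (2 * n_bays)
    return [unit] + [2 * unit] * (n_bays - 1) + [unit]

# A 3-bay frame resisting 120 kN of storey shear
shears = portal_column_shears(120.0, 3)  # [20.0, 40.0, 40.0, 20.0] kN
```

Column moments then follow from these shears by assuming inflection points at member mid-heights, which is what makes the otherwise indeterminate frame tractable by hand.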



Hardening Kernel Parameters for Compliance in Medical Research Servers


Authors: Zarina Safarova, Jamshid Rahmonov, Nargis Khudoyarova, Farhod Karimov

Abstract: In medical research environments, system-level security is paramount due to the highly sensitive nature of biomedical and genetic data. With regulatory frameworks like HIPAA, GDPR, and 21 CFR Part 11 requiring strong data protection and verifiable access controls, kernel parameter hardening has become a foundational strategy for achieving compliance. By tuning kernel parameters using tools such as sysctl on Linux and equivalent mechanisms on Solaris, administrators can restrict system behaviors related to networking, inter-process communication (IPC), and memory management. These configurations mitigate common vulnerabilities, including buffer overflows, shared memory leakage, and IP spoofing. When integrated into an Infrastructure-as-Code (IaC) model using tools like Puppet, Ansible, or Chef, kernel hardening becomes consistent, auditable, and reproducible across large-scale clinical or research server deployments. This review explores specific kernel parameters that enhance system integrity and reduce attack surfaces while maintaining application compatibility in complex biomedical environments. It also examines compliance-driven configuration baselines such as CIS Benchmarks and DISA STIGs. Operational challenges—including drift, rollback complexity, and conflicting application requirements—are addressed with best practices and automation frameworks. Finally, emerging trends such as AI-based anomaly detection, kernel lockdown mechanisms, and TPM-integrated validation are discussed as future directions. This comprehensive evaluation supports security professionals and biomedical IT architects in building hardened, compliant, and resilient research computing infrastructures.
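The drift-detection side of such a compliance baseline can be sketched in a few lines. The parameter values below follow common CIS-style guidance but are illustrative, not a complete benchmark; in production the current values would be read from /proc/sys or `sysctl -n` rather than a hard-coded dict:

```python
def audit_kernel_params(baseline: dict[str, str],
                        current: dict[str, str]) -> list[str]:
    """Return the names of kernel parameters that deviate from the baseline."""
    return [name for name, value in baseline.items()
            if current.get(name) != value]

# Illustrative hardening baseline (values mirror common CIS-style guidance)
baseline = {
    "net.ipv4.conf.all.rp_filter": "1",   # reverse-path filtering vs. spoofing
    "net.ipv4.tcp_syncookies": "1",       # SYN-flood mitigation
    "kernel.randomize_va_space": "2",     # full ASLR vs. overflow exploits
    "kernel.shmmax": "68719476736",       # bound shared-memory segment size
}
current = {
    "net.ipv4.conf.all.rp_filter": "1",
    "net.ipv4.tcp_syncookies": "0",       # drifted from the baseline
    "kernel.randomize_va_space": "2",
    "kernel.shmmax": "68719476736",
}
drift = audit_kernel_params(baseline, current)  # ["net.ipv4.tcp_syncookies"]
```

Running such a check from a configuration-management agent and remediating any drift on every run is what makes the hardening auditable and reproducible across a fleet.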

DOI: https://doi.org/10.5281/zenodo.15852624


Blockchain-Based Insurance Claim


Authors: Priyanka Gupta, Hardik Gupta, Anurag Tomar, Piyush Raghav

Abstract: This project aims to revolutionize the insurance claims process by leveraging blockchain technology and smart contracts to address inefficiencies such as fraud, human errors, security risks, and high administrative costs. Traditional insurance claim processing is complex, requiring extensive human intervention, multi-domain interactions, and data from multiple sources, making it time-consuming and labor-intensive. By utilizing a private Ethereum blockchain and the Solidity programming language for smart contract development, this framework automates claim verification and settlement, ensuring transactions occur only if all predefined conditions are met. The integration of the Proof of Authority (PoA) consensus algorithm enhances transaction validation, improving security and transparency throughout the process. Additionally, decentralized applications (DApps) facilitate seamless user interactions, while the InterPlanetary File System (IPFS) enables off-chain data storage to maintain accessibility and immutability without overloading the blockchain network. This decentralized system prioritizes trust, transparency, and scalability, allowing for efficient processing of health insurance claims, particularly for prescription drugs, while significantly reducing operational costs. By combining blockchain’s transparency with scalable off-chain storage, this solution transforms the insurance sector, offering a reliable, secure, and cost-effective approach to claim management.
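The "settle only when all predefined conditions are met" rule at the heart of the smart contract can be mimicked in plain Python. The state names and the two required authorities below are assumptions for illustration; the actual contract is written in Solidity and validated under PoA consensus, as the abstract describes:

```python
from dataclasses import dataclass, field

REQUIRED_VALIDATORS = {"insurer", "pharmacy"}  # illustrative PoA authorities

@dataclass
class Claim:
    policy_id: str
    amount: float
    approvals: set = field(default_factory=set)
    status: str = "submitted"

def approve(claim: Claim, validator: str) -> None:
    """Record an approval; settle automatically once every required
    authority has signed off, mirroring smart-contract semantics."""
    if validator not in REQUIRED_VALIDATORS:
        raise ValueError(f"unknown validator: {validator}")
    claim.approvals.add(validator)
    if claim.approvals == REQUIRED_VALIDATORS:
        claim.status = "settled"

claim = Claim(policy_id="POL-001", amount=250.0)
approve(claim, "insurer")   # still "submitted": one approval is not enough
approve(claim, "pharmacy")  # now "settled": all conditions met
```

On-chain, the same conditional check runs identically on every validating node, which is what removes the need for manual claim adjudication.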



Review on Audit-Ready System Builds Using SMF and Puppet


Authors: Kateryna Holub, Oleksandr Kravchuk, Natalia Koval, Yuriy Sydorenko

Abstract: In regulated IT environments, achieving audit-ready system builds is crucial for maintaining compliance, operational integrity, and trust. This review explores how the integration of Solaris Service Management Facility (SMF) and Puppet configuration management enables the creation of infrastructure that is both resilient and verifiable. SMF offers deterministic service lifecycle control, dependency resolution, and fault recovery, while Puppet ensures declarative system provisioning, configuration drift correction, and policy enforcement. Together, they form a robust framework for building and maintaining UNIX systems that meet stringent compliance standards such as HIPAA, SOX, PCI-DSS, and ISO 27001. This article details integration patterns between Puppet and SMF, including automated service registration, state enforcement, and logging strategies that support continuous compliance verification. Real-world use cases from healthcare, finance, and scientific research sectors highlight the scalability and traceability benefits of this approach. Further, the paper addresses challenges in manifest maintenance, performance bottlenecks, and error debugging, offering practical mitigation strategies. Emerging trends such as Policy-as-Code, AIOps integration, and immutable infrastructure are also discussed, illustrating the direction of future-ready, compliance-driven automation. By aligning infrastructure-as-code principles with service-level orchestration, this framework transforms audit-readiness from a reactive task into a continuous, automated operational model.
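A small example of the integration pattern the review describes: Puppet ships a native `smf` service provider, so a manifest can declare the desired state of a Solaris service and service registration and state enforcement become declarative. The service FMRI below is just an example:

```puppet
# Declarative state for an SMF-managed Solaris service; Puppet re-checks and
# corrects drift on every agent run, which is what makes the build auditable
# and reproducible rather than hand-configured.
service { 'svc:/network/ssh:default':
  ensure   => running,
  enable   => true,
  provider => smf,
}
```

Each agent run is logged, so the report trail itself becomes continuous compliance evidence for auditors.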

DOI: https://doi.org/10.5281/zenodo.15848272


Secure Access Control Using CentrifyDC in Heterogeneous Networks


Authors: Olena Shevchenko, Dmytro Bondarenko, Iryna Kovalenko, Andriy Melnyk

Abstract: Modern IT environments increasingly span a mix of Linux, UNIX (Solaris and AIX), and Windows systems, creating significant challenges in managing decentralized user accounts, enforcing strong authentication, and maintaining comprehensive audit trails. Security and compliance frameworks including HIPAA, SOX, and NIST SP 800-53 demand centralized control over identity and privileged access, yet many organizations still rely on fragile local account systems or disparate tools. This fragmented model often leads to inconsistent enforcement, audit gaps, and elevated risk of unauthorized access. This review examines CentrifyDC, an Active Directory bridge that delivers unified, centralized authentication and role-based access control across heterogeneous environments. By integrating with Linux Pluggable Authentication Modules (PAM), Name Service Switch (NSS), SSH, and native Role-Based Access Control (RBAC) for Solaris and AIX, CentrifyDC enables seamless AD-based login, command-level delegation, and multi-factor authentication. Privileged sessions are audited, logged, and stored centrally, bolstering compliance while minimizing reliance on sudo or multiple account stores. Deployment considerations and operational benefits are highlighted through real-world use cases from high-performance research clusters and Solaris-based healthcare infrastructure to AIX servers in government environments. CentrifyDC demonstrates how centralized policy inheritance, zone-based delegation, and secure PAM routines enforce least privilege and simplify administration across large fleets. Performance optimizations including login caching and load balancing are evaluated to ensure scalability. The review concludes with an exploration of future enhancements, such as integration with Azure Active Directory and Okta, AI-driven access risk modeling, and Infrastructure-as-Code pipelines for automated policy deployment. 
These developments promise to extend centralized access control into hybrid cloud environments and DevSecOps workflows. Ultimately, CentrifyDC offers a robust, compliant, and future-ready solution for managing identity and privileged access across diverse operating systems under a unified directory infrastructure.
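The zone-based, role-scoped delegation model can be illustrated with a toy evaluator. Centrify's actual policy objects and PAM integration are far richer; the structures below are hypothetical and exist only to show how per-zone roles map users to permitted commands under least privilege:

```python
def is_authorized(user_zones: dict[str, set], zone: str,
                  role_commands: dict[str, set], command: str) -> bool:
    """True if any role the user holds in this zone grants the command."""
    return any(command in role_commands.get(role, set())
               for role in user_zones.get(zone, set()))

# Hypothetical layout: a DBA may restart the database service in the prod
# zone but holds no roles (hence no rights) in the research-cluster zone.
role_commands = {"dba": {"svcadm restart database"}, "auditor": {"last"}}
user_zones = {"prod": {"dba"}, "research": set()}

allowed = is_authorized(user_zones, "prod", role_commands,
                        "svcadm restart database")   # True
denied = is_authorized(user_zones, "research", role_commands,
                       "svcadm restart database")    # False
```

Scoping rights by zone rather than by host is what lets one directory serve very different fleets without granting blanket root access anywhere.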

DOI: https://doi.org/10.5281/zenodo.15848025


Real-Time Security Compliance Enforcement Using Tripwire in Solaris


Authors: Daria Kuznetsova, Sergey Belov, Anna Fedorova, Viktor Pavlov

Abstract: As Solaris continues to serve mission-critical workloads across healthcare, government, and financial sectors, maintaining system integrity and regulatory compliance has become increasingly complex. Traditional security controls often lack the real-time responsiveness and policy-driven rigor required for hardened UNIX environments. This review explores the application of Tripwire, a widely trusted file integrity monitoring solution, for enforcing real-time security compliance on Solaris platforms. The article delves into how Tripwire enables continuous monitoring of system files, binaries, libraries, and configuration artifacts using cryptographic checksums and customized policies. Through automated scans, deviation detection, and audit-ready reporting, Tripwire ensures alignment with frameworks such as HIPAA, FISMA, and PCI-DSS. The review further examines operational deployments of Tripwire within Solaris Zones, legacy AIX integrations, and hybrid infrastructures. Challenges related to system overhead, false positives, and policy maintenance are also analyzed, with optimization techniques offered to minimize performance impact. Emphasis is placed on Tripwire’s integration with SIEM platforms, the Service Management Facility (SMF), and compliance dashboards, enabling seamless escalation, incident tracking, and forensics. The framework's ability to enforce baseline configurations, detect unauthorized modifications, and generate tamper-proof audit evidence makes it invaluable in regulated UNIX environments. Looking ahead, Tripwire's role is evolving through alignment with AIOps, Compliance-as-Code, and GitOps pipelines, paving the way for dynamic and automated security enforcement. This article concludes by asserting that Tripwire, when strategically configured and integrated, provides a scalable and proactive compliance solution tailored for Solaris-based infrastructures, strengthening operational resilience while satisfying stringent audit requirements.
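The core mechanism, comparing cryptographic checksums of monitored files against a stored baseline, can be sketched with the standard library. This is a toy illustration of the principle, not Tripwire's actual policy language or database format:

```python
import hashlib
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def detect_changes(baseline: dict[str, str], root: Path) -> list[str]:
    """Report monitored files whose current digest deviates from baseline."""
    return [name for name, digest in baseline.items()
            if checksum(root / name) != digest]

# Demo: snapshot a config file, tamper with it, then detect the deviation
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "sshd_config").write_text("PermitRootLogin no\n")
    baseline = {"sshd_config": checksum(root / "sshd_config")}
    (root / "sshd_config").write_text("PermitRootLogin yes\n")  # tampering
    changed = detect_changes(baseline, root)  # ["sshd_config"]
```

Production deployments add signed baseline databases and scheduled scans so the evidence itself is tamper-proof, which is what makes the reports audit-ready.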

DOI: https://doi.org/10.5281/zenodo.15847881


Adaptive Server Hardening in Mission-Critical Biomedical Systems


Authors: Ekaterina Morozova, Ivan Petrov, Natalia Smirnova, Alexey Volkov

Abstract: Biomedical computing environments face a unique set of challenges in securing critical infrastructure while maintaining the high availability, performance, and regulatory compliance required for sensitive healthcare and research workloads. From electronic medical record (EMR) systems and genomics data pipelines to real-time telemedicine platforms, these systems demand adaptive and resilient security architectures. Traditional static hardening techniques, based on fixed baselines, manual patching, and predefined firewall rules, are increasingly insufficient in the face of dynamic threat landscapes, complex workloads, and ever-evolving compliance mandates like HIPAA, HITECH, and 21 CFR Part 11. This review explores the concept of adaptive server hardening, a modern, behavior-driven approach that dynamically adjusts server configurations, access controls, and security policies based on real-time telemetry, system state, and threat intelligence. It examines OS-specific strategies across Red Hat, Solaris, and AIX platforms, highlighting tools like SELinux, SMF, Trusted AIX, ZFS ACLs, and live patching utilities. Key technologies include behavior-based anomaly detection, AI-assisted rule tuning, and integration with SIEM and EDR platforms such as Tripwire, Splunk, and OSSEC. Furthermore, the paper addresses runtime configuration drift, automated remediation, privilege management, and audit automation for compliance readiness. Through detailed technical analysis and real-world case studies, the review demonstrates how adaptive hardening improves security posture, supports continuous compliance, and ensures operational continuity in biomedical settings. It also considers challenges such as overhead management, multi-platform complexity, and tuning of dynamic policies. Finally, the article discusses future trends including autonomous compliance agents, AIOps integration, and adaptive security in hybrid and cloud-based biomedical infrastructures.
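The behavior-driven idea, adjusting policy from real-time telemetry rather than a fixed baseline, reduces to a feedback rule. The parameter names and the single anomaly-score threshold below are invented for illustration; real systems combine many signals and platform-specific controls such as SELinux or SMF:

```python
def adapt_policy(policy: dict, anomaly_score: float,
                 threshold: float = 0.8) -> dict:
    """Return a tightened copy of the policy when telemetry reports an
    elevated anomaly score; otherwise leave the policy unchanged."""
    hardened = dict(policy)
    if anomaly_score > threshold:
        hardened["ssh_max_auth_tries"] = min(policy["ssh_max_auth_tries"], 2)
        hardened["inbound_default"] = "deny"
    return hardened

policy = {"ssh_max_auth_tries": 6, "inbound_default": "allow-listed"}
calm = adapt_policy(policy, anomaly_score=0.1)          # unchanged
under_attack = adapt_policy(policy, anomaly_score=0.95)  # tightened
```

Because the rule is pure (it returns a new policy rather than mutating state), every adjustment can be logged and replayed, which supports the audit-automation goal the review emphasizes.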

DOI: https://doi.org/10.5281/zenodo.15847766


The Concept of ZFS for Long-Term Biomedical Imaging Data Storage


Authors: Chathurika Ranasinghe, Dineth Weerakoon, Malsha Bandara, Thivanka Gunawardana

Abstract: Biomedical imaging systems generate large volumes of sensitive data that must be securely stored, reliably retrieved, and retained for long durations to meet regulatory, clinical, and research demands. ZFS, a high-integrity, copy-on-write file system with integrated volume management, has emerged as a viable solution for long-term imaging storage in healthcare and biomedical research institutions. This review explores the suitability of ZFS for managing medical imaging archives, highlighting its built-in features such as end-to-end checksumming, atomic snapshots, native encryption, and tiered storage capabilities. The paper examines ZFS's alignment with regulatory requirements like HIPAA, GDPR, and FDA 21 CFR Part 11, and discusses how its auditability, snapshot lifecycle management, and disaster recovery features help ensure compliance and data integrity. We delve into ZFS performance tuning for imaging workloads, including optimizations using ARC, L2ARC, SLOG, and record size configuration, which are critical for high-throughput radiology and pathology systems. Integration with PACS, RIS, and AI processing pipelines is reviewed, along with real-world deployments in clinical and research environments. Operational challenges such as resource overhead, secure deletion limitations, and administrative complexity are addressed, alongside emerging trends like object storage extensions, support for storage-class memory, and container-native workflows. Through this comprehensive review, ZFS is positioned not only as a technically robust and scalable imaging storage platform, but also as a strategic foundation for future-proof, compliant biomedical data management.
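Snapshot lifecycle management, one of the compliance features discussed above, ultimately reduces to a retention rule applied to snapshot timestamps. The single-tier 30-day policy below is a deliberate simplification; real archives usually layer daily, weekly, and monthly tiers and then invoke `zfs destroy` on the expired snapshot names:

```python
from datetime import date, timedelta

def expired_snapshots(snapshot_dates: list[date], today: date,
                      keep_days: int = 30) -> list[date]:
    """Snapshots older than the retention window: candidates for pruning."""
    cutoff = today - timedelta(days=keep_days)
    return [d for d in snapshot_dates if d < cutoff]

today = date(2025, 7, 1)
snaps = [date(2025, 5, 1), date(2025, 6, 15), date(2025, 6, 30)]
to_prune = expired_snapshots(snaps, today)  # only the May snapshot
```

Because ZFS snapshots are atomic and read-only, the ones retained by such a policy double as tamper-evident recovery points for audit and disaster-recovery purposes.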

DOI: https://doi.org/10.5281/zenodo.15847617
