IJSRET » July 19, 2025

Daily Archives: July 19, 2025

Uncategorized

Blockchain System _799

Authors: Yashaswini.P, Yathiraj M N

Abstract: This paper provides a comprehensive overview of blockchain technology and explores its application in developing a robust, transparent, and tamper-resistant system to combat the growing issue of counterfeit products in the pharmaceutical industry. Over the past decade, pharmaceutical companies around the world have been grappling with significant challenges in monitoring and tracking their products across the supply chain. These vulnerabilities have created opportunities for counterfeiters to infiltrate the market with fake or substandard medicines, posing serious risks to public health and causing substantial economic losses to legitimate manufacturers. Counterfeit drugs represent a critical global challenge, undermining the integrity of healthcare systems and endangering the lives of millions. These illegitimate products often contain incorrect dosages, harmful ingredients, or no active pharmaceutical ingredients at all, leading to ineffective treatment, prolonged illness, and in some cases, fatal consequences. According to industry statistics, counterfeit drugs are responsible for an estimated $200 billion in annual losses to pharmaceutical companies in the United States alone. Moreover, a World Health Organization (WHO) survey report reveals that in many underdeveloped countries, approximately one out of every ten medicines consumed by patients is counterfeit or of low quality—highlighting the urgent need for a reliable and tamper-proof solution. In response to this pressing issue, our research proposes and implements a blockchain-based drug supply chain management system that leverages the core features of blockchain technology—immutability, decentralization, transparency, and traceability. In conclusion, this research highlights the transformative potential of blockchain technology in securing pharmaceutical supply chains.
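The core property the abstract relies on, immutability through hash linking, can be sketched in a few lines of shell. This is an illustrative toy (the batch IDs and events are invented), not the paper's actual implementation:

```shell
#!/usr/bin/env bash
# Append-only, hash-chained batch ledger. Every record embeds the SHA-256 of
# the previous record, so a retroactive edit anywhere breaks verification.
LEDGER=$(mktemp)

append_record() {   # append_record <batch-id> <event>
  local prev hash
  prev=$(tail -n 1 "$LEDGER" | awk '{print $1}')
  hash=$(printf '%s' "${prev:-GENESIS}|$1|$2" | sha256sum | awk '{print $1}')
  printf '%s %s %s\n' "$hash" "$1" "$2" >> "$LEDGER"
}

verify_chain() {    # recompute every hash; report the first mismatch
  local prev="" expect
  while read -r hash batch event; do
    expect=$(printf '%s' "${prev:-GENESIS}|$batch|$event" | sha256sum | awk '{print $1}')
    [ "$hash" = "$expect" ] || { echo "TAMPERED at $batch"; return 1; }
    prev=$hash
  done < "$LEDGER"
  echo "CHAIN OK"
}

append_record BATCH-001 manufactured
append_record BATCH-001 shipped-to-distributor
append_record BATCH-001 received-at-pharmacy
verify_chain
```

Editing any earlier record invalidates every hash that follows it, which is the tamper-evidence property the proposed supply chain system builds on (a real deployment adds decentralization and consensus on top of this linking).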

DOI:

Assessment Of Pollution Levels Using Biomarkers In Callinectes Sapidus From Estuaries In Rivers State, Nigeria_635

Authors: Doris Ugochi Obinna, Dike Henry Ogbuagu, Enos I. Emereibeole, Chris Chibuzor Ejiogu

Abstract: The increasing anthropogenic pollution in estuarine ecosystems poses a significant threat to aquatic life and ecosystem health. This study assessed pollution levels in selected estuaries of Rivers State, Nigeria, using biomarkers in Callinectes sapidus (blue crab) as indicators of environmental contamination. In situ measurements of selected water quality variables were made at the sampling locations, and 48 female crabs (mean weight 149.20 ± 0.02 g) were harvested for the estimation of biomarker levels. Mean concentrations of Total Petroleum Hydrocarbons (TPHs), Polycyclic Aromatic Hydrocarbons (PAHs), Zn and Cr (Sig. values = 0.000 each), and Cd, Pb, and Fe (Sig. t-values = 0.003, 0.019 and 0.009 respectively) were significantly higher at the impacted than at the reference locations, while those of Monocyclic Aromatic Hydrocarbons (MAHs) and Fe (Sig. t-values = 0.032 and 0.014 respectively) differed seasonally at p<0.05. Though there was no significant difference in accumulations of the heavy metals and hydrocarbons in tissues of the organism, numerical accumulations of Zn (5.73±2.60 µg/g) and TPHs (1.84±1.08 µg/g) were highest in the digestive tissue compared with the other tissues sampled. Mean levels of Lactate Dehydrogenase (LDH), Alanine Aminotransferase (ALT), Aspartate Aminotransferase (AST), Alkaline Phosphatase (ALP) and Malondialdehyde (MDA) (Sig. = 0.000 each) at the OSD locations, and of total proteins (Sig. t-value = 0.030) in the rainy season, were all markedly higher in the organism (p<0.05). Elevated MAHs appeared to induce the production of less ALT (r=-0.584) and AST (r=-0.519), Cr induced the production of less AST (r=-0.513) (p<0.05), while MAHs induced the production of less MDA (r=-0.634) (p<0.01). Lead and PAHs recorded very high pollution indices (240,000 and 790,000) in sediments, while Zn and TPHs recorded high toxicity quotients of 1.59 and 2.83 in the organism.
Allochthonous input of pollutants from petroleum sources into the creek caused biological disruptions, including tissue bioaccumulation and biochemical disruptions in proteins and enzyme activities of C. sapidus; such disruptions are indicative of pollution. Treatment of oily effluents before discharge into the creek is recommended.

DOI: http://doi.org/

Performance Profiling Of Large-Scale Puppet Deployments In UNIX Data Centers

Authors: Santhosh M., Keerthana R., Divya Prasad, Ajay Krishna

Abstract: As enterprise UNIX data centers scale to manage thousands of nodes, the performance of automation frameworks like Puppet becomes critical to ensure consistency, speed, and resilience. Puppet, a leading configuration management tool, plays a pivotal role in implementing infrastructure-as-code across Solaris, AIX, and Linux environments. However, large-scale deployments introduce performance challenges due to the complexity of resource catalogs, variable agent execution times, and infrastructure-induced latency. Performance profiling becomes essential to identify and resolve inefficiencies that affect convergence speed, system reliability, and orchestration throughput. This review explores the key dimensions of profiling Puppet in UNIX data centers, including catalog compilation time, agent runtime, resource evaluation delay, and infrastructure throughput. It outlines available profiling tools such as the Puppet profiler, Facter benchmarking, and external instrumentation using DTrace and perf, as well as real-time logging and observability integrations. By examining performance metrics and common bottlenecks—ranging from plugin synchronization delays to fact resolution issues—this article highlights optimization strategies including manifest refactoring, compile master pools, and External Node Classifier (ENC) tuning. Furthermore, it analyzes real-world deployment scenarios from financial, academic, and hybrid UNIX-cloud environments to contextualize challenges and solutions. The review also contrasts Puppet with other configuration management tools like Ansible and Chef, while addressing limitations such as visibility gaps in custom resources and version-specific regressions. Finally, future directions such as ML-based run prediction and integration with AIOps and observability platforms are proposed to advance performance-aware automation at scale. 
This article aims to provide system architects and automation engineers with practical insights for maintaining high-performing Puppet environments in mission-critical UNIX infrastructures.
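The profiling workflow the abstract describes can be approximated with plain shell tooling: extract per-resource evaluation times from an agent run and rank them. The log lines below are modeled on `puppet agent --test --evaltrace` output; the exact message format varies across Puppet versions, so treat the awk pattern as an assumption to adapt:

```shell
# Sample lines modeled on `puppet agent --test --evaltrace` output (invented
# resources); real runs would append to a log that this function then parses.
cat > /tmp/agent.log <<'EOF'
Info: /Stage[main]/Ntp/Service[ntpd]: Evaluated in 0.42 seconds
Info: /Stage[main]/Base/File[/etc/motd]: Evaluated in 0.03 seconds
Info: /Stage[main]/App/Package[httpd]: Evaluated in 3.10 seconds
EOF

slowest_resources() {   # slowest_resources <log> [count]
  awk '/Evaluated in/ { t = $(NF-1); res = $2; sub(/:$/, "", res); print t, res }' "$1" \
    | sort -rn | head -n "${2:-5}"
}

slowest_resources /tmp/agent.log 3
```

Ranking resources this way surfaces catalog hot spots (here the package install dominates), which is where manifest refactoring effort pays off first.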

DOI: https://doi.org/10.5281/zenodo.16157635

Self-Healing Infrastructure Using Custom Shell Automation In Red Hat

Authors: Meenakshi Sundaram, R. Jayanthi, Pradeep Das, Soundarya G.

Abstract: In modern enterprise environments, ensuring high availability and service continuity has become a mission-critical requirement. Red Hat Enterprise Linux (RHEL), a cornerstone of many IT infrastructures, provides a robust platform for implementing self-healing mechanisms through lightweight and flexible automation. This review explores the concept of self-healing infrastructure with a focus on shell-based automation techniques tailored to RHEL environments. By leveraging native tools such as Bash scripting, systemd, cron jobs, inotify, and Ansible hooks, administrators can design reactive and proactive remediation systems capable of detecting, isolating, and correcting faults without human intervention. The increasing complexity of enterprise deployments, often compounded by limited operational windows and lean support teams, necessitates automation that is both scalable and transparent. Shell scripting remains a powerful ally due to its direct access to system resources, speed, and platform compatibility. Use cases examined in this review include automatic service restarts upon failure, filesystem monitoring and cleanup, dynamic network reconfiguration, and log anomaly detection, all driven by lightweight shell scripts. Additionally, the paper examines how these automation techniques can integrate with broader observability frameworks such as ELK and Prometheus for telemetry-driven decision making. Scalability considerations, security constraints, execution reliability, and the evolution of event-driven remediation are discussed to position shell automation as a foundational element of resilient, self-healing systems. The study concludes by reflecting on emerging directions, such as AI-enhanced automation and Red Hat’s event-driven Ansible, and evaluates the continued relevance of shell scripting in modern DevOps and hybrid cloud architectures.
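The "automatic service restart upon failure" use case reduces to a check-and-remediate primitive. A minimal sketch follows; on RHEL the check would typically be `systemctl is-active <unit>` and the remedy `systemctl restart <unit>`, but a file-based probe stands in here so the sketch runs anywhere:

```shell
# Generic self-healing primitive: run a health check, and if it fails,
# run a remediation command and report what happened.
heal() {   # heal <check-cmd> <remediate-cmd>
  if eval "$1" >/dev/null 2>&1; then
    echo "healthy"
  else
    eval "$2"
    echo "remediated"
  fi
}

SENTINEL=$(mktemp -u)                        # nonexistent path = "dead service"
heal "test -e $SENTINEL" "touch $SENTINEL"   # prints "remediated" and heals
heal "test -e $SENTINEL" "touch $SENTINEL"   # prints "healthy"
```

In practice this loop would be driven by a systemd timer or cron entry, with the remediation logged for the observability pipelines the review discusses.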

DOI: https://doi.org/10.5281/zenodo.16157141

Role-Based Access Control in Multi-Zone Solaris Networks

Authors: Bhavya Iyer, Pradeep Sinha, Krithika Sharma, Anand Joshi

Abstract: Role-Based Access Control (RBAC) is a crucial security model used to manage user access and permissions in complex network architectures. In multi-zone Solaris networks, RBAC plays a key role in ensuring that users only have access to the resources they need based on their designated roles. Solaris zones allow for the isolation of different virtual environments on the same physical machine, providing greater security and operational flexibility. However, managing access control in such segmented environments can be challenging. This paper explores the implementation of RBAC in multi-zone Solaris networks, discussing the configuration of roles and permissions across different zones, the tools available for managing RBAC, and the challenges and benefits of applying this access control model. Best practices for creating, managing, and auditing roles within Solaris zones are also outlined, demonstrating how RBAC enhances security and operational efficiency in multi-zone infrastructures.
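The role-assignment lookup at the heart of RBAC can be illustrated with a small shell sketch. The database format below is modeled loosely on Solaris's `/etc/user_attr` (simplified and invented entries; not an authoritative rendering of the real file syntax):

```shell
# Sample user_attr-style database: users with assigned roles, and a role
# entry carrying a rights profile. Entries are illustrative only.
cat > /tmp/user_attr <<'EOF'
alice::::type=normal;roles=zoneadm
bob::::type=normal;roles=auditrev
zoneadm::::type=role;profiles=Zone Management
EOF

user_has_role() {   # user_has_role <user> <role>
  grep -Eq "^$1:.*roles=([^;]*,)*$2(,|;|$)" /tmp/user_attr
}

user_has_role alice zoneadm && echo "alice may assume zoneadm"
user_has_role bob zoneadm || echo "bob may not assume zoneadm"
```

The same lookup pattern, performed by the OS rather than a script, is what lets per-zone role assignments confine a user's privileges to the zones where the role is actually granted.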

DOI: https://doi.org/10.5281/zenodo.16156315

Redundant Monitoring Strategies Using Sl1 And Solarwinds

Authors: Magesh S., Rithika G., Saravanan N., Kalpana Devi

Abstract: In complex enterprise IT environments, the reliability of monitoring systems is paramount. As businesses increasingly rely on uninterrupted digital services, monitoring tools themselves must be resilient to failure. Traditional single-platform monitoring architectures risk becoming single points of failure, jeopardizing visibility when incidents occur. To mitigate this risk, organizations are turning to redundant monitoring strategies, deploying parallel observability platforms such as SL1 (ScienceLogic) and SolarWinds. These platforms, while functionally overlapping, offer complementary strengths in data collection, event correlation, visualization, and integration, making them well-suited for redundant and failover-ready deployments. This review explores the strategic deployment of SL1 and SolarWinds in active-active and active-passive configurations to ensure continuous visibility into infrastructure performance, network health, and application availability. By using both platforms in tandem, enterprises can cross-validate data, ensure continuity during platform-specific outages, and reinforce the reliability of alerts and notifications. Integration points such as shared collectors, APIs, and ITSM toolchains (e.g., ServiceNow, Jira) allow seamless cooperation between platforms while preserving operational efficiency. The review also covers key areas such as collector redundancy, alert de-duplication, data consistency, and cross-platform correlation, especially in environments supporting heterogeneous systems like UNIX, Windows, and hybrid cloud workloads. Real-world case studies from healthcare, government, and financial sectors are examined to demonstrate the impact of redundant monitoring in mission-critical infrastructures. Furthermore, the article outlines integration with external observability platforms such as Prometheus and ELK, discusses scalability and fault isolation, and assesses future trends in AIOps-enhanced monitoring. 
Ultimately, this review positions SL1 and SolarWinds not as competing solutions but as complementary components in a modern, resilient, and intelligent monitoring architecture.
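The alert de-duplication step the review highlights can be sketched as follows. The pipe-delimited alert shape (host|check|timestamp) is invented for illustration; real SL1 and SolarWinds exports would first be normalized into such a form:

```shell
# Two overlapping alert feeds: the same db01 disk alert is raised by both
# platforms, which is exactly what redundant monitoring produces.
cat > /tmp/sl1.alerts <<'EOF'
db01|disk_full|2025-07-19T10:02
web02|cpu_high|2025-07-19T10:03
EOF
cat > /tmp/solarwinds.alerts <<'EOF'
db01|disk_full|2025-07-19T10:02
app03|svc_down|2025-07-19T10:04
EOF

dedup_alerts() {   # keep one copy of each host|check pair across all feeds
  cat "$@" | sort -t'|' -k1,2 -u | cut -d'|' -f1,2
}

dedup_alerts /tmp/sl1.alerts /tmp/solarwinds.alerts
```

Collapsing on a normalized key before ticketing is what keeps an active-active deployment from paging twice for one incident, while still proving both platforms saw it.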

DOI: https://doi.org/10.5281/zenodo.16156139

Blockchain-Based Supply Chain Management System

Authors: Mahammad Rafeeq, Pavana Kumar H A, Kumarswamy S

Abstract: This study aims to explore the current status, potential applications, and future directions of blockchain technology in supply chain management. A comprehensive literature survey, along with an analytical review, of blockchain-based supply chain research was conducted to better understand the trajectory of related research and shed light on the benefits, issues, and challenges in the blockchain–supply chain paradigm. A selected corpus comprising 106 review articles was analysed to provide a holistic overview of the integration of blockchain and smart contracts into supply chain ecosystems. The findings reveal that blockchain technology has attracted significant attention from researchers, engineers, and industry practitioners due to its potential to revolutionize traditional supply chain operations through decentralized trust, immutable data records, and improved automation. The diverse industrial applications of blockchain span across sectors such as agriculture, pharmaceuticals, automotive, logistics, and food safety, each utilizing blockchain’s capabilities to enhance operational transparency, real-time data sharing, fraud prevention, and regulatory compliance. Four major thematic issues have emerged as pivotal to the future trajectory of blockchain adoption in supply chains: (i) traceability and transparency, which are essential for product authenticity and regulatory adherence; (ii) stakehold.

DOI:

CentrifyDC Authentication Failures: Patterns, Prevention, and Protocols

Authors: Vinay Kulkarni, Sneha Patange, Meera Salgaonkar, Rajat Nair

Abstract: Authentication is a critical component of enterprise security, ensuring that only authorized users gain access to sensitive data and systems. CentrifyDC is an identity and access management solution that integrates with Active Directory (AD) to manage user authentication, offering features like single sign-on (SSO) and role-based access control (RBAC). However, authentication failures in CentrifyDC can arise due to various factors such as incorrect credentials, time synchronization issues, network connectivity problems, and misconfigured protocols. These failures can disrupt business operations and pose security risks. This paper explores the common patterns of authentication failures in CentrifyDC, including their root causes, troubleshooting methods, and prevention strategies. It also discusses key protocols involved in CentrifyDC authentication, such as Kerberos, LDAP, and RADIUS, and highlights best practices for minimizing failures and enhancing system reliability.
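One of the time-synchronization failure patterns named above comes from Kerberos, which CentrifyDC uses for AD authentication: tickets are rejected when client and KDC clocks drift beyond the permitted skew (300 seconds by default in MIT Kerberos). A pre-flight skew check, sketched here with fixed epochs so it stays self-contained, catches this before users see login failures:

```shell
# Flag clock skew beyond the Kerberos default tolerance of 300 seconds.
MAX_SKEW=300

check_skew() {   # check_skew <local-epoch> <reference-epoch>
  local diff=$(( $1 - $2 ))
  [ "$diff" -lt 0 ] && diff=$(( -diff ))
  if [ "$diff" -gt "$MAX_SKEW" ]; then
    echo "FAIL: clock skew ${diff}s exceeds ${MAX_SKEW}s"
    return 1
  fi
  echo "OK: clock skew ${diff}s"
}

# In production the reference epoch would come from an NTP or AD time source;
# the fixed values here are illustrative.
check_skew 1752919200 1752919100           # 100 s apart
check_skew 1752919200 1752918000 || true   # 1200 s apart
```

Running such a check from the monitoring layer turns an opaque "authentication failed" incident into an actionable NTP ticket.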

DOI: https://doi.org/10.5281/zenodo.16155649

Forensic Readiness Using Tcpdump, Wireshark, and Log Analysis

Authors: Shalini Mehra, Pavan Krishnan, Rituja Deshpande, Anil Borkar

Abstract: Forensic readiness is a crucial component of modern cybersecurity, enabling organizations to effectively detect, analyze, and respond to security incidents. In a landscape where cyber threats are becoming increasingly sophisticated, forensic readiness ensures that organizations are prepared to collect and preserve digital evidence in a way that supports investigative processes and legal proceedings. This paper explores the role of network traffic capture tools, such as tcpdump and Wireshark, alongside log analysis, in forensic readiness. Tcpdump, a command-line tool for network packet capture, and Wireshark, a graphical network protocol analyzer, are instrumental in collecting real-time network data and identifying suspicious activities during security incidents. Log analysis plays a complementary role by providing detailed records of system and application events, helping investigators build a comprehensive timeline of the attack. Together, these tools enable organizations to monitor network traffic, correlate system activities, and preserve evidence, ensuring a rapid and efficient response to cyber threats. This paper discusses the features, practical applications, and benefits of using tcpdump, Wireshark, and log analysis in forensic investigations, highlighting their critical role in enhancing cybersecurity defenses and ensuring regulatory compliance.
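The correlation step described above, building a single attack timeline from network captures and system logs, can be sketched in shell. The sample records are invented; in practice the network side would come from `tcpdump -tttt` output or a Wireshark export:

```shell
# Two evidence sources: an auth log and a (simplified) network event log.
cat > /tmp/auth.log <<'EOF'
2025-07-19T10:01:05 sshd failed-password user=root src=10.0.0.9
2025-07-19T10:01:11 sshd accepted-password user=admin src=10.0.0.9
EOF
cat > /tmp/net.log <<'EOF'
2025-07-19T10:01:03 tcp-syn 10.0.0.9 -> 10.0.0.5:22
2025-07-19T10:01:12 tcp-data 10.0.0.9 -> 10.0.0.5:22
EOF

build_timeline() {   # prefix each record with its source, sort by timestamp
  { sed 's/^/AUTH /' /tmp/auth.log; sed 's/^/NET /' /tmp/net.log; } | sort -k2
}

build_timeline
```

The merged view makes the sequence legible at a glance: a connection attempt, a failed root login, a successful admin login, then data transfer, each record still traceable to its original evidence source.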

DOI: https://doi.org/10.5281/zenodo.16154989

Data Protection Strategies with EMC, Hitachi, and NetApp in Unix Infrastructure

Authors: Tahmidul Islam, Sadia Mahjabeen

Abstract: In today's data-driven world, protecting sensitive information and ensuring its integrity has become a core responsibility for organizations, especially as they scale and rely more on complex infrastructures. As businesses transition their workloads to more hybrid and multi-cloud environments, the need for reliable and robust data protection strategies has never been more crucial. Data is increasingly stored on highly distributed systems, and to ensure its security and availability, organizations must adopt advanced solutions that allow them to back up, secure, and recover critical data quickly and efficiently. Unix-based infrastructures are at the heart of many enterprise IT systems, powering everything from database servers to web hosting environments, and these systems need tailored data protection strategies. In this context, EMC (now part of Dell Technologies), Hitachi Vantara, and NetApp are three key players in the data protection space, each offering comprehensive solutions for backup, disaster recovery, and storage management in Unix environments. This paper explores the data protection strategies offered by these vendors, with a focus on their solutions for Unix infrastructures. The discussion will highlight the key features of each vendor's offering, including their approaches to data backup, replication, disaster recovery, and cloud integration. Additionally, the paper will assess the scalability, performance, and flexibility of their solutions, providing organizations with insights into which vendor best suits their operational and technical requirements. As businesses strive to manage their data securely and efficiently in an ever-evolving IT landscape, the choice of the right data protection strategy becomes crucial. This paper aims to help organizations understand how these industry leaders are addressing the challenges of modern data management and protection in Unix-based infrastructures.

DOI: https://doi.org/10.5281/zenodo.16152709