IJSRET » Blog Archives

Backup Optimization Using VSP G900/G1000 Arrays in Healthcare IT

Authors: Nasrin Jahan, Salman Hossain

Abstract: In healthcare organizations, the protection of patient data and operational continuity is paramount. The advent of digital health systems has led to a vast increase in data generation, from Electronic Health Records (EHRs) to medical imaging, creating challenges for data protection. Healthcare IT systems must meet stringent regulatory compliance standards like HIPAA (Health Insurance Portability and Accountability Act), which dictate that data be secure, available, and recoverable in case of failure. With data volumes increasing, ensuring fast and efficient backup processes has become critical, as it directly impacts an organization’s ability to respond to unforeseen disruptions, disasters, or cyberattacks. One of the leading solutions for optimizing backup operations in healthcare IT is the VSP G900/G1000 arrays from Hitachi Vantara. These storage systems are designed to provide high performance, scalability, and reliability in complex environments, offering a combination of features such as data deduplication, replication, cloud integration, and high availability. These features enable healthcare organizations to optimize their backup workflows, reduce storage costs, and ensure rapid data recovery, making them a valuable asset in modern healthcare infrastructures. This paper explores how VSP G900/G1000 arrays optimize backup strategies within healthcare IT environments. It examines how these arrays facilitate data protection, meet regulatory compliance standards, and enhance disaster recovery capabilities. Additionally, the paper discusses the scalability and flexibility of these arrays in handling large healthcare data sets, enabling efficient backup and recovery while ensuring that the IT systems stay operational with minimal downtime. As healthcare organizations increasingly adopt cloud-first strategies, the VSP G900/G1000 arrays are positioned as a vital tool for protecting sensitive data, ensuring business continuity, and optimizing backup processes across the healthcare sector.

DOI: https://doi.org/10.5281/zenodo.16152432

Comparing Snapshot Technologies: TSM vs Commvault in Large-Scale Environments

Authors: Ishara Jayasuriya, Chamika Dissanayake

Abstract: In today’s ever-evolving technological landscape, enterprises are increasingly relying on cloud computing for its scalability, flexibility, and cost-effectiveness. As organizations grow and expand, managing large-scale data environments efficiently becomes a significant challenge. One of the most crucial aspects of managing such environments is data protection, which is why snapshot technologies have become essential components of modern IT infrastructures. Snapshots enable organizations to create point-in-time copies of their systems and data, allowing for quick recovery in case of failure, and are particularly useful for backup, disaster recovery, and ensuring data integrity. Among the most widely used snapshot technologies are IBM Tivoli Storage Manager (TSM) and Commvault, both of which offer advanced backup and snapshot management solutions suited for large-scale environments. IBM Tivoli Storage Manager (TSM) has been an industry leader in backup and recovery for enterprises, with a strong emphasis on data deduplication and incremental backup technologies. On the other hand, Commvault is renowned for its cloud-first approach to data protection, providing comprehensive backup, recovery, and snapshot solutions across on-premises, hybrid, and cloud environments. This paper aims to compare the snapshot capabilities of TSM and Commvault, focusing on their performance, scalability, integration with cloud environments, data integrity features, compliance support, and overall suitability for large-scale IT environments. The comparison will explore the differences in how these two snapshot technologies handle large volumes of data and complex infrastructure setups, such as multi-cloud and hybrid cloud architectures. By examining the strengths and weaknesses of both solutions, this paper will provide organizations with valuable insights to make informed decisions about which snapshot technology best fits their operational needs. For businesses dealing with high-volume workloads, the ability to perform fast, reliable backups and recovery is critical. This analysis will also explore how both TSM and Commvault contribute to efficient disaster recovery, support regulatory compliance, and ensure business continuity in large-scale environments. Ultimately, this comparison will assist enterprises in selecting the most appropriate snapshot technology for their infrastructure, ensuring data protection while optimizing cost, performance, and scalability.

Auto-Remediation of Backup Failures Using Shell-Based Schedulers

Authors: Bhanuka Silva, Sanduni Jayalath

Abstract: In the digital age, data integrity and availability are the cornerstones of business continuity. Organizations rely on backup systems to ensure that their data is always protected and recoverable in the event of a failure. However, backup failures pose a significant risk, especially when they go unnoticed for extended periods, leading to data loss or extended downtime. Traditional methods of managing backup failures often involve manual intervention, which is both time-consuming and prone to human error. The advent of automation provides a solution to these challenges by enabling the auto-remediation of backup failures. This paper explores the concept of auto-remediation of backup failures using shell-based schedulers, particularly cron in Unix-like systems, to detect, address, and resolve backup issues automatically. Auto-remediation refers to the use of automation to detect backup failures and trigger predefined remediation actions, such as restarting failed jobs, adjusting configurations, or alerting system administrators. By combining cron jobs and shell scripts, businesses can automate the backup monitoring process, reducing the time and effort required to manage backup systems manually. The integration of this automation into a larger backup management framework ensures that failures are promptly addressed, minimizing the risks associated with data loss and downtime. This paper highlights how cron-based automation can create efficient and scalable backup systems that are resilient, proactive, and reliable, ultimately supporting business continuity and improving data protection efforts.

DOI: https://doi.org/10.5281/zenodo.16151937
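
The remediation loop the abstract describes (detect a failed job, retry it, escalate if retries fail) can be made concrete with a short scheduler-driven script. The paper's mechanism is cron plus shell scripts; the sketch below expresses the same loop in Python, and every path, command, and status-file convention in it is an illustrative assumption rather than something taken from the paper.

import subprocess
import sys
from pathlib import Path

# Illustrative conventions -- none of these paths or commands come from the paper.
STATUS_FILE = Path("/var/log/backup/last_status")  # backup job writes "OK" or "FAIL" here
BACKUP_CMD = ["/usr/local/bin/run_backup.sh"]      # hypothetical backup job
MAX_RETRIES = 2

def last_backup_failed() -> bool:
    try:
        return STATUS_FILE.read_text().strip() != "OK"
    except FileNotFoundError:
        return True  # no status file at all is treated as a failure

def main() -> int:
    if not last_backup_failed():
        return 0  # nothing to remediate
    for _ in range(MAX_RETRIES):
        if subprocess.run(BACKUP_CMD).returncode == 0:
            return 0  # retry succeeded
    # Retries exhausted: escalate to a human instead of looping forever.
    print("backup still failing after retries", file=sys.stderr)
    return 1

if __name__ == "__main__":
    # Illustrative crontab entry to run the check every 30 minutes:
    #   */30 * * * * /usr/bin/python3 /opt/backup/auto_remediate.py
    sys.exit(main())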

The Effectiveness of Artificial Intelligence Methods in Software Testing: An In-Depth Review

Authors: Dr. Kumarswamy S, Darshith L, Aditya B N, Mohammed Waseem, Nagraj, Nitin Reddy

Abstract: This review explores the effectiveness of Artificial Intelligence (AI) methods in software testing, addressing the high costs and challenges of traditional approaches. It examines key AI and Machine Learning (ML) techniques, their applications in test suite optimization, test case generation, and test case prioritization, highlighting quantitative improvements. The paper also discusses current challenges like data availability and model bias, and outlines future research directions for more adaptable and scalable AI solutions in software engineering.

DOI: https://doi.org/10.5281/zenodo.16149448
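
As one concrete instance of the test case prioritization techniques the review surveys, the sketch below ranks pending tests by a classifier's predicted failure probability so that likely-failing tests run first. The features and data are toy values invented for illustration; they are not drawn from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy history: one row per (test, code change) pair.
# Features: [recent failure rate of the test, lines changed in code it covers]
X = np.array([[0.0, 5], [0.1, 40], [0.5, 10], [0.8, 60], [0.0, 2], [0.3, 30]])
y = np.array([0, 1, 1, 1, 0, 1])  # 1 = the test failed on that change

model = LogisticRegression().fit(X, y)

# Rank pending tests by predicted failure probability, highest first,
# so defects surface as early as possible in the run.
pending = np.array([[0.05, 8], [0.6, 25], [0.2, 55]])
order = np.argsort(-model.predict_proba(pending)[:, 1])
print("run tests in order:", order)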

AI-Driven Anomaly Detection in Nagios and Zabbix Logs

Authors: Anirudh Narayan, Bindu Lakshmi, Haritha Gopal, Vivek Vardhan

Abstract: In the evolving landscape of IT infrastructure monitoring, the volume and velocity of log data generated by tools such as Nagios and Zabbix present significant challenges for timely and accurate anomaly detection. Traditional rule-based approaches, which rely on static thresholds and manual configurations, often fail to capture subtle or emerging issues, leading to alert fatigue or missed incidents. To address these limitations, the integration of artificial intelligence, particularly machine learning, into log-based monitoring has emerged as a transformative solution. By analyzing patterns in historical logs and adapting dynamically to changes in system behavior, AI models ranging from supervised classifiers to unsupervised clustering algorithms and deep learning architectures can enhance the detection of anomalies within Nagios and Zabbix environments. This review examines the application of AI to anomaly detection in logs generated by Nagios and Zabbix, focusing on key log types such as performance metrics, event logs, alert logs, and syslogs. It explores how AI improves detection precision, reduces false positives, and enables earlier incident prediction. The paper also compares data handling mechanisms in both tools and outlines common AI integration pipelines including log preprocessing, model training, and real-time inference. Furthermore, implementation case studies and evaluation metrics are discussed to highlight real-world benefits and performance trade-offs. Ultimately, this article positions AI-driven anomaly detection as a critical enabler for modern observability and proactive IT operations, especially in large-scale or mission-critical infrastructures.
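
To make the unsupervised end of the surveyed techniques concrete, the minimal sketch below fits an isolation forest to stand-in features of the kind a log-preprocessing stage might extract from Nagios or Zabbix performance data. The data is synthetic and the feature choice is an assumption for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic stand-ins for two metrics parsed from monitoring logs,
# e.g., [CPU load, check latency in ms].
normal = rng.normal(loc=[0.3, 20.0], scale=[0.05, 2.0], size=(500, 2))
spikes = rng.normal(loc=[0.9, 80.0], scale=[0.05, 5.0], size=(10, 2))

# Train only on normal behaviour; the model learns what "usual" looks like.
model = IsolationForest(contamination=0.02, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(np.vstack([normal[:5], spikes])))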

Deep Neural Networks: Architecture, Applications, and Challenges

Authors: Mayank Shakkerwal

Abstract: Deep Neural Networks (DNNs) have revolutionized the field of artificial intelligence by enabling machines to learn from large amounts of data with human-level accuracy in tasks such as image recognition, Natural Language Processing (NLP), and game playing. This paper presents a comprehensive overview of the structure and function of DNNs, the major advances in their development, popular architectures, real-world applications, and the key challenges that hinder their widespread adoption. We also highlight future directions in DNN research, including interpretability, efficiency, and ethical implications.

AutoChef AI: Multi-Modal Attention for Visual Ingredient Recognition and Recipe Generation from Food Images

Authors: Bhaskara B, Vinith M, Kumarswamy S

Abstract: Understanding food from images poses a formidable challenge in the domain of recipe retrieval, with impactful applications in smart kitchens, dietary monitoring, and automated cooking assistance. Traditional approaches typically handle ingredient recognition and instruction generation as separate tasks, often resulting in incoherent or disjointed outputs. Here, we introduce AutoChef AI, a multi-modal attention model that seamlessly joins visual and textual information to accurately identify ingredients and generate step-by-step cooking instructions from food images. By incorporating attention mechanisms across both image and text modalities, our model captures the fine-grained features essential for coherent and contextually grounded recipe generation. Experimental results demonstrate that our approach significantly improves both ingredient prediction accuracy and instruction quality across a wide variety of recipes and cuisines.

DOI: https://doi.org/10.5281/zenodo.16522342
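
A minimal sketch of the cross-modal attention idea the abstract describes: recipe-token queries attend over image-patch features, so each generated token can ground itself in visual evidence. All dimensions, shapes, and names below are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

d_model, n_heads = 256, 4
cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

image_patches = torch.randn(1, 49, d_model)  # e.g., a 7x7 grid of visual features
recipe_tokens = torch.randn(1, 12, d_model)  # embedded ingredient/instruction tokens

# Queries come from the text stream; keys and values from the image stream,
# so the attention weights indicate which patches support which token.
fused, attn_weights = cross_attn(recipe_tokens, image_patches, image_patches)
print(fused.shape, attn_weights.shape)  # torch.Size([1, 12, 256]) torch.Size([1, 12, 49])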

Building Cross-Functional Dashboards in Workday: From Time Off Analytics to Compensation Reviews

Authors: Santhosh Kumar Maddineni

Abstract: In today’s data-driven HR environment, organizations require unified insights across functions like time tracking, compensation, and workforce planning. This paper explores how to build cross-functional dashboards in Workday that provide real-time visibility from time off analytics to compensation reviews. Leveraging Workday’s native reporting tools—Worklets, Worksheets, and Prism Analytics—the paper outlines best practices for designing dashboards that integrate diverse datasets while maintaining user security and performance. Key design considerations include data sourcing via calculated fields and custom reports, security group configuration, and intuitive layout strategies for executive and operational users. Use cases include visualizing PTO trends by department, correlating time off with compensation patterns, and enabling managers to make informed decisions during merit reviews. The paper also discusses versioning, stakeholder feedback loops, and mobile responsiveness for field accessibility.

DOI: https://doi.org/10.5281/zenodo.16079810

1.58-Bit Large Language Model (LLM)

Authors: Kumarswamy S, Vidya Laxman Gadekar, Manasi

Abstract: Large Language Models (LLMs) have transformed natural language processing (NLP) by achieving state-of-the-art results across numerous applications. Nevertheless, their computational and memory demands remain a barrier to deployment in resource-constrained environments. In this paper, we describe the design of a 1.58-bit LLM that combines quantization-aware training, low-rank adaptation (LoRA), and memory-efficient techniques such as Flash Attention. The proposed model offers substantial savings in memory footprint and energy consumption while maintaining competitive accuracy. Experimental evaluations on benchmark datasets validate the effectiveness of this approach, demonstrating its applicability to edge computing and other resource-limited deployments.
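
The "1.58-bit" figure refers to ternary weights: with each weight restricted to {-1, 0, +1}, the information per weight is log2(3) ≈ 1.58 bits. The sketch below shows a BitNet-style absmean ternary quantizer as a minimal illustration of the idea; it is not the authors' pipeline, which also involves quantization-aware training and LoRA.

import torch

def quantize_ternary(w: torch.Tensor):
    # Absmean scaling, then snap each weight to {-1, 0, +1}.
    scale = w.abs().mean().clamp(min=1e-8)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale

w = torch.randn(4, 4)
w_q, scale = quantize_ternary(w)
w_hat = w_q * scale  # dequantized weights used in the forward pass
print(w_q)
print("mean abs quantization error:", (w - w_hat).abs().mean().item())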

Blockchain-Enabled Final Seal Verification for Tea Supply Chain Integrity

Authors: Abhijit Kakoty

Abstract: This paper presents a blockchain-based prototype for enhancing traceability and data integrity in the tea supply chain through a Final Seal Verification mechanism. The system integrates Ethereum blockchain (via Ganache and Truffle), Node.js, Solidity smart contracts, and Laravel to ensure the authenticity of critical supply chain events. By hashing and sealing traceability data onto an immutable blockchain, the system can detect tampering attempts and provide customers with a reliable method of batch verification via QR codes.
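
A minimal sketch of the sealing idea: canonicalize a batch record, hash it, store the digest (on chain, in the actual prototype), and recompute the hash at verification time so that any tampering changes the digest. The field names are hypothetical, and this Python sketch stands in for the prototype's Solidity/Ethereum implementation.

import hashlib
import json

def seal_hash(record: dict) -> str:
    # Canonical JSON so identical data always yields an identical digest;
    # this digest is what the smart contract would store as the final seal.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical batch record -- field names are invented for illustration.
batch = {"batch_id": "TEA-0117", "estate": "Example Estate", "graded": "2024-01-12"}
on_chain_seal = seal_hash(batch)  # sealed once at packaging time

# Verification (e.g., after a customer scans the batch QR code):
tampered = dict(batch, graded="2024-01-05")
print(seal_hash(batch) == on_chain_seal)     # True  -> record verified
print(seal_hash(tampered) == on_chain_seal)  # False -> tampering detected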
