IJSRET » Blog Archives


IJSRET Volume 4 Issue 1, January-2018


Chaotic Function Based Data Hiding Approach at Least Significant Bit Positions

Authors: Dilip Kumar Mishra, P.G.Scholar, Sriram Yadav, A.P. & Head

Abstract: With the increase in digital media transfer, modification of images has become very easy, raising a major issue of ownership, since copying and redistribution are effortless. This paper addresses the ownership problem by embedding digital data together with encryption. In this work, embedding is performed by applying the Arnold's Cat Map algorithm to randomize pixel positions; robustness is then provided by the AES algorithm. Finally, a spatial technique embeds the digital data in the encrypted image. By embedding in the LSB portion of the pixels, this work is robust against various attacks. Experiments are performed on a real image dataset, and the evaluation parameter values show that the approach maintains SNR and PSNR while keeping the hidden data highly robust.
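
To make the pipeline concrete, here is a minimal sketch (not the authors' code) of the two steps the abstract names: Arnold's Cat Map scrambling followed by LSB embedding, assuming a square grayscale image as a NumPy array and a payload given as a bit string; the intermediate AES step is omitted for brevity.

```python
# Minimal sketch: Arnold's Cat Map scrambling, then LSB embedding.
# Assumes a square N x N grayscale image and a short bit-string payload.
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Scramble an N x N image by repeatedly applying Arnold's Cat Map."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # Classic map: (x, y) -> (x + y, x + 2y) mod N
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def embed_lsb(img, bits):
    """Write payload bits into the least significant bit of each pixel."""
    flat = img.flatten()          # copy, so the input stays untouched
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)
    return flat.reshape(img.shape)

# Example: scramble, then hide an 8-bit payload in the LSB plane.
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(arnold_cat_map(image, iterations=3), "10110010")
```

A useful property here is that the cat map is periodic for a given N, so the receiver can recover the original pixel arrangement by simply iterating the map until the image returns to its initial state.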

A Survey on: Social Feature Based Service Rating Prediction Techniques

Authors: Madhu Rajput, P.G.Scholar, Amit Thakur, A.P.

Abstract: The Internet has made it possible to discover the opinions of others on a wide range of subjects through social media websites, such as review sites and wikis, and through online social networks. Some websites provide user ratings for different products or services but do not recommend anything to the user. This paper focuses on elaborating user rating behavior for particular kinds of services. Techniques developed by various researchers are discussed along with their requirements, and some features inherent to social networks, with whose help the prediction percentage may be increased, are also detailed.

A Survey on Different Features and Techniques for Web Service Prediction

Authors: Sonika Baisakhiya, P.G.Scholar, A.P. Jayshree Boaddh, Prof. Durgesh Wadbude.

Abstract: Quality of Service (QoS) assurance is an important factor in service recommendation. Web services that have never been used before by a user have unknown QoS values, and accurate prediction of these unknown values is important for the successful consumption of Web-service-dependent applications. Collaborative filtering is the most broadly accepted technique for predicting unknown QoS values, as it is well suited to predicting missing values; however, collaborative filtering derives from the processing of subjective data. In this paper, we describe various collaborative filtering and QoS rating techniques applied to web service mining and address various collaborative filtering problems.
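
As a baseline illustration of the memory-based collaborative filtering this survey covers, the sketch below (a toy example, not any surveyed paper's exact method) predicts a missing QoS entry from the most similar users via Pearson correlation over co-observed services; the matrix values are invented.

```python
# Illustrative user-based collaborative filtering for missing QoS values:
# Pearson similarity between users, then a similarity-weighted average.
import numpy as np

def predict_qos(qos, user, service, k=2):
    """Predict qos[user, service]; missing entries are np.nan."""
    target = qos[user]
    sims = []
    for u in range(qos.shape[0]):
        if u == user or np.isnan(qos[u, service]):
            continue
        mask = ~np.isnan(target) & ~np.isnan(qos[u])
        if mask.sum() < 2:
            continue  # need at least two co-observed services
        sim = np.corrcoef(target[mask], qos[u][mask])[0, 1]
        if not np.isnan(sim) and sim > 0:
            sims.append((sim, qos[u, service]))
    top = sorted(sims, reverse=True)[:k]
    if not top:
        return float("nan")
    weights = np.array([s for s, _ in top])
    values = np.array([v for _, v in top])
    return float(weights @ values / weights.sum())

# Rows are users, columns are services (e.g., response times in ms).
matrix = np.array([[120.0, 300.0, np.nan],
                   [110.0, 290.0, 80.0],
                   [400.0, 310.0, 95.0]])
print(predict_qos(matrix, user=0, service=2))  # close to 80.0
```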

Internet of Things Based Smart Home Automation

Authors: Harshal S. Bhosale, Mrs. V. R. Palundarkar, Priyesh S. Surve, Rahul B. Biswas

Abstract: Home automation comprises conveniences installed and designed to perform chores in your living place. Smart homes are often referred to as intelligent homes because they perform services that become part of our lives; many automated systems silently perform their jobs unnoticed, which is automation at its best. We live in an exciting time where more and more everyday items, "things", are becoming smart: "things" have sensors, can communicate with other "things", and can provide control over more "things". The Internet of Things (IoT) is upon us in a huge way, and people are rapidly inventing new gadgets that enhance our lives. The price of microcontrollers with the ability to talk over a network keeps dropping, so developers can now tinker and build things inexpensively. IoT-based home automation can be achieved using the low-cost ESP8266 ESPino ESP-12 WiFi module, an AVR controller, and relays.
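
For flavor, here is a hedged MicroPython sketch of the kind of ESP8266 setup the abstract names; the GPIO pin, Wi-Fi credentials, and port are placeholders, and a real deployment would add error handling and authentication.

```python
# MicroPython sketch for an ESP8266 relay switch (placeholder values):
# join the LAN, then toggle a relay when a TCP client sends "on"/"off".
import network, socket
from machine import Pin

relay = Pin(5, Pin.OUT)                 # assume GPIO5 drives the relay

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("home-ssid", "password")   # placeholder credentials
while not wlan.isconnected():
    pass                                # busy-wait is fine for a sketch

server = socket.socket()
server.bind(("0.0.0.0", 8080))
server.listen(1)
while True:
    client, _ = server.accept()
    cmd = client.recv(16).decode().strip()
    relay.value(1 if cmd == "on" else 0)
    client.send(b"OK\n")
    client.close()
```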

Review Article on Basic ADC Design and Issues of Old Algorithms

Authors: M. Tech. Madhusudan Singh Solanki, Associate Prof. Priyanshu Pandey

Abstract: Analog-to-digital converters (ADCs) are key design blocks and are presently employed in many application fields to enhance digital systems, which achieve better performance than analog solutions. With the rapid progress of CMOS fabrication technology, more signal processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher reconfigurability. This widespread use gives great importance to design activity, which nowadays contributes largely to the production cost of integrated circuit devices, and has recently generated a strong demand for low-power, low-voltage ADCs that can be realized in a standard deep-submicron CMOS technology. Examples of ADC applications can be found in data acquisition systems, measurement systems, and digital communication systems, as well as in imaging and instrumentation systems. Consequently, this work considers all of these parameters; improving the related performance may significantly reduce the industrial cost of the ADC manufacturing process and greatly improve resolution, design, and power consumption. This paper presents a 4-bit pipeline ADC with low power dissipation implemented in 0.18 µm CMOS technology with a power supply of 1.2 V.

A Survey on Various Techniques and Characteristics of Text Document Fetching

Authors: M. Tech. Scholar Rajeev Kumar, Prof. Durgesh Wadbude, A.P. Jayshree Boaddh

Abstract: Search engines are the major breakthrough on the web for retrieving information, but the list of retrieved documents contains a high percentage of duplicate and near-duplicate results, so there is a need to improve the quality of search results. In this paper, a study of text document retrieval is carried out covering various fetching techniques and their implementations. Different features for text document retrieval are explained in detail along with their requirements, as features vary with the kind of text analysis. The paper also briefly covers different evaluation parameters for studying and comparing relevant-document techniques.
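
One common term-feature baseline in this space is TF-IDF weighting with cosine similarity; the sketch below (illustrative only, using scikit-learn and made-up documents) ranks documents against a user query.

```python
# TF-IDF document retrieval sketch: vectorize a toy corpus, then rank
# documents by cosine similarity against the user's query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "hockey world cup final debate",
    "image retrieval with color features",
    "ranking web pages by link structure",
]
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(docs)

query_vector = vectorizer.transform(["hockey world cup"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()
# Highest-scoring documents are the most relevant to the query.
for rank, i in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[i]), 3), docs[i])
```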

A Rob-frugal Based Data Distribution Approach by Utilizing Frequent Pattern Tree Rules

Authors: M. Tech. Scholar Ankit Saini, A.P. Jayshree Boaddh, Prof. Durgesh Wadbude

Abstract: Information sharing among organizations is a common activity in several areas such as business development and marketing. A portion of the sensitive rules that should be kept private may be revealed, and such disclosure of sensitive patterns may affect the profits of the organization that owns the data. Consequently, the sensitive rules must be hidden before sharing the information. In this paper, to provide secure data sharing, the sensitive rules, found by a frequent pattern tree, are perturbed first. The sensitive set of rules is perturbed by substitution with rob-frugal mixing; this type of substitution reduces the risk and increases the utility of the dataset compared with other methods. Analysis is done on a genuine dataset, and the results demonstrate that the proposed work is better than various previous approaches on the basis of the evaluation parameters.
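
The sketch below illustrates the general sanitization idea only (it is not the paper's rob-frugal algorithm): count the support of a sensitive item pair, then substitute one of its items with a decoy in just enough transactions to push the support below the mining threshold, preserving transaction length and thus some utility. The items and threshold are invented.

```python
# Toy rule-hiding by item substitution: lower the support of a
# sensitive pair below min_support without deleting transactions.
from itertools import combinations
from collections import Counter

def pair_support(transactions):
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return counts

transactions = [
    {"bread", "milk"}, {"bread", "milk", "eggs"},
    {"bread", "milk"}, {"milk", "eggs"},
]
sensitive = ("bread", "milk")   # pattern the data owner wants hidden
min_support = 2
decoy = "rice"                  # substitute item (assumed available)

support = pair_support(transactions)[sensitive]
for t in transactions:
    if support < min_support:
        break
    if sensitive[0] in t and sensitive[1] in t:
        t.remove(sensitive[0])  # substitution, not deletion, keeps the
        t.add(decoy)            # transaction length and some utility
        support -= 1

print(transactions, pair_support(transactions)[sensitive])
```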

An Implementation of High Speed On-Chip Monitoring Circuit By Using SAR Based ADC Design

Authors: M. Tech. Madhusudan Singh Solanki, Associate Prof. Priyanshu Pandey

Abstract: A 16-bit pipelined analog-to-digital converter (ADC) is designed in this work. The pipelined architecture realizes both high speed and high resolution, and the pipeline ADC is used to reduce some of the complexities of a flash ADC. The calibration schemes of a pipelined ADC limit its absolute and relative accuracy. Deviations in residue amplifier gain result from the low intrinsic gain of transistors, and mismatch between the 1 pF capacitors results in both residue amplifier gain deviations and DAC nonlinearity in a pipelined ADC. To image the world, a low-power CMOS image sensor array is required in the vision processor; the image sensor array is typically formed from photodiodes and an analog-to-digital converter (ADC), and to achieve low-power acquisition, a low-power mid-resolution ADC is necessary. Digital correction also allows the use of very low-power dynamic comparators. The multiplying D/A converters (MDACs) utilize a modified folded dynamic amplifier, which enables the use of dynamic amplifiers in a pipeline ADC; in addition, the dynamic amplifier offers both clock scalability and high-speed operation even with a scaled supply voltage. Using the above techniques, a 16-bit prototype ADC achieves a conversion rate of 160 MS/s with a supply voltage of 0.55 V. The combination of interpolated pipeline architecture and dynamic residue amplifiers therefore demonstrates the feasibility of ultra-low-voltage, high-speed analog circuit design. To implement the whole system, a low-power, small-size capacitance-sensing readout circuit is required, integrated together with the back-end low-power current-mode ADC on the same chip. The low-power current-mode ADC has been designed and fabricated in TSMC 0.18 µm CMOS technology. In simulation, the power consumption of the 6-bit ADC was 6 µW with a power supply of 0.65 V.

The Impact Of AI-based Workload Schedulers On Energy-efficient Data Centers

Authors: Arjun Prasad

Abstract: Artificial intelligence (AI) has emerged as a transformative force across numerous technological domains, with its impact acutely felt in the design and operation of modern data centers. As the demand for cloud services, big data analytics, and internet-based applications surges, data centers have grown exponentially in size and complexity, concurrently escalating their energy consumption. Addressing energy efficiency within these large-scale computing infrastructures is paramount not only from an operational cost perspective but also for environmental sustainability. AI-based workload schedulers have been increasingly adopted as innovative solutions to optimize resource utilization and curtail energy wastage. These intelligent schedulers leverage machine learning algorithms, predictive analytics, and real-time monitoring to dynamically allocate workloads based on energy profiles, cooling capacities, and computing requirements. The integration of AI fosters adaptive scheduling strategies that can respond to fluctuating workloads, minimize idle hardware, and optimize server usage, thereby enhancing energy efficiency. This article comprehensively explores the multifaceted impact of AI-driven workload scheduling on the operation of energy-efficient data centers. It delves into state-of-the-art AI scheduling techniques, mechanisms for workload prediction, energy consumption modeling, and the synergies between hardware infrastructure and intelligent scheduling systems. Furthermore, the article discusses challenges such as scalability, algorithmic complexity, and integration with existing data center management frameworks. By synthesizing contemporary research findings and industry practices, this work aims to provide a detailed understanding of how AI can revolutionize energy management in data centers, ultimately contributing to reduced carbon footprints and sustainable growth in the digital era.
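
As a concrete, deliberately simplified illustration of the scheduling problem the article discusses, the sketch below greedily packs jobs onto the most energy-efficient servers first so that the remainder can be powered down; production AI schedulers layer workload forecasting and learned policies on top of heuristics like this. All names and numbers are invented.

```python
# Toy energy-aware placement: fill efficient servers first, power down
# whatever stays idle.
def schedule(jobs, servers):
    """jobs: list of CPU demands; servers: name -> (capacity,
    watts_per_unit). Returns a placement and the servers left idle."""
    order = sorted(servers, key=lambda s: servers[s][1])  # efficient first
    free = {s: servers[s][0] for s in order}
    placement = {}
    # job_id indexes the demands sorted largest-first (best-fit-ish).
    for job_id, demand in enumerate(sorted(jobs, reverse=True)):
        for s in order:
            if free[s] >= demand:
                free[s] -= demand
                placement[job_id] = s
                break
    idle = [s for s in order if free[s] == servers[s][0]]
    return placement, idle

jobs = [4, 2, 2, 1]
servers = {"a": (8, 10.0), "b": (8, 14.0), "c": (8, 20.0)}
placement, idle = schedule(jobs, servers)
print(placement, "power down:", idle)   # server "c" stays idle
```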

DOI: http://doi.org/10.5281/zenodo.17777795

The Impact Of AI On Predictive Performance Tuning In Cloud Computing Environments

Authors: Ashok Kumar

Abstract: Artificial Intelligence (AI) has revolutionized predictive performance tuning in cloud computing environments, offering significant advancements in resource allocation, fault detection, and autonomic optimization. In an era marked by increasing computational complexity, unpredictable traffic patterns, and heightened demands for availability, integrating AI into cloud operations enables proactive identification and mitigation of latency, bottlenecks, and system inefficiencies. This abstract provides a concise overview of how AI-driven techniques—such as machine learning models, deep neural networks, and reinforcement learning algorithms—have become indispensable for predictive analytics, facilitating dynamic resource scaling, workload balancing, and anomaly detection. AI systems leverage vast datasets generated by cloud infrastructures to uncover hidden patterns, optimize service level agreements (SLAs), and deliver high-performance computing with reduced costs and improved reliability. Challenges remain, especially regarding model interpretability, real-time adaptability, and ethical deployment. Nevertheless, the synergistic evolution of AI and cloud computing stands poised to redefine best practices in predictive performance tuning, fostering new paradigms of automation, resilience, and intelligence in the digital ecosystem.
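
A minimal sketch of the predictive loop described above, under stated assumptions (a linear trend model and invented traffic numbers): forecast the near-future request rate and provision instances ahead of demand. Real systems would substitute richer learned models, but the control flow is the same.

```python
# Predictive autoscaling sketch: fit a trend to recent request rates,
# forecast ahead, and size the fleet for the forecast plus headroom.
import numpy as np

history = np.array([210, 230, 260, 300, 350, 410])  # requests/s per minute
minutes = np.arange(len(history))

# Least-squares linear trend; LSTMs or RL policies would slot in here.
slope, intercept = np.polyfit(minutes, history, deg=1)
forecast = slope * (len(history) + 5) + intercept    # 5 minutes ahead

capacity_per_instance = 100.0                        # requests/s (assumed)
headroom = 1.2                                       # 20% safety margin
instances = int(np.ceil(forecast * headroom / capacity_per_instance))
print(f"forecast={forecast:.0f} req/s -> provision {instances} instances")
```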

DOI: http://doi.org/10.5281/zenodo.17777827

The Role Of AI And Automation In Adaptive Unix And Linux System Governance

Authors: Ramesh L. Subedi

Abstract: The integration of Artificial Intelligence (AI) and automation into Unix and Linux system governance has revolutionized how system administrators manage, monitor, and optimize infrastructure. These traditionally command-driven systems are now empowered with intelligent tools capable of adaptive learning, self-optimization, and proactive management. AI enhances performance monitoring, predictive maintenance, anomaly detection, and resource allocation, while automation streamlines repetitive administrative tasks, reducing human error and improving efficiency. Together, they establish a self-sustaining governance model that adapts dynamically to workloads and evolving cybersecurity challenges. This review explores the foundational concepts of AI integration, automation frameworks, and adaptive governance mechanisms within Unix and Linux environments. Furthermore, it examines their impact on performance, scalability, and compliance, alongside discussing real-world implementations and future perspectives. As enterprises shift towards AI-driven operations, the adaptive governance of Unix and Linux systems emerges as a vital frontier, blending intelligence with reliability to build resilient, autonomous computing infrastructures.
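
One building block of such adaptive governance is metric anomaly detection; the sketch below (illustrative only, with invented load numbers) flags samples that deviate sharply from a rolling window, the kind of signal an automated remediation loop would act on.

```python
# Rolling z-score anomaly detection over a stream of system metrics.
import statistics

def zscore_alerts(samples, window=10, threshold=3.0):
    """Yield (index, value) for samples far outside the recent window."""
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent)
        if stdev > 0 and abs(samples[i] - mean) / stdev > threshold:
            yield i, samples[i]

# e.g., one-minute load averages scraped from /proc/loadavg
load = [0.8, 0.9, 1.0, 0.9, 1.1, 0.8, 1.0, 0.9, 1.1, 1.0, 9.5, 1.0]
for i, v in zscore_alerts(load):
    print(f"anomaly at sample {i}: load={v}")
```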

DOI: http://doi.org/10.5281/zenodo.17840389

Architectural And Workload-Driven Optimization Of SQL Server For High-Performance Enterprise Systems

Authors: Hema Latha Boddupally

Abstract: Enterprise applications increasingly depend on relational database systems to process large volumes of both transactional and analytical workloads while meeting stringent performance, scalability, and availability requirements. Microsoft SQL Server, widely adopted across industries such as finance, healthcare, retail, and manufacturing, provides a comprehensive set of optimization mechanisms that span query processing, cost-based optimization, indexing strategies, storage and I/O layout, and automated tuning facilities. Despite the maturity of these capabilities, many enterprise deployments suffer from suboptimal configurations, poorly designed schemas, stale statistics, and ineffective maintenance practices, which collectively lead to latency bottlenecks, excessive disk and memory utilization, and limited scalability under peak loads. This article presents a systematic study of SQL Server optimization techniques for high-performance enterprise workloads, focusing on core areas including query execution architecture, index and statistics design, table partitioning strategies, and operational maintenance practices such as monitoring and automation. By synthesizing foundational academic research, Microsoft technical whitepapers, and widely adopted industry case studies published between 2000 and 2016, the paper distills practical, experience-driven guidance for designing, tuning, and sustaining SQL Server systems that can reliably operate under high concurrency, large data volumes, and evolving business demands.
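
As one concrete example of workload-driven tuning, the sketch below (assuming a pyodbc connection, a hypothetical server and database, and VIEW SERVER STATE permission) pulls the most CPU-expensive cached queries from SQL Server's documented sys.dm_exec_query_stats DMV so that index and statistics work can target genuine hot spots.

```python
# Pull the top CPU-consuming cached queries from SQL Server's DMVs.
import pyodbc

TOP_QUERIES = """
SELECT TOP 5
       qs.total_worker_time / qs.execution_count AS avg_cpu,
       qs.execution_count,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu DESC;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes;"   # hypothetical server/DB
)
# total_worker_time is reported in microseconds of CPU time.
for avg_cpu, runs, text in conn.cursor().execute(TOP_QUERIES):
    print(f"{avg_cpu:>12} cpu-us/run  x{runs}: {text[:80]!r}")
```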

DOI: http://doi.org/10.5281/zenodo.18042490


Computer Science


Latest research topics for computer science are:

 

Data mining involves the analysis of large amounts of unorganized data in the form of text files, images, tabular data, etc. Further classification is done by working in a specific area of research, such as:

  • Web Mining:

    Here, website-related tasks such as page content optimization can be carried out using the features of the web log, web content, and web structure.

    Web Page Prediction: This comes under web mining, where the web log is used to understand user behavior on the website; page content is also used in this step. Algorithms such as ant colony optimization and Markov models are used for this task (a minimal sketch appears after this list).

    Web Page Ranking: In this work, the various pages of a website are analyzed in order to rank them, using methods such as Google rank, linear rank, PageRank, etc.

  • Text Mining:

    Here, documents are arranged, summarized, or fetched using pattern or term features.

    Content Retrieval / Document Retrieval / Information Retrieval: In this work, text files are either arranged in a specific order or fetched as a list of files matching the user's query.

  • Temporal Mining:

    Here, analysis is done on the basis of timestamps, and various pieces of information are summarized according to their occurrence and causes.

    Event Activity Happening: In this work, events are predicted along with the probability of their occurrence.

    Nature Prediction: Large amounts of information are gathered from satellite images for predicting glacier movements, analyzing galaxies, and so on.
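
As promised above, here is a minimal first-order Markov sketch for web page prediction (illustrative, with invented sessions): transition probabilities are estimated from logged sessions, and the most likely next page is returned.

```python
# First-order Markov model over web-log sessions: count page-to-page
# transitions, then predict the most probable next page.
from collections import Counter, defaultdict

sessions = [
    ["home", "products", "cart"],
    ["home", "products", "reviews"],
    ["home", "about"],
    ["products", "cart", "checkout"],
]

transitions = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def predict_next(page):
    """Return the most probable next page and its probability."""
    counts = transitions[page]
    total = sum(counts.values())
    nxt, n = counts.most_common(1)[0]
    return nxt, n / total

print(predict_next("products"))   # ('cart', 0.666...)
```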

Datasets available for research in computer science (provided on request) are:

  • Adult Dataset (code 101): for pattern recognition and privacy-preserving mining. Description: it contains 32,560 sessions with 11 fields, where the attributes contain both numeric and textual data.
  • Information Retrieval Dataset (code 102): a collection of 1000 images in ten categories (people, elephants, buses, etc.), with 100 images per category. This dataset is used for image retrieval, fetching, clustering, etc.
  • Text File Classification (code 103): a small collection of 1000 text files containing articles on hockey and different world-cup debates. It is used for text mining tasks such as classifying text files, fetching relevant documents for a user query, creating abstracts/conclusions, disputant identification, etc.
  • Tax Relation Dataset (code 104): contains taxpayer relations with company details, transactions between them, and individual relations. The dataset is used for generating association rules, tax evasion identification, identification of business transactions, etc.
  • Brain Tumor Segmentation Dataset (code 105): 50 images together with their ground truth, giving 100 images in total. Each image has skull, brain, and tumor sections, so scholars can segment it into three classes.
  • JAFFE Images (code 106): very helpful for studying facial expressions; seven emotions are expressed by 11 Japanese women. This dataset is used for image segmentation, expression recognition, face recognition, etc.
  • Chess Dataset (code 107): a collection of 76 items across 3196 transactions, given as the numeric IDs of frequent items purchased from a store. It is used for association rule mining, privacy preserving, FP-tree, etc.
  • Image Watermarking (code 108): a standard set of images in both color and gray formats, in two sizes (256×256 and 512×512); the images include mandrill, lena, etc. It is used for image watermarking, cryptography, encryption, decryption, etc.
  • Character Recognition (code 109): a collection of images for identifying the characters present in the hand gestures used by speech-impaired people. It is used in image processing for shape identification.
  • Object Detection (code 110): a small set of videos in which people perform various actions such as running, walking, jumping, etc.
  • Web Log (code 111): the web log of the famous NASA site, with the complete paths of various random surfers; it contains 10,000 sessions in a text file.
  • Twitter Sentiment (code 112): sentiments of tweets made by users expressing different emotions. It can be used for sentiment analysis, classification, emotion identification, etc.
  • Geospatial Tagging (code 113): images tagged with their geographic coordinates (longitude and latitude values). It is used for geographical location-based learning, etc.
  • SAR Image (code 114): a set of SAR images over different time spans for studying ice/snow precipitation, melting rates, etc. Researchers can use it for segmentation, rate identification, etc.
  • Wisconsin Breast Cancer (code 1014): a numeric dataset used for identifying clusters of data that tend toward breast cancer. It is used for pattern recognition, binary classification testing techniques, etc.

IJSRET Volume 3 Issue 6, November-2017


An Approach for Trusted Computing of Load Balancing in Cloud Environment

Authors: Manoj Kumar Selkare, Vimal Shukla

Abstract: Cloud computing is a novel approach to using computing resources, where the resources may be hardware or software. The facility is delivered as a service over the communication network and is known as the cloud, an abstraction for the complex underlying infrastructure. Cloud computing services involve trusted remote user data and computer software. This paper proposes an approach for efficient load balancing in cloud computing.
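
Since the abstract does not spell out the algorithm, the sketch below shows one plausible baseline that combines the two themes, least-connections dispatch weighted by a per-node trust score; it is an illustration, not the paper's method, and the node names and trust values are invented.

```python
# Trust-weighted least-connections dispatcher: more-trusted nodes
# tolerate more concurrent work.
class Balancer:
    def __init__(self, nodes):
        # nodes: dict name -> trust weight in (0, 1]
        self.trust = nodes
        self.active = {name: 0 for name in nodes}

    def pick(self):
        # Fewest active requests, discounted by trust score.
        node = min(self.active, key=lambda n: self.active[n] / self.trust[n])
        self.active[node] += 1
        return node

    def release(self, node):
        self.active[node] -= 1

lb = Balancer({"vm1": 1.0, "vm2": 0.8, "vm3": 0.5})
print([lb.pick() for _ in range(6)])
# vm1 (fully trusted) absorbs more requests than vm3 (least trusted).
```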

Low Power Three Input XOR Gate For Arithmetic And Logical Operation

Authors: Ms. Poojashree Sahu, Mr. Ashish Raghuwanshi

Abstract: With the advancement of microelectronics technology scaling, the main design objective, low power consumption, can be achieved more easily. For any digital logic design, power consumption depends on the supply voltage, the number of transistors incorporated in the circuit, and their scaling ratios. As CMOS technology supports inversion logic designs, NAND and NOR structures are useful for converting any logic equation into a physical-level design comprising PMOS and NMOS transistors. Logic can similarly be implemented in other styles, with a difference in the number of transistors required. A conventional CMOS design for three-input XOR logic is possible with 10 or more transistors; with the methodology discussed in this paper, the same three-input XOR logic design is realized with 16 transistors. The proposed methodology consists of a transmission gate and the systematic cell design methodology (SCDM). This design consumes 45% (35%) less power than the conventional LPHS-FA and SCDM-based XO10 XOR logic designs in CMOS technology. Since XOR logic is useful for a variety of applications such as data encryption, arithmetic circuits, and binary-to-Gray encoding, it was selected for the design. The design explained in this paper is simulated with 130 nm technology.

Low Read Power Delay Product Based Differential Eight Transistor SRAM cell
Authors: Ms. Jaya Sahu, Mr. Ashish Raghuwanshi

Abstract: SRAM is designed to provide temporary storage for the central processing unit and to replace dynamic memory in systems that require very low power consumption. Low-power SRAM design is a critical aspect, since SRAM takes a large fraction of the total power and die area in high-performance processors. This paper presents an eight-transistor SRAM cell with a smaller read power delay product: thanks to cascading of the pull network, it offers a 28% (74%) smaller read '0' ('1') power delay product than the existing 7T cell. The SRAM cell read and write cycles are characterized at 45 nm technology using a SPICE EDA tool.

A Robust Classification Algorithm for Multiple Types of Datasets

Authors: M. Tech. Scholar Afshan Idrees, Prof. Avinash Sharma

Abstract: With the increase in different Internet services, the number of users is also increasing, but while consuming a service a user may be at risk when sharing data. This work therefore focuses on increasing the security of user data during a classification service. The algorithm provides robustness by encrypting the data before sending it to the server, and the server classifies the data in encrypted form. A further security measure is that, instead of transferring the whole encrypted dataset, features are extracted from the data first, then encrypted and sent to the server for classification. The proposed work successfully classifies all types of user data in the form of text, images, and numeric values.
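
The client-side flow the abstract emphasizes, extract features first and encrypt before anything leaves the machine, can be sketched as follows. Fernet from the `cryptography` package is used here only as a stand-in cipher; unlike the paper's scheme, Fernet does not let the server classify without decrypting, so the server side is out of scope for this sketch.

```python
# Client-side sketch: extract compact features, encrypt them, and only
# then hand the payload to the network.
from cryptography.fernet import Fernet
import json

def extract_features(text):
    # Toy feature vector: length, word count, digit count.
    return [len(text), len(text.split()), sum(c.isdigit() for c in text)]

key = Fernet.generate_key()          # shared with the trusted server
cipher = Fernet(key)

features = extract_features("transfer 500 to account 1234")
payload = cipher.encrypt(json.dumps(features).encode())
# `payload` is what actually crosses the network, never the raw data.
print(payload[:40], cipher.decrypt(payload))
```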

An Unsupervised TLBO Based Drought Prediction By Utilizing Various Features

Authors: M. Tech. Scholar Shikha Ranjan Patel, Prof. Priyanka Verma

Abstract: Agricultural vulnerability generally refers to the degree to which agricultural systems are likely to experience harm due to a stress. In this work, an existing analytical method for quantifying vulnerability was adopted to assess both the magnitude and the spatial pattern of agricultural vulnerability under varying drought conditions. The standardized precipitation index (SPI) was used as a measure of drought severity, and a number of features including the normalized difference vegetation index (NDVI), the vegetation condition index (VCI), and SPI are used for classification. The proposed model uses the Teacher Learning Based Optimization (TLBO) genetic approach to classify the different locations present in a geospatial dataset; with TLBO, prior knowledge is not required. Experimental results show that the proposed work is better than previous work.
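
For the feature side of this pipeline, the sketch below computes the standard Vegetation Condition Index, VCI = 100·(NDVI − NDVImin)/(NDVImax − NDVImin), over an invented NDVI history; the hard-coded drought bands stand in for the class boundaries that TLBO would learn.

```python
# Compute VCI per location from an NDVI history, then band it into
# rough drought classes (illustrative thresholds only).
import numpy as np

# ndvi_history: years x locations (assumed values for illustration)
ndvi_history = np.array([[0.62, 0.41, 0.55],
                         [0.58, 0.35, 0.60],
                         [0.30, 0.38, 0.52]])
ndvi_now = np.array([0.33, 0.36, 0.58])

vci = 100 * (ndvi_now - ndvi_history.min(axis=0)) / (
    ndvi_history.max(axis=0) - ndvi_history.min(axis=0))

# Simple rule-of-thumb banding; TLBO would learn these boundaries.
labels = np.select([vci < 35, vci < 50], ["drought", "stressed"], "normal")
print(vci.round(1), labels)
```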

Optimizing Material Management through Advanced System Integration, Control Bus, and Scalable Architecture
Authors: RamaKrishna Manchana

Abstract: This paper presents an advanced approach to material management by leveraging modern system integration techniques and scalable microservices architecture. The proposed solution addresses the limitations of traditional monolithic systems by introducing microservices, control bus mechanisms, and event-driven designs that enhance operational efficiency and scalability. By utilizing advanced integration patterns, including synchronous and asynchronous services, the system improves real-time data processing and decision-making capabilities. This document outlines the technical architecture, key components, integration designs, and implementation strategies that underpin a robust and adaptable material management system, demonstrating significant improvements in scalability, performance, and responsiveness.
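
The control-bus idea can be reduced to a tiny in-process publish/subscribe dispatcher, sketched below with invented topic and event names; a production system would put a broker such as Kafka or RabbitMQ behind the same interface.

```python
# Minimal in-process control bus: services publish material events to
# named topics and subscribers react independently.
from collections import defaultdict

class ControlBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)

bus = ControlBus()
# Inventory and procurement react independently to the same event.
bus.subscribe("stock.low", lambda e: print("reorder", e["sku"]))
bus.subscribe("stock.low", lambda e: print("alert warehouse", e["site"]))
bus.publish("stock.low", {"sku": "BOLT-M8", "site": "plant-2"})
```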

DOI: 10.61137/ijsret.vol.3.issue6.200

Optimizing Healthcare Data Warehouses for Future Scalability: Big Data and Cloud Strategies
Authors: Srinivasa Chakravarthy Seethala

Abstract: Healthcare organizations generate vast amounts of data, driven by regulatory compliance, patient care needs, and advances in medical technology. Legacy data warehouses, while central to healthcare data management, often struggle to accommodate escalating data volumes, new data types, and real-time processing demands. This paper presents strategic insights into leveraging Big Data and cloud computing to modernize healthcare data warehouses for future scalability. We examine technical approaches, review cloud and Big Data integration techniques, and propose a roadmap for healthcare data scalability, addressing concerns of security, compliance, and data interoperability.

DOI: 10.61137/ijsret.vol.3.issue6.201

Driving Business Decisions With Data: A Practical Framework For Successful Power BI Adoption

Authors: Anjali Thomas

 

Abstract: In today’s competitive business landscape, data-driven decision-making has become a strategic imperative. Organizations are increasingly turning to business intelligence (BI) platforms to transform raw data into actionable insights that guide growth, efficiency, and innovation. Among these platforms, Power BI stands out as a versatile solution that bridges the gap between technical complexity and user accessibility. This review article presents a comprehensive framework for successful Power BI adoption, emphasizing the interplay between governance, integration, scalability, and organizational readiness. The paper begins by outlining the challenges enterprises face when shifting from intuition-based management to data-centric practices, highlighting issues of data silos, inconsistent reporting, and resistance to cultural change. It then explores how Power BI’s architecture—spanning ETL processes, SQL integration, cloud deployment, and security mechanisms—can serve as the backbone for a sustainable BI strategy. The review further examines practical use cases across industries, DevOps-driven automation, and the role of training programs in fostering a self-service analytics culture. Through a critical discussion of opportunities and limitations, the article underscores that successful Power BI adoption requires more than technology; it demands alignment between people, processes, and platforms. By providing a structured roadmap, this study offers organizations a pragmatic guide to embedding Power BI within their BI lifecycle. The conclusion reaffirms that Power BI is not simply a reporting tool but a catalyst for building data-driven cultures that enhance agility, competitiveness, and long-term decision-making excellence.

DOI: https://doi.org/10.5281/zenodo.17275574

 

Creating A Single Source Of Truth: Data Governance With Power BI, SQL, And Effective ETL Processes

Authors: Vivek Sharma

 

Abstract: In contemporary enterprises, data fragmentation across multiple systems, departments, and formats poses significant challenges to decision-making, reporting accuracy, and operational efficiency. A Single Source of Truth (SSOT) addresses these challenges by consolidating heterogeneous data into a centralized, authoritative repository. This review examines the implementation of SSOT using SQL databases, robust ETL pipelines, and Power BI for visualization and governance. It explores the principles of data governance, including data ownership, quality control, role-based security, and regulatory compliance, emphasizing their critical role in maintaining data integrity and trustworthiness. The review also details best practices for relational database design, performance optimization, and ETL automation to ensure timely and accurate data delivery. Case studies across healthcare, financial services, and retail illustrate practical applications, demonstrating improved reporting efficiency, operational responsiveness, and decision-making capabilities. Furthermore, the integration of SSOT across enterprise workflows, combined with monitoring, audit trails, and automated alerts, underscores the value of a governed, centralized data ecosystem. The article highlights current challenges, including system complexity, adoption barriers, and legacy integration, and offers strategies for mitigation. Looking forward, emerging trends such as cloud-native architectures, real-time streaming, AI-enhanced analytics, and hybrid or federated data models suggest new avenues for enhancing SSOT utility and scalability. By providing a comprehensive framework, this review underscores the strategic, operational, and compliance benefits of SSOT, positioning it as a cornerstone for modern, data-driven enterprises seeking reliability, agility, and insight-driven decision-making.
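
A compact, assumption-heavy ETL sketch of the SSOT pattern described above: sqlite3 stands in for the SQL warehouse, and the sources, table, and canonicalization rule are invented for illustration.

```python
# Toy ETL into a single governed table: extract from two source
# formats, normalize customer names, load, and query one SSOT view.
import sqlite3

crm_rows = [{"Cust": "Acme", "Rev": "1200.50"}]          # source A
erp_rows = [("Acme Corp", 800.0), ("Beta LLC", 430.0)]   # source B

def normalize(name):
    # One canonicalization rule; real governance adds many more.
    return name.removesuffix(" Corp").removesuffix(" LLC").strip().lower()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revenue (customer TEXT, amount REAL)")

records = [(normalize(r["Cust"]), float(r["Rev"])) for r in crm_rows]
records += [(normalize(n), a) for n, a in erp_rows]
conn.executemany("INSERT INTO revenue VALUES (?, ?)", records)

# The SSOT view every dashboard reads from:
for row in conn.execute(
        "SELECT customer, SUM(amount) FROM revenue GROUP BY customer"):
    print(row)
```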

DOI: https://doi.org/10.5281/zenodo.17275623

 

Tableau’s Secret Sauce: Leveraging RHEL And Centos For High-Performance Data Visualization

Authors: Kavya Menon

 

Abstract: Modern enterprises increasingly rely on business intelligence (BI) platforms to transform raw data into actionable insights. Tableau, as a leading BI tool, offers sophisticated visualization, analytics, and reporting capabilities. However, the underlying operating environment significantly impacts performance, scalability, security, and cost efficiency. This review explores the strategic advantages of deploying Tableau on Linux-based systems, specifically Red Hat Enterprise Linux (RHEL) and CentOS, for enterprise-grade BI implementations. It examines the role of Linux in enhancing system stability, providing robust security frameworks, supporting modular and automated workflows, and enabling high availability and scalability. The article analyzes data integration strategies, ETL pipelines, and dashboard optimization practices tailored to Linux environments, emphasizing both operational efficiency and user experience. Case studies across healthcare, finance, and retail illustrate real-world applications, demonstrating how Linux-based Tableau deployments support secure, high-performance analytics, regulatory compliance, and business agility. Furthermore, the review addresses monitoring, maintenance, and performance tuning, highlighting best practices for sustained system reliability. Future trends, including AI integration, real-time streaming, hybrid cloud architectures, and advanced automation, are discussed to illustrate the evolving landscape of enterprise BI. By combining Tableau’s visualization capabilities with Linux’s reliability and flexibility, organizations can achieve cost-effective, scalable, and secure BI solutions. This article underscores the importance of selecting an appropriate operating environment to maximize Tableau’s potential and provides a comprehensive guide for IT professionals, analysts, and business leaders seeking to optimize their BI infrastructure.

DOI: https://doi.org/10.5281/zenodo.17275816

 

Migrating Legacy Information Management Systems To AWS And GCP: Challenges, Hybrid Strategies, And A Dual-Cloud Readiness Playbook

Authors: Sudhir Vishnubhatla

Abstract: Legacy Information Management Systems (IMS) remain central to operations in banking, healthcare, public sector, and media, yet their monolithic design, proprietary data formats, and brittle integrations have become barriers to agility and intelligent analytics. Earlier literature identified the persistent costs and risks of legacy IMS and proposed incremental modernization through wrapping, service extraction, and reengineering, while empirical studies in the early 2010s established the feasibility and business value of infrastructure-as-a-service (IaaS) rehosting. As of late 2017, however, the migration decision space has widened: enterprises are not merely choosing whether to move to the cloud, but how to distribute workloads across multiple hyperscalers, primarily Amazon Web Services (AWS) and Google Cloud Platform (GCP). This article synthesizes more than a decade of academic and industrial work on legacy migration and proposes a Dual-Cloud Readiness Playbook tailored to IMS modernization. The playbook comprises a readiness scorecard, a five-phase migration lifecycle, and a layered hybrid architecture that balances compliance, cost, and capability while reducing vendor lock-in. The result is a pragmatic framework that aligns with the realities of petabyte-scale content archives, metadata-heavy workflows, and emerging regulatory constraints, offering a credible path from monolithic legacy platforms to modern, cloud-native information management.

DOI: https://zenodo.org/records/17298069

The Impact Of Strategic Stress Management On Employee Retention In High-Pressure Global Service Sectors

Authors: Dipikaben Solanki

Abstract: The global service industry has been under psychological strain due to economic unpredictability, technological growth, and heightened performance standards. High-pressure sectors such as banking, information technology, and healthcare have reported cases of burnout and high employee turnover, posing serious organisational and economic problems. This study has reviewed how strategic management of stress influences the retention of employees in these industries using a comparative cross-national study. Assuming an interpretivist and inductive philosophical approach, the research has employed secondary qualitative data in order to draw comparisons among the prevalence of stress, turnover patterns, and organisational reactions to these patterns in various national settings. The results have shown a close correlation between stress at work and turnover: emotional exhaustion, heavy workload, and lack of managerial support have decreased organisational commitment and increased intentions to leave. Nevertheless, organised stress management solutions, such as supportive leadership behaviour and workload modification, have resulted in a positive change in short-term retention. Cross-national differences have also brought out the fact that organisational design, economic stress, and sectoral characteristics determine the intensity of burnout and workforce stability. The analysis has found that sustainable management of human resources should include mental health strategies as an organisational priority and not as a reactive welfare intervention. Policy implications include stress audits, the inclusion of mental health performance indicators, and the creation of crisis-responsive HR structures. Strategic stress management has hence gained prominence as an economic stability mechanism and a workforce sustainability tool in the global service industries.

DOI: http://doi.org/10.5281/zenodo.19251151
