
Daily Archives: March 18, 2026

Uncategorized

Robust Digital Foundation Architectures Supporting Enterprise Java And Spring Engineering

Authors: Ramani Teegala

Abstract: This research examines the growing need for structured, reliable, and productivity-focused engineering environments during a period when organizations were transitioning from traditional development models to more collaborative and tool-integrated ecosystems. It investigates the fragmentation of workflows, uneven tool adoption, and the absence of standardized development practices that often constrained Java and Spring engineers, resulting in delivery delays, quality inconsistencies, and extended onboarding cycles. The research aims to establish a systematic approach for designing and implementing developer experience platforms that unify development workflows, automate repetitive tasks, and promote consistency across large engineering teams. Using a mixed-methods approach that integrates qualitative interviews with engineering leaders and quantitative analysis of productivity indicators, the study identifies the platform capabilities that most effectively enhance developer outcomes, including integrated build automation, standardized project templates, curated toolchains, and centralized knowledge resources. The findings show that well-structured developer experience platforms reduce cognitive load, improve code quality, and accelerate delivery cycles while strengthening collaboration and engineering confidence. Academically, the work contributes a structured model for evaluating developer experience within Java and Spring ecosystems and provides a foundational basis for future research in platform-centric software engineering. Strategically, it demonstrates how organizations can leverage such platforms to modernize engineering culture, reinforce architectural alignment, and achieve sustainable delivery performance. The study concludes that formalizing developer experience as an operational discipline holds long-term significance for both industrial practice and academic advancement.

DOI: https://doi.org/10.5281/zenodo.19100556

Published by:
Uncategorized

AI-Augmented Software Quality Engineering: Data-Driven Risk, Prediction, And Continuous Assurance In Modern Software Systems

Authors: Ramani Teegala

Abstract: By April 2021, software quality engineering was under increasing pressure from the combined effects of accelerated release cadences, widespread adoption of microservices, and the operational realities of cloud-native deployments. Banking and other regulated industries faced a particularly acute version of this tension: delivery speed had become a competitive requirement, yet failures carried outsized consequences in customer harm, financial loss, and regulatory exposure. In this context, conventional quality practices such as manual test authoring, rule-based static analysis, and human-driven code review remained necessary but frequently insufficient to scale with system complexity. The problem was not that these practices were ineffective in principle, but that they depended heavily on human attention and stable system boundaries, both of which were increasingly scarce in modern delivery pipelines. AI-augmented software quality refers to the application of machine learning and statistical techniques to improve the effectiveness, coverage, and timeliness of quality controls across the software lifecycle. Unlike general automation, which executes predefined checks, AI-augmentation aims to learn from historical artifacts such as defects, test outcomes, telemetry, and code change patterns in order to anticipate risk and prioritize interventions. By 2021, the software engineering community had accumulated substantial research and industry experience in areas such as defect prediction, anomaly detection in operational metrics, automated test prioritization, and mining software repositories. These approaches did not eliminate the need for engineering judgment or rigorous testing, but they offered a way to focus limited quality effort on the changes, components, and execution paths most likely to fail. Within regulated domains, AI-augmentation for quality must be evaluated through constraints that are distinct from those of consumer software.
Quality signals and decisions often need to be explainable, auditable, and reproducible, especially when they influence production readiness, control effectiveness, or incident management. Data used for training and inference can include sensitive operational and development artifacts, requiring governance controls comparable to those used for security and compliance data. Moreover, quality failures in financial systems tend to cluster around concurrency, distributed consistency, configuration drift, and integration boundaries, meaning that a quality system must reason not only about code-level correctness but also about system-level behavior under partial failure. AI methods applied without regard to these constraints risk producing brittle signals that cannot be operationalized, trusted, or defended during audit and post-incident review. This paper examines AI-augmented software quality as understood and practiced by April 2021, with particular attention to how such techniques integrate into modern delivery pipelines and operational feedback loops. It synthesizes research in software analytics, defect prediction, test optimization, and anomaly detection, and it situates these methods within the architectural trends that shaped software systems from 2000 through 2021. The paper proposes a conceptual model in which AI-driven signals complement, rather than replace, established quality controls, and it describes a layered architecture that connects development artifacts, CI/CD execution data, and production telemetry into a cohesive quality intelligence capability. Special attention is given to the interactions between AI-generated quality signals and governance requirements common in regulated environments, including traceability, change control, and evidence preservation. 
The analysis further explores the practical trade-offs associated with AI-augmented quality, including data quality and labeling challenges, feedback delays, model drift under frequent system change, and the risk of embedding organizational biases into automated decision-making. It evaluates these challenges alongside potential benefits such as earlier risk detection, more efficient test allocation, and improved incident prevention. By framing AI-augmentation as an engineering discipline grounded in measurable outcomes and controlled deployment practices, the paper aims to provide a historically accurate and technically rigorous account of how machine learning techniques can strengthen software quality programs as of April 2021, without relying on later generative AI developments or post-2021 tooling assumptions.
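As a concrete illustration of the defect-prediction idea the abstract describes, the sketch below scores pending code changes by risk and ranks them so that limited review and test effort goes to the riskiest changes first. The feature names and coefficients are invented for illustration only; a real model would learn them from the organization's labeled defect history rather than use hand-picked weights.

```python
import math

def change_risk_score(lines_changed, files_touched, prior_defects, author_commits):
    """Toy logistic risk score for a code change, in the spirit of
    defect-prediction models trained on historical artifacts.
    Coefficients are illustrative, not learned."""
    z = (0.002 * lines_changed        # larger diffs carry more risk
         + 0.15 * files_touched       # scattered changes are harder to review
         + 0.6 * prior_defects        # defect-prone files tend to stay defect-prone
         - 0.01 * min(author_commits, 100))  # experienced authors lower risk, capped
    return 1.0 / (1.0 + math.exp(-z))

def prioritize(changes):
    """Rank pending changes so limited quality effort hits the riskiest first."""
    return sorted(changes, key=lambda c: change_risk_score(**c), reverse=True)
```

In practice such a score would be one signal among several, bounded by the deterministic controls and audit requirements the paper emphasizes, never an automatic gate on its own.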

DOI: https://doi.org/10.5281/zenodo.19100296

Published by:
Uncategorized

Hybrid Knowledge Graph And Vector Similarity Architectures For End-to-End Financial Transaction Journey Analysis

Authors: Ramani Teegala

Abstract: By December 2021, financial institutions were operating transaction platforms whose end-to-end behavior increasingly resembled distributed journeys rather than single-system events. A single customer-initiated action, such as a card purchase, an account-to-account transfer, or a cross-border remittance, could traverse channels, risk engines, limits services, payment rails, settlement systems, dispute workflows, and compliance controls across both internal and external counterparties. This fragmentation created persistent challenges in observability, auditability, and root cause analysis because the underlying data was split across event logs, relational ledgers, message queues, fraud features, and case management systems, each with different identifiers and retention policies. Knowledge graphs matured as a practical representation for integrating heterogeneous entities and relationships, enabling banks to model accounts, customers, devices, merchants, authorizations, postings, reversals, chargebacks, and compliance decisions as a coherent linked structure. In parallel, vector similarity search and embedding-based retrieval became increasingly accessible due to open source libraries and emerging vector store implementations, providing a complementary mechanism for approximate matching over high-dimensional representations of transactions, sequences, and behavioral signatures. This paper examines how knowledge graphs and vector stores can be combined to represent and analyze financial transaction journeys as understood and practicable by December 2021. The analysis frames the problem through regulated banking constraints, including PCI DSS requirements for cardholder data protection, GLBA expectations for safeguarding customer information, SOX-oriented control evidence, Basel Committee guidance on operational risk, and FFIEC-style expectations for resilient operations and audit readiness.
The paper proposes a conceptual model in which a graph-centric system of record captures identity resolution and explicit relationships, while a vector retrieval layer supports similarity-based enrichment, anomaly surfacing, and candidate linking for incomplete or ambiguous journey traces. It evaluates architectural trade-offs related to consistency, latency, governance, and explainability, emphasizing that approximate methods must be bounded by deterministic controls when outcomes influence fraud actions, customer impact, or regulatory reporting.
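The two-layer idea can be sketched in a few lines of Python. The transaction IDs, embeddings, and threshold below are invented for illustration: the vector layer proposes similar journeys by cosine similarity, while the deterministic graph layer marks which candidate links are actually confirmed by an explicit relationship, reflecting the paper's point that approximate matches must be bounded by deterministic controls before driving any action.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical journey embeddings (in practice produced by a trained model).
embeddings = {
    "txn_001": [0.90, 0.10, 0.00],
    "txn_002": [0.85, 0.15, 0.05],
    "txn_003": [0.00, 0.20, 0.95],
}

# Graph layer: explicit, deterministic links (the system of record).
graph_links = {("txn_001", "txn_002")}

def candidate_links(txn_id, threshold=0.9):
    """Vector layer proposes candidates above a similarity threshold;
    the 'confirmed' flag records whether the graph layer backs the link."""
    out = []
    for other, vec in embeddings.items():
        if other == txn_id:
            continue
        sim = cosine(embeddings[txn_id], vec)
        if sim >= threshold:
            confirmed = (txn_id, other) in graph_links or (other, txn_id) in graph_links
            out.append((other, round(sim, 3), confirmed))
    return sorted(out, key=lambda t: -t[1])
```

An unconfirmed high-similarity pair would surface for investigation or enrichment, but only graph-confirmed links would feed fraud actions or regulatory reporting.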

DOI: https://doi.org/10.5281/zenodo.19100103

Published by:
Uncategorized

Design And Simulation Of Asynchronous And Synchronous FIFO Using Verilog HDL

Authors: Swathi.G, Ch. Keerthana, A.Tarun Teja Charry, B.Lokesh Nagavenkata Sai

Abstract: With the rapid development of integrated circuits, synchronous and asynchronous first-in first-out (FIFO) buffers are widely used to solve the problem of data transmission across clock domains. An important problem in asynchronous FIFO architecture is the generation of the empty and full signals, which is the subject of this paper. Synchronizing signals across clock domains and converting binary code into Gray code are crucial to reducing the probability of metastable states. Owing to its high performance, broad applicability, and versatility as a basic memory building block, the FIFO is frequently utilized in FPGA-based projects. However, multi-channel FIFO implementations often suffer from inadequate memory, even when the aggregate capacity is sufficient, because of limited chip resources and flaws in development tools. This paper implements synchronous and asynchronous FIFO applications and proposes the use of FIFOs in system-on-chip memory. The designs are verified using Verilog HDL test benches that generate random data, vary write/read speeds, and assert boundary conditions, confirming the FIFO's ability to maintain data integrity.
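The Gray-code pointer scheme behind the empty/full generation can be modeled behaviorally in a few lines (a Python sketch of the standard technique, not the paper's Verilog; DEPTH and the flag conditions follow the common extra-bit pointer convention):

```python
def bin_to_gray(b):
    """Binary to Gray: adjacent codes differ in exactly one bit, so a pointer
    sampled mid-transition in the other clock domain is off by at most one."""
    return b ^ (b >> 1)

def gray_to_bin(g):
    """Inverse conversion: XOR-fold the Gray code back into binary."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

DEPTH = 8  # power-of-two FIFO depth

def flags(wptr, rptr):
    """Empty/full from (log2(DEPTH)+1)-bit binary pointers: empty when the
    pointers are equal, full when the write pointer leads by exactly DEPTH."""
    empty = wptr == rptr
    full = (wptr - rptr) % (2 * DEPTH) == DEPTH
    return empty, full
```

In the actual hardware the comparisons are performed on the Gray-coded pointers after two-stage synchronization into the opposite clock domain; the binary model above captures the same conditions.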

Published by:
Uncategorized

Next-Generation Satellite Link Budget Analysis for Transcontinental Communications

Authors: Pratikbhai Patel

Abstract: This research paper proposes a detailed link budget analysis framework for next-generation Low Earth Orbit (LEO) satellite constellations designed to facilitate seamless transcontinental communications. The paper examines the technical requirements for supporting high-availability broadband coverage on Earth under dynamic orbital and atmospheric conditions. It combines free-space path loss models, rain fade models, atmospheric attenuation models, orbital mechanics, adaptive modulation, optical inter-satellite link integration, and interference resilience in an International Telecommunication Union (ITU)-compatible framework. The results indicate that dynamic environmental modeling, adaptive transmission methods, and propulsion-enhanced orbital stability are important for maintaining consistent link margins across geographically dispersed areas. The study also highlights the need to incorporate climate-sensitive attenuation forecasting, spectrum agility, and security-oriented interference mitigation in order to make the system more robust. The paper concludes that next-generation LEO constellations can scale to low-latency, resilient transcontinental connectivity when backed by an integrated and dynamic link budget design methodology. This framework offers a technically rigorous basis for future satellite communication systems worldwide.
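The free-space path loss component of such a budget follows directly from the standard formula, and a link margin is then the carrier-to-noise density left over after losses. The sketch below uses the textbook relations; the margin function's parameters and the rain/atmospheric loss inputs are placeholders for the ITU-based models the paper combines, not values from the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def link_margin_db(eirp_dbw, gt_dbk, distance_m, freq_hz,
                   rain_fade_db=0.0, atm_loss_db=0.0, required_cn0_dbhz=80.0):
    """Toy downlink margin: C/N0 = EIRP + G/T - FSPL - extra losses + 228.6
    (Boltzmann's constant as -228.6 dBW/K/Hz), minus the required C/N0."""
    cn0 = (eirp_dbw + gt_dbk - fspl_db(distance_m, freq_hz)
           - rain_fade_db - atm_loss_db + 228.6)
    return cn0 - required_cn0_dbhz
```

For example, a 550 km slant range at 12 GHz gives roughly 169 dB of free-space loss, and doubling the distance costs about 6 dB, which is why constellation geometry and adaptive modulation matter so much to the margin.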

DOI: https://doi.org/10.5281/zenodo.19093210

Published by:
Uncategorized

This Analysis Evaluates The Architectural And Functional Distinctions Between The Procedural Efficiency Of C And The High-level Abstraction Of Python. It Examines How C Provides Low-level Memory Control And Performance, While Python Emphasizes Developer Productivity And Rapid Application Development.

Authors: Sachin Kumar

Abstract: Programming languages are essential tools for developing software and applications. They are generally classified based on their level of abstraction and programming paradigm. Procedural and high-level programming languages represent two important categories in computer science education and practice. This research paper presents an analysis of C, a procedural programming language, and Python, a high-level programming language. The paper explains their basic concepts, features, execution models, memory management techniques, advantages, limitations, and application areas. The objective of this study is to help students and beginners understand the fundamental differences between procedural and high-level languages through the comparison of C and Python, enabling them to select an appropriate language based on learning and application requirements.
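As a minimal illustration of the abstraction gap the paper compares, a word-count task that in C would require an explicit hash table and manual memory management (malloc/free) reduces to a few lines of Python thanks to built-in containers and automatic garbage collection:

```python
from collections import Counter

def word_counts(text):
    """Count word occurrences; Counter handles the hashing, growth,
    and memory management that C code would implement by hand."""
    return Counter(text.lower().split())

counts = word_counts("the quick fox and the lazy dog and the cat")
# counts["the"] is 3, counts["and"] is 2
```

The trade-off runs the other way too: the C version gives the programmer control over layout and allocation that the Python version hides, which is exactly the distinction the paper develops.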

DOI: https://doi.org/10.5281/zenodo.19091567

 

Published by:
Uncategorized

The Dual Role Of Artificial Intelligence In Cyber Security: From Automated Defense To Adversarial Exploitation

Authors: Sachin Kumar

Abstract: The rapid integration of Artificial Intelligence (AI) into the digital landscape has fundamentally transformed the field of cyber security. This paper examines the bidirectional impact of AI: its role as a powerful defensive mechanism capable of real-time threat detection and response, and its emergence as a sophisticated tool for adversarial exploitation. By analyzing Machine Learning (ML) models in intrusion detection, the rise of "Agentic" autonomous security systems, and the threats posed by adversarial ML and deepfakes, this study proposes a framework for AI-resilient security operations. The research concludes that while AI significantly enhances defensive capabilities, it also necessitates a new era of proactive, adaptive security strategies to counter AI-driven threats.

DOI: https://doi.org/10.5281/zenodo.19091390

 

Published by:
Uncategorized

Hardening The Core: Strategic Defense-in-Depth For Windows-Based Domain Controllers

Authors: Sachin Kumar

Abstract: In the modern enterprise landscape, the Active Directory (AD) infrastructure and its constituent Domain Controllers (DCs) represent the "crown jewels" of organizational identity and access management. As the central repository for user credentials, group policies, and authorization data, a compromised Domain Controller grants an adversary virtually unlimited "keys to the kingdom." This paper provides a comprehensive analysis of the threat landscape targeting Windows-based Domain Controllers and proposes a robust, multi-layered defense-in-depth framework. By integrating administrative isolation, host-level hardening, network segmentation, and advanced monitoring, organizations can significantly reduce the attack surface. The study concludes with a strategic roadmap for implementing these defenses without compromising the high availability required for critical identity services.

DOI: https://doi.org/10.5281/zenodo.19091242

 

Published by:
Uncategorized

Big Data Analytics In Healthcare Systems: Architectures, Applications, Challenges, And Future Directions

Authors: Ragul. M, Amna Saliha P I K, Dr. K. Brindha

Abstract: Digital health data grows fast. From patient files to scans, genes, fitness trackers, and billing logs, each piece adds up quickly. Not just more information, but faster flows and messier formats. Yet within that chaos sit chances to do things differently. Hidden patterns start showing when tools can keep pace. Big data analytics steps into that role. Instead of static reports, it offers insights that shift as new facts arrive. Systems built on platforms like Hadoop or Spark handle loads regular software cannot. Cloud storage keeps the doors open for constant updates. Machine learning digs through noise to spot trends. Deep learning maps complex relationships in images or signals. Language parsers decode doctor notes once locked in freeform text. Five areas see clear change. One: predicting illness before symptoms show. Two: guiding long-term conditions day by day. Three: smoothing how hospitals run, from beds to staff shifts. Four: tracking drug effects after release. Five: treatments shaped around individual biology. Evidence comes from sifting 112 studies published between 2015 and 2024. Patterns emerge only when scale meets smart design. Raw power alone does nothing. It takes thoughtful layers, a stack where speed, structure, and smarts connect. Tests on standard collections like MIMIC-III, NIH Chest X-Ray, and eICU show accuracy between 87.6% and 94.1% for core predictions. Yet problems remain: privacy concerns linger just as much as biased models do. Different systems still struggle to work together while rules keep shifting. On top of that, new paths are forming: shared learning setups pop up alongside tools making AI clearer, and analysis at the device level grows more common. For those working in health data, science, or hospital operations, this piece lays out how to understand, evaluate, and integrate big data methods where things never stay simple.

DOI: https://doi.org/10.5281/zenodo.19091089

 

Published by:
Uncategorized

AI-Powered Smart Attendance Management System Using Facial Recognition

Authors: Vikasini E, Daniya U, Mr. P. Jayasheelan, Guide: Dr. P. Jayasheelan

Abstract: Paper registers and card systems for taking student and employee attendance are slow and full of mistakes. People can fake entries. Proxy marking is easy. Schools and workplaces need something more reliable and automatic to track who shows up. So we built an AI-powered smart attendance management system that uses facial recognition to record attendance in real time. The system is written in Python and uses OpenCV and the face_recognition library. A SQLite database stores the structured data. A camera-enabled desktop app captures facial images, matches people against a pre-registered face database, and logs attendance with timestamps. No manual data entry. No easy way to mark attendance for someone else. The graphical interface uses Tkinter. Admins can manage records, run reports, and view attendance history. Tests show the system reaches high recognition accuracy under controlled lighting and substantially reduces administrative work. This research shows how artificial intelligence and computer vision can be applied to institutional management systems to improve efficiency, reliability and accountability.
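The logging side of such a system can be sketched with only the standard-library sqlite3 module. The face-matching step is assumed to have already produced a person_id (that part belongs to OpenCV and the face_recognition library); the table and function names here are illustrative, not taken from the paper's implementation.

```python
import sqlite3
from datetime import datetime

def init_db(conn):
    """Create the attendance table if it does not exist yet."""
    conn.execute("""CREATE TABLE IF NOT EXISTS attendance (
        person_id TEXT NOT NULL,
        timestamp TEXT NOT NULL)""")

def mark_attendance(conn, person_id):
    """Called once the recognizer returns a matched person_id;
    logs a timestamped row so no manual entry is involved."""
    conn.execute("INSERT INTO attendance VALUES (?, ?)",
                 (person_id, datetime.now().isoformat(timespec="seconds")))
    conn.commit()

def attendance_report(conn):
    """Per-person attendance counts for the admin report view."""
    return conn.execute(
        "SELECT person_id, COUNT(*) FROM attendance GROUP BY person_id").fetchall()
```

A real deployment would also deduplicate repeated sightings of the same face within a time window, so one class session produces one row per person.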

DOI: https://doi.org/10.5281/zenodo.19090670

 

Published by: