IJSRET » July 18, 2025



Autochef AI: Multi-Modal Attention for Visual Ingredient Recognition and Recipe Generation from Food Images

Authors: Bhaskara B, Vinith M, Kumarswamy S

Abstract: Understanding food from images poses a significant challenge in the domain of recipe retrieval, with impactful applications in smart kitchens, dietary monitoring, and automated cooking assistance. Traditional approaches typically handle ingredient recognition and instruction generation as separate tasks, often resulting in incoherent or disjointed outputs. Here, we present Autochef AI, a multi-modal attention tool that seamlessly joins visual and textual information to accurately identify ingredients and generate step-by-step cooking instructions from food images. By incorporating attention mechanisms across both image and text modalities, our model captures the fine-grained features essential for coherent and contextually grounded recipe generation. Experimental results demonstrate that our approach significantly improves both ingredient prediction accuracy and instruction quality across a wide variety of recipes and cuisines.

DOI: https://doi.org/10.5281/zenodo.16522342
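As a rough illustration of the cross-modal attention the abstract describes (not the authors' actual model), the sketch below shows image-region features attending over text-token features via scaled dot-product attention; all shapes, names, and dimensions are illustrative assumptions.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: each query (e.g. an image region)
    attends over all keys/values (e.g. ingredient-token embeddings)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ values                          # (n_q, d_v) fused features

rng = np.random.default_rng(0)
image_feats = rng.normal(size=(5, 16))   # 5 image regions, 16-dim each
text_feats = rng.normal(size=(8, 16))    # 8 ingredient tokens, 16-dim each
fused = cross_attention(image_feats, text_feats, text_feats)
```

Each fused row is a weighted mixture of text-token features, grounding an image region in the textual modality; a full model would stack such layers in both directions with learned projections.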

 


Building Cross-Functional Dashboards in Workday: From Time Off Analytics to Compensation Reviews

Authors: Santhosh Kumar Maddineni

Abstract: In today’s data-driven HR environment, organizations require unified insights across functions like time tracking, compensation, and workforce planning. This paper explores how to build cross-functional dashboards in Workday that provide real-time visibility from time off analytics to compensation reviews. Leveraging Workday’s native reporting tools—Worklets, Worksheets, and Prism Analytics—the paper outlines best practices for designing dashboards that integrate diverse datasets while maintaining user security and performance. Key design considerations include data sourcing via calculated fields and custom reports, security group configuration, and intuitive layout strategies for executive and operational users. Use cases include visualizing PTO trends by department, correlating time off with compensation patterns, and enabling managers to make informed decisions during merit reviews. The paper also discusses versioning, stakeholder feedback loops, and mobile responsiveness for field accessibility.

DOI: https://doi.org/10.5281/zenodo.16079810


1.58-Bit Large Language Model (LLM)

Authors: Kumarswamy S, Vidya Laxman Gadekar, Manasi

Abstract: The advancement of Large Language Models (LLMs) has significantly transformed natural language processing (NLP) by achieving state-of-the-art results in multiple domains. Nevertheless, the highly computationally and memory-intensive nature of these models makes their deployment in resource-limited settings challenging. In this paper, we introduce the design of a 1.58-bit precision LLM built with state-of-the-art quantization-aware training and memory-efficient techniques, including low-rank adaptation (LoRA) and Flash Attention. The proposed model offers a substantial cut in memory footprint and energy consumption while maintaining competitive accuracy. Experimental evaluations on benchmark datasets validate the effectiveness of this approach, demonstrating its applicability in edge computing and other resource-sensitive deployments.
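The "1.58-bit" figure comes from ternary weights: each weight takes one of three values {-1, 0, +1}, carrying log2(3) ≈ 1.58 bits of information. A minimal sketch of such quantization with absmean scaling (in the style of BitNet b1.58; the function names and per-tensor scaling are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight matrix to {-1, 0, +1} with a per-tensor scale."""
    scale = np.abs(w).mean() + eps                 # absmean scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1)      # round, then clamp to ternary
    return w_q.astype(np.int8), scale

def dequantize(w_q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate full-precision matrix for inspection."""
    return w_q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_q, scale = ternary_quantize(w)
```

Because the ternary matrix admits multiplication-free matrix products (only additions, subtractions, and skips for zeros), this representation is what drives the memory and energy savings the abstract claims; quantization-aware training then compensates for the rounding error.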
