Authors: Raghu V Kaspa, Ramya K Cherukuvada
Abstract: Simulated Annealing (SA) is a probabilistic technique for approximating the global optimum of a given function, with origins in statistical mechanics. It has found widespread utility in optimization problems central to machine learning (ML), particularly where the solution space is large and complex. This paper investigates the theoretical underpinnings of SA, explores its applications within ML domains, compares it with other optimization algorithms, and evaluates its performance. The work concludes with a discussion of SA's strengths and limitations in the context of modern ML challenges. The purpose of this research is to position SA as a viable tool in the ML optimization toolkit, particularly for tasks involving large, multi-modal search spaces where deterministic methods may falter.
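As a concrete illustration of the technique the abstract describes (not taken from the paper itself), the following is a minimal SA sketch in Python. The objective function, starting point, and schedule parameters are all assumptions chosen for demonstration: a 1D multi-modal function, a geometric cooling schedule, and the standard Metropolis acceptance rule.

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.999, steps=5000,
                        step_size=0.5, seed=0):
    """Minimize f over the reals using SA with geometric cooling.

    All hyperparameters here are illustrative defaults, not values
    prescribed by any particular reference.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(steps):
        # Propose a random neighbor of the current solution.
        cand = x + rng.uniform(-step_size, step_size)
        fc = f(cand)
        # Metropolis criterion: always accept improvements; accept a
        # worse move with probability exp(-(fc - fx) / t), which shrinks
        # as the temperature t decreases.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling  # geometric cooling schedule
    return best, best_f

# Hypothetical multi-modal objective: many local minima, with the
# global minimum near x ≈ -0.52.
f = lambda x: x * x + 10 * math.sin(3 * x)
x_best, f_best = simulated_annealing(f, x0=8.0)
```

The temperature schedule is what distinguishes SA from greedy local search: early on, high temperature lets the walk escape local minima; late on, low temperature makes the search converge, which is how SA handles the multi-modal landscapes the abstract highlights.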