Authors: Rajat Takkar, Gunjan Lathwal, Devanshi Dadwal, Bhumika Aggarwal, Gaurang Batra
Abstract: AI chatbots and large language models are now deployed across domains such as customer support, healthcare, education, and hiring. While these systems handle language fluently, they often absorb biases from their training data and reproduce them in their responses, including biases related to gender, occupation, geography, and wealth. This study examines whether chatbots exhibit built-in bias when answering demographic-based questions. We posed a set of structured prompts to several AI chatbots and recorded all responses for analysis. Each response was evaluated using sentiment analysis and a neutrality scoring method that quantifies how fair or unbiased each system is. All analysis was performed in Python using Pandas, TextBlob, and Matplotlib. Our expectation was that chatbot responses would be largely objective, with subtle biases emerging depending on how a question is phrased and what topic it addresses; some question categories elicit more bias than others. Scoring fairness in this way lets us quantify differences between systems and identify which are more neutral, offering a practical way to assess how these AI tools handle real-world fairness concerns.
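As a rough illustration of the scoring pipeline the abstract describes, the sketch below applies TextBlob sentiment analysis to chatbot responses held in a Pandas DataFrame and derives a neutrality score. The sample responses are hypothetical, and the scoring formula (neutrality = 1 - |polarity|, so that zero-polarity text counts as fully neutral) is an assumed formulation, not necessarily the exact method used in the study.

```python
import pandas as pd
from textblob import TextBlob

# Hypothetical sample of chatbot responses to demographic-based prompts.
responses = pd.DataFrame({
    "chatbot": ["A", "A", "B", "B"],
    "prompt_topic": ["gender", "wealth", "gender", "wealth"],
    "response": [
        "Candidates should be evaluated on their qualifications alone.",
        "Wealth is not a reliable indicator of a person's character.",
        "Men are often better suited to leadership roles.",
        "People from affluent areas tend to be more trustworthy.",
    ],
})

# TextBlob polarity ranges from -1 (negative) to +1 (positive).
responses["polarity"] = responses["response"].map(
    lambda text: TextBlob(text).sentiment.polarity
)

# Assumed neutrality score: 1 - |polarity|, so strongly charged
# responses in either direction score closer to 0.
responses["neutrality"] = 1 - responses["polarity"].abs()

# Mean neutrality per chatbot quantifies which system stays more neutral.
print(responses.groupby("chatbot")["neutrality"].mean())
```

Aggregating the per-response scores by chatbot, as in the final line, is one straightforward way to compare systems; the same DataFrame could be plotted with Matplotlib to visualize differences across prompt topics.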