AI Bylaws: A Framework For Ethical Governance


Authors: Keshav Mittal, Jobanpreet Singh, Kartik Kumar, Jasnoor Kaur

Abstract: The field of Artificial Intelligence (AI) has advanced at a rapid pace across many areas, including health, finance, administration, and law. Despite the efficiency and automation that AI technologies offer, they are accompanied by grave ethical and legal concerns such as algorithmic bias, misinformation, deepfake abuse, and cybersecurity risks. These concerns have made clear the need for structured governance instruments and mechanisms that regulate AI practices and require prudent application. A recent concept in the field is AI bylaws, which can be described as operational guidelines and governance regulations that govern the development of AI systems, their deployment, and their interactions with users. This paper examines the concept of AI bylaws and addresses the problem of ethical compliance in AI systems, drawing on experimental data consisting of ethically sensitive prompts related to discrimination, cybercrime, deepfake abuse, and harmful behavior. The experiment measures AI responses and compares them against pre-established criteria of ethical compliance. The findings show that AI systems tend to reject dangerous instructions and follow security protocols, but inconsistencies in the detail of explanations and in context-specific reasoning can be observed. Based on these results, this paper proposes a framework of AI bylaws grounded in transparency, accountability, fairness, and prevention of misuse. The study indicates that experimental evaluation is useful for identifying weaknesses in current AI governance methods and for guiding the development of stronger ethical principles for AI systems.

DOI: https://doi.org/10.5281/zenodo.19909649