Beyond Statistical Fairness: A Systematic Review Of Novel Metrics For Identifying Algorithmic Bias In AI-Driven Governance


Authors: Abubakar Sadiq Yusha’u, Aminu Aliyu Abdullahi

Abstract: Artificial Intelligence (AI) systems are increasingly embedded in public governance for decision-making in areas such as welfare distribution, predictive policing, taxation, immigration, and electoral administration. While these systems promise efficiency and scalability, they also introduce significant risks of algorithmic bias with direct implications for equity, accountability, and democratic legitimacy. This study presents a systematic literature review (SLR) on metrics for identifying algorithmic bias in AI-driven governance models, with a particular emphasis on novel and governance-aware measurement approaches. The review follows PRISMA guidelines and analyzes peer-reviewed journal articles, conference proceedings, and high-impact policy reports published between 2014 and 2025. Literature was sourced from Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and SpringerLink using structured search strings related to algorithmic bias, fairness metrics, and AI governance. After a multi-stage screening and eligibility process, the selected studies were subjected to qualitative thematic synthesis and comparative analysis. The results reveal that traditional statistical fairness metrics such as demographic parity, equalized odds, and predictive parity are widely used but insufficient for governance contexts due to their lack of contextual, temporal, and institutional sensitivity. The review identifies and classifies emerging bias metrics into five major categories: causal metrics, intersectional metrics, temporal and dynamic metrics, structural–institutional metrics, and explainability-driven indicators. These novel metrics demonstrate stronger alignment with governance principles, particularly in addressing power asymmetries, historical discrimination, and policy constraints. 
The study contributes a consolidated taxonomy of bias metrics and proposes an integrated, multi-dimensional framework for evaluating algorithmic bias in AI-driven governance systems. The findings offer practical guidance for policymakers, regulators, and system designers, while highlighting critical research gaps related to standardization, empirical validation, and Global South governance contexts.
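The abstract names three classical statistical fairness metrics (demographic parity, equalized odds, predictive parity) that the review finds widely used but insufficient on their own. As a point of reference for readers unfamiliar with them, the sketch below computes the between-group gaps for these three metrics for a binary classifier and a binary protected attribute. The function names and toy data are illustrative assumptions, not part of the reviewed studies.

```python
# Illustrative sketch (assumed helper names and toy data): between-group
# gaps for three classical statistical fairness metrics.

def rates(y_true, y_pred, group, g):
    """Selection rate, TPR, FPR, and precision (PPV) within group g."""
    idx = [i for i, gr in enumerate(group) if gr == g]
    yt = [y_true[i] for i in idx]
    yp = [y_pred[i] for i in idx]
    sel = sum(yp) / len(yp)                       # P(Yhat=1 | G=g)
    pos = [p for t, p in zip(yt, yp) if t == 1]   # predictions on true positives
    neg = [p for t, p in zip(yt, yp) if t == 0]   # predictions on true negatives
    tpr = sum(pos) / len(pos) if pos else 0.0     # P(Yhat=1 | Y=1, G=g)
    fpr = sum(neg) / len(neg) if neg else 0.0     # P(Yhat=1 | Y=0, G=g)
    hits = [t for t, p in zip(yt, yp) if p == 1]  # outcomes among predicted positives
    ppv = sum(hits) / len(hits) if hits else 0.0  # P(Y=1 | Yhat=1, G=g)
    return sel, tpr, fpr, ppv

def fairness_gaps(y_true, y_pred, group):
    """Absolute between-group gaps for the three classical metrics."""
    sel_a, tpr_a, fpr_a, ppv_a = rates(y_true, y_pred, group, 0)
    sel_b, tpr_b, fpr_b, ppv_b = rates(y_true, y_pred, group, 1)
    return {
        # Demographic parity: equal selection rates across groups.
        "demographic_parity_gap": abs(sel_a - sel_b),
        # Equalized odds: equal TPR and FPR; report the larger violation.
        "equalized_odds_gap": max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)),
        # Predictive parity: equal precision among predicted positives.
        "predictive_parity_gap": abs(ppv_a - ppv_b),
    }

# Toy example (assumed data): eight decisions across two groups.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
gaps = fairness_gaps(y_true, y_pred, group)
```

Note that these three criteria cannot in general be satisfied simultaneously when base rates differ across groups, which is one reason the review argues for richer, governance-aware metrics.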

DOI: http://doi.org/10.5281/zenodo.18312609
