Bias Detection, Mitigation, and Auditing in Financial AI Systems
Kay Yan
CUCAI 2026 Proceedings
Abstract
Artificial intelligence systems in credit scoring and fraud detection can systematically disadvantage protected demographic groups. We train standard classifiers (Logistic Regression, Balanced Random Forest) on the MLG-ULB Credit Card Fraud dataset and demonstrate baseline violations of EU AI Act fairness thresholds. We then apply pre-processing (SMOTE) and post-processing (threshold adjustment) mitigation strategies. While threshold adjustment achieves Disparate Impact (DI) compliance for Reweighted Logistic Regression (DI = 0.9057), no tested mitigation strategy achieves EU AI Act compliance for Equalised Odds Difference (EOD). EOD remains a persistent violation despite post-processing, highlighting a fundamental limitation in satisfying all EU AI Act thresholds simultaneously on this dataset. We propose a lifecycle-based bias-audit framework aligned with the EU AI Act.
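The two fairness metrics named in the abstract can be computed directly from predictions and group labels. The following is a minimal illustrative sketch (not the paper's implementation) of Disparate Impact and Equalised Odds Difference for a binary classifier over two groups, using only NumPy; the toy arrays are hypothetical.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Disparate Impact: ratio of positive-prediction rates,
    unprivileged group (0) over privileged group (1)."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv

def equalised_odds_difference(y_true, y_pred, group):
    """Equalised Odds Difference: the larger of the TPR gap
    and the FPR gap between the two groups."""
    def rates(g):
        yt, yp = y_true[group == g], y_pred[group == g]
        tpr = yp[yt == 1].mean()  # true positive rate
        fpr = yp[yt == 0].mean()  # false positive rate
        return tpr, fpr
    tpr0, fpr0 = rates(0)
    tpr1, fpr1 = rates(1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))

# Toy example (hypothetical data, not the paper's results)
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 0, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

di = disparate_impact(y_pred, group)    # 0.25 / 0.75 ≈ 0.333
eod = equalised_odds_difference(y_true, y_pred, group)  # 0.5
```

Under the commonly used four-fifths rule, DI below 0.8 flags disparate impact, which is why a post-mitigation DI of 0.9057 counts as compliant while a large EOD can still persist.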