Vol. 2, Issue 1, Part A (2025)

Bias and fairness in automated decision-making: A data science perspective

Author(s):

Andrei Popescu, Ioana Marinescu and Radu Ionescu

Abstract:

Automated decision-making systems (ADMS) have become central to data-driven operations across sectors such as finance, healthcare, employment, and criminal justice. While these systems promise efficiency and consistency, they are equally susceptible to perpetuating and amplifying social and historical biases embedded in data or algorithmic design. This research investigates how fairness and bias interact throughout the data science pipeline and proposes an integrated, multi-stage mitigation framework. The study employed three publicly available datasets—Adult Income, COMPAS, and Synthetic Health—to evaluate bias reduction techniques at the pre-processing, in-processing, and post-processing stages. Quantitative fairness indicators such as statistical parity difference, equality-of-opportunity difference, and disparate impact were analyzed using statistical tools, and a Fairness Composite Index (FCI) was developed to assess aggregate fairness performance. Results revealed that multi-stage interventions substantially improved fairness metrics, with adversarial in-processing yielding the highest overall fairness without significant loss of predictive accuracy. In contrast, isolated or single-stage corrections exhibited limited capacity to balance fairness and accuracy simultaneously. The findings affirm that fairness must be embedded as an integrated principle across data science workflows rather than treated as an afterthought to model optimization. Moreover, the study underscores the importance of ongoing fairness auditing, explainable AI tools, and transparent documentation to ensure sustainable equity in automated decision outcomes. Practical recommendations emphasize integrating fairness-by-design methodologies, developing standardized auditing frameworks, promoting interdisciplinary collaboration, and establishing organizational accountability mechanisms to uphold responsible AI governance.
Collectively, this research contributes to the broader discourse on ethical artificial intelligence by demonstrating that equitable automation is achievable through systemic design, continuous evaluation, and human-centered oversight in data-driven decision-making systems.
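The fairness indicators named in the abstract have standard definitions in the fairness literature. As a minimal illustrative sketch (not the authors' implementation, and independent of their Fairness Composite Index), they can be computed for a binary classifier with a binary sensitive attribute as follows:

```python
import numpy as np

def fairness_metrics(y_true, y_pred, sensitive):
    """Compute standard group-fairness metrics for a binary classifier.

    y_true, y_pred : arrays of 0/1 true labels and predicted labels
    sensitive      : array of 0/1 group membership (1 = privileged group)
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    s = np.asarray(sensitive)

    priv, unpriv = (s == 1), (s == 0)

    # Selection rates: P(y_hat = 1 | group)
    rate_priv = y_pred[priv].mean()
    rate_unpriv = y_pred[unpriv].mean()

    # Statistical parity difference: P(y_hat=1 | unpriv) - P(y_hat=1 | priv)
    spd = rate_unpriv - rate_priv

    # Disparate impact: ratio of unprivileged to privileged selection rates
    di = rate_unpriv / rate_priv

    # Equality-of-opportunity difference: gap in true-positive rates
    tpr_priv = y_pred[priv & (y_true == 1)].mean()
    tpr_unpriv = y_pred[unpriv & (y_true == 1)].mean()
    eod = tpr_unpriv - tpr_priv

    return {"statistical_parity_difference": spd,
            "disparate_impact": di,
            "equal_opportunity_difference": eod}
```

Values of the two difference metrics near 0, and a disparate-impact ratio near 1, indicate parity between groups; a common rule of thumb treats a disparate-impact ratio below 0.8 as potentially discriminatory.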

Pages: 45-49

How to cite this article:
Andrei Popescu, Ioana Marinescu and Radu Ionescu. Bias and fairness in automated decision-making: A data science perspective. J. Mach. Learn. Data Sci. Artif. Intell. 2025;2(1):45-49.