Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Machine learning classifiers in healthcare tend to reproduce or exacerbate existing health disparities due to inherent biases in training data. This issue has drawn the attention of researchers in healthcare and other domains, who have proposed techniques that address it at different stages of the machine learning pipeline. Post-processing methods adjust model predictions to ensure fairness without interfering with the learning process or requiring access to the original training data, which preserves privacy and allows them to be applied to any trained model. This study rigorously compares state-of-the-art debiasing methods from the post-processing family across a wide range of synthetic and real-world (healthcare) datasets, using a variety of performance and fairness metrics. Our experiments reveal the strengths and weaknesses of each method, examining the trade-offs between group fairness and predictive performance, as well as among different notions of group fairness. Additionally, we analyze the impact on untreated attributes to ensure overall bias mitigation. Our comprehensive evaluation provides insights into how these debiasing methods can be optimally implemented in healthcare settings to balance accuracy and fairness.

Supplementary Information: The online version contains supplementary material available at 10.1007/s41666-025-00196-7.
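
To make the post-processing idea concrete, here is a minimal sketch (not one of the specific methods benchmarked in the study) of group-specific decision thresholds chosen to align true positive rates across groups, in the spirit of equalized-odds post-processing. `scores`, `y_true`, and `groups` are assumed to be NumPy arrays obtained from an already trained classifier:

```python
import numpy as np

def equal_opportunity_thresholds(scores, y_true, groups):
    """Choose a per-group decision threshold so each group's true positive
    rate (TPR) matches the TPR of a single pooled 0.5 threshold."""
    grid = np.linspace(0.05, 0.95, 181)
    pos = y_true == 1
    target_tpr = (scores[pos] >= 0.5).mean()  # pooled reference TPR
    thresholds = {}
    for g in np.unique(groups):
        gpos = pos & (groups == g)
        if not gpos.any():        # no observed positives for this group
            thresholds[g] = 0.5
            continue
        tprs = np.array([(scores[gpos] >= t).mean() for t in grid])
        thresholds[g] = grid[np.abs(tprs - target_tpr).argmin()]
    return thresholds

# Usage: predictions = scores >= np.vectorize(thresholds.get)(groups)
```

Note that this requires no retraining and no access to the training data, which is exactly the property of post-processing methods the abstract highlights.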

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12290158
DOI: http://dx.doi.org/10.1007/s41666-025-00196-7

Publication Analysis

Top Keywords

debiasing methods: 12
machine learning: 12
learning classifiers: 8
classifiers healthcare: 8
training data: 8
learning process: 8
group fairness: 8
healthcare: 5
empirical comparison: 4
comparison post-processing: 4

Similar Publications

Class incremental learning (CIL) offers a promising framework for continuous fault diagnosis (CFD), allowing networks to accumulate knowledge from streaming industrial data and recognize new fault classes. However, current CIL methods assume a balanced data stream, which does not align with the long-tail distribution of fault classes in real industrial scenarios. To fill this gap, this article investigates, through experimental analysis, the impact of long-tail bias in the data stream on the CIL training process.
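
A quick way to see the problem is to simulate such a stream; the power-law class frequencies below are illustrative assumptions, not figures from the article:

```python
import numpy as np

# Hypothetical long-tailed fault-class stream: class frequencies follow a
# power law, so a few head classes dominate and tail classes are rare.
rng = np.random.default_rng(0)
n_classes, n_samples, alpha = 10, 5000, 1.5
weights = np.arange(1, n_classes + 1, dtype=float) ** -alpha
weights /= weights.sum()
stream = rng.choice(n_classes, size=n_samples, p=weights)
print(np.bincount(stream, minlength=n_classes))  # head classes dominate
```

A CIL method that assumes balanced classes will see the tail classes only a handful of times per task, which is the bias the article analyzes.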

This paper develops an inferential framework for matrix completion when data are missing not at random and without requiring strong signals. Our development is based on the observation that if the number of missing entries is small enough relative to the panel size, they can be estimated well even when missingness is not at random. Taking advantage of this fact, we divide the missing entries into smaller groups and estimate each group via nuclear norm regularization.
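
For reference, the nuclear norm regularization step can be sketched with the classic Soft-Impute iteration (singular value soft-thresholding); this is a generic sketch of the building block, not the paper's group-wise inferential procedure:

```python
import numpy as np

def soft_impute(X, mask, lam=1.0, n_iters=100):
    """Nuclear-norm-regularized matrix completion via iterative singular
    value soft-thresholding. `mask` is True where X is observed."""
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iters):
        filled = np.where(mask, X, Z)            # impute with current estimate
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)             # proximal step for nuclear norm
        Z = (U * s) @ Vt
    return Z
```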

Rationale: Physicians sometimes encounter various types of gut feelings (GFs) during clinical diagnosis. The type of GF addressed in this paper refers to the intuitive sense that the generated hypothesis might be incorrect. An appropriate diagnosis cannot be obtained unless these GFs are articulated and inventive solutions are devised.

Background: The growing adoption of diagnostic and prognostic algorithms in health care has raised concerns about perpetuating algorithmic bias against disadvantaged groups of individuals. Deep learning methods to detect and mitigate bias have revolved around modifying models, optimization strategies, and threshold calibration, with varying levels of success and trade-offs. However, there have been limited substantive efforts to address bias at the level of the underlying health care datasets used to train these algorithms.
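
A first step toward such data-level bias analysis is simply auditing per-group representation and label prevalence before any model is fit; the dataframe and column names below are illustrative, not from the study:

```python
import pandas as pd

# Toy audit of a labeled dataset with a protected attribute.
df = pd.DataFrame({
    "label": [1, 0, 1, 1, 0, 0, 1, 0],
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
})
# Large gaps in sample count or label prevalence across groups signal
# representation or outcome bias in the data itself.
print(df.groupby("group")["label"].agg(n="size", prevalence="mean"))
```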

Learning instrumental variable representation for debiasing in recommender systems.

Neural Netw

August 2025

School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, Jiangsu, China.

Recommender systems are essential for filtering content to match user preferences. However, traditional recommender systems often suffer from biases inherent in the data, such as popularity bias. These biases, particularly those stemming from latent confounders, can result in inaccurate recommendations and reduce both the diversity and effectiveness of the system.
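
One simple correction for such exposure bias, used here as a stand-in for the instrumental-variable representations the paper learns, is inverse propensity weighting; the item qualities and exposure probabilities below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = np.linspace(3.0, 4.5, 5)                 # 5 items, rising quality
exposure = np.array([0.50, 0.25, 0.12, 0.08, 0.05])  # popularity-skewed exposure
items = rng.choice(5, size=20_000, p=exposure)
ratings = rng.normal(true_mean[items], 0.5)

w = 1.0 / exposure[items]                            # inverse propensity weights
print("true catalog mean:", true_mean.mean())        # ~3.75
print("naive observed   :", ratings.mean())          # ~3.35, skewed to popular items
print("IPS-weighted     :", (w * ratings).sum() / w.sum())  # ~3.75
```

The naive average over-weights whatever the popular items happen to be, while the re-weighted estimate recovers the catalog-wide mean; latent confounders that drive both exposure and feedback are what the paper's instrumental-variable approach targets.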
