paper
arXiv cs.LG
November 18th, 2025 at 5:00 AM

Addressing Polarization and Unfairness in Performative Prediction

arXiv:2406.16756v3 Announce Type: replace

Abstract: In many real-world applications of machine learning, such as recommendation, hiring, and lending, deployed models influence the data they are trained on, creating feedback loops between predictions and the data distribution. The performative prediction (PP) framework captures this phenomenon by modeling the data distribution as a function of the deployed model. While prior work has focused on finding performatively stable (PS) solutions for robustness, their societal impacts, particularly regarding fairness, remain underexplored. We show that PS solutions can lead to severe polarization and prediction-performance disparities, and that the conventional fairness interventions of previous work often fail under model-dependent distribution shifts because they violate the PS criteria. To address these challenges in PP, we introduce novel fairness mechanisms that provably ensure both stability and fairness, validated by theoretical analysis and empirical results.
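The feedback loop the abstract describes can be illustrated with a toy repeated-risk-minimization loop, a standard procedure in the PP literature for reaching a performatively stable point. Everything in this sketch, including the one-dimensional Gaussian mean-shift response model and the sensitivity parameter `EPS`, is an illustrative assumption, not the paper's actual construction or experiments.

```python
import numpy as np

# Toy performative prediction: the data distribution D(theta) shifts in
# response to the deployed model parameter theta, with sensitivity EPS.
# Repeated risk minimization redeploys the loss minimizer on the induced
# distribution; its fixed point is a performatively stable (PS) solution.

rng = np.random.default_rng(0)
MU0, EPS, N = 1.0, 0.5, 10_000  # base mean, sensitivity, sample size (assumed)

def sample(theta):
    # D(theta): the observed data mean depends on the deployed model.
    return rng.normal(MU0 + EPS * theta, 1.0, size=N)

theta = 0.0
for _ in range(50):
    data = sample(theta)
    theta_next = data.mean()  # squared-loss minimizer on D(theta)
    if abs(theta_next - theta) < 1e-6:
        break
    theta = theta_next

# The deterministic fixed point solves theta = MU0 + EPS * theta,
# i.e. theta* = MU0 / (1 - EPS) = 2.0; the iterate lands near it.
print(theta)
```

Note that the PS point (about 2.0 here) differs from the naive optimum on the base distribution (1.0): the deployed model drags the data toward itself, which is exactly the model-dependent shift under which, per the abstract, conventional fairness interventions can break down.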

#ai


Canonical link: https://arxiv.org/abs/2406.16756