Room C-6149 (U. de Montréal, Pavillon Lionel-Groulx, 3150, rue Jean-Brillant)
Abstract: A supervised machine learning algorithm builds a model from a learning sample and uses it to predict new observations. To this end, it aggregates individual characteristics of the observations in the learning sample, but this aggregation of information may be biased. This situation has raised concerns about the so-called fairness of machine learning algorithms, especially towards disadvantaged groups.
We provide a better understanding of both global and individual biases using definitions grounded in optimal transport theory. Beyond identifying these biases, we also consider mitigation methods that reduce bias while maintaining some accuracy in the output, with applications to the instrumental variable regression model.
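As a rough illustration of the kind of optimal-transport machinery the talk refers to, the sketch below measures a group-level bias as the 1-D Wasserstein distance between the predicted scores of two groups, then performs a "total repair" by mapping both groups to the Wasserstein barycenter of their score distributions. This is a minimal toy sketch, not the speaker's method: the synthetic scores, the two-group setting, and the quantile-averaging barycenter construction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predicted scores for two groups (toy data, not from the talk).
scores_a = rng.normal(0.6, 0.1, 1000)
scores_b = rng.normal(0.4, 0.1, 1000)

def wasserstein_1d(x, y):
    # For equal-size samples, the 1-D W1 distance is the mean absolute
    # difference between sorted samples (the optimal quantile coupling).
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def repair_to_barycenter(x, y):
    # "Total repair": push each group's scores to the Wasserstein
    # barycenter of the two empirical distributions, obtained here by
    # averaging sorted quantiles (valid in 1-D with equal weights).
    qx, qy = np.sort(x), np.sort(y)
    bary = 0.5 * (qx + qy)
    # Send each observation to the barycenter value at its own rank.
    rank_x = np.argsort(np.argsort(x))
    rank_y = np.argsort(np.argsort(y))
    return bary[rank_x], bary[rank_y]

gap_before = wasserstein_1d(scores_a, scores_b)
rep_a, rep_b = repair_to_barycenter(scores_a, scores_b)
gap_after = wasserstein_1d(rep_a, rep_b)
```

After repair the two group-wise distributions coincide, so the Wasserstein gap drops to zero; the price paid is that each score is moved only halfway to the other group's quantile, which is one way of trading off bias removal against accuracy.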