Bias and disparity mitigation with Fairlearn
Fairlearn provides several ways to perform bias and disparity mitigation for real-world problems:
- Post-processing methods: These adjust the predictions of a machine learning model after it has been trained, to reduce bias and disparity. An example of this is ThresholdOptimizer, which selects a separate decision threshold for each sensitive-feature group so that a chosen fairness constraint, such as demographic parity or equalized odds, is satisfied while optimizing a performance objective.
- Pre-processing methods: These transform the data before training the machine learning model, to reduce bias and disparity. An example of this is CorrelationRemover, which adjusts the non-sensitive features to remove their correlation with the sensitive features, while retaining as much information as possible.
- In-processing methods: These modify the training process of the machine learning model itself, to reduce bias and disparity. An example of this is the reductions approach (such as ExponentiatedGradient), which repeatedly reweights and retrains a standard estimator so that its predictions satisfy a fairness constraint.
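The post-processing idea is the easiest to see concretely. Fairlearn's ThresholdOptimizer does this with a fitted estimator and a fairness constraint; the core mechanism of per-group thresholds can be sketched in plain NumPy. The function name and toy scores below are illustrative assumptions, not Fairlearn API:

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Pick a per-group score threshold so each group is selected at ~target_rate."""
    return {g: float(np.quantile(scores[groups == g], 1.0 - target_rate))
            for g in np.unique(groups)}

rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)
# group 1 receives systematically higher scores, as a biased model might produce
scores = rng.normal(loc=0.5 * groups, scale=1.0)

thresholds = group_thresholds(scores, groups, target_rate=0.3)
preds = np.array([scores[i] >= thresholds[groups[i]] for i in range(len(scores))])
```

Because each group gets its own threshold, both groups end up with roughly the same 30% selection rate, even though their raw score distributions differ.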
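CorrelationRemover works by linearly decorrelating each non-sensitive column from the sensitive ones. A minimal NumPy sketch of that idea, residualizing against the centered sensitive feature, is shown below; this is an illustration of the technique, not Fairlearn's actual implementation:

```python
import numpy as np

def remove_correlation(X_nonsensitive, X_sensitive):
    """Subtract the least-squares projection onto the (centered) sensitive features."""
    S = X_sensitive - X_sensitive.mean(axis=0)
    beta, *_ = np.linalg.lstsq(S, X_nonsensitive, rcond=None)
    return X_nonsensitive - S @ beta

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=(200, 1)).astype(float)   # toy sensitive feature
x = 2.0 * s + rng.normal(size=(200, 1))               # strongly correlated with s
x_clean = remove_correlation(x, s)

corr = np.corrcoef(x_clean.ravel(), s.ravel())[0, 1]  # ~0 after decorrelation
```

After the transformation the linear correlation with the sensitive feature is removed, while the residual variation in `x` (the "information" the bullet refers to) is kept.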
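Fairlearn's in-processing support lives in its reductions module; a simpler way to see the idea of constraining training itself is to add a fairness penalty directly to the loss. The sketch below (plain NumPy, not Fairlearn API) trains logistic regression by gradient descent with a squared demographic-parity-gap penalty; the data, penalty weight, and learning rate are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, groups, lam, steps=2000, lr=0.3):
    """Logistic regression with a demographic-parity penalty (lam=0: ordinary fit)."""
    w = np.zeros(X.shape[1])
    m0, m1 = groups == 0, groups == 1
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)            # log-loss gradient
        gap = p[m0].mean() - p[m1].mean()        # demographic-parity gap
        s = p * (1.0 - p)                        # sigmoid derivative
        grad += 2.0 * lam * gap * (X[m0].T @ s[m0] / m0.sum()
                                   - X[m1].T @ s[m1] / m1.sum())
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
n = 400
groups = rng.integers(0, 2, size=n)
x = groups + rng.normal(scale=0.5, size=n)       # feature correlated with group
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(float)
X = np.column_stack([x, np.ones(n)])             # feature plus intercept

def parity_gap(w):
    p = sigmoid(X @ w)
    return abs(p[groups == 0].mean() - p[groups == 1].mean())

w_base = train_logreg(X, y, groups, lam=0.0)     # unconstrained baseline
w_fair = train_logreg(X, y, groups, lam=10.0)    # penalized model
```

The penalized model trades some accuracy for a smaller gap between the groups' mean predicted probabilities, which is the essential trade-off that in-processing methods such as ExponentiatedGradient manage in a more principled way.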