Getting started with fairness
The Fairlearn toolkit is an open source library that helps data scientists and developers assess and improve the fairness of their AI systems. Fairlearn includes a visualization dashboard, algorithms for mitigating unfairness, and the fairness metrics those components rely on. As AI and ML algorithms increasingly shape our world, it is critical that we ensure fairness in their application by using tools that can identify and mitigate bias, and Fairlearn is one such library. Before diving into Fairlearn, we must understand why it is important to consider the potential impact of sensitive features on your ML models, even if you do not explicitly include those features in the training data.
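To make the idea of a fairness assessment concrete, here is a minimal sketch of comparing a model's accuracy across groups defined by a sensitive feature — the kind of disaggregated evaluation Fairlearn automates with its metrics tooling. The labels, predictions, and group assignments below are invented for illustration, and the helper function is hypothetical, not part of Fairlearn's API.

```python
# Hypothetical illustration: per-group accuracy for a binary classifier.
# All data below is invented; in practice y_true/y_pred come from your
# model and `groups` is a sensitive feature such as sex or age bracket.
from collections import defaultdict

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # invented ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]   # invented model predictions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # invented groups

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: fraction of correct predictions}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        hits[g] += int(t == p)
        totals[g] += 1
    return {g: hits[g] / totals[g] for g in totals}

per_group = accuracy_by_group(y_true, y_pred, groups)
print(per_group)  # → {'A': 0.75, 'B': 0.5}

# The gap between the best- and worst-served group is one simple
# disparity measure: here the model is noticeably less accurate for B.
gap = max(per_group.values()) - min(per_group.values())
print(gap)        # → 0.25
```

A disparity like this can arise even when the sensitive feature is absent from the training data, which is exactly why disaggregated evaluation matters.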
A common misconception is “If we remove sensitive features such as a person’s race, sex, religion, sexual orientation, veteran status, and so on, shouldn’t that be enough to mitigate any bias?” The answer is “Not really” because...