Overcoming societal AI bias
According to an article from Lexalytics (https://www.lexalytics.com/lexablog/bias-in-ai-machine-learning), societal AI bias occurs when an AI system behaves in ways that reflect social intolerance or institutional discrimination. At first glance, the algorithms and data may appear unbiased, yet the outputs they produce still reinforce societal biases.
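To make this concrete, here is a minimal sketch of the idea, not taken from the article: a simple classifier is trained on a hypothetical set of historical hiring records in which one group was hired less often in the past. The dataset, column names, and group labels are all invented for illustration; the point is only that a seemingly neutral model can reproduce the skew that is already present in its training data.

```python
# A minimal, hypothetical illustration of societal bias leaking into model output.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring records: group B was hired less often in the
# past, so the labels already encode an institutional pattern.
data = pd.DataFrame({
    "years_experience": [1, 3, 5, 7, 2, 4, 6, 8],
    "group":            ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired":            [0, 1, 1, 1, 0, 0, 1, 1],
})

# One-hot encode the group column so the model can use it as a feature.
X = pd.get_dummies(data[["years_experience", "group"]], columns=["group"])
y = data["hired"]

model = LogisticRegression().fit(X, y)

# The predicted hiring rate differs by group: the historical skew in the
# labels is baked into the model's output, even though nothing in the code
# looks discriminatory.
data["predicted"] = model.predict(X)
print(data.groupby("group")["predicted"].mean())
```

Nothing in this sketch singles out either group explicitly; the disparity in predictions comes entirely from the historical labels, which is exactly the pattern the article describes.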
The following figure gives a glimpse of what societal bias might look like in an abstract sense. You can see that good data is being brought in, but alongside it there is data that arrives in fragments. These misshapen pieces of data represent flaws that prevent us from getting an accurate picture of the state of the world we are trying to model:
Unless we do something about them, these fragmented and flawed bits of data will be baked into any model trained on them. One distinctive feature of this bias is that it can be invisible once the data has already been gathered and...