Recommendation
Increasingly, businesses are turning to AI systems to automate their processes, data analysis and interactions with employees and customers. As a result, fairness has emerged as a knotty problem. AI systems inherently reflect and perpetuate the pervasive bias in data sets – a flaw that has no mathematical solution. AI risk governance expert Sian Townson recommends a succinct, implementable three-phase approach to managing AI systems' intractable biases.
Take-Aways
- AI systems inevitably perpetuate bias.
- Since you can’t eliminate AI bias, compensate for its unfairness in three ways.
- Bias is difficult to identify because it remains so pervasive.
Summary
AI systems inevitably perpetuate bias.
Artificial intelligence (AI) systems generalize from existing data, and because bias pervades historical data, AI will echo and perpetuate that bias. AI bias arises in part because training data typically contains fewer examples from minority groups. As a result, AI systems produce less accurate results for members of minority groups – a problem that affects algorithms for medical treatment, credit decisions, fraud detection, marketing and reading text.
“Artificial intelligence is hopelessly and inherently biased.”
No method exists to prevent AI from perpetuating bias that exists in historical data. Mathematically, it can't be done. AI excels at discovering patterns, and bias permeates data sets too deeply to be eradicated. When a group of scientists tried, carefully anonymizing medical imaging data before feeding it to an AI system, the algorithm was still able to identify a patient's race in 93% of cases.
Since you can’t eliminate AI bias, compensate for its unfairness in three ways.
As attempts to remove bias from AI will inevitably fail, choose instead to identify bias when it occurs and work to remediate it. As a first step, determine the standards of fairness you will target, select which groups to protect and define your parameters for a fair result. For example, set thresholds for selection numbers or accuracy rates.
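As a rough illustration of that first step, the following sketch encodes fairness parameters as explicit thresholds that a model's decisions can be tested against; the metrics, cutoff values and toy data are illustrative assumptions, not Townson's prescriptions.

```python
# A hypothetical sketch: encode fairness standards as explicit, testable thresholds.
# The metrics and cutoff values below are illustrative assumptions, not fixed rules.

FAIRNESS_STANDARDS = {
    "min_selection_rate_ratio": 0.8,   # each group must reach at least 80% of the
                                       # best group's selection rate
    "max_accuracy_gap": 0.05,          # accuracy may trail the best group by at most 5 points
}

def selection_rate(decisions):
    """Share of positive decisions (for example, loans approved) in a group."""
    return sum(decisions) / len(decisions)

def accuracy(decisions, outcomes):
    """Share of decisions that matched the true outcome."""
    return sum(d == o for d, o in zip(decisions, outcomes)) / len(decisions)

def check_fairness(decisions_by_group, outcomes_by_group, standards):
    """Compare every group against the best-treated group and flag violations."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    accs = {g: accuracy(decisions_by_group[g], outcomes_by_group[g])
            for g in decisions_by_group}
    best_rate, best_acc = max(rates.values()), max(accs.values())
    return {
        g: {
            "selection_rate_ok": rates[g] / best_rate >= standards["min_selection_rate_ratio"],
            "accuracy_ok": best_acc - accs[g] <= standards["max_accuracy_gap"],
        }
        for g in decisions_by_group
    }

# Toy data: 1 = approved, 0 = declined; outcomes are the "true" labels.
decisions = {"group_a": [1, 1, 0, 1, 1, 0], "group_b": [1, 0, 0, 1, 0, 0]}
outcomes = {"group_a": [1, 1, 0, 1, 0, 0], "group_b": [1, 1, 0, 1, 0, 1]}
print(check_fairness(decisions, outcomes, FAIRNESS_STANDARDS))
```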
“If companies truly want their algorithms to work equitably with a diverse population, they must deliberately compensate for unfairness.”
Next, check the AI system's output and the impact of its decisions against the fairness standards. An approach based on generative adversarial networks uses two models, one acting as an auditor of the other's results. In a third and final step, monitor the AI system for problems and correct bias as it appears.
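The auditing idea can be sketched simply: a second model tries to recover the protected attribute from the primary model's scores, and any accuracy meaningfully above chance signals that bias has leaked through. The synthetic data and scikit-learn models below are assumptions for illustration, not the author's implementation.

```python
# A minimal sketch of the "auditor" idea using synthetic data and scikit-learn.
# Every feature, label and model choice below is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, n)                  # hypothetical protected attribute
income = rng.normal(50 + 10 * protected, 15, n)    # a feature correlated with the group
X = income.reshape(-1, 1)
y = (income + rng.normal(0, 10, n) > 55).astype(int)   # e.g. a "creditworthy" label

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(X, y, protected, random_state=0)

# Primary model: makes the business decision.
primary = LogisticRegression().fit(X_tr, y_tr)
scores_tr = primary.predict_proba(X_tr)[:, 1].reshape(-1, 1)
scores_te = primary.predict_proba(X_te)[:, 1].reshape(-1, 1)

# Auditor model: tries to recover the protected attribute from the scores alone.
auditor = LogisticRegression().fit(scores_tr, p_tr)
leakage = auditor.score(scores_te, p_te)
print(f"auditor accuracy: {leakage:.2f} (0.50 would mean no detectable leakage)")
```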
Bias is difficult to identify because it remains so pervasive.
Each step in the remediation process can prove difficult. When deciding how to measure bias, for example, choose whether to base the standards on equal numbers or equal proportions selected from protected groups, or whether to apply a certain threshold to all groups. Each method yields a different result depending on the makeup of the population that provided the input data.
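The following toy sketch, with made-up scores, shows how the three measurement choices (a single threshold for everyone, equal proportions per group and equal numbers per group) select different sets of people from the same candidate pool.

```python
# A toy comparison, with made-up scores, of three ways to define a "fair" selection.
# All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
scores_a = rng.normal(0.60, 0.10, 80)   # hypothetical majority group
scores_b = rng.normal(0.55, 0.10, 20)   # hypothetical minority group

def summarize(sel_a, sel_b):
    return (f"group A: {int(sel_a.sum()):3d} selected ({sel_a.mean():.0%}), "
            f"group B: {int(sel_b.sum()):3d} selected ({sel_b.mean():.0%})")

# Rule 1: apply the same score threshold to everyone.
t = 0.60
print("same threshold  ", summarize(scores_a >= t, scores_b >= t))

# Rule 2: equal proportions - select the top 40% within each group.
k_a, k_b = int(0.4 * len(scores_a)), int(0.4 * len(scores_b))
print("equal proportion", summarize(scores_a >= np.sort(scores_a)[-k_a],
                                    scores_b >= np.sort(scores_b)[-k_b]))

# Rule 3: equal numbers - select the top 20 from each group,
# which here means selecting every member of the smaller group.
k = 20
print("equal numbers   ", summarize(scores_a >= np.sort(scores_a)[-k],
                                    scores_b >= np.sort(scores_b)[-k]))
```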
“A model that passes all the tests can still produce unwanted results once it’s implemented with real-world inputs.”
Remediation efforts can also go astray because AI systems interpret commands literally and lack sensitivity to intersectionality. For example, telling an AI system to make a credit product available on an equal basis to men and women, regardless of disability status, could lead a system with a strong gender bias to satisfy both constraints by choosing only men with disabilities and women without disabilities. Individuals tasked with identifying bias in AI often fail to see their own blind spots. Indeed, the prevalence of bias in society makes it hard to spot, and human brains aren't wired to detect true randomness.
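A small hypothetical example of that trap: in the made-up approval counts below, selection rates are identical for men versus women and for disabled versus non-disabled applicants, yet every approved man has a disability and every approved woman does not.

```python
# A hypothetical illustration of the intersectionality trap described above.
# Approval counts per (gender, disability) subgroup are made up for the example.
cells = {
    ("man", "disabled"): {"approved": 50, "declined": 50},
    ("man", "not disabled"): {"approved": 0, "declined": 100},
    ("woman", "disabled"): {"approved": 0, "declined": 100},
    ("woman", "not disabled"): {"approved": 50, "declined": 50},
}

def approval_rate(matches):
    """Approval rate across every subgroup cell that satisfies the filter."""
    approved = sum(c["approved"] for key, c in cells.items() if matches(key))
    total = sum(c["approved"] + c["declined"] for key, c in cells.items() if matches(key))
    return approved / total

# Both marginal constraints look satisfied...
print("men:         ", approval_rate(lambda k: k[0] == "man"))           # 0.25
print("women:       ", approval_rate(lambda k: k[0] == "woman"))         # 0.25
print("disabled:    ", approval_rate(lambda k: k[1] == "disabled"))      # 0.25
print("not disabled:", approval_rate(lambda k: k[1] == "not disabled"))  # 0.25

# ...yet the intersections are treated completely differently.
print("disabled women:  ", approval_rate(lambda k: k == ("woman", "disabled")))    # 0.0
print("non-disabled men:", approval_rate(lambda k: k == ("man", "not disabled")))  # 0.0
```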
About the Author
Sian Townson is an expert in AI risk and governance frameworks and holds a doctorate in mathematical modeling from Oxford University. She is a partner at the global consultancy Oliver Wyman.