Artificial Discrimination

Technology | Article | November 13, 2020

Powered by algorithms, artificial intelligence was meant to remove human flaws, but it is being accused of racism, sexism and other forms of discriminatory behavior. Are algorithms really biased, or is there something else going on?


There was a political outcry in Scotland over the summer. With high school students unable to take exams due to school closures caused by the Covid-19 lockdown, they were given grades based on teacher assessments that then went through an adjustment process.

But potential bias was discovered. The adjustment process downgraded the results of students from the most deprived areas at more than twice the rate of students from the wealthiest. Who was to blame? Teachers, the exam markers responsible for grading, or someone else? The culprit wasn’t a someone: many instead pointed the blame at an algorithm.

The overwhelming negative reaction ultimately led the Scottish Government to abandon the algorithm’s adjusted results. But the episode once again raised concerns over ‘algorithmic bias’.

The reason for concern is that algorithms are making decisions that impact our lives. They screen our job applications, hail our Uber rides, uncover fraud, and even decide who we should befriend on social media.

Algorithms used in machine learning, statistics and data science – like the one used in the Scottish exam grading process – are mathematical models that learn from large swathes of data according to a pre-programmed procedure. In principle, that means decisions rest on cold calculation, uncolored by bias or prejudice.

Or so we thought. Increasingly, these algorithms are being found to replicate the same racial, socioeconomic or gender-based biases they were built to overcome.

For instance, Apple’s credit card – issued by Goldman Sachs – sparked a gender bias inquiry by the New York State Department of Financial Services (DFS) in late 2019 when users noticed it offered higher lines of credit to men than to women.

It came a week after the DFS opened an investigation into another algorithm, this time one sold by a UnitedHealth Group subsidiary. That algorithm came under scrutiny after a study by researchers at the University of California, the University of Chicago and Partners HealthCare in Boston uncovered racial bias that resulted in black patients receiving less medical care than white patients.

Researchers at the University of California, Berkeley, also found that algorithms perpetuate racial bias in mortgage lending decisions. Although the algorithms discriminated 40% less than face-to-face decision-making, otherwise-equivalent Latino and African American borrowers still paid 5.3 basis points more in interest on mortgages and 2.0 basis points more on refinanced mortgages (a basis point is one hundredth of a percentage point).

Don’t blame the algorithms

Algorithms can be biased based on who builds them, how they’re developed, and how they’re ultimately used. They draw on past data, and if that data contains biases, their decisions will reflect the original discriminatory behavior. For example, an algorithm that screens job applicants by learning from a company’s historical hiring data could discriminate against a gender or race if that group was underrepresented in the company’s past hires.
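As a rough illustration of that mechanism, the sketch below trains a simple screening model on synthetic, deliberately skewed ‘historical hiring’ data. The feature names, numbers and choice of logistic regression are illustrative assumptions, not a description of any system mentioned in this article.

```python
# A minimal sketch of how a screening model trained on biased historical
# hiring data reproduces that bias. All data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Candidates: a "skill" score and a binary group attribute (e.g. gender).
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B

# Historical hiring decisions: skill matters, but group B was also
# systematically under-hired in the past (the embedded bias).
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * group) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score two otherwise identical candidates who differ only in group.
candidate_a = np.array([[1.0, 0]])
candidate_b = np.array([[1.0, 1]])
print("P(hire | group A):", model.predict_proba(candidate_a)[0, 1])
print("P(hire | group B):", model.predict_proba(candidate_b)[0, 1])
```

Even though the model is never told to prefer one group, it learns the preference from the labels: two candidates with identical skill scores receive different hiring probabilities.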

“If you give an algorithm historical data and you don’t like the results, don’t blame the algorithm. Blame society,” says Gero Gunkel, COO at ZCAM, a Zurich entity that uses analytics, data science and artificial intelligence to develop customer solutions. “Algorithms often mirror society and so may highlight bias that was not noticed or was ignored in the past.”

But there is a benefit. Historical discriminatory practices can finally be exposed using statistical rather than anecdotal evidence. And when bias is uncovered, it can be removed.

As part of its Data Commitment, Zurich aims to minimize the risk of algorithmic bias with two validation processes, explains Gunkel. “First, we devote a significant amount of time to select, vet and connect the data that we use to train our algorithms. We then stress test our algorithms on extreme cases to ensure they are robust enough to avoid distorted decision-making.”
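Gunkel does not spell out what those stress tests look like in practice. One common technique that fits the description – sketched below as an assumption, not as Zurich’s actual process – is a counterfactual check: probe the model with extreme inputs, flip only the sensitive attribute, and flag any case where the decision changes.

```python
# Illustrative counterfactual stress test: flip the sensitive attribute on
# extreme inputs and report cases where the model's decision changes.
# The data, model and thresholds are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def counterfactual_flips(model, X, sensitive_col, threshold=0.5):
    """Return indices where flipping the sensitive attribute changes the decision."""
    X_flipped = X.copy()
    X_flipped[:, sensitive_col] = 1 - X_flipped[:, sensitive_col]
    original = model.predict_proba(X)[:, 1] >= threshold
    flipped = model.predict_proba(X_flipped)[:, 1] >= threshold
    return np.where(original != flipped)[0]

# Train a toy model on synthetic data.
rng = np.random.default_rng(1)
X_train = np.column_stack([rng.normal(0, 1, 5000), rng.integers(0, 2, 5000)])
y_train = (X_train[:, 0] - 0.5 * X_train[:, 1] + rng.normal(0, 0.5, 5000)) > 0
model = LogisticRegression().fit(X_train, y_train)

# Probe it with extreme feature values from the tails of the distribution.
extremes = np.column_stack([np.array([-4.0, -3.0, 3.0, 4.0]), np.zeros(4)])
print("Decisions that flip with the sensitive attribute:",
      counterfactual_flips(model, extremes, sensitive_col=1))
```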

Bias vs Fairness

While bias can be reduced and possibly removed from algorithms, Gunkel says there is a more challenging issue: fairness. Bias occurs when we discriminate against (or promote) a defined group consciously or unconsciously. But fairness is far more complex.

“Fairness is subjective. What is considered fair to one person is viewed as unfair to another,” says Gunkel. “In general, society considers it fair to discriminate on behavior and charge a higher motor insurance premium on reckless drivers.

“But you enter a moral dilemma if you choose to discriminate on intrinsic features. For instance, you may charge a higher life insurance premium to someone who has a family history of heart disease because statistics tell you they will die younger. But is that fair?”

Fairness can involve a balance between treatment and outcomes. A university can ensure fairness in treatment in its selection process by assessing every candidate against the same metrics, such as grades, regardless of situation or context.

Or it could seek fairness in outcome by adjusting the process to account for situational and contextual characteristics. It could, for example, set different entry requirements for students from disadvantaged backgrounds, who may not have had the same access to education and opportunity as students from wealthier homes.
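As a toy illustration of the difference – with hypothetical numbers, not anything drawn from the article – the sketch below contrasts a single admission threshold (fairness in treatment) with a threshold adjusted for disadvantaged applicants (fairness in outcome).

```python
# Toy illustration of the two notions of fairness described above.
# All thresholds, adjustments and applicant records are hypothetical.
ADMIT_THRESHOLD = 75      # same bar for everyone (fairness in treatment)
CONTEXT_ADJUSTMENT = 5    # lower bar for disadvantaged applicants (fairness in outcome)

def admit_equal_treatment(grade: float) -> bool:
    return grade >= ADMIT_THRESHOLD

def admit_equal_outcome(grade: float, disadvantaged: bool) -> bool:
    threshold = ADMIT_THRESHOLD - (CONTEXT_ADJUSTMENT if disadvantaged else 0)
    return grade >= threshold

applicants = [("A", 77, False), ("B", 72, True), ("C", 72, False)]
for name, grade, disadvantaged in applicants:
    print(name,
          "treatment:", admit_equal_treatment(grade),
          "outcome:", admit_equal_outcome(grade, disadvantaged))
# Applicant B is rejected under equal treatment but admitted under the
# outcome-adjusted rule; C, with the same grade but no disadvantage, is not.
```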

If society cannot agree on what is and isn’t fair, then can we expect an algorithm to solve these complex ethical dilemmas? It's for this reason that there is a growing call for governments to introduce regulations to ensure algorithms make decisions that are unbiased and fair.

Whatever route is taken, the good news is that algorithms are entirely within our control. We get to teach them values and ethics, and we can program them to ignore bias.

“This is our chance to remake the world into a much more equal place. You cannot retrain five million individuals to think without bias, but you could create an algorithm that is truly unbiased,” says Gunkel.

“We just need to think very carefully what we teach machines, what data we give them, so they don't just repeat our own past mistakes.”