How an awareness of bias can help us train better AI tools

It has been nearly a year since the generative artificial intelligence (AI) chatbot ChatGPT was launched. Confronted with such a groundbreaking technology, people reacted with a mix of fear and excitement.

Now, with the dust beginning to settle, one thing is clear: AI is here to stay.

Today, almost a third of companies use AI in at least one business function, and nearly one in ten people report using generative AI at work. This number is only going to grow, with average estimates forecasting that 70 per cent of businesses will use AI by 2030.

Here at Fire on The Hill, we have seen AI become a steady feature of the way we work, and of the way our clients work too.

From accelerating game development to powering intelligence platforms that detect and debunk mis- and disinformation, AI is now a central component of the office tech stack.

But, like all technology, AI has its share of flaws; questions surrounding intellectual property have taken centre stage, while data and cybersecurity concerns are quickly taking shape. Less spoken about, however, is bias within AI models.

According to a survey conducted by Deloitte, 38 per cent of respondents believe that answers generated and actions taken by AI are unbiased. But it is increasingly clear that AI can harbour and reproduce biases.

With AI set to become even more integrated into our world, misunderstanding the relationship between AI and bias could have serious repercussions, potentially undermining the value of AI as a tool.

Bias in AI has already had serious repercussions in the workplace.

In 2015, Amazon discovered that a recruitment AI tool it had trained on a decade of past resumes had learned a troubling pattern. Those resumes came mostly from men, and as a result the AI attached a negative value to any mention of being a woman, to the point that candidates were ‘essentially rejected’ when a reference to their gender was detected.

Further, in 2018, bias within an AI tool used to assess and predict recidivism was revealed: it incorrectly labelled African American defendants as “high-risk” at nearly twice the rate at which it mislabelled white defendants.

So, evidently, AI can produce prejudiced outcomes.

Unfortunately, these are not anomalous instances. But if we want to better understand the problem of bias in AI, it is important to locate such cases in their wider human context. Crucially, bias in AI is not immutable: AI cannot create bias, it can only learn it, which shifts the onus from our algorithmic creations to us, their creators.

Multiple studies in the UK and America have demonstrated that when recruiters are presented with identical CVs that differ only by the names attached to them, interviews are offered at markedly different rates depending on the racial group the name suggests.

Similarly, it is well recorded that historical prejudices in the justice system have disproportionately harmed minority groups. So biased AI is not necessarily an aberration, but a reflection of our own biases.

In short, biased AI is the product of an algorithm trained on, and learning from, data that records biased decisions originally made by people.

Moving Forward

So, we are at a crossroads – confronted by our own historic biases and a nascent technology that threatens to reproduce them, how should we move forwards?

The question is not whether we scrap AI, but rather, how can we ensure we are training AI well? Perhaps even, can AI help us address our own biases?

For a start, we need to democratise and diversify the group of people making decisions about how AI is trained and the data used to train it. These people should reflect the wider society within which the AI will be making decisions and generating content.

Today, the majority of AI researchers and professionals are men in the global north, with just one in eight AI researchers being women and relatively few projects being undertaken in the global south.

Sometimes bias is unavoidable: even with the best intentions, implicit bias can go unnoticed in the information used to train AI models. Here, ironically, AI itself might be the solution. So-called ‘bias analysers’ have been developed to identify outliers and patterns in training data that could reflect bias, and to remove them before a model learns from them.
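To make the idea concrete, the sketch below shows one very simple form such a check could take: comparing positive-outcome rates between groups in labelled training data and flagging any group whose rate falls well below the best-performing group's (the widely used "four-fifths" disparate-impact heuristic). The dataset, field names, and 0.8 threshold are illustrative assumptions for this sketch, not a description of any particular bias analyser.

```python
# A minimal, illustrative bias check for labelled training data.
# It compares positive-outcome rates across groups and flags large gaps.
# The records, group names, and 0.8 threshold are hypothetical examples.

from collections import defaultdict

# Each record: (protected_attribute_value, label), e.g. from historical hiring decisions.
training_records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def selection_rates(records):
    """Return the positive-outcome rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {group: positives[group] / totals[group] for group in totals}

def flag_disparate_impact(records, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the highest
    group's rate -- a simple screen, not a full fairness audit."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if best and rate / best < threshold}

if __name__ == "__main__":
    print("Selection rates:", selection_rates(training_records))
    print("Potentially disadvantaged groups:", flag_disparate_impact(training_records))
```

In practice, a flagged group would prompt a human review of the underlying records, and possibly re-balancing or re-labelling, before the data is used to train a model; automatically deleting records is rarely the whole answer.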

Lastly, thinking critically and engaging with AI around us can help us understand and reveal our own biases.

We do not have to be stuck in a ‘doom loop’. We have some incredible tools at our disposal; we just have to ask the right questions.

 
 
Image: Barbara Zandoval / Unsplash
