AI in recruitment: The problem of bias

August 29, 2019

Although plenty of legislation protects people from discrimination while they’re applying for jobs (as well as existing employees) – most notably the Equality Act 2010 – there’s nothing really preventing the unconscious biases of those running the recruitment process from influencing it.

So the prospect of technology – unbiased and free of all the baggage we humans carry around with us – playing a greater role in sensitive workplace processes such as recruitment has to be welcome, doesn’t it? Surely, it’s a sure-fire way to make those processes more objective, less discriminatory and more supportive of efforts to make the workplace more diverse.

But is it? As technologies such as Artificial Intelligence (AI) and Machine Learning (ML) play a bigger role in our lives, it’s becoming clear that they can be just as biased as we are. A study in 2017 by UK and US academics showed that technology can learn biases that are implicit in our language, including problematic ones around race and gender.

TECHNOLOGY’S ROLE IN THE FUTURE OF RECRUITMENT
Our Human to Hybrid research into this new world of work reveals that, while they want to maintain contact with human colleagues, people expect to have much more contact with technology than they currently do. During the recruitment process, almost half (48%) of the 2,000 employees we spoke to were happy for job or role recommendations to be made by AI or automation rather than by people, and 51% were happy for the job application process to be driven by AI or automation.

And HR and recruitment leaders see the same future unfolding: over half (58%) think most candidates will be recruited remotely without ever meeting people in the organisation, and slightly more (60%) say they would be prepared to hire someone on the recommendation of an advanced algorithm or AI, even if it went against their own judgement on meeting the candidate. In fact, two thirds (66%) think AI has the potential to improve recruitment within their organisation, and 81% see AI and automation as having a positive impact on their job.

This positive view of the possibilities offered by technology is even stronger within transport: 40% of employees predict AI and automation will have a very positive impact on their jobs over the next five years, compared to 32% across the board, and 87% of recruitment leaders agree (compared to 81% across the board).

But that’s not to say that they’re blind to the possible downsides. Almost three quarters (72%) of HR and recruitment leaders in transport acknowledge that they need to carefully consider any legal or ethical restrictions around using AI in recruitment. More than a third (36%) of HR and recruitment leaders say bias and a lack of candidate diversity are among the biggest challenges they’re facing, and almost half (49%) want to use technology to remove bias from the recruitment process to increase diversity and equal opportunity within their organisation.

Yet algorithmic bias is one of the top three concerns about digitising the recruitment process for over two fifths (42%) of HR and recruitment leaders in the transport sector.

Their concerns are legitimate: in 2018, Amazon had to scrap an internal AI-powered recruitment tool that sorted through CVs because it was biased against women.

Employees and business leaders generally are also concerned about the impact that a hybrid workplace could have on workforce diversity: around a fifth (22%) of employees worry that it could mean a less inclusive and less diverse workforce.

HOW TO MAKE AI LESS BIASED
So, there’s something of a tension here. On the one hand, people recognise the value of a diverse, inclusive workforce and look to technology to help them achieve it. On the other hand, they recognise the danger of AI introducing more bias into the recruitment process and reducing their organisations’ diversity and inclusivity.

What’s the answer? It lies with the people who create the algorithms, programme the robots and provide the data that machines learn from. If their biases are allowed to colour their work, those biases will colour technology’s work in the hybrid workplace – and we’ll lose the opportunity to create a diverse workforce whose myriad skills, abilities, life experiences and ways of thinking can take organisations to new heights.

Amazon’s failed recruiting tool had been trained on data submitted by people over a ten-year period, most of which came from men, and it learned to prefer men. It’s a classic example of AI’s recruitment value being limited by human bias – but that can be changed.

Technology expert Jessica Rose told The Telegraph in 2018 that “developers and AI specialists carry the same biases as talent professionals”. Recognising this is a good place to start: use diversity and inclusion training to make them aware of their unconscious biases and how to minimise their impact, test for bias throughout the development process, and then regularly review your systems for bias.

Having a diverse workforce producing their best work within an inclusive organisation is a prerequisite for success in the future world of work, and AI offers a new way to achieve that. Don’t let bias get in the way.
