Can AI destroy bias and improve diversity in recruitment?

In 1970, the top five orchestras in the US had a mere 5% female membership. By 1980 that number had risen to 10%, and by the late 1990s it had climbed to over 25%.
 
Women weren’t suddenly getting better at playing musical instruments, so what was driving this rise? Blind auditions. In the early 80s, orchestras began assessing the quality of musicians by the music alone: each candidate played a test piece behind a curtain rather than in view of the judges. This single small change made it 50% more likely that a female musician would make the final stage of auditions. Whether consciously or unconsciously, female candidates had been the victims of bias.
 
 
Removing bias from the recruitment process is a huge ask. Blind auditions work when all that matters is ability, but in most industries, recruiters need to speak to candidates to assess their suitability for roles. We all come with our own baggage – preconceived biases shaped by our upbringing, our lives and our experiences. We can never be fully unbiased.
 
As we’ll see, a diverse workforce is a huge advantage for all businesses. But if we can’t be trusted to make an objective decision about applicants, who can? Might AI hold the answer?
 

The problems with diversity

Diversity is a major problem across cybersecurity. Women are underrepresented, making up only 8% of the cybersecurity workforce in the UK and 11% worldwide. Younger workers are too: only 12% of the UK workforce is under the age of 35.
 
A widespread gender pay gap has also been uncovered in the cybersecurity industry, with male professionals in Europe paid nearly £10,000 more than their female counterparts, despite typically having less education. The industry has a reputation as a ‘male-dominated, middle-aged profession’.
 
A report by Computer Weekly found evidence of a gaping gender pay gap in the UK as well. Its survey, which analysed the average salaries of women and men in the tech industry, revealed a 25% salary difference – again in favour of men.
 

“Women feel actively discouraged from applying to cyber security jobs.”

 
The problem goes beyond wage disparity. Research by Information Age shows that women feel actively discouraged from applying to cybersecurity jobs while at university, meaning careers in the profession are rarely considered an option either before or after graduation. This is evident in the lack of women currently working in the information security industry.
 
Beyond gender, a study by BCS (The Chartered Institute for IT) established that only 8% of the UK IT workforce were disabled and only 21% were aged over 50. It also found that IT specialists from minority groups were more likely to be offered temporary positions than permanent contracts.
 
This lack of diversity in the cybersecurity and IT workforces is a major problem, because diversity is vital to workplace innovation. As Robert Hannigan (then Director of GCHQ, the Government Communications Headquarters) said: “To do our job, solving some of the hardest technology problems the world faces, we need different backgrounds, experiences, intellects, sexualities. It is in mixing all of those together that you get the creativity and innovation we desperately need.”
 
The facts are pretty damning, and clearly not all of them are down to workplace bias. But according to a Global Information Security Workforce Study, 87% of the women sampled stated that they had witnessed unconscious bias in their place of work. And that’s from the women who managed to get work in the industry. How many more have lost out in the recruitment process due to bias, whether known or unknown?
 

So, can AI help?

Unlike humans, technology can use algorithms to analyse large quantities of candidate data at high speed, without judgement or discrimination. It can influence the entire recruitment process, and it has already become an effective recruitment tool in some industries.
 
UK cybersecurity company Panaseer, for instance, noticed that very few women had been applying to its data scientist positions. The company turned to the predictive algorithms and machine learning capabilities of US-based Textio, a platform designed to detect gender-biased language in job descriptions. After altering masculine-coded words like ‘driven’ to more inclusive alternatives such as ‘collaborative’, it saw a 60% rise in the number of shortlisted female candidates.
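Textio’s models are proprietary, but the core idea – flagging gender-coded words in a job ad – can be sketched in a few lines of Python. The word lists below are illustrative only, loosely inspired by published research on gender-coded language (Gaucher, Friesen & Kay, 2011), not Textio’s actual vocabulary:

```python
import re

# Illustrative word lists only; real tools use far larger vocabularies
# and models trained on hiring-outcome data.
MASCULINE_CODED = {"driven", "competitive", "dominant", "ambitious", "fearless"}
FEMININE_CODED = {"collaborative", "supportive", "committed", "interpersonal"}

def audit_job_ad(text: str) -> dict:
    """Count gender-coded words in a job description and report the lean."""
    words = re.findall(r"[a-z]+", text.lower())
    masculine = [w for w in words if w in MASCULINE_CODED]
    feminine = [w for w in words if w in FEMININE_CODED]
    return {
        "masculine_coded": masculine,
        "feminine_coded": feminine,
        "leaning": ("masculine" if len(masculine) > len(feminine)
                    else "feminine" if len(feminine) > len(masculine)
                    else "neutral"),
    }

ad = "We want a driven, competitive data scientist to join our fearless team."
print(audit_job_ad(ad))
# {'masculine_coded': ['driven', 'competitive', 'fearless'], 'feminine_coded': [], 'leaning': 'masculine'}
```

Even this crude approach surfaces the pattern Panaseer found: the ad above leans heavily masculine before a single candidate has read it.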
 
AI’s uses don’t stop there, though. Some large US-based recruiters have started using AI platforms such as the chatbot Mya to make initial contact with potential candidates. The system is programmed to ask objective, performance-based questions while avoiding the subconscious judgements a human might make.
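Mya’s internals aren’t public, but the principle – every candidate gets the same job-relevant questions, scored against a fixed rubric with no demographic input – can be illustrated with a minimal sketch. The questions, weights and pass threshold here are invented for illustration:

```python
# Structured, rubric-based screening: identical questions for every
# candidate, scored by fixed rules, so no demographic signal can sway
# the outcome. All questions, weights and thresholds are hypothetical.
QUESTIONS = {
    "years_python": "How many years have you worked with Python?",
    "years_security": "How many years have you worked in cybersecurity?",
}
WEIGHTS = {"years_python": 1, "years_security": 2}

def screen(answers: dict) -> bool:
    """Score a candidate purely on stated, job-relevant experience."""
    score = sum(
        WEIGHTS[key] * min(int(answers[key]), 5)  # cap so tenure can't dominate
        for key in QUESTIONS
    )
    return score >= 8  # hypothetical pass threshold

print(screen({"years_python": "3", "years_security": "4"}))  # True
print(screen({"years_python": "1", "years_security": "1"}))  # False
```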
 
Other systems, such as the data-driven AI video platforms Paññã and HireVue, can even analyse candidate video interviews to find the best talent. These systems use dynamic questioning, expert evaluation, and voice and face recognition to filter under-qualified applicants out of large candidate pools.
 
The technology is impressive, and improving all the time – but it’s not perfect.
 

The problems with AI

The aforementioned AI video platforms, Paññã and HireVue, are undoubtedly impressive. But, as WIRED points out, the demand for AI solutions has predominantly come from large retailers that receive high volumes of applications for lower-skilled, less specialised jobs. The goal is efficiency rather than fairness. For higher-skilled roles with fewer applicants, the selection process is much more nuanced. Businesses would be brave to accept the opinion of a computer for a role commanding an £80,000+ salary.
 
A bigger issue is perhaps the bias of AI itself. AI only knows what it learns, and generally it learns from flawed, biased humans.
 
Take Tay.ai, for example. This conversational chatbot created by Microsoft used live Twitter interactions to become ‘smarter in real time’. However, after a single day on the platform, the chatbot had become racist and riddled with bias – a reflection of the material it had been trained on.
 
Or Beauty.ai, an app that used AI to select the most attractive person from an array of submitted photos – and exhibited a strong preference for light-skinned, light-haired entrants. There’s a reason the conversation around bias in AI has reached the US Congress. It’s a problem we’ve yet to solve.
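It’s easy to see how this happens. In the toy sketch below, with entirely fabricated ‘historical’ hiring records, a naive model that learns from past decisions simply reproduces the bias baked into them:

```python
# A toy illustration of how a model inherits bias from its training data.
# The "historical" records are fabricated: equally qualified candidates,
# but group B was hired less often in the past.
history = [
    # (qualification_score, group, was_hired)
    (8, "A", True), (8, "B", False),
    (7, "A", True), (7, "B", False),
    (9, "A", True), (9, "B", True),
]

def hire_rate(group: str) -> float:
    """Fraction of past candidates in `group` who were hired."""
    outcomes = [hired for (_, g, hired) in history if g == group]
    return sum(outcomes) / len(outcomes)

def naive_score(qualification: int, group: str) -> float:
    """A naive model that weights candidates by their group's past hire rate."""
    return qualification * hire_rate(group)

print(naive_score(8, "A"))  # 8.0  - full credit
print(naive_score(8, "B"))  # ~2.7 - same qualification, penalised group
```

Real systems are far more sophisticated than this, but the failure mode is the same: biased data in, biased decisions out.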
 

The reality

So what’s the likelihood that AI can help eliminate bias in recruiting? The jury’s out.
 
It can absolutely provide major advantages in tackling bias and improving diversity in certain industries, but it’s unlikely to replace the human role altogether. As LinkedIn’s Global Recruiting Trends report suggests, the most likely use will be helping recruiters source, screen and nurture candidates – saving time and removing human bias.
 
The recruiter’s job will instead focus more on assessing candidate potential, building one-on-one relationships and judging culture fit, particularly for highly skilled roles like cybersecurity. In the medium term, AI will act more as a personal assistant to recruiters, another string to their bow when assessing the quality and suitability of candidates.
 
In the long term, who knows? Perhaps your future recruiter will be more robot than human.