Pump Court Chambers

The Interplay between the Equality Act 2010 and the use of AI in Recruitment

News, Blog | 19th June 2023

There are numerous ways in which AI is revolutionising the job market and our workplaces, none more so than the recruitment process: a growing number of employers now use software to sift through applications and identify the best possible candidates. But there are potential pitfalls for those seeking to streamline their hiring process, with the Equality Act 2010 (EqA 2010) legislating against any discrimination arising from those algorithms.

This article explores the potential interplay between the EqA 2010 and AI in recruitment. It then asks the academic (but hopefully interesting) question of whether being human might amount to a protected characteristic, and whether a claim for unlawful discrimination could arise where a human is rejected for a role in favour of AI.

AI in Recruitment and Discrimination Under the Equality Act 2010

S.4 EqA 2010 lists the “protected characteristics” on the basis of which employers cannot directly or indirectly discriminate. They are age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.

This protection extends to the recruitment process. The arrangements by which an employer decides who to hire cannot discriminate against a candidate (ss.39(1)(a) and 39(3)(a) EqA 2010), nor can the employer decline to offer a job to a candidate because of a protected characteristic (ss.39(1)(c) and 39(3)(c)). Of relevance to this article, “arrangements” are likely to include the person specifications for roles, the format of application forms, and the way in which an employer analyses their contents. AI can identify the skills, experience, and personality traits required for a job, create a form or questionnaire designed to elicit the most relevant information, and then assess candidates’ responses (as well as their CVs, online profiles, etc.) for the best answers. That could all theoretically take place at the click of a button, saving employers a significant amount of money, time, and resources.
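
To make the point concrete, the sketch below illustrates the kind of automated sift that would itself form part of the “arrangements”: a scoring rule applied to candidates’ answers. The keywords, weights, and candidate text are all invented for illustration; no real product or employer is being described.

```python
# Hypothetical automated sift: the keywords, weights, and applications below are
# invented for illustration. A rule like this, applied to every candidate, forms
# part of the employer's "arrangements" for deciding who to hire.
REQUIRED_SKILLS = {"python": 3, "leadership": 2, "negotiation": 1}  # assumed person spec

def score_application(answer_text: str) -> int:
    """Score a free-text answer against the weighted person specification."""
    text = answer_text.lower()
    return sum(weight for skill, weight in REQUIRED_SKILLS.items() if skill in text)

applications = {
    "candidate_a": "Led a Python team and handled contract negotiation.",
    "candidate_b": "Strong leadership experience in retail management.",
}

# The ranking the employer acts on is itself an "arrangement" being applied.
ranked = sorted(applications, key=lambda c: score_application(applications[c]), reverse=True)
print(ranked)
```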

Such an approach, however, may be fraught with legal implications. To take a well-known case study, Amazon had to pull its Edinburgh-based CV-reviewing AI software because it discriminated against female candidates applying for top tech roles. The recruitment model placed an emphasis on patterns in CVs submitted to the company over the previous ten years. Because the tech industry is dominated by men, the great majority of those CVs had come from men, meaning the system effectively taught itself to prefer male candidates. Moreover, certain verbs that stood out to the algorithm, such as “executed” and “captured”, were more often used by men. The programme ultimately went as far as actively penalising CVs containing the word “women’s”, such as a reference to being a “women’s chess club captain”.
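
The mechanism is worth making concrete. The sketch below is not Amazon’s system; it uses a tiny, invented set of “CVs” and past hiring outcomes to show how a model trained to imitate historically skewed decisions can teach itself to penalise a proxy term such as “women’s”, even though gender is never supplied to it as a feature.

```python
# A deliberately tiny, invented example (not Amazon's system) showing how a model
# trained on historically skewed hiring outcomes can learn to penalise a proxy
# term such as "women's" without ever being given gender as a feature.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "executed product launch and captained the rugby team",       # historically hired
    "captured new markets and led the engineering squad",         # historically hired
    "executed a migration to cloud infrastructure",               # historically hired
    "women's chess club captain, built a compiler from scratch",  # historically rejected
    "women's coding society lead, shipped a mobile app",          # historically rejected
    "organised a women's hackathon, strong python skills",        # historically rejected
]
hired = [1, 1, 1, 0, 0, 0]  # invented past decisions the model is trained to imitate

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The token "women" (from "women's") ends up with a negative weight: the model has
# absorbed the bias in the historical decisions it was asked to reproduce.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:5])
```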

Amazon’s tool was clearly discriminatory, and any use of similarly flawed AI recruitment software in the UK would surely give rise to a claim under the EqA 2010.

There are, however, situations in which the discrimination caused by AI is rather more subtle, and in those cases indirect discrimination during the recruitment process needs to be considered in closer detail.

On 11th February 2021, a report was published for the TUC entitled Technology Managing People – the legal implications. It notes, inter alia, that an algorithm can be viewed as a “provision, criterion, or practice” within the meaning of s.19(1) EqA 2010. Under s.19(2), a provision, criterion, or practice is indirectly discriminatory in relation to a relevant protected characteristic if (a simple, illustrative check follows the list below):

  • The employer applies it to persons with whom a candidate (for the purposes of this article) does not share the characteristic;
  • It puts, or would put, those with whom the candidate shares the characteristic at a particular disadvantage when compared with those who do not;
  • It puts the candidate themselves at that disadvantage; and
  • The employer cannot show it to be a proportionate means of achieving a legitimate aim.
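
The EqA 2010 prescribes no statistical test for “particular disadvantage”, but an employer wanting to check whether an algorithmic sift is having that effect might compare selection rates between the relevant groups. The sketch below is purely illustrative: the outcome figures and the 0.8 threshold (borrowed from the US “four-fifths” rule of thumb) are assumptions, not anything required by the Act.

```python
# Purely illustrative check: the outcome data and the 0.8 threshold are assumptions,
# not anything prescribed by the EqA 2010.
def selection_rate(outcomes):
    """Proportion of candidates the automated sift put through to interview."""
    return sum(outcomes) / len(outcomes)

# 1 = shortlisted by the sift, 0 = rejected (invented figures)
shares_characteristic = [1, 0, 0, 0, 1, 0, 0, 0]
does_not_share = [1, 1, 0, 1, 1, 0, 1, 1]

ratio = selection_rate(shares_characteristic) / selection_rate(does_not_share)
print(f"Selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Possible particular disadvantage: investigate the sift and its training data.")
```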

A relevant example may be found in the growing use of AI to detect facial expressions during video interviews. It may be that candidates who are perfectly capable and suited to the role do not display facial expressions in a “typical” way (for example, because of life-long blindness, or due to ASD or other neurodivergent presentations). That would place them at an inherent disadvantage compared with other candidates, even though the algorithm has not explicitly discriminated on the basis of a protected characteristic. Nor would it be a proportionate means of achieving a legitimate aim: it is perfectly possible to assess and recruit candidates based on an interviewer’s own impressions of them.

The TUC report recommends that humans retain ultimate oversight of decisions and that AI policies be open and transparent. It is important to remember that, for the time being, AI is only as effective as the people who write the code and apply the policies. Should that code or those policies be flawed and lead to discriminatory hiring practices, the buck stops with the employer.

Does the Equality Act 2010 Legislate for Discrimination Against Humans?

What happens when an employer decides not to offer a job to a candidate based on the very fact they are a human being, preferring instead to use AI to carry out the role? Could the candidate bring a successful claim under the EqA 2010?

That may sound like the premise for a particularly litigious episode of Black Mirror but, as we know, employers are already using AI in place of humans. It may be arguable that being a member of the human race is itself a protected characteristic, or indeed that an “all of the above” answer applies to the list in s.4 EqA 2010. Should such an assertion succeed, employers could be caught by ss.39(1)(c) and 39(3)(c) EqA 2010, as they would then be barred from declining to offer a job to someone simply because they are human.

This is of course a novel and academic discussion, but one which is within the realms of possibility. Notwithstanding the centuries-long march to mechanisation, digitisation, and, increasingly, the use of AI, people may end up having to use any tools at their disposal to remain within the workforce. Moreover, any claimant may find themselves in front of an increasingly sympathetic judiciary, given that law is earmarked to be one of the areas in which AI will quickly start replacing people.

(Disclaimer: this article was not generated by ChatGPT)

Alex McHugh
