In 2026, HR technology is no longer just a ‘nice to have’ – it is the engine of recruitment, and it is faster than ever. AI tools can scan thousands of CVs in seconds, predict ‘cultural fit’ from a 30-second video, and even schedule interviews without a human lifting a finger. It feels like magic – until you receive a letter from the Employment Tribunal. As the use of these tools grows, so does the risk of high-value litigation.
The Legal Reality: You Can’t Blame the Bot
Under Section 39 of the Equality Act 2010, employers are prohibited from discriminating in the arrangements they make for deciding to whom to offer employment. Crucially, you cannot delegate your legal liability to a robot. This means that if your AI software filters out candidates based on data patterns that correlate with a protected characteristic (such as age, gender, race, or disability), your firm is legally liable – not the software provider.
While the UK has not yet passed a specific ‘AI Act’ for employment, Tribunals are increasingly alive to algorithmic bias. If your software ‘learns’ from a historical database that lacks diversity, it may create a provision, criterion or practice (PCP) that puts certain groups at a disadvantage, potentially leading to claims of indirect discrimination (Section 19, Equality Act 2010).
For example, imagine your AI tool learns that your most successful historical hires were all men in their 30s who lived in London. It may begin to auto-reject brilliant candidates who do not fit that narrow data set.
The Solely Automated Trap
The UK GDPR (Article 22, read with the Data Protection Act 2018) generally prohibits decisions about individuals based solely on automated processing where those decisions have ‘legal or similarly significant effects’. Rejecting a candidate at the final stage by AI, without human intervention, could constitute a breach.
How do we stay ‘Bot-Legal’ in 2026?
- The Bias Audit – Ask your AI software vendor to provide their latest bias audit. If they cannot prove their tool is unbiased, do not use it. In the event of a claim, being able to show you performed due diligence is your first line of defence.
- Update Your Privacy Notice – You are legally required to be transparent about how candidates’ personal data is processed. Ensure your notices explicitly mention AI involvement in candidate scoring.
- The Human Veto – Never let AI have the final say. Ensure a human recruiter checks borderline rejections to prevent talented candidates being ‘digitally excluded’ by a glitchy algorithm.
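To make the bias-audit step concrete, the short sketch below shows one common heuristic an audit might include: comparing each applicant group’s selection rate against the best-performing group’s rate and flagging ratios below four-fifths (0.8), a widely used rule of thumb for spotting potential adverse impact. The group names and figures are hypothetical, the 0.8 threshold is a heuristic rather than a UK legal test, and this is a simplified illustration, not legal advice:

```python
# Simplified adverse-impact check. For each group, compute the selection
# rate (offers / applicants), then compare it to the highest group's rate.
# A ratio below the threshold (0.8, the "four-fifths" rule of thumb)
# flags a disparity worth investigating further.
# All group names and numbers below are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group -> (applicants, offers); returns group -> rate."""
    return {g: offers / applicants
            for g, (applicants, offers) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return per-group rate, ratio to the best group, and a flag."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: {"rate": round(r, 3),
                "ratio": round(r / best, 3),
                "flag": r / best < threshold}
            for g, r in rates.items()}

if __name__ == "__main__":
    sample = {
        "group_a": (200, 60),   # 30% of applicants received offers
        "group_b": (180, 36),   # 20% of applicants received offers
    }
    for group, result in adverse_impact(sample).items():
        print(group, result)
```

A flagged ratio does not itself prove indirect discrimination – it simply identifies a disparity that the employer (or the vendor’s auditors) should investigate and be able to justify.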
If you would like to discuss AI in the recruitment process or any other employment matters, please contact Toby Walker by email or on 01494 521 301.