AI shortlisting in recruitment: How to save time, recruit more ethically and stay compliant with UK law
Explore how AI shortlisting tools are changing recruitment in the UK. This practical guide outlines steps to adopt AI responsibly, communicate transparently with candidates, and ensure legal and ethical compliance in your hiring process.
This article is for UK-based recruiters and HR professionals considering the adoption of AI-based shortlisting tools. It’s packed with practical advice and highlights areas where you may have legal responsibilities – but please remember, this is not legal advice.
Why AI Shortlisting is on every recruiter’s radar
Recruiters are under more pressure than ever. Applications are surging (UK applicant data from Job Adder showed a 45% increase from 2023 to 2024), candidate expectations are rising, and recruitment teams aren’t growing at the same pace – in many cases, they’re shrinking.
Meanwhile, candidates are increasingly using AI to help them to apply to more employers. These growing demands have pushed many to explore the benefits of AI, such as reducing manual workload, improving consistency, and speeding up early-stage screening.
We believe that by the end of 2025, AI shortlisting will be an essential feature included in almost every applicant tracking system (ATS). But not all AI features are created equal – and different vendors will take very different approaches. Whether you're considering a new provider or enabling the AI feature in your existing ATS, here’s what you need to know to roll it out responsibly and in line with best practice.
What should you look for in an AI Recruitment Tool?
If AI shortlisting is a key feature in your ATS evaluation process, or you're considering enabling it in your current system, the first thing to understand is how the tool actually works. Vendors should be able to clearly explain the AI functionality in simple terms. That explanation should then be something you can relay with confidence to internal stakeholders and hiring managers.
If you can’t explain it clearly to your HR colleagues or leadership team, you’ll likely struggle to explain it to a candidate – especially one who has just been rejected.
You should also expect more than just an explanation of how to turn it on in a help article. Vendors serious about AI recruiting should support you with best practice guidance and training.
This is not a “switch it on and walk away” type of feature. It needs thoughtful implementation and careful communication to staff and candidates alike.
A note on legal responsibility
Throughout this guide, we’re highlighting both best practice and areas where you may have a legal responsibility. That said, we are not a law firm – just a recruitment software company – and this article should not be considered legal advice. If in doubt, consult a qualified legal professional.
Step 1: Update your privacy policy to explain how you use AI in your hiring process
Your first action point should be to review and update your privacy policy (PP) to reflect how AI is used within your recruitment process.
We strongly recommend creating a dedicated privacy policy specifically for recruitment. Trying to explain how you process customer data, supplier data, and applicant data all in one place tends to result in a vague, overly complex document. The advent of AI only makes this more challenging – and more important – to get right.
At a minimum, your privacy policy should:
- Clearly state that you use AI tools as part of your applicant screening or shortlisting process.
- Explain that your AI recruiting tool is used to support decision-making, not to make final hiring decisions (unless you are doing so – more on that shortly).
- Provide a general explanation of how the tool works (e.g., “our system uses AI to review and score applications based on the criteria set by our hiring team”).
Step 2: Make your new "AI-ready privacy policy" easy to find
It’s not enough to update your privacy policy – you also need to ensure it’s clearly visible wherever a candidate might interact with you. That includes:
- Your careers site
- Contact forms (particularly those linked to job applications)
- Application forms
- Post-application emails (e.g. “Thank you for your application” emails)
Don’t overlook edge cases either – for instance, if a colleague hands in a printed CV or refers a friend via email, that applicant may still end up in your ATS and potentially be evaluated by AI. You’ll need a process to ensure that they, too, are directed to your privacy policy.
Taking these steps is not just good housekeeping – it gives you a strong foundation if an applicant ever challenges how their data was handled.
Step 3: Be clear on automated decision-making (Spoiler: You don’t want automated decision-making)
Under UK GDPR, if a decision is made solely by automated means – with no meaningful human involvement – and that decision has legal or similarly significant effects, it is only permitted in limited circumstances. In a recruitment context, the most relevant of these is obtaining explicit consent from the individual involved.
Most AI shortlisting tools do not make final decisions. They assist recruiters by reviewing CVs and highlighting top candidates, but the ultimate choice remains with a human. That means you are not making solely automated decisions, and you don’t need to get explicit opt-in consent from applicants.
However, this distinction must be made clear in your privacy policy and communications with candidates.

Step 4: Train your staff on ethical use of AI in recruitment
No matter how user-friendly your AI shortlisting tool may seem, training your staff is essential. It’s not just about making sure they can use the system – it’s about ensuring they understand the legal implications of misusing it.
Training alone won’t fully protect your organisation from risk, but it will demonstrate that you’ve taken reasonable and proactive steps to act as a responsible employer. The opposite is also true: failing to offer training could be viewed as negligence if something goes wrong.
When you train your team, make sure you cover:
- What the AI recruiting tool does (and doesn’t do)
- How to use the tool ethically and accurately
- What shortcuts to avoid
- How to answer candidate questions transparently
This isn’t just for recruiters – hiring managers involved in shortlisting should also understand the basics of how the tool works and how to use it responsibly.
Step 5: Engage your ATS vendor on training and best practice
Your software provider plays a key role in ensuring you’re set up for success with AI recruiting. You should feel confident asking them the following:
- Do they provide user training or guidance tailored to their AI tools?
- Have they created best practice materials, not just technical instructions?
- Do they offer examples or scenarios of how to prompt or instruct the AI tool?
Some ATS providers may offer prompt-based AI features, where recruiters can type instructions into a window to tell the AI how to screen applicants. While this provides flexibility, it also opens the door to misuse – particularly if users aren’t trained properly or if prompts are vague, biased, or ethically questionable.
Ideally, your vendor should guide your users and place controls or safeguards around how AI features are accessed and used. Leaving users to invent prompts from scratch increases the risk of inconsistency and bias.
Step 6: Monitor staff usage
You need to know how your AI tools are being used – and just as importantly, whether they’re being used correctly.
Ask your ATS provider:
- Can you track how and when staff use the AI feature?
- Are there dashboards or logs you can review?
- Can you monitor any prompts or input instructions?
Communicate clearly to your team that usage is being monitored. This helps reinforce the importance of following guidance and discourages any temptation to take shortcuts that could undermine fairness or quality.
Step 7: Create an internal employee AI Usage policy
Formalise your expectations by introducing an employee AI usage policy that covers recruitment activity.
Your policy should clearly outline:
- What tools can be used in recruitment (e.g. specific ATS features or browser-based AI tools)
- How they should be used, and for what types of tasks
- How not to use them – including examples of prohibited or risky behaviour
- Any training or sign-off requirements that must be completed first
Including this in your wider technology or recruitment policy is a good way to ensure consistency and avoid ambiguity across teams.
Step 8: Ask your vendor about reliability, consistency, & bias mitigation
When evaluating a provider, ask how they ensure the AI is both reliable and consistent in its outputs. You’ll want to understand:
- How the tool is tested
- What monitoring or feedback mechanisms exist
- How the vendor reviews and improves the AI over time
Just as importantly, ask what steps they’ve taken to minimise bias. AI tools are only as good as the data and rules they’re built upon. Your provider should be able to explain:
- If they tested the tool across different groups (e.g. gender, ethnicity, disability)
- Whether they conducted any bias audits or validation studies
- What actions are taken when inconsistencies or concerning patterns are identified
Step 9: Ask to review their Data Protection Impact Assessment (DPIA)
Every AI-driven recruitment feature should have been subject to a Data Protection Impact Assessment (DPIA) by the vendor. This is a critical document that helps assess and mitigate risks to individual rights and freedoms – and it’s your right, as a customer, to request access to it.
A thorough DPIA demonstrates that the vendor has:
- Thought carefully about the risks of their AI functionality
- Not cut corners in the development process
- Considered fairness, transparency, and data protection from the outset
You don’t need to be a legal expert to read the DPIA – but you should understand its key findings and keep a copy on file.
Step 10: Create a feedback channel for candidates
Transparency also means listening. You should provide an email address (or equivalent contact route) where candidates can share feedback or raise concerns about your recruitment process – especially in relation to AI use.
This should be a monitored address, and someone in your team should be responsible for reviewing and responding to queries in a timely and consistent way.
Even better: proactively invite feedback in your rejection emails or privacy policy. This helps candidates feel that your process is open, fair, and accountable.
Step 11: Train staff on how to talk about AI Use
Your team should be ready to answer questions from candidates about AI usage – confidently and honestly.
For example, we’d expect a Hireful customer to explain:
“Yes, we use AI to help rate applicants against the criteria set out in the job advert. This assists our recruiters by saving time, but all final decisions are made by a human.”
Where appropriate, you might go further and explain how the rating was produced. If your system generates a score or summary, offer to share it:
“Would you like to see your AI rating and a short summary of how the rating was calculated?”
Remember: You don’t need to explicitly mention to every applicant in your rejection email that they were rated by AI. You should, however, be completely transparent about AI usage if you are asked about it during your recruitment process.

Step 12: Only move forward if you truly understand the tool
This is your final checkpoint – and arguably the most important one.
Before enabling AI shortlisting, ask yourself:
- Can I explain how this AI tool works to a 16-year-old?
- Can I describe how applicants are rated, and why?
- If someone challenged us, could we clearly justify our process?
If the answers are yes, you’re in a strong position.

What you must avoid is adopting what’s often called “black box” AI – where decisions are made by algorithms in ways you don’t understand and can’t explain. If you don’t know how conclusions are reached, how will you ever know whether they were reached fairly?
In summary
AI shortlisting has the potential to transform recruitment – not by replacing humans, but by helping them focus their time where it matters most. But for this technology to work well and ethically, it must be implemented thoughtfully, transparently, and with oversight.
By following the steps outlined in this guide, you’ll be better prepared to:
- Evaluate tools that align with your values
- Train and support your team properly
- Reassure candidates that your hiring process is fair and explainable
And most importantly, you’ll be protecting your organisation – and the candidates you serve – as the world of recruitment continues to evolve.
Ready to find out more about hireful’s ATS and our AI shortlisting?
You can read about our AI shortlisting feature here.
The following articles might also be of interest:
- How our AI shortlisting works
- How our AI shortlisting minimises bias
When you are ready, feel free to contact our team to discuss how this could reduce bias and streamline your recruitment process.