AI fraud and security risks in recruitment

Watch the recording of this webinar, which explores potential AI fraud and security risks in recruitment processes.

2 August 2024

This webinar gave employers an opportunity to learn about the risks of AI fraud, including its use in job applications and interviews and the collection of sensitive personal data about candidates, illustrated with case examples.

Two counter-fraud specialists delivered presentations, followed by a Q&A session. The accompanying slides and responses to unanswered questions from the Q&A are available further down this page.

Watch the webinar recording in full below:

Artificial Intelligence fraud and security risks in recruitment

  • What is acceptable use of AI?

    This will be a question for each organisation to consider and agree upon. It is recommended that the organisation sets out what usage is permitted and makes this clear to candidates before they submit their application.

    Acceptable use could, for example, allow candidates to use AI to help rephrase the answers in their application. The candidate should always be expected to be able to back up the answers they provide, and a recruitment decision should never be based solely on the application.

  • How is a candidate’s declaration of use of AI beneficial to the employer?

    Declaring the use of AI demonstrates the candidate’s honesty and integrity at the outset of their application. If AI was used but not declared, that honesty is brought into question. There would also be evidence of a false declaration, which would support any further internal and counter-fraud action.

  • What is the recommended follow-up action to take if a candidate declares they have used AI?

    Follow-up action would likely take place during the formal interview, to ascertain the reasons for using AI. The candidate may have a learning difficulty such as dyslexia, in which case AI may be a useful assistive tool. The accuracy of the content of an application should always be checked, for example through the usual employer checks such as reference and past-employment checks and through the interview process.

  • If you suspect an application has been AI generated but this hasn’t been declared, what can you do, given that you shouldn’t reject it on suspicion alone?

    While we cannot recommend any officially marketed technology for detecting AI, there are AI tools that can be explored to check whether the answers they generate to an application question are the same as, or similar to, those presented by a candidate.

    It is acceptable to challenge and ask questions of the candidate if you suspect the application was AI generated and not declared (hence the importance of making it clear from the outset whether AI use is acceptable). If concerns remain, it would be appropriate to refer the matter to the organisation’s counter-fraud lead for investigation.

  • What is the balance of risk regarding potential race/disability discrimination? If someone has clearly used AI to create a personal statement, and their listed experience/education doesn’t match the statement, surely they can be rejected on that basis?

    The organisation’s recruitment policy and processes should be clear and consistent for all candidates, at every stage of the process, regardless of race or disability. If a person has used AI to give false information (for example, claiming a qualification that follow-up checks found they did not hold), then the organisation could decide not to recruit in the usual way, and a referral should be made to the organisation’s counter-fraud lead in respect of an attempted fraud.

  • If someone declares that they are not using AI in an online interview, what can you effectively do if you suspect that they are?

    This will always be tricky, but it would be appropriate to ask the candidate to show evidence of their identity and their location on camera. This approach is known to be taken in online international English-language tests, where a person is asked to show the room to ensure that no equipment or other people are present. Also be mindful of red flags such as:

    • The candidate’s camera is switched off.
    • The candidate has a second device or screen (can you see their hands?).
    • The candidate appears disengaged or distracted (e.g., typing on a device).
    • The candidate mutes and pauses before answering a question.
    • The candidate is wearing an earpiece or listening device.
    • There is someone else in the room (using AI).

  • Could some of these AI-in-interview red flags, such as appearing distracted or pausing before responding to questions, be linked to neurodivergent conditions?

    Absolutely, and we should always remember that the majority of candidates are honest; it is the minority who seek to take advantage of recruitment processes and systems. However, if a person did have a neurodivergent condition, you would expect to see signs of these behaviours again once they were appointed to the role; if that was not the case, then questions could be raised.