While most job seekers are using artificial intelligence tools for basic help, like assistance writing a cover letter, some may be using the technology to forge documents and create fake resumes, said RJ Frasca, vice president of channels and partnerships at Shield Screening.
One way companies are combating fraudulent applications is by fighting fire with fire: using AI in screening platforms to sort out who might be using the technology for unscrupulous reasons. As job candidates are “using more AI to get through filters, [hiring managers] are using more AI to fight those filters,” he said.
Here’s what hiring managers need to know.
The AI-enabled candidate problem
AI has become part of the job-seeking process, especially for younger candidates who may have learned to use generative AI in high school and college. That use is typically harmless, said Joe Jones, director of research and insights for the International Association of Privacy Professionals, a nonprofit that defines, promotes and improves the privacy and AI governance professions.
“They’re trawling various databases and networks to see what’s available,” said Jones. Others are using it “in a way to generate content to support their application,” like responding to questions, uploading resumes and creating a cover letter template.
Generative AI starts to become a problem when it’s used not as a tool but as the “be-all and end-all” of the job-hunting process, he said — when candidates rely solely on AI to write cover letters and resumes that are not personalized, or provide responses to questions that don’t actually answer what is being asked.
Deepfake technology, which also relies on AI, has only gotten more convincing. According to a report from finance software provider Medius, just over half (53%) of businesses in the U.S. and U.K. have been targets of a financial scam powered by deepfake technology, with 43% falling victim to such attacks.
HR professionals might not think this is their problem, but hiring is a potential entry point for criminals: by getting themselves onto hiring platforms and into companies, they can steal company software, install malware or worse.
At the end of 2024, for example, threat intelligence and incident response firm Mandiant warned that North Korea-backed remote IT workers had infiltrated “dozens” of Fortune 100 companies through deepfaked video interviews and stolen personally identifiable information.
Fighting fire with fire
But AI can help recruiters defend themselves against bad actors, too. Candidate uses AI to make a fake diploma? “There’s a flip side of that, where AI is being used to dig deeper and verify” whatever documents or information candidates are presenting, said Frasca.
AI can also be used to verify a candidate’s identity by performing a live scan of their face and comparing it to a government ID database, he added.
On video calls, AI can be used to detect when the person you’re talking to isn’t really there, or is an altered version of someone else. AI can spot patterns “that are more prevalent in deepfakes than in human cases,” said Jones, such as anomalies in eye contact and voice intonation. “There’s all sorts of analysis on the presentation of a person” that can be detected.
But companies shouldn’t just add an AI program or deepfake detector to their platforms and then set it and forget it. “The technology’s capabilities need quite a bit of human oversight,” said Frasca, especially if companies are using it not just to detect potential fraud but also to sort through candidates and applications.
HR departments using AI also need to make sure that “bias doesn’t creep in when the AI is being used,” said Jones. An AI could make inferences about gender, race or other protected characteristics. It could, for example, scan a resume, assume the applicant is a woman, and draw conclusions about the candidate from that determination. “Bias can creep in — or more than creep,” he said.
Notably, job seekers are also wary of employers that rely on AI tools for the hiring process; around 3 in 5 job hunters surveyed by Express Employment Professionals and Harris Poll said they would consider not applying to companies that rely on generative AI.
And while legislation specific to AI is still in its infancy, existing laws on matters like privacy, copyright and IP are being applied to it, which HR managers should be mindful of. “In the context of AI, privacy laws, [e]quality laws and nondiscrimination laws still apply,” Jones said.