Jamie Kohn is a senior director, research in the Gartner HR Practice.
As artificial intelligence becomes a mainstay in the recruitment space, candidates are leveraging AI to polish their resumes and mass-apply to jobs, while employers are relying on AI to sort through the resulting surge in applications.
But job seekers are struggling to stand out in this increasingly automated landscape and are growing more frustrated by the increased use of AI in hiring.
Recent Gartner data reflects this. In a 3Q25 survey of 2,901 job candidates, 68% said they prefer human interactions over those with AI/chatbots, an increase from two years ago when 58% preferred humans. Additionally, more candidates (26%) said they would drop out of an application process if they had to interact with AI, up from 21% two years ago.
Improving the candidate experience
Despite these concerns, employers see AI as a way to boost efficiency while also improving the candidate experience. AI can reduce the time spent on interview scheduling, help to provide a consistent interview experience and shorten the overall time to hire.
The newest frontier in an AI-driven hiring process is AI interview agents. Employers are starting to experiment with AI interviewers that conduct phone interviews at times that work best for candidates. Gartner’s 3Q25 survey shows that 9% of candidates have completed an interview with AI, with AI interviews being more common in technology (15%), telecommunications (12%) and professional services (11%).
While candidates have reservations about AI, Gartner’s 3Q25 survey shows 30% of overall candidates would be open to having an interview conducted by AI. For those who have done an AI interview, this number jumps to 61%. Among the candidates who have had an AI interview, 48% felt comfortable being interviewed by AI, and 27% said they prefer the AI interviewer to a human. However, only 31% said they knew ahead of time that they would be doing an AI interview — a missed opportunity to improve the candidate experience.
Increased candidate use of AI is eroding quality
However, as more employers leverage AI, candidates are responding in kind by using AI tools themselves.
Gartner found 13% of candidates in 3Q25 reported using generative AI in real time during an interview, with even more candidates in the Asia-Pacific region (18%) admitting to in-interview usage. Those uses ranged from relatively benign tasks to more questionable ones. For example, 44% of candidates used generative AI during interviews for real-time research on the company or role, while 41% used it to generate answers to questions.
As hiring becomes increasingly influenced by AI, employers need to make sure they are evaluating candidates’ true abilities, while also preserving trust in the hiring process. Gartner has outlined steps employers can take to set parameters and adapt their interview process for candidate AI use.
Transparency around AI use policy
Many candidates see their use of generative AI as a way to level the playing field as the job market becomes more competitive. It is up to employers to openly communicate their generative AI use policy to candidates during recruitment, including expectations and philosophy on AI use at work and during the interview process. If employers don’t want candidates using generative AI in interviews or assessments, they should clearly state that.
This doesn’t mean candidates shouldn’t be allowed to use generative AI at all, but employers should evaluate candidates on their ability to complete tasks both with and without generative AI assistance.
Updating interview guides with AI-resistant questions
One way for employers to get a true sense of a candidate's abilities is to use interview questions that generative AI is not good at answering. For example, generative AI can explain complex, domain-specific topics, but it cannot supply genuine personal experiences.
Recruiters should focus on behavioral interviews, case studies and hypothetical scenarios to better assess candidates’ true skills. Behavioral questions, such as asking how a candidate handled an underperforming team member (vs. asking generically about management style), require personal experience and are less suited to generative AI assistance. For case study problems, candidates can use generative AI to solve the problems, but AI is less effective for follow-up questions that probe candidates’ thought processes.
Training interviewers to spot generative AI usage
Recruiting leaders should also provide clear technology guidelines to interviewers and candidates on what devices and technology may be used and expectations about being on camera. Recruiters should train interviewers to spot generative AI usage during interviews. Common behaviors candidates may exhibit if they are using AI include:
- Unusual pauses before responding (to allow AI to generate a response)
- Unnatural or overly polished sentence structure
- Struggling to answer follow-up questions with any depth of knowledge
- Heavier use of buzzwords or jargon
Integrating generative AI into interviews
Finally, many employees are now required to use generative AI on the job. Why should the assessment process be any different? Recruiting leaders must redesign interviews to align with how work gets done, combining generative AI and human expertise. For example, interviews might require candidates to use generative AI for a task and talk through their process. This type of interview can allow interviewers to evaluate generative AI skills such as writing effective prompts and evaluating output while also assessing core skills like communication and overall subject matter expertise. This helps employers assess candidates’ true ability to do the work.
As both employers and job candidates increasingly rely on AI tools, transparency and clear expectations are more important than ever. Employers must communicate how they are using AI and set boundaries for candidates’ AI use. Otherwise, employers risk hiring unqualified candidates while losing the trust of all candidates in the process.