Alison Lands is a vice president in the employer mobilization practice at Jobs for the Future.
Artificial intelligence promises to make hiring smarter and more objective. But in practice, AI-powered tools are introducing new layers of inconsistency and doubt at a moment when trust in hiring is already fragile.
Today, employers are inundated with more applications than they can realistically process. Overwhelmed hiring teams are turning to AI-powered solutions to triage the flood of applications, prioritizing speed and scale in an increasingly strained labor market. They’re leveraging new tools to screen resumes, schedule interviews, assess competencies and even predict job fit.
In a self-reinforcing cycle, the influx of applications has been driven by growing use of generative AI on the candidates’ side. With these tools, candidates can produce resumes and cover letters at a rapid pace, with nearly half of job seekers using AI to increase their application volume.
As a result, we are in a de facto “AI arms race.” A volatile labor market, riddled with uncertainty driven in part by AI’s adoption, has created a hiring system that moves faster than ever but with less clarity, confidence and shared understanding of what being qualified actually means.
To capture the efficiency AI offers employers while also working toward successful hiring outcomes for job seekers, business leaders must treat several foundational changes as top priorities.
A crisis of confidence in the hiring system
The Illusion of Progress in Skills-Based Hiring, a special report from the University of Phoenix Career Institute, illustrates the rise in usage of AI-powered solutions in the hiring process as well as the challenges that have come along with that.
The report finds that nearly 30% of hiring stakeholders say AI tools are starting to perform tasks once handled by humans — raising urgent questions about fairness, transparency and reliability. More than half of candidates (57%) and nearly half of hiring stakeholders (47%) believe AI impacts objectivity in the hiring process. Half of hiring managers (50%) worry that these tools may screen out qualified candidates.
While concerns around the use of AI in the hiring process are widespread, action to address them is not. I’ve said before that AI is an impatient technology, and our legacy hiring and talent infrastructure was not built to move at this speed. This mismatch is fueling the very doubt these tools were meant to reduce. Per University of Phoenix’s research, only 37% of organizations using AI in their hiring process currently audit their tools for fairness — an alarming gap between risk and responsibility.
As AI tools become de rigueur on both the job seeker and employer sides of hiring, the stakes for building trust in this technology and establishing best practices for its use have never been higher.
Creating a new standard for talent management with AI-powered support
While employers and job seekers navigate AI disruption, businesses are also exploring another shift: the move toward skills-based hiring and talent management. At Jobs for the Future, we believe that skills-based hiring can transform how jobs are defined, advertised and filled. By evaluating people on what they can do walking in the door, this approach brings greater objectivity to talent decisions while expanding the labor pool and broadening access to opportunity. Most employers are moving in this direction.
University of Phoenix’s research shows that a vast majority of hiring stakeholders (82%) say their processes are shifting toward skills-based practices.
But adopting the language of skills is not the same as building a skills-based system.
The University of Phoenix special report finds many organizations pursuing skills-based practices have not put complementary measures in place to make those practices real: 53% of employers report a lack of standardized hiring practices, and 57% of hiring stakeholders say they need better training to evaluate candidates’ skills.
The result is a system without consistent frameworks, shared evaluation standards or interviewer preparation. That gap shows up clearly in readiness: nearly a quarter of hiring stakeholders (24%) say they receive no training or materials before interviewing new candidates.
In that vacuum, hiring teams often revert to familiar shortcuts like gut instinct, referrals and subjective notions of “fit.” These traditional tactics undermine gains made in improving the fairness and transparency of the hiring process, and when paired with AI, these inequities don’t disappear — they scale. When AI adoption outpaces training and governance, risk scales faster than results. Then trust fails, not because skills-based practices are flawed, but because implementation is incomplete.
In other words, skills-based hiring can’t just be an aspiration; it must become a true operating system if AI tools are going to support it. When standardized and implemented consistently across teams and processes, a skills-based foundation gives AI tools something objective and job-relevant to measure against — marrying efficiency with objectivity to drive substantive progress toward a more successful hiring system for all.
The way forward: Taking skills-based hiring models from intent to reality
To this point, the solution to opaque, untrusted AI isn’t more sophisticated tools; it’s a stronger foundation beneath them. But how can businesses achieve that?
- For employers, it starts with operationalizing skills-based practices end-to-end. That means defining clear skill standards, using structured, consistent assessments and equipping hiring teams with the training to evaluate skills reliably rather than defaulting to proxies like familiarity, referrals or perceived “fit.”
- Fairness must be treated as non-negotiable in the tech integration process. AI tools used in hiring should be regularly audited for bias and validated against legitimate, job-relevant criteria. Organizations should also be transparent with candidates about when and how AI is used and ensure that human oversight remains central — especially for high-stakes decisions.
- Finally, governance must be ongoing. Implementing cross-functional oversight, continuous monitoring and feedback loops from both candidates and hiring teams will help ensure the process remains as objective as possible, even as technology and job descriptions continue to evolve.
Skills-based models give AI something meaningful to measure — restoring trust while preserving efficiency. Trust in AI-enabled hiring will be earned the old-fashioned way, through clear standards, trained humans and visible accountability. When organizations invest in clear standards and consistent practices, AI can finally do what it promises: support better decisions, fairer outcomes and a hiring system people actually trust.