Dive Brief:
- Human recruiters largely adopt the biases of the artificial intelligence tools they use when selecting job candidates, researchers at the University of Washington found in a recent study detailed in a Nov. 10 press release.
- The study asked 528 participants to work with simulated large language models to choose candidates for 16 different jobs. Researchers simulated AI models with varying levels of bias that generated hiring recommendations for resumes from fictional, equally qualified White, Black, Hispanic and Asian men.
- Per the results, participants who selected candidates without AI, or with a “neutral” AI, chose White and non-White candidates at equal rates. Those working with moderately biased AI, however, showed preferences for either White or non-White candidates that matched the AI’s biased recommendations. Participants working with severely biased AI models made decisions only slightly less biased than the AI’s recommendations.
Dive Insight:
The combination of AI and human recruiters is becoming increasingly dominant, Kyra Wilson, a doctoral student at the University of Washington and lead author of the study, said in the press release. Given that reality, Wilson said the researchers sought to determine how AI technology influences recruiters’ decision-making.
“Our findings were stark: Unless bias is obvious, people were perfectly willing to accept the AI’s biases,” Wilson said.
A recent Employ Inc. report found that 65% of recruiters were using AI in their workflows and 52% said they planned to invest in new recruiting tech, signaling accelerating adoption. In a more extreme finding, a Resume.org report published in August found that one-third of U.S. workers believed their employers’ hiring processes would be entirely run by AI by 2026.
Concerns about bias in hiring AI have been voiced throughout the early 2020s hype cycle surrounding the tech. The University of Washington study found that, even though participants using severely biased AI models made slightly less biased decisions, they still followed the AI’s suggestions about 90% of the time. But Wilson said employers could reduce bias by implementing better models.
Another method for counteracting bias may be to require recruiters to take implicit association tests that help detect subconscious biases. The university said bias dropped 13% among participants who began the study with such a test.
Aylin Caliskan, associate professor at the university’s Information School, said in the release that the scientists designing AI models also have a role to play in reducing bias, as do policymakers.
“People have agency, and that has huge impact and consequences, and we shouldn’t lose our critical thinking abilities when interacting with AI,” Caliskan said. “But I don’t want to place all the responsibility on people using AI.”
Employers also may need to note the rise of state and local laws that regulate the use of AI in hiring. California, for example, recently issued regulations that will require employers covered by the state’s Consumer Privacy Act to provide pre-use notices to potential candidates about the use of AI tools as well as field consumer information requests about automated hiring tools. Employers also will need to perform risk assessments of such tools, per the regulations, which take effect in January 2027.