Dive Brief:
- When people believe they’re being assessed by artificial intelligence, they emphasize their analytical skills and downplay their intuitive and emotional skills, according to research published in the June issue of the Proceedings of the National Academy of Sciences.
- This shift in behavior – what the researchers refer to as the “AI assessment effect” – happens because of the lay belief that AI assessments prioritize analytical characteristics, the researchers said.
- The finding has significant implications for HR managers and other decision-makers involved in the selection process, because “if people strategically adjust their behavior in line with their lay beliefs about what AI prioritizes, their true capabilities and/or personalities may not be revealed,” the researchers explained.
Dive Insight:
The researchers, from the Institute of Behavioral Science and Technology at the University of St. Gallen in Switzerland and the Rotterdam School of Management at the Netherlands’ Erasmus University Rotterdam, based their findings on several studies.
At the outset, they surveyed 1,421 job candidates who completed a game-based assessment from Equalture, a hiring software company, according to a research summary.
The candidates were asked to rate, on a continuum from exclusively human to exclusively AI, who they believed had assessed them. They were also asked to indicate, on a similar continuum, the extent to which they had adapted their behavior to this belief.
Given the bedrock psychological finding that people tend to adapt their behavior to what they believe an assessor will find favorable, understanding these responses is critical as AI tools become increasingly prevalent in the selection process, the researchers explained.
In the context of HR management, this behavioral shift could fundamentally alter who gets selected for positions and potentially undermine the selection process, the researchers said.
To ensure candidate responses are authentic, they recommended that organizations identify and correct for the AI assessment effect in their own assessment practices.
Recent findings from Resume Builder suggest the advice comes at a crucial time.
According to the findings, released in June, 6 in 10 managers use AI tools at work, and almost all of them (94%) use AI to make decisions about their direct reports, including determining raises, promotions, layoffs and terminations.
Yet only a third of the more than 1,300 U.S. managers surveyed said they had received formal training on how to use AI ethically, and about a quarter said they had received no training at all.
Additionally, 1 in 5 managers told Resume Builder they frequently let AI make final decisions without human input, although nearly all managers said they’re willing to step in if they disagree with an AI-based recommendation, the survey showed.
“It’s essential not to lose the ‘people’ in people management,” Stacie Haller, Resume Builder’s chief career advisor, cautioned. While AI can support data-driven insights, it lacks context, empathy and judgment, and AI outcomes reflect the data the tool is given, which can be flawed, biased or manipulated, Haller said.