- Popular discussion of artificial intelligence has reached new heights, but recently published data from a Pew Research Center survey of U.S. adults showed mixed reactions to the use of AI in various employment functions.
- For example, 71% of respondents said they opposed using AI to make final hiring decisions, 70% opposed using it to analyze employees’ facial expressions and 61% opposed using it to track workers’ movements on the job. A majority also opposed applications including keeping track of office workers, recording computer activity and tracking how often workers took breaks.
- However, nearly half of respondents said AI would be better than humans at treating all job applicants the same way. Among respondents who said bias and unfair treatment based on an applicant’s race or ethnicity is a problem in hiring, 53% said AI would improve the issue.
The confluence of generative AI tools, hiring algorithms and productivity tracking software, among other AI applications, has led to an evolving situation for HR teams. AI is such a rapidly growing presence in the employment space that regulators at the federal, state and local levels have scrambled to respond.
For example, officials at the U.S. Equal Employment Opportunity Commission have long cautioned about the use of AI in hiring decisions and the potential for discrimination, unintentional or otherwise. In January, the EEOC held a hearing at which experts demonstrated how automated hiring tools could be used as proxies for discriminatory preferences.
New York City enacted one of the nation’s most restrictive AI-in-hiring laws; beginning July 5, the city government has said it will enforce a requirement that local employers audit automated employment decision tools and notify job candidates about their use.
Pew’s survey found that two-thirds of adults said they would not apply for a job with an employer that uses AI to help make hiring decisions. In extended responses collected by Pew, those opposed cited concerns such as needing the “right” keywords on an application and AI’s potential failure to capture nonverbal information from candidates.
But among the 32% who said they would apply for such a job, reasons included the belief that AI could be less prejudiced and more objective than human recruiters; one respondent stated that AI “might see my qualities better than a person.”
Pew noted differences in respondent sentiment about AI based on factors such as income level, gender, race and ethnicity. For example, higher-income respondents were more likely than their peers to favor the use of AI in reviewing applications. Men in the survey were more likely than women to see specific benefits and downsides to AI usage in the workplace. White and Asian adults were also more likely to see potential downsides of AI used to monitor workers.
“It is important to note that as the public confronts these questions about uses of AI in hiring and monitoring workers, notable shares of the population say they are not sure of their positions,” Pew researchers said in an article accompanying the survey results.
Employers face a number of considerations when deciding whether, and how, AI should be integrated into their processes. In one recent study, researchers at Bain & Co. argued that employers should let employee needs drive workplace use of AI and automation. Meanwhile, a 1E survey of information technology managers and workers found that most respondents had witnessed negative impacts from employers’ use of surveillance technology.