Federal enforcement authorities cautioned employers Thursday about using tools like artificial intelligence and machine learning in employment. Algorithmic decision-making tools, particularly when used to hire, monitor performance, determine pay or promotions, or establish other terms and conditions of employment, may discriminate against people with disabilities, regulators warned in a pair of technical assistance documents.
The documents were issued by the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice. DOJ Assistant Attorney General Kristen Clarke said during a press call Thursday the two agencies are “sounding the alarm” that employers’ “blind reliance” on tools that make use of AI, machine learning and other processes could impede access to opportunity for people with disabilities in violation of the Americans with Disabilities Act.
In its document, EEOC highlighted three of the “most common” ways such tools may violate the ADA: an employer could fail to provide reasonable accommodations necessary for a job applicant or employee to be rated fairly and accurately by an algorithm; the tool could screen out people with disabilities even if they are able to perform a job with a reasonable accommodation; or the tool could be adopted in a manner that violates the ADA’s restrictions on disability-related inquiries and medical examinations.
The technical assistance is a follow-up to EEOC’s announcement last fall that it would address the implications of hiring technologies for bias. In October 2021, Chair Charlotte Burrows said the agency would reach out to stakeholders as part of an initiative to learn about algorithmic tools and identify best practices around algorithmic fairness and the use of AI in employment decisions. Other EEOC members, including Commissioner Keith Sonderling, have previously spoken about the necessity of evaluating algorithm-based tools.
A confluence of factors has led the agencies to address the topic, Burrows and Clarke said during Thursday’s press call. One is the persistent issue of unemployment for U.S. workers with disabilities. The gap between this group and workers without disabilities grew during the pandemic, and the U.S. Bureau of Labor Statistics’ April jobs numbers showed a participation rate of 23.1% for the former compared to 67.5% for the latter.
Another factor concerns employers’ rapid adoption and deployment of algorithmic tools, particularly incorporating AI. A 2020 report by HR consulting firm Mercer found that 41% of organizational respondents were using algorithms to identify “best-fit candidates,” while 38% were planning to begin doing so in 2020.
“In some of these instances, the applicant and employee may not even know they’re being assessed,” Burrows said of the tools deployed during interviews, calls and other hiring formats. “There’s enormous potential in [this] technology, but we’ve got to make sure that as we look to the future, we aren’t leaving anyone out.”
Burrows also illustrated a few examples of potentially problematic applications. For instance, a speech detection program that analyzes a candidate’s speech patterns may unfairly assess candidates who have a speech impediment. Other assessments, such as those that make use of keyboard inputs, could disadvantage candidates who have less dexterity in their use of a keyboard.
Such applications can compound existing diversity and inclusion issues, said Clarke, because an employer may tie assessment criteria to the performance of existing employees whom the employer considers the most successful in their respective roles. “But because [the employer] has not hired many people with disabilities in the past [...] none of the applicants chosen by the tool have a disability.”
While Thursday’s technical assistance documents are focused mainly on guidance for employers, Clarke added that the agencies hope to send a message to applicants with disabilities about the ADA’s right to accommodation. Burrows added that employers should seek vendors that are able to manage reasonable accommodations and be transparent about the factors their tools consider when evaluating candidates.
The announcement is a largely positive development for employers, according to Jennifer Betts, office managing shareholder at Ogletree Deakins. She added that EEOC will likely provide additional assistance on AI at a future point, and that Thursday’s documents should not be read as an attempt to dissuade employers from using AI tools, but rather as an effort to raise awareness of adverse consequences that may be inadvertent on the part of employers.
“Now we know these are the three primary areas where EEOC sees the bulk of compliance risk,” Betts said in an interview. “That really gives employers a nice roadmap for how to analyze these issues.”