HR professionals hoping to integrate artificial intelligence tools into workplace processes "have their work cut out for them," a U.S. Equal Employment Opportunity Commission official told attendees at an April 8 American Bar Association conference.
The technology was gaining traction before the coronavirus pandemic, and it now promises to aid in the country’s recovery, according to Commissioner Keith Sonderling. AI can manage elevators and other shared spaces to ensure social distancing, for example, or it can develop hybrid schedules, he said, predicting that "we will be seeing more, rather than less, AI as we begin to return to our workplaces, schools and our pre-COVID routines."
But such solutions come with risks, and the one on EEOC’s radar is discrimination.
Choosing a solution
Employers considering or using AI should evaluate algorithms "early and often for biased outcomes and reengineer as appropriate," Sonderling suggested.
An algorithm that makes predictions about job applicants, for example, is only as good as the data on which it was taught to rely, he explained, citing information from EEOC’s chief analyst. Therefore, an algorithm that relies solely on the characteristics of a company’s current workforce to model an ideal applicant may merely replicate the status quo. If an existing workforce is made up of primarily one race, gender or age group, he said, the algorithm may screen out applicants who do not share those same characteristics.
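The kind of audit Sonderling describes can be made concrete. A common heuristic for flagging "biased outcomes" of the sort he mentions is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate is below 80% of the highest group's rate, that is treated as evidence of potential adverse impact. The sketch below is not from the presentation, and the group names and counts are hypothetical; it only illustrates the check.

```python
# Minimal sketch of a four-fifths rule audit of a screening tool's
# outcomes. Group labels and counts are hypothetical illustrations.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / total

def four_fifths_check(rates: dict) -> dict:
    """Map each group to True if its selection rate is at least 80%
    of the highest group's rate, False if it falls below that line."""
    highest = max(rates.values())
    return {group: rate / highest >= 0.8 for group, rate in rates.items()}

# Hypothetical outcomes from an AI resume screen
rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(30, 100),  # 0.30
}

print(four_fifths_check(rates))
# group_b's ratio is 0.30 / 0.60 = 0.5, below 0.8, so it is flagged
```

Running a check like this "early and often," as Sonderling suggests, would surface exactly the status-quo-replication problem he describes: a model trained on a homogeneous workforce tends to depress selection rates for groups underrepresented in the training data.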
Employers also should ask vendors what happens in the case of a discovery request, suggested Christine Webber, a co-presenter and plaintiffs’ attorney from Cohen Milstein Sellers & Toll. If an algorithm is challenged in a lawsuit, past litigation suggests an employer could be caught between the vendor’s interest in keeping an algorithm proprietary and the plaintiff’s discovery needs, "and they could be really hung out to dry."
Webber suggested employers ask whether the inner workings of the algorithm will be shared or if they’ll be protected as trade secrets — "which means the employer has no defense to establish the validity if it's a disparate impact claim, or to establish that there is no disparate impact or disparate treatment going on."
Employers eventually may see the federal government wade further into the issue. There’s bipartisan interest in Washington in addressing technology that can make decisions in the workplace, according to Sonderling.
EEOC, in particular, "must be a leader in this area," he said, "especially as few federal courts have weighed in on how federal anti-discrimination law applies to AI-informed employment decisions."
Sonderling didn’t offer specifics about what such involvement might look like, but he encouraged stakeholders to reach out: "If you think the EEOC should provide guidance on specific issues relating to bias in technology," he said, "we want to hear from you."