- National Labor Relations Board General Counsel Jennifer Abruzzo published a memo on Monday calling for the NLRB to address workplace surveillance, “algorithmic-management tools” and other technologies that interfere with workers’ ability to exercise rights guaranteed under the National Labor Relations Act.
- As part of the memo, Abruzzo said she would ask the Board to adopt a framework holding that an employer presumptively violates the NLRA when its surveillance and management practices, viewed as a whole, would tend to interfere with or prevent a reasonable employee from engaging in protected activity.
- Abruzzo acknowledged that while employers may have legitimate business reasons for using the tech, “the employer’s interests must be balanced against employees’ rights under the Act.” Even where business needs outweigh employees’ rights, unless an employer demonstrates that special circumstances require covert use of the tech, “I will urge the Board to require the employer to disclose to employees the technologies it uses to monitor and manage them, its reasons for doing so, and how it is using the information it obtains,” Abruzzo said.
The Oct. 31 memo is the latest in a long line of signals that federal agencies are focusing on workplace tech and surveillance, Lauren Daming, an employment and labor attorney and certified information privacy professional at Greensfelder, Hemker & Gale, told HR Dive in an interview.
Abruzzo published the memo mere weeks after the White House’s Office of Science and Technology Policy issued its “Blueprint for an AI Bill of Rights,” which addressed myriad contexts — including workplaces — in which automated tech could lead to bias and discrimination. For example, the blueprint’s authors pointed to data privacy as a guiding principle for automated systems and cited instances in which employers had reportedly used surveillance software to track employee discussion about union activity.
On another front, the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice published a pair of technical assistance documents that cautioned employers about the use of AI, machine learning and other algorithmic decision-making tools in employment contexts, including “blind reliance” on such tools that may violate civil rights laws like the Americans with Disabilities Act.
Daming compared the stream of announcements from federal agencies on AI, automated tech and surveillance tools to a series of waves affecting employers’ compliance efforts. “As time goes on and employers continue to use these technologies, I feel like we’re just adding onto what [employers] need to consider when using the technology,” she said.
While Abruzzo’s memo serves mainly as guidance and as a way to set out the general counsel’s rationale for pursuing litigation priorities, the balancing test proposed to determine whether an employer’s business needs outweigh employee rights may pose a “very high bar” for employers to meet, Daming said. A federal standard would layer on top of state and local laws regulating HR tech, such as Illinois’ Biometric Information Privacy Act.
The memo cited a variety of research, legal cases and news stories on the subject. One citation is a 2021 report from researchers at the University of California, Berkeley, Labor Center that detailed the use of data and algorithms to analyze worker productivity, automate hiring processes and monitor activity. Abruzzo also cited a 2021 New York Times article covering Amazon and its use of such tech.
“It concerns me that employers could use these technologies to interfere with the exercise of Section 7 rights under the National Labor Relations Act by significantly impairing or negating employees’ ability to engage in protected activity—and to keep that activity confidential from their employer,” Abruzzo said in a statement accompanying the memo.
Daming said employers may want to have discussions with their tech vendors to ensure that their tools are not making decisions based on protected activity.