Artificial intelligence is already disrupting operations across swaths of industries, and HR is no exception.
AI’s use cases in the HR space are evolving rapidly, from filtering and tracking job candidates to generating internal communications and analyzing facial expressions during video interviews. But it’s the use of AI in hiring that is drawing particular attention from regulators.
Here we track the states, cities and other jurisdictions that have passed laws on the use of AI and other automated decision-making tools in the hiring process. We’ll offer a brief description of each law’s requirements, its effective date and a link to the original law, along with any relevant coverage from our journalists.
Readers may sort through the laws using the field on the left side of this page. The categories by which readers may sort include state name and area of jurisdiction (statewide vs. locality only).
Want to know when new AI in hiring laws are enacted or old ones updated? Sign up for our newsletter. Have a question or comment? Email us.
Employers affected: All employers subject to the California Consumer Privacy Act
California’s Privacy Protection Agency finalized regulations requiring CCPA-covered employers that use automated decisionmaking technology in hiring — or in other matters resulting in the provision or denial of employment opportunities or compensation — to provide pre-use notices explaining the specific purpose for which employers plan to use such tools.
Employers must give applicants the ability to opt out of the use of an automated hiring tool. However, an opt-out need not be provided if the employer offers applicants a method by which to appeal the tool’s decision to a human reviewer who has the authority to overturn the decision. Alternatively, employers may avoid the opt-out requirement if the tool is used solely for certain purposes, such as assessment of a candidate’s ability to perform at work or allocation or assignment of work.
If provided, an opt-out must meet several other requirements as set forth in the regulations. Employers may reject an opt-out request that they have a reasonable, documented and good faith belief to be fraudulent, but if they do so, they must inform the requester of the denial and explain why they believe the request is fraudulent.
Consumers also have the right to request information about an automated hiring tool. Employers complying with such requests are not required to disclose trade secrets or information that would compromise certain security protocols, but if an information request is denied in whole or in part, the employer must inform the requester and explain the basis of its denial.
Employers also must perform risk assessments of automated hiring tools. The assessment must include information such as the tool’s logic, any assumptions or limitations of that logic, and the tool’s output and how the business will use that output to make significant decisions. Assessments must be reviewed and updated, as necessary, at least once every three years, or no later than 45 days from the date of a material change to a tool’s processing activity.
Risk assessments must be retained for as long as the processing continues or for at least five years after the completion of the assessment, whichever is later. Assessments must be submitted to CPPA according to a schedule outlined in the regulations.
Employers affected: All employers with five or more employees
California’s Civil Rights Council finalized regulations clarifying that the state’s workplace antidiscrimination laws prohibit employers from using automated decision-making systems or selection criteria that discriminate against an applicant, employee or class thereof on a protected basis.
For example, systems that measure applicants’ skill, dexterity, reaction time and/or other abilities or characteristics may discriminate against those with certain disabilities or other protected characteristics. Automated systems that analyze an applicant’s tone of voice, facial expressions or other physical characteristics or behavior may be similarly discriminatory.
To avoid discrimination, employers may need to provide reasonable accommodation to an applicant. The presence or lack of anti-bias testing or similar proactive efforts — including the quality, efficacy, recency, scope, results and response to the results of such testing or efforts — is relevant to discrimination claims or available defenses from such claims, per the regulations.
Employers also must keep on file any automated decision system data belonging to applicants for a period of at least four years.
Employers that deploy “high-risk artificial intelligence systems” for purposes such as the provision or denial of job opportunities must take “reasonable care” to protect consumers from algorithmic discrimination.
Employers must create AI risk management policies and programs. They also must complete impact assessments for all AI systems both annually and within 90 days after any modifications are made. Consumers must be notified about the AI’s deployment, and deployers must publish statements disclosing the AI systems they deploy and the information those systems collect.
Employers must provide consumers opportunities to correct any incorrect personal data processed by an AI and appeal adverse consequential decisions made by an AI. In the event that a deployer discovers algorithmic discrimination has occurred, it must report the discovery to the state attorney general within 90 days.
Some of the law’s requirements do not apply when three conditions are met: the deployer employs fewer than 50 full-time employees and does not use its own data to train the AI; the AI system meets certain exemption criteria; and the deployer makes an impact assessment of the AI available to consumers.
Employers that ask candidates to record video interviews and conduct analysis of applicant-submitted videos using AI must, prior to the interview, notify applicants that AI may be used, provide information explaining how the AI works and what general characteristics it will use to evaluate applicants, and obtain candidates’ consent to be evaluated by the AI.
Employers may not share applicant videos with anyone other than those persons whose expertise or technology is necessary to evaluate a candidate’s fitness for the position.
Employers must, within 30 days after receiving a deletion request from a candidate, delete video interviews and instruct any persons who received copies of such interviews to delete them, including any electronically generated backup copies.
Employers that rely solely upon AI analysis of video interviews to determine whether applicants will be selected for in-person interviews must collect and report the race and ethnicity data of all applicants who are selected for in-person interviews following an AI analysis, those who are not selected following an AI analysis and those who are ultimately hired. This data must be reported annually to the Illinois Department of Commerce and Economic Opportunity.
Employers may not use facial recognition services for the purpose of creating a facial template — i.e., the machine-interpretable pattern of facial features extracted from one or more images of an individual by such services — during an applicant’s interview for employment, unless the applicant consents to this by signing a waiver. The waiver must state the applicant’s name, the date of the interview, that the applicant consented to the use of facial recognition during the interview and whether the applicant read the consent waiver.
Employers must conduct bias audits on automated employment decision tools, including those that utilize AI and similar technologies. The results of such audits must be posted on, or linked to from, the employer’s website, and the results must disclose selection or scoring rates across gender and race or ethnicity categories.
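The selection rates and cross-category comparisons a bias audit reports boil down to simple arithmetic: the share of assessed candidates in each category whom the tool advances, and each category’s rate relative to the highest-rate category. A minimal sketch of that math follows; the category names and counts are hypothetical examples, not real audit data.

```python
# Hypothetical sketch of the selection-rate and impact-ratio arithmetic
# a bias audit typically reports. All names and numbers are illustrative.

def selection_rates(selected: dict[str, int], assessed: dict[str, int]) -> dict[str, float]:
    """Selection rate per category: candidates advanced / candidates assessed."""
    return {cat: selected[cat] / assessed[cat] for cat in assessed}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio per category: that category's rate / the highest rate."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical counts of candidates assessed and advanced by the tool
assessed = {"Category A": 200, "Category B": 150}
selected = {"Category A": 50, "Category B": 30}

rates = selection_rates(selected, assessed)   # A: 0.25, B: 0.20
ratios = impact_ratios(rates)                 # A: 1.00, B: 0.80
```

An impact ratio well below 1.0 for a category is the kind of disparity such an audit is designed to surface; what counts as a legally significant gap depends on the applicable rules, not on this sketch.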
Employers may not continue to use an automated employment decision tool if its most recent bias audit is more than one year old.
Employers also must provide certain notices about such tools to employees or job candidates who reside in the city at least 10 days prior to the tools’ use, and they must provide candidates the opportunity to request an alternative selection process or accommodation. New York City began enforcement of the law on July 5, 2023.
Employers that develop or deploy AI systems in Texas may not do so with the intent to unlawfully discriminate against a group or class of persons in violation of state or federal laws. However, disparate impact is not sufficient to demonstrate an intent to discriminate under the law.