Employment & AI

State lawmakers are concerned about AI use in hiring and promotions

Helping businesses sort through the thousands of applications they receive for job openings has been an early use case for artificial intelligence. However, widespread use of such tools has attracted the scrutiny of policymakers, who seek to protect the privacy of job applicants and combat unintentional biases these tools could promote. Proposed AI hiring laws rely on the same policy levers we've seen in other use-level regulations of AI: disclosures and impact assessments. But as policymakers in New York City learned, getting the scope right can be a challenge.

Enacted Laws

The use of AI tools for hiring decisions is one of the first aspects of AI that policymakers looked to address. Back in 2019, Illinois lawmakers enacted the Artificial Intelligence Video Interview Act (IL HB 2557), which requires employers who use AI to analyze video of job interviews to provide notice, explain to the applicant how the AI tool works, and obtain the applicant's consent before the AI tool can be used to make an evaluation — creating an opt-in requirement. Maryland followed Illinois' lead and enacted a similar bill (MD HB 1202) in 2020. The Maryland law also prohibits an employer from using certain facial recognition services during an applicant's interview for employment unless the applicant consents. Notably, neither of these laws contains an explicit cause of action to enforce violations.

These early laws were narrowly focused on AI tools that use video footage to evaluate job applicants' facial expressions, body language, word choice, and tone of voice. But in 2021, New York City took aim at AI hiring tools beyond video evaluations when the City Council adopted Local Law 144. The law requires that any employer or employment agency that uses an AI tool for hiring or promotions must have an independent auditor conduct an annual bias audit of the tool, publish a summary of the audit results on its website, and notify candidates that an AI tool is being used during the hiring or promotion process.

It took regulators another year and a half to finalize rules to implement Local Law 144, but the law finally went into effect on July 5, 2023. Since then, you probably haven't heard much about it. That's because the law limits its requirements to tools that "substantially assist or replace" human decision-making. Most companies that use AI tools in the hiring process can say that a human remains involved at some step, so the tool does not fully "replace" a human, and the phrase "substantially assist" leaves much to interpretation. After the first six months, the city's Department of Consumer and Worker Protection, tasked with enforcing the law, said it had not received a single complaint alleging a violation, even though an outside study indicated that few companies had published the required audit reports on their websites.

But despite the shortcomings of New York City's AI hiring law, the use of AI tools in the hiring process remains a top interest for policymakers. And state lawmakers are applying the lessons learned from New York City's experience to ensure that any bills they pass are not ignored.

In 2025, Illinois enacted a broader AI hiring law (IL HB 3773) that prohibits employers that use predictive data analytics in their employment decisions from considering the applicant's biographical information, such as race or zip code, to reject an applicant in specified contexts.

Finally, Colorado is the first state to enact a broad consumer protection law (CO SB 205) that regulates the use of "high-risk" AI systems to make "consequential decisions." Decisions that have a significant effect on employment or employment opportunity are included in the Colorado law's definition of "consequential decision." Deployers of "high-risk" AI tools must implement a risk management policy and program and complete an impact assessment of the system. Deployers must also notify consumers when a "high-risk" AI system is a substantial factor in a consequential decision, make certain disclosures, provide an opportunity to opt out of having personal data processed for profiling, provide an opportunity to correct data, and provide an opportunity to appeal an adverse decision with human review. Smaller companies with fewer than 50 employees are exempt from these requirements, and AI systems that interact with consumers must disclose that the interaction is with an AI system. The law is enforceable only by the attorney general (i.e., no private right of action), and companies may be given a right to cure violations. Importantly, despite being enacted in 2024, the Colorado law does not go into effect until February 2026, and there is significant talk of amending it before then.

Proposals

Many bills introduced on this topic by state lawmakers follow the template set by New York City's law: requiring disclosures and impact assessments (or bias audits) for businesses using AI tools in the hiring process. Lawmakers in Albany introduced legislation (NY SB 7623) in 2024 that followed this path, requiring employers with 100 or more employees who use an "automated employment decision tool" for hiring decisions to conduct an impact assessment and provide notice to job candidates. The bill goes a bit further by granting employees a right to access and correct their data and by prohibiting retaliation against candidates or employees.

Pennsylvania lawmakers debated a bill (PA HB 1729) that would have copied a unique aspect of the Maryland and Illinois video interview laws by adding an opt-in requirement for employers that seek to use AI tools in the hiring process. After notifying an applicant of its use of an AI tool and explaining how the tool works, an employer would also need to obtain the applicant's consent before using it.

Despite the setbacks with New York City's law, policymakers want to protect job candidates from any unintended harm that the growing use of AI tools in the hiring process might cause. And depending on how these AI tools are defined, this is an issue that could end up affecting a large percentage of businesses.

To keep up with this issue, see the map and table below for real-time tracking of state legislation related to the regulation of AI in employment sourced from MultiState’s industry-leading legislative tracking service.