Sifting through a mountainous stack of resumes and cover letters, conducting tedious and time-consuming screening interviews, and then more rounds of interviews – such is the life of a company’s hiring manager and anyone else charged with finding the right candidate for an open position. It is, therefore, unsurprising that employers and third-party staffing services have rapidly adopted artificial intelligence (AI) to streamline the search and hiring process and improve the odds of making the right hire.
But the fact that an algorithm or other automated decision-making technology, rather than a human, evaluates candidates does not give employers a “get out of discrimination free” card. Discriminatory impacts and outcomes arising from the use of AI can run afoul of federal, state, and local laws that prohibit discrimination in hiring and employment practices.
Proliferation of State Legislation Governing Use of AI Hiring Technologies
Concerns that AI tools and the algorithmic decision-making at their core can lead to discriminatory impacts and outcomes have led to a proliferation of laws, regulations, and ordinances focused specifically on establishing new anti-discrimination protections or applying existing ones to employers’ use of AI in hiring and employment decisions.
Illinois, Colorado, and Utah are among the jurisdictions that followed New York City’s lead in imposing new requirements and limitations on employers’ use of AI. The City’s Local Law 144 was also the model for two bills proposed in New Jersey and one in New York State that would establish rules for using certain automated employment decision tools. Those bills, similar to ones introduced in 2023, have yet to become law.
New Jersey’s Law Against Discrimination Applies to Use of AI in Hiring Practices
While the Garden State has yet to adopt legislation specifically addressing the use of AI in hiring, the state’s attorney general and Division of Civil Rights (DCR) have made it clear that the New Jersey Law Against Discrimination (LAD) prohibits “algorithmic discrimination” arising from the use of automated decision-making tools “in the same way it has long applied to other discriminatory conduct.”
In their “Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination,” released on January 9, 2025, the attorney general and DCR cite multiple studies and discuss several ways the use of automated decision-making tools “can create classes of individuals who will be either advantaged or disadvantaged in ways that may exclude or burden them based on their protected characteristics.”
Notably, the guidance emphasizes, “[a] covered entity can violate the LAD even if it has no intent to discriminate, and even if a third-party was responsible for developing the automated decision-making tool.” Discriminatory outcomes, not intent, are what matter: “When employers… use automated decision-making tools, they may violate the LAD if those tools result in disparate treatment based on a protected characteristic or if those tools have a disparate impact based on a protected characteristic.”
Similarly, according to the guidance, “LAD also prohibits algorithmic discrimination when it precludes or impedes the provision of reasonable accommodations, or of modifications to policies, procedures, or physical structures to ensure accessibility for people based on their disability, religion, pregnancy, or breastfeeding status.”
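For readers who want to see what outcome-level testing can look like in practice, the short sketch below applies the “four-fifths rule” from the EEOC’s federal Uniform Guidelines on Employee Selection Procedures – a common screening heuristic, not the LAD’s legal standard – to hypothetical AI screening results. The groups, counts, and threshold are illustrative assumptions, not data from the guidance.

```python
from collections import Counter

def selection_rates(decisions):
    """Compute per-group selection rates from (group, hired) pairs."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the highest rate
    (the EEOC four-fifths heuristic; not a legal conclusion by itself)."""
    top = max(rates.values())
    return {g: r / top >= 0.8 for g, r in rates.items()}

# Hypothetical screening outcomes: (demographic group, passed AI screen)
decisions = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 35 + [("B", False)] * 65

rates = selection_rates(decisions)
print(rates)                     # {'A': 0.6, 'B': 0.35}
print(four_fifths_check(rates))  # {'A': True, 'B': False} -> possible adverse impact
```

A gap flagged this way is not a legal conclusion, but it is the kind of disparity that can support a disparate-impact claim and that employers should investigate.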
How AI Can Lead to Unlawful Employment Discrimination
As the guidance discusses, several AI touchpoints – design, training, and deployment – can lead to prohibited discriminatory impacts or outcomes in hiring:
Bias in Design
When designing an automated decision-making tool, a developer may make decisions that can either intentionally or inadvertently skew the tool and lead to prohibited biases. Choices regarding the tool’s output, the model or algorithms it uses, and the inputs it evaluates can introduce bias into the automated decision-making tool. If the historical data used by an algorithm contains biases, the AI will replicate and perpetuate them.
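To illustrate how historical bias can pass through a tool’s inputs, the synthetic sketch below trains a simple model on hypothetical, biased hiring records. The protected characteristic is never an input, yet the model learns to penalize a facially neutral proxy (here, a zip-code flag) that correlates with it. The data, the proxy, and the model choice are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected characteristic -- never provided to the model as an input.
group = rng.integers(0, 2, n)

# Facially neutral proxy that happens to track group membership ~90% of the time.
zip_flag = (group + (rng.random(n) < 0.1).astype(int)) % 2

# Legitimate qualification signal, independent of group.
skill = rng.normal(0.0, 1.0, n)

# Hypothetical *biased* historical decisions: group 0 was favored at equal skill.
hired = (skill + 1.0 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

X = np.column_stack([skill, zip_flag])  # features: skill and the proxy only
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the proxy receives a large negative weight: the model has
                    # "learned" the historical bias without ever seeing `group`
```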
Bias in Training
Once designed, an AI tool needs to be “trained” before it can be deployed. Training involves exposing the tool to training data from which it learns correlations or rules. A developer can either create a new dataset to train an automated decision-making tool or refine and reformat a pre-existing one. The guidance explains, “[t]he training data may reflect the developer’s own biases, or it may reflect institutional and systemic inequities. The tool can become biased if the training data is skewed or unrepresentative, lacks diversity, reflects historical bias, is disconnected from the context the tool will be deployed in, is artificially generated by another automated decision-making tool, or contains errors.”
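One basic check a developer or employer might run before training is a representativeness audit, comparing each group’s share of the training data to its share of the expected applicant pool. Everything in the sketch below – the groups, counts, pool shares, and the 50% tolerance – is hypothetical.

```python
def representation_gaps(train_counts, pool_shares, tolerance=0.5):
    """Flag groups whose share of the training data falls well below their
    share of the expected applicant pool (hypothetical threshold)."""
    total = sum(train_counts.values())
    gaps = {}
    for group, pool_share in pool_shares.items():
        train_share = train_counts.get(group, 0) / total
        gaps[group] = {
            "train_share": round(train_share, 3),
            "pool_share": pool_share,
            "underrepresented": train_share < tolerance * pool_share,
        }
    return gaps

# Hypothetical: resumes used to train the screener vs. the real applicant pool
train_counts = {"A": 8200, "B": 1300, "C": 500}
pool_shares = {"A": 0.55, "B": 0.30, "C": 0.15}
print(representation_gaps(train_counts, pool_shares))
# Groups B and C supply far fewer training examples than their pool share,
# so the tool may generalize poorly for them.
```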
Discrimination in AI Deployment
Once deployed, an AI tool can make decisions that constitute algorithmic discrimination for many reasons. It can be used to purposefully discriminate, “for example, if the tool is used to assess members of one protected class but not another.” If an AI tool is used to make decisions it was not designed to make, its deployment may amplify existing bias and reflect systemic inequities that exist outside of the tool. In some instances, deployment may expose biases that were not apparent during design and testing.
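The deployment-context problem can be monitored with a simple distribution-shift test that compares the inputs or scores the tool was validated on against what it encounters in production. The sketch below uses SciPy’s two-sample Kolmogorov–Smirnov test on hypothetical score distributions; the data and the significance threshold are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical score distributions: validation-time vs. post-deployment applicants
validation_scores = rng.normal(loc=0.0, scale=1.0, size=2000)
live_scores = rng.normal(loc=0.6, scale=1.2, size=2000)  # shifted population

stat, p_value = ks_2samp(validation_scores, live_scores)
if p_value < 0.01:
    print(f"Distribution shift detected (KS={stat:.3f}, p={p_value:.2e}); "
          "re-validate the tool before relying on its decisions.")
```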
What’s Next for Employers That Use AI in Hiring
As often happens with rapid technological advances, the law governing AI’s use in hiring is playing catch-up, and employers must navigate a patchwork of evolving rules of the road that creates a host of compliance and operational challenges. The guidance strongly advises employers to take affirmative steps to protect themselves, applicants, and employees from discriminatory outcomes. Specifically:
“It is critical that employers, housing providers, places of public accommodation, and other covered entities—as well as the developers and vendors of automated decision-making tools used by these entities—carefully consider and evaluate the design and testing of automated decision-making tools before they are deployed, and carefully analyze and evaluate those tools on an ongoing basis after they are deployed. Doing so is necessary to decrease the risk of discriminatory outcomes and thereby decrease the risk of possible liability under the LAD.”
If you have questions or concerns about your company’s use of AI in hiring and employment practices, please contact Barry Capp at Ansell.Law.