On January 9, 2025, New Jersey Attorney General Matthew J. Platkin and the Division on Civil Rights issued guidance stating that New Jersey’s anti-discrimination law applies to artificial intelligence. Specifically, the New Jersey Law Against Discrimination (“LAD”) applies to algorithmic discrimination – discrimination that results from the use of automated decision-making tools – the same way it has long applied to other forms of discriminatory conduct.

In a statement accompanying the guidance, the Attorney General explained that while “technological innovation . . . has the potential to revolutionize key industries . . . it is also critically important that the needs of our state’s diverse communities are considered as these new technologies are deployed.” This move is part of a growing trend among states to address and mitigate the risks of potential algorithmic discrimination resulting from employers’ use of AI systems.

The LAD’s Prohibition of Algorithmic Discrimination

The guidance explains that the term “automated decision-making tool” refers to any technological tool, including, but not limited to, a software tool, system, or process, that is used to automate all or part of the human decision-making process. Automated decision-making tools can incorporate technologies such as generative AI, machine-learning models, traditional statistical tools, and decision trees.

The guidance makes clear that under the LAD, discrimination is prohibited regardless of whether it results from automated decision-making tools or from human actions. The LAD’s broad purpose is to eliminate discrimination, and it does not distinguish between the mechanisms used to discriminate. This means that employers remain accountable under the LAD for discriminatory practices even when those practices rely on automated systems. An employer can violate the LAD even if it has no intent to discriminate, and even if a third party developed the automated decision-making tool. In short, claims of algorithmic discrimination are assessed the same way as other discrimination claims under the LAD.

The LAD prohibits algorithmic discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. The LAD also prohibits algorithmic discrimination when it precludes or impedes the provision of reasonable accommodations, or of modifications to policies, procedures, or physical structures to ensure accessibility for people based on their disability, religion, pregnancy, or breastfeeding status.

Unlike the New York City law, which restricts employers’ ability to use automated employment decision tools in hiring and promotion decisions within New York City and requires employers to perform a bias audit of such tools to assess potential disparate impact based on sex, race, and ethnicity, the LAD imposes no audit requirement. The Attorney General’s guidance does, however, recognize that “algorithmic bias” can occur in the use of automated decision-making tools and recommends steps employers can take to identify and eliminate such bias.

This new guidance highlights the need for employers to exercise caution when using artificial intelligence and to thoroughly assess any automated decision-making tools they intend to implement.
 

Tamy Dawli is a law clerk and contributed to this article.
