The use of algorithmic software and automated decision systems (ADS) to make workforce decisions, including the most sophisticated category, artificial intelligence (AI), has surged in recent years. HR technology’s promise of increased productivity and efficiency, data-driven insights, and cost reduction is undeniably appealing to businesses striving to streamline operations such as hiring, promotions, performance evaluations, compensation reviews, and employment terminations. However, as companies increasingly rely on AI, algorithms, and automated decision-making tools (ADTs) for high-stakes workforce decisions, they may unknowingly expose themselves to serious legal risks, particularly under Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act (ADEA), the Americans with Disabilities Act (ADA), and numerous other federal, state, and local laws.

What Are Automated Technology Tools and How Does AI Relate?

In the employment context, algorithmic or automated HR tools are software systems that apply predefined rules to data to assist with various human resources functions. These tools range from simple rule-based formulas to more advanced generative AI-powered technologies. Traditional algorithms operate on fixed, explicit instructions to process data and produce outputs; generative AI systems, by contrast, can learn from data, adapt over time, and make autonomous adjustments without being limited to predefined rules.

Employers use these tools in numerous ways to automate and enhance HR functions, from screening and ranking job applicants to evaluating performance and informing compensation decisions.

AI Liability Risks Under Current Laws

AI-driven workforce decisions are covered by a variety of employment laws, and employers are facing an increasing number of agency investigations and lawsuits related to their use of AI in employment. Some of the key legal frameworks include:

  1. Title VII: Title VII prohibits discrimination on the basis of race, color, religion, sex, or national origin in employment practices. Under Title VII, employers can be held liable for facially neutral practices that have a disproportionate, adverse impact on members of a protected class, including decisions made by AI systems. Even if an AI system is designed to be neutral, an employer can be held liable under the disparate impact theory if the system has a discriminatory effect on a protected class. While the current administration has directed federal agencies to deprioritize disparate impact theory, it remains a viable legal theory under federal, state, and local anti-discrimination laws. Where an AI system provides an assessment that human decision-makers use as one of many factors, it can also contribute to disparate treatment risks.
  2. The ADA: If AI systems screen out individuals with disabilities, they may violate the Americans with Disabilities Act (ADA). It is also critical that AI-based systems are accessible and that employers provide reasonable accommodations as appropriate to avoid discrimination against individuals with disabilities.
  3. The ADEA: The Age Discrimination in Employment Act (ADEA) prohibits discrimination against applicants and employees ages forty or older. AI tools that screen on criteria correlated with age, such as graduation dates or years of experience, can disadvantage older workers and create exposure under the ADEA.
  4. The Equal Pay Act: AI tools that factor in compensation and salary data can be prone to replicating past pay disparities. Employers using AI must ensure that their systems are not creating or perpetuating sex-based pay inequities, or they risk violating the Equal Pay Act.

The Challenge of Algorithmic Transparency and Accountability

One of the most significant challenges with the use of AI in workforce decisions is the lack of transparency in how algorithms reach their decisions. Unlike human decision-makers, who can explain their reasoning, generative AI systems often operate as “black boxes,” making it difficult, if not impossible, for employers to understand—or defend—how decisions are reached.

This opacity creates significant legal risks. Without a clear understanding of how an algorithm reaches its conclusions, it may be difficult to defend against discrimination claims. If a company cannot provide a clear rationale for why an AI system made a particular decision, it could face regulatory action or legal liability.

Algorithmic systems generally apply the same formula to all candidates, creating relative consistency in the comparisons. Generative AI systems add complexity because their judgments and standards change over time as the system absorbs more information. As a result, the decision-making applied to one candidate or employee may differ from decisions made at a different point in time.

Mitigating the Legal Risks: AI Audits, Workforce Analytics, and Bias Detection

While the potential legal risks are significant, there are proactive steps employers may want to take to mitigate exposure to algorithmic bias and discrimination claims. These steps include:

Implementing routine and ongoing audits under legal privilege is one of the most critical steps to ensure AI is used in a legally defensible way. These audits may include monitoring algorithms for disparate impacts on protected groups. If a hiring algorithm disproportionately screens out individuals in a protected group, employers may want to take steps to correct these biases before they lead to discrimination charges or lawsuits; a simplified example of this kind of monitoring appears below. Given that high decision volumes can compound exposure quickly, and to enable corrective action as soon as possible, companies may want to undertake these privileged audits on a routine (monthly, quarterly, etc.) basis.
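To make this concrete, the sketch below shows one simple, illustrative way an audit team might check an automated screening step for adverse impact using the “four-fifths” (80 percent) rule of thumb drawn from the EEOC’s Uniform Guidelines. The data, group labels, and column choices are hypothetical, and a favorable result is not a legal safe harbor; it is only a screening signal that a privileged audit would supplement with statistical testing and legal review.

```python
# Illustrative adverse-impact check for an automated screening step.
# Group labels and outcomes are hypothetical; the 80% ("four-fifths") benchmark
# is a common rule of thumb, not a legal safe harbor.
from collections import defaultdict

def selection_rates(records):
    """Compute each group's selection rate from (group, selected) records."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's rate to the highest-rate group (four-fifths rule)."""
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes from one screening cycle: 60/100 of group A advanced,
    # 35/100 of group B advanced.
    outcomes = ([("A", True)] * 60 + [("A", False)] * 40
                + [("B", True)] * 35 + [("B", False)] * 65)
    rates = selection_rates(outcomes)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "flag for review" if ratio < 0.80 else "ok"
        print(f"group {group}: rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In practice, an audit of this kind would typically go further, examining statistical significance, the features the tool relies on, and whether the selection criteria are job related and consistent with business necessity.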

The AI landscape is rapidly evolving. Employers may want to continue tracking changing laws and regulations, implement policies and procedures to ensure the safe, compliant, and nondiscriminatory use of AI in the workplace, and reduce risk by engaging in privileged, proactive analyses that evaluate AI tools for bias.
