Final Impact? EEOC Closing of Disparate Impact Investigations Is Challenged

As at almost every federal agency, much has changed at the Equal Employment Opportunity Commission (EEOC) in the last 10 months. One significant change has been the agency's retreat from investigating claims asserting the "disparate impact" theory. A plaintiff in a disparate impact case, generally speaking, must show that a seemingly neutral policy causes disproportionately negative and unjustified harm to a protected class (such as race or sex). At least one former claimant is now suing the EEOC for dropping its investigation into her charge of disparate impact discrimination, alleging that the EEOC violated its statutory obligations, in Cross v. U.S. EEOC (No. 1:25-cv-03702, D.D.C.).
EEOC’s Procedure and History on Disparate Impact
As we know, the EEOC enforces federal discrimination laws like Title VII and the Age Discrimination in Employment Act (ADEA). A charging party files a charge and the EEOC investigates. During the investigation, the EEOC is looking for evidence that a charging party has been discriminated against, either because of disparate treatment (intentional discrimination) or disparate impact. After the investigation, the EEOC can take one of two steps: (1) It can decide “that there is not reasonable cause to believe the charge is true” and dismiss the charge [42 U.S.C. § 2000e-5(b)] or (2) it can decide “that there is reasonable cause to believe that the charge is true,” in which case it “shall endeavor to eliminate any such alleged unlawful employment practice by informal methods of conference, conciliation, and persuasion.” If those efforts at conciliation fail, the EEOC can file a lawsuit on behalf of the claimant (§ 2000e-5(f)(1)).
According to the lawsuit, disparate impact claims are an “important tool that workers can use to hold employers accountable when they adopt unjustified policies or practices that disadvantage a particular group but where there is insufficient evidence to prove intentional discrimination.” The EEOC has had a long history of recognizing disparate-impact liability under the ADEA, going back to 1981, followed by codification in the Civil Rights Act of 1991. According to the plaintiff, the “EEOC has consistently recognized disparate-impact discrimination as a cognizable theory of discrimination in both its law enforcement and advisory roles.” 
Executive Order and September Memo Shutting Down Disparate Impact Claims
On April 28, 2025, President Trump issued Executive Order 14281, “Restoring Equality of Opportunity and Meritocracy” [90 Fed. Reg. 17,537] that criticized the concept of disparate-impact liability as divisive and instructed agencies to “deprioritize enforcement of all statutes and regulations to the extent they include disparate-impact liability.” It also directed the chair of the EEOC to assess “all pending investigations, civil suits, or positions taken in ongoing matters under every Federal civil rights law . . . that rely on a theory of disparate-impact liability” and to “take appropriate action with respect to such matters consistent with the policy” of the executive order.
Then, on September 15, 2025, the EEOC issued the Disparate Impact Rule, which directed the end of EEOC investigations of disparate-impact charges by September 30 and that "every pending charge of discrimination premised solely on disparate impact liability [was to] be administratively closed by issuing a Notice of Right to Sue and identifying 'other' as the reason for the closure," unless a no-cause finding could be made by October 31, 2025. The rule further set out that it was the policy of Executive Order 14281 "to eliminate the use of disparate-impact liability in all contexts to the maximum degree possible" and to "deprioritize enforcement of all statutes and regulations to the extent they include disparate-impact liability, including but not limited to 42 U.S.C. 2000e-2." The EEOC would not be "commencing, developing, or continuing to pursue litigation advancing disparate impact causes of action."
Claims Against EEOC’s Action
Leah Cross, an Amazon driver, filed a charge of discrimination in May 2023 alleging that she had been discriminated against because of her sex. Specifically, Cross alleged that Amazon’s strict delivery quotas made it difficult or impossible for drivers to use the bathroom during their shifts and that policy had a disparate impact on female drivers.
The EEOC interviewed her in January 2025. But based on the executive order and the Disparate Impact Rule guidance, the EEOC issued a dismissal and right to sue to Cross on September 29, 2025. Cross' lawsuit brings multiple claims under the Administrative Procedure Act, alleging that the EEOC's actions with regard to the Disparate Impact Rule violated the agency's statutory requirements, were arbitrary and capricious, exceeded its statutory jurisdiction, and violated procedural requirements because the agency engaged in rulemaking without a quorum of three commissioners.
Takeaways
The Disparate Impact Rule will not prevent plaintiffs from pleading a disparate impact claim; it only means that the EEOC will not investigate or pursue such claims. The EEOC will simply issue a right to sue letter and be done.
The challenge to the EEOC’s position will be significant to watch and will certainly impact the EEOC’s actions and investigations going forward for any charges that may relate to disparate impact theory. Stay tuned to developments in this case.

Inside the Exclusive – The EEOC’s New Enforcement Priorities, Part 1—National Origin Discrimination [Podcast]

In part one of this podcast series recorded at our recent Corporate Labor and Employment Counsel Exclusive® seminar, Scott Kelly (shareholder, Birmingham), Tae Phillips (shareholder, Birmingham), and Jim Paul (shareholder, St. Louis/Tampa) discuss the EEOC’s new enforcement priorities, with a particular focus on national origin discrimination and the agency’s increased emphasis on protecting workers from anti-American bias. Tae (who is co-chair of the firm’s Drug Testing Practice Group) and Scott (who chairs the firm’s Workforce Analytics and Compliance Practice Group) review recent statements from the EEOC’s acting chair, highlight the legal definitions and practical implications of national origin discrimination under Title VII of the Civil Rights Act, and share observations about a rise in related EEOC charges. The conversation also touches on the importance for employers to coordinate labor, employment, and immigration practices in light of these evolving enforcement trends.

Illinois Department of Human Rights Eliminates Fact-Finding Conferences: What It Means for Charges of Discrimination

On August 15, 2025, Illinois Governor J.B. Pritzker signed Senate Bill 2487 into law, amending the Illinois Human Rights Act (“IHRA”), 775 ILCS 5/7A-102. Among other reforms going into effect on January 1, 2026, the legislation fundamentally changes how the Illinois Department of Human Rights (“IDHR” or the “Department”) processes charges of discrimination.
Most notably, the law eliminates mandatory fact-finding conferences. For decades, these conferences were a hallmark of the Department’s investigative approach, giving complainants, respondents, and the agency a chance to gather information early, test the strength of claims, and sometimes even prompt informal resolution. Their removal will change how employers, employees, and practitioners must navigate discrimination charges in Illinois.
Fact-Finding Conferences Generally
A fact-finding conference is essentially an informal, investigative meeting in which the Department brings the parties together to discuss the allegations in the charge of discrimination. The investigator can question witnesses, review documentary evidence, and attempt to clarify disputed facts. While not a mediation, these conferences often create opportunities for early settlement discussions. For employers, they also offer a critical chance to gain insight into the complainant’s allegations and to present documents and testimony before the Department issues a determination.
From Mandatory to Discretionary Fact-Finding Conferences
Prior to Senate Bill 2487, fact-finding conferences were mandatory for nearly every charge of discrimination. An employer could only avoid this requirement if the charge was dismissed early, the Director issued a determination of substantial evidence, or the parties voluntarily agreed to waive the conference in writing. Nonattendance without good cause could result in dismissal of the charge or default judgment.
With the passage of the amendment, fact-finding conferences will be optional and occur only in limited circumstances.
If both the complainant and the respondent wish to have a fact-finding conference, they must make that request in writing within ninety (90) days of the charge being filed. Importantly, agreeing to a conference also means agreeing to extend the Department’s investigative deadline by an additional 120 days, giving the Department more time to complete its review. Even if the parties do not request one, the Department still has discretion to convene a conference on its own if it believes doing so would help clarify the case or resolve factual disputes.
The procedural rules surrounding attendance remain in place. If a conference is scheduled, the parties must appear. A complainant who fails to attend without good cause risks dismissal of the charge, while an employer who fails to appear can be held in default.
Rationale Behind the Switch to Discretionary Fact-Finding Conferences
The legislature cited efficiency and backlog reduction as central reasons for eliminating mandatory fact-finding conferences. The Department has long struggled with processing times, and fact-finding conferences have historically been resource-intensive.
The amendment also brings Illinois more in line with federal practice, where charges of discrimination are typically investigated through written position statements, document requests, and interviews.
Impacts on Employers

Position Statements Matter More Than Ever: Without a guaranteed fact-finding conference, employers lose the chance to “fill in the gaps” or clarify their story in person. This means that the position statement and its supporting documentary evidence will likely carry more weight. Employers should assume these filings will be the primary record on which the IDHR bases its determination.
Employers Face a Critical Choice: Employers must decide whether a fact-finding conference helps or hurts their case. Opting in gives employers the opportunity to see the complainant’s testimony and test the complainant’s credibility, the chance to explain their defenses directly to the investigator, and a possible opening for early settlement. On the other hand, opting in extends the IDHR’s timeline by 120 days (prolonging uncertainty), gives the complainant a platform to expand or refine their claims, and increases legal fees and preparation time.
Fewer Early Settlement Opportunities: Employers can no longer count on fact-finding conferences to provide an informal settlement forum. Instead, employers who are open to early resolution should explore the Department’s voluntary mediation program or work with counsel to negotiate directly with the complainant.

Isabelle Tate’s Death Highlights ADA Rights in Hollywood

Actress Isabelle Tate, who passed away at age 23 from a rare form of Charcot-Marie-Tooth disease (CMT), brought critical attention to disability rights in Hollywood. This article examines how the U.S. Americans with Disabilities Act (ADA) applied to her career in entertainment, specifically concerning employment accommodations and […]

Alec Baldwin’s $25 Million Lawsuit Against Prosecutors Jumps to Federal Court

In a significant escalation of his legal battles, Alec Baldwin’s malicious prosecution lawsuit against New Mexico authorities has been officially moved to federal court this week. The actor’s complaint, which seeks damages for civil rights violations, alleges that local prosecutors and investigators intentionally withheld […]

Federal Court Narrows but Does Not End Debate Over Transgender Athletes and Title IX in College Sports

Whether and how transgender women may participate in women’s collegiate sports remains one of the most closely watched issues in the country. The U.S. District Court for the Northern District of Georgia’s September 2025 decision in Gaines v. National Collegiate Athletic Association—a case brought by former University of Kentucky swimmer Riley Gaines and other cisgender female college athletes—illustrates the unsettled legal terrain as state legislatures, the National Collegiate Athletic Association (NCAA), and federal courts address overlapping questions of eligibility and nondiscrimination.

Quick Hits

A federal court largely dismissed challenges to the NCAA’s former transgender-participation policies but allowed a narrow Title IX claim against the NCAA to proceed to targeted discovery focused on whether the NCAA is a federal funding recipient.
The court rejected constitutional claims against the NCAA, reaffirming that the NCAA is not a state actor, and found many claims against Georgia public institutions moot in light of Georgia’s new “Riley Gaines Act.”
The decision arrives as the Supreme Court of the United States prepares to hear cases addressing state laws governing participation by transgender girls and women in female sports, underscoring the unsettled national landscape.

Background
On September 25, 2025, U.S. District Judge Tiffany R. Johnson (N.D. Ga.) issued an order largely dismissing the lawsuit challenging the NCAA’s former transgender-participation policies, while permitting a limited aspect of the plaintiffs’ claim against the NCAA under Title IX of the Education Amendments of 1972 to proceed to targeted discovery. The ruling narrows the immediate dispute but positions the case for potentially consequential developments regarding the scope of Title IX and the NCAA’s obligations. The decision also arrives as the Supreme Court of the United States prepares to hear Little v. Hecox and West Virginia v. B.P.J., which concern the legality of state laws restricting participation by transgender girls and women in female sports.
The District Court’s Decision
At the heart of the lawsuit is a challenge to NCAA policies that, until recently, allowed transgender women (athletes assigned male at birth) to compete in women’s sports and access female-designated facilities. The plaintiffs alleged these policies undermined the fairness, safety, and privacy of women’s athletics and violated Title IX and constitutional protections.
The court dismissed most claims against the State of Georgia’s public universities as moot in light of Georgia’s “Riley Gaines Act.” That statute now bars Georgia’s public universities from hosting or participating in competitions where “biologically male athletes” may compete against women or use women’s facilities, thereby supplying the relief sought—at least within Georgia.
Separately, the court rejected constitutional claims against the NCAA, reaffirming that the NCAA, as a private association, is not a state actor. As a result, the plaintiffs’ constitutional arguments (including privacy and equal protection claims) may not be used to directly challenge NCAA policies.
Title IX Recipient Theory
Although most of the plaintiffs’ claims were dismissed, a potentially significant Title IX theory remains. The plaintiffs contend the NCAA is a “recipient” of federal financial assistance—specifically through a concussion-research partnership with the U.S. Department of Defense (now the “U.S. Department of War”)—and therefore directly subject to Title IX’s nondiscrimination requirements, including those that govern sex-based eligibility rules.
This theory is framed against the Supreme Court’s decision in National Collegiate Athletic Association v. Smith, 525 U.S. 459 (1999). In Smith, the Court held that the NCAA was not a Title IX recipient merely because it collected dues from member institutions that received federal funds. The Court’s ruling was narrow, however, and left open whether other forms of assistance—such as direct grants or formal partnerships—could trigger Title IX coverage.
Here, the court allowed limited discovery focused on whether NCAA research was directly or indirectly funded by the Department of Defense and whether the NCAA directly or indirectly plays a role in deciding how those funds are used. The court observed that, while it is unclear whether federal funds “ever rested in NCAA’s coffers,” the plaintiffs had plausibly alleged a funding relationship that, if proven, could bring the NCAA within Title IX’s ambit. The court concluded that the plaintiffs’ allegations, if substantiated, would be sufficient to treat the NCAA as a Title IX recipient.
What’s Next
The court ordered a ninety‑day discovery period limited to the NCAA’s relationship with the Department of Defense and whether that relationship makes the NCAA a recipient of federal funds for Title IX purposes. After this focused discovery, the NCAA may move to dismiss if the record does not support Title IX coverage. A determination that the NCAA is a Title IX recipient would mark a notable departure from the posture in Smith and could expand Title IX’s reach to the NCAA itself, potentially shaping national standards for how federal funding triggers nondiscrimination obligations in college sports.
Statements From the Parties
The plaintiffs’ counsel characterized the ruling as a meaningful step toward establishing that the NCAA violated Title IX by allowing transgender women to compete in women’s sports. The NCAA, for its part, stated that it does not receive federal financial assistance that would subject it to Title IX and emphasized its ongoing promotion of women’s sports, investments in women’s championships, and commitment to fair competition. The NCAA also asserted that its transgender participation policy aligns with current federal guidance.
The Supreme Court’s Docket and the Broader Landscape
The Gaines decision unfolds alongside significant Supreme Court activity. In this term, the Court is expected to hear Little v. Hecox and West Virginia v. B.P.J., each addressing state laws restricting participation in girls’ and women’s sports to athletes assigned female at birth. The Court is poised to consider, among other issues, the level of constitutional scrutiny applicable to such laws and the interplay with Title IX. These decisions could provide much‑needed clarity or sharpen the conflict among jurisdictions and governing bodies.
The Bottom Line
Gaines does not resolve the national debate over transgender athletes’ participation in women’s sports. It narrows the controversy for now and tees up a focused question that could be transformative: whether the NCAA is itself a recipient of federal financial assistance subject to Title IX. With a short discovery window on that issue and Supreme Court review of related state‑law challenges on the horizon, the legal landscape for college sports remains fluid. If the NCAA is found to be a Title IX recipient, its eligibility policies—including those concerning transgender athletes—would need to align with federal nondiscrimination requirements as interpreted by courts and federal agencies.

“It’s Getting Hot in Here” – Rhode Island’s New Workplace Accommodations for Menopause

Hot flashes at work? Rhode Island says: let’s cool things down. In a historic move, the Ocean State has become the first in the nation to mandate workplace accommodations for menopause and related conditions. Yes, you read that right—menopause is now officially protected under state employment law.
What Changed?
On June 24, 2025, Governor Dan McKee signed into law House Bill No. 6161, amending the Rhode Island Fair Employment Practices Act (RIFEPA) to include menopause, perimenopause, and related medical conditions as protected categories under the law. This amendment expands the scope of R.I. Gen. Laws § 28-5-7.4, which previously applied only to pregnancy, childbirth, and related conditions, to now explicitly cover menopause-related conditions as well. This means that employers in Rhode Island are now legally required to provide reasonable accommodations to employees experiencing menopause-related conditions—just as they would for pregnancy or other medical needs.
What Does “Reasonable Accommodation” Mean?
Under the law, employers must engage in a good faith interactive process to determine appropriate accommodations, unless doing so would cause an undue hardship. Examples of accommodations might include:

Flexible scheduling to manage fatigue or sleep disturbances,
Access to temperature-controlled environments for hot flashes,
Additional breaks for rest or hydration,
Modifications to uniforms or dress codes, or
Remote work options during flare-ups.

Employers cannot require an employee to take leave if another reasonable accommodation can be provided, nor can they deny employment opportunities based on a refusal to accommodate menopause-related needs.
Notice Requirements
The law also imposes strict notice obligations:

Employers must conspicuously post a notice about these rights in the workplace.
They must provide written notice to:

All new hires at the start of employment,
All current employees within 120 days of the law’s effective date, and
Any employee who notifies the employer of a menopause-related condition.

A model notice is available from the Rhode Island Commission for Human Rights.
What Employers Should Do Now

Review and update workplace policies and handbooks.
Train HR and management on how to handle accommodation requests.
Ensure compliance with posting and notice requirements.
Foster a culture that supports open dialogue around health and inclusion.

Menopause is no longer a silent struggle in the workplace—at least not in Rhode Island. With this bold legislative step, the state is turning up the heat on equity and inclusion, and cooling things down for those who need it most.

Mapping the Boundaries of Algorithmic Authority

Sometimes the most revealing AI regulations aren’t the ones that say “you must” — they’re the ones that say “you must not.” 
We often focus on the rules for developing, deploying, and procuring AI. But what may be more telling is where the rules stop entirely. Not the “how-to” of compliance, but the “you must not” of prohibition. These hard lines, where legislators draw boundaries around algorithmic authority, reveal an emerging consensus about where algorithmic decision-making creates unacceptable risks.
The EU’s Forbidden Zone: Where Algorithms Fear to Tread
Article 5 of the EU AI Act (enforceable since February 2025) bans AI practices presenting “unacceptable risk,” regardless of safeguards or oversight. These are not regulatory speed bumps; rather, they are solid walls. These bans generally target manipulative or surveillance-heavy AI:

Article 5(1)(a): Prohibits AI systems deploying subliminal techniques (e.g., app nudges) beyond a person’s consciousness or purposefully manipulative or deceptive techniques to materially distort behavior, appreciably impairing the person’s ability to make an informed decision and causing them to make a decision they would not have otherwise made, resulting in or likely resulting in physical or psychological harm. Translation: No sneaky AI nudging you into decisions you wouldn’t normally make.

Article 5(1)(b): Bans systems exploiting vulnerabilities of specific groups (e.g., age, disability) to distort behavior, causing or likely causing harm.

Article 5(1)(c): Prohibits social scoring by public authorities evaluating/classifying individuals based on behavior or characteristics, leading to detrimental treatment.

Article 5(1)(h): Restricts real-time remote biometric identification in public spaces for law enforcement, with exceptions for serious crimes, missing persons, or imminent threats.

These prohibitions share a common thread: they challenge human autonomy by bypassing deliberation (subliminal tactics, vulnerability exploitation) or enabling comprehensive surveillance (social scoring, biometric ID).
The American Patchwork: When Algorithms Can’t Make the Call
US jurisdictions target algorithmic decision-making in employment with specific restrictions:

New York City Local Law 144 (effective July 2023):

Requires annual bias audits for automated employment decision tools (AEDTs), examining disparate impact by race/ethnicity and sex;

Mandates notice to candidates/employees about AEDT use; and

Requires publicly available audit results and data retention policy disclosure.
Think of it this way: if your company’s AI resume screener consistently filters out qualified candidates from certain ZIP codes (which can serve as a proxy for race or other protected characteristics), you’ll need documentation showing you tested for — and addressed — this bias. A minimal impact-ratio sketch appears below, after this list of jurisdictions.

Illinois Artificial Intelligence Video Interview Act (effective January 2020):

Requires notifying applicants about AI use in video interviews and its mechanics;

Mandates consent before use; and

Limits video sharing to evaluators and requires destruction within 30 days upon request.

California Civil Rights Council Regulations (effective October 1, 2025):

Clarify that automated decision systems (ADS) violating existing anti-discrimination protections under the Fair Employment and Housing Act (FEHA) are unlawful;
Extend recordkeeping requirements for ADS data to four years; and
Note that anti-bias testing is relevant to discrimination defenses (but not mandated).

The pattern: transparency and accountability in AI-assisted hiring, not outright bans, with a focus on preventing opacity and disparate impact.
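To make the bias-audit idea concrete, here is a minimal sketch (illustrative only, not a compliance methodology) of the selection-rate and impact-ratio math that underlies an NYC Local Law 144-style audit. The category labels and outcomes are hypothetical, and the 0.8 cutoff is the familiar four-fifths rule of thumb rather than a legal threshold.

```python
# Illustrative sketch only: selection rates and impact ratios by category,
# in the spirit of an NYC Local Law 144 bias audit. Categories and data are
# hypothetical placeholders, not a substitute for an independent audit.
from collections import defaultdict

def impact_ratios(records):
    """records: iterable of (category, selected) pairs; selected is True/False."""
    counts = defaultdict(lambda: [0, 0])  # category -> [selected, total]
    for category, selected in records:
        counts[category][1] += 1
        if selected:
            counts[category][0] += 1
    rates = {c: sel / total for c, (sel, total) in counts.items() if total}
    top_rate = max(rates.values(), default=0) or 1.0  # guard against empty/zero data
    # Impact ratio = category selection rate / highest category selection rate.
    return {c: rate / top_rate for c, rate in rates.items()}

# Hypothetical screening outcomes: (demographic category, advanced by the AEDT?)
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
for category, ratio in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{category}: impact ratio {ratio:.2f} ({flag})")
```

A ratio well below 0.8 for any category is a signal to investigate further and document remediation, not by itself proof of unlawful discrimination.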
State-Level Comprehensive Frameworks

Texas House Bill 149 (TRAIGA, effective January 1, 2026):

Prohibits development or deployment of AI with intent to discriminate against protected classes; and
Requires government entities to disclose to consumers when they interact with AI systems.

Colorado SB 24-205 (Colorado AI Act, effective June 30, 2026, delayed from February 2026):

Targets “high-risk” AI systems with impact assessments, risk management policies, and consumer notice requirements; and
Requires developers and deployers to use reasonable care to prevent algorithmic discrimination.

Other 2025 State-Level Developments:

Utah: Amended its Artificial Intelligence Policy Act (effective May 2025) to narrow disclosure requirements, focusing on “high-risk” AI interactions in regulated occupations and establishing safe harbor provisions for compliant systems.
Connecticut: SB 2, which would have mandated impact assessments for high-risk AI systems, passed the Senate but stalled in the House amid gubernatorial veto threats over innovation concerns.
Virginia: HB 2094, which would have established comprehensive high-risk AI consumer protections, was vetoed by the Governor in March 2025 over concerns about stifling innovation. This development highlights ongoing legislative friction despite broad support for AI regulation.

Credit and Financial Services
Preexisting laws apply to AI-driven credit decisions:

Fair Credit Reporting Act (FCRA, 15 U.S.C. § 1681 et seq.): Section 615 mandates adverse action notices when decisions are based on consumer reports. Combined with ECOA’s specific reasons requirement, CFPB guidance (Circular 2023-03) emphasizes that complex algorithms must produce explainable adverse action reasons.

Equal Credit Opportunity Act (ECOA, 15 U.S.C. § 1691 et seq.) and Regulation B (12 CFR Part 1002): Section 1002.9(b)(2) requires creditors to provide specific, actionable reasons for adverse decisions. CFPB Circulars 2022-03 and 2023-03 confirm “the algorithm” is not a valid reason.
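To make the “specific reasons” requirement concrete, here is a minimal, hypothetical sketch of how a compliance layer might translate a model’s most negative factor contributions into plain-language adverse action reasons rather than citing “the algorithm.” The factor names and reason text are illustrative placeholders, not actual Regulation B sample reasons, and any real notice process should be designed with counsel.

```python
# Hypothetical sketch: map a model's top negative factors to specific,
# human-readable adverse action reasons. Names and text are placeholders.
REASONS = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "delinquency_history": "Number of recent delinquent payments",
    "credit_history_length": "Length of credit history is insufficient",
    "recent_inquiries": "Number of recent credit inquiries",
}

def adverse_action_reasons(contributions, limit=4):
    """contributions: factor name -> signed contribution to the score
    (negative values pushed the applicant toward denial)."""
    negative = [(name, value) for name, value in contributions.items() if value < 0]
    negative.sort(key=lambda item: item[1])  # most negative first
    return [REASONS.get(name, name) for name, _ in negative[:limit]]

# Hypothetical per-applicant contributions from an explainability step.
example = {"credit_utilization": -0.42, "recent_inquiries": -0.07,
           "credit_history_length": 0.10, "delinquency_history": -0.21}
print(adverse_action_reasons(example))
```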

The Housing Context
The Fair Housing Act (42 U.S.C. § 3601 et seq.) supports disparate impact liability for AI-driven tenant screening, mortgage underwriting, and property valuations under the Supreme Court’s 2015 Inclusive Communities decision. However, HUD’s September 2025 withdrawal of disparate impact guidance — including the 2016 post-Inclusive Communities guidance and 2024 AI advertising guidance — signals a dramatic enforcement shift toward intentional discrimination claims only. While HUD has withdrawn its guidance and shifted enforcement priorities, the Fair Housing Act and the Inclusive Communities precedent still stand — it’s the enforcement approach, not the law, that has changed.
Healthcare and Insurance
While housing regulators grapple with enforcement priorities, the healthcare sector is charting a clearer path forward.

Colorado SB 21-169: Requires certain insurers to establish governance frameworks and test external consumer data and AI systems for unfair discrimination based on protected classes.

HIPAA Privacy Rule (45 CFR § 164.524): Guarantees individuals access to their protected health information, which may indirectly support review of data used in AI-driven healthcare decisions.

Texas SB 1188 (effective September 2025): Requires healthcare practitioners to maintain human oversight of AI-generated medical decisions, disclose AI use to patients, and physically store electronic health records in the US.

What the Boundaries Reveal
These regulatory frameworks do not ban AI capability, but do generally establish boundaries requiring:

Transparency: Disclosing use and explaining outcomes.

Human Oversight: Preserving decision-making authority, not just involvement.

Contestability: Enabling challenges/appeals of algorithmic decisions.

Accountability: Mandating bias audits, impact assessments, and risk management.

Practical Governance Implications
For AI governance frameworks:

Risk Classification: Map AI use cases against prohibited practices (e.g., social scoring) and high-risk categories (employment, credit); a minimal registry sketch follows this list.

Human Oversight Architecture: Ensure humans have expertise and authority to evaluate/override AI (in accordance with Texas’s “preserve authority” standard).

Documentation: Conduct required assessments (e.g., NYC bias audits, Colorado discrimination assessments).

Explainability: Meet FCRA/ECOA standards with specific, defensible reasons — not “the algorithm decided.”

Notice and Consent: Comply with specific notice obligations (e.g., Illinois video interviews, Colorado consumer notices).
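For the risk-classification step above, the following minimal sketch shows one way an internal use-case registry might be structured. The tier names, example use cases, and controls are hypothetical; actual classifications must come from the applicable statutes (e.g., EU AI Act Article 5, Colorado SB 24-205) and counsel’s analysis.

```python
# Hypothetical sketch of an AI use-case risk registry; tiers and controls
# are illustrative only and not drawn from any specific statute.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    domain: str            # e.g., "employment", "credit", "marketing"
    tier: str              # "prohibited", "high_risk", or "limited_risk"
    controls: list = field(default_factory=list)

REGISTRY = [
    AIUseCase("resume_screening", "employment", "high_risk",
              ["annual bias audit", "candidate notice", "human review"]),
    AIUseCase("social_scoring", "public_sector", "prohibited", []),
    AIUseCase("email_autocomplete", "productivity", "limited_risk", []),
]

def gaps(registry):
    """Flag prohibited uses and any high-risk use with no documented controls."""
    for uc in registry:
        if uc.tier == "prohibited":
            yield f"{uc.name}: prohibited practice, must not be deployed"
        elif uc.tier == "high_risk" and not uc.controls:
            yield f"{uc.name}: high-risk use with no documented controls"

for finding in gaps(REGISTRY):
    print(finding)
```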

The Compliance Question
Evaluate AI implementations by asking:

Does this system make consequential decisions (employment, credit, housing, healthcare, benefits)? What specific requirements apply?

Can a human evaluate and override AI reasoning?

Could we defend an adverse decision to a regulator with specific reasons?

Have we conducted required bias audits/impact assessments?

Looking Forward
As of October 2025, states like New York (AI companion safeguards) and California (finalized AI discrimination regs) add layers, while federal efforts (e.g., the AI Bill of Rights) lag. Successful organizations will be those that hardwire human agency and accountability into AI architecture, ensuring compliance with evolving laws. The boundaries are being drawn now — and crossing them, even inadvertently, could prove costly.

PA’s Chester County Creates Human Rights Commission; Employers to Face Expanded List of Protected Classes

Takeaways

Nondiscrimination provisions covering employment, housing and public accommodations take effect Dec. 23, 2025.
Joining a state trend to fill perceived gaps in state and federal protections, the ordinance expands protections based on gender identity, gender expression, and more.
The new Chester County Human Relations Commission has investigatory and quasi-adjudicatory authority.

Related link

Chester County Ordinance No. ORD-2025-03

Article
The Chester County Board of Commissioners recently adopted Ordinance No. ORD-2025-03, creating the Chester County Human Relations Commission (CCHRC) and enacting broad countywide nondiscrimination provisions covering employment, housing and public accommodations. Passed 2-1 along party lines on Sept. 24, 2025, the ordinance takes effect 90 days after enactment, Dec. 23, 2025.
Modeled on the Pennsylvania Human Relations Act (PHRA), 43 P.S. § 951 et seq., the Chester County ordinance substantially expands the list of protected classes beyond those already covered by state law. It adds protections based on gender identity, gender expression, sexual orientation, marital and familial status, source of income, veteran status, and status as a victim of domestic or sexual violence.
These additional protections reflect a growing local trend in Pennsylvania as several other counties (including Delaware, Lehigh, and Montgomery) have enacted similar ordinances or taken steps in recent years to fill perceived gaps in state and federal protections. Chester County’s action makes it the first “collar county” around Philadelphia to implement a county-level enforcement mechanism.
The CCHRC is set to consist of seven to 13 volunteer members, appointed by the county commissioners for staggered three-year terms. While volunteers serve without compensation, the county has authorized limited funding and legal support through the Solicitor’s Office. The Commission has investigatory and quasi-adjudicatory authority: It may receive verified complaints, conduct investigations, issue subpoenas, and, if conciliation fails, hold public hearings. Remedies include cease-and-desist orders, restitution and civil fines of up to $500. Commission determinations are appealable to the Chester County Court of Common Pleas under the Local Agency Law, 2 Pa.C.S. § 751, et seq.
The Chester County ordinance was enacted under Section 12.1 of the PHRA, which expressly authorizes municipalities and counties to establish local human relations commissions. In practice, this creates concurrent jurisdiction. For instance, an aggrieved individual may file either with the state PHRC or with the local county commission, but generally not both for the same claim. The local body serves as a first-tier forum intended to offer faster, community-based resolution of discrimination complaints. The Chester County Commission also may refer or coordinate cases with the PHRC when state-level expertise or enforcement is warranted.
Complaints must be filed within 180 days of the alleged discriminatory act, either online or through the County Solicitor’s Office. The process includes preliminary jurisdictional review, investigation and conciliation, followed (if necessary) by an adjudicative hearing. Retaliation against complainants or witnesses is expressly prohibited.
The ordinance reflects a different approach to civil rights enforcement in Pennsylvania. Local governments may extend protections beyond state minimums while coordinating with the PHRC. Chester County intends its commission to serve as a complementary, rather than competing, mechanism for addressing discrimination at the community level. 
Critics are characterizing the Commission as duplicative, noting that residents already fund the PHRC through state taxes. Proponents view it as a subsidiary mechanism designed to improve accessibility and responsiveness on a local level. Which view eventually prevails remains to be seen; either way, Chester County employers should be prepared for a potential extra level of nondiscrimination enforcement starting in December.

Panuccio’s Appointment as Commissioner Secures EEOC Quorum: Key Implications for Employers

Takeaways

The quorum allows the EEOC to proceed with revising or replacing regulations, guidance documents and the EEOC’s Strategic Enforcement Plan, and also permits the agency to approve certain types of litigation.
The EEOC, which had lacked a quorum since the president removed two commissioners in January, now has a three-person quorum.
The EEOC likely will eliminate or revise its Workplace Harassment Guidance, EEO-1 reporting requirements, and Final Regulations under the Pregnant Workers Fairness Act to bring them into alignment with the administration’s directives.

Related links

Strategic Enforcement Plan Fiscal Years 2024 – 2028
President Appoints Andrea R. Lucas EEOC Acting Chair
EEOC to Halt Investigations into Disparate Impact Claims
Position of Acting Chair Lucas Regarding the Commission’s Final Regulations Implementing the Pregnant Workers Fairness Act
EEOC Issues Final Regulations to Implement the Pregnant Workers Fairness Act  
Statement re: Vote on Final Rule to Implement the Pregnant Workers Fairness Act  

Article
The Senate confirmed President Donald Trump’s nominee, Brittany Panuccio, as an Equal Employment Opportunity Commission (EEOC) commissioner on Oct. 7, 2025. Panuccio’s confirmation establishes a Republican majority on the Commission and provides the three-member quorum needed to make significant policy changes in line with the president’s stated priorities.
Following the confirmation, Acting Chair Andrea Lucas stated in a post on X, “Now the agency is empowered to deliver fully on our promise to advance the most significant civil rights agenda in a generation.”
In January, President Trump removed Jocelyn Samuels and Charlotte Burrows from their commissioner positions, leaving only two members: Kalpana Kotagal, nominated by President Joe Biden in 2023, and Andrea Lucas, the acting chair of the Commission whom Trump initially nominated to the EEOC in his first term. Since then, the EEOC has been operating with only two of its five commissioners, leaving the agency without a quorum.
Although the EEOC did not need a quorum to investigate pending charges, it could not approve certain types of litigation, such as pattern and practice cases and systemic litigation. The EEOC also could not take formal action with respect to official EEOC guidance, regulations, and enforcement plans.
Priorities
Panuccio’s appointment paves the way for the EEOC to make official the unofficial changes the agency has instituted under the new administration. For example, under the Biden Administration, the EEOC approved its Strategic Enforcement Plan (SEP) for Fiscal Years 2024-2028, which included an enhanced focus on combating discrimination against religious minorities, racial or ethnic groups, LGBTQI+ individuals, and pregnant workers. The SEP also explained the Commission’s support of employer efforts to “proactively identify and address barriers to equal employment opportunity, cultivate a diverse pool of qualified workers and foster inclusive workplaces.” Although the current Commission could not modify the SEP in the absence of a quorum, it effectively abandoned the Plan and the Biden Administration’s enforcement priorities.
The current EEOC’s objectives and priorities, as stated in a Jan. 21, 2025, press release issued by the EEOC announcing Lucas’ appointment as acting chair of the Commission, include:
rooting out unlawful DEI-motivated race and sex discrimination; protecting American workers from anti-American national origin discrimination; defending the biological and binary reality of sex and related rights, including women’s rights to single sex spaces at work; protecting workers from religious bias and harassment, including antisemitism; and remedying other areas of recent under-enforcement.
Harassment Guidance
With the required quorum, the EEOC likely will move promptly to replace its current harassment guidance issued on April 29, 2024. In the Jan. 20, 2025, Executive Order 14168, “Defending Women from Gender Ideology Extremism and Restoring Biological Truth to the Federal Government,” President Trump directed the EEOC to rescind all guidance inconsistent with the terms of the Order, including the 2024 harassment guidance.
EEO-1 Reporting
As the EEOC prepares to act with a full quorum, one open question is how the agency will manage the future of workforce data collection. At the center of the conversation is the annual EEO-1 Report, a collection of employee race, ethnicity, and sex data reported by job category. For years, the EEOC and the Office of Federal Contract Compliance Programs used the report to identify potential workplace discrimination trends.
The Heritage Foundation’s Project 2025 explicitly called for eliminating EEO-1 reporting, claiming the data can “be used to support a charge of discrimination under a disparate impact theory. This could lead to racial quotas to remedy alleged racial discrimination.” According to a Sept. 15, 2025, internal agency memo obtained by the Associated Press, the EEOC will no longer pursue disparate impact complaints.
The EEOC’s authority to collect workforce demographic data is rooted in Title VII of the Civil Rights Act of 1964, § 709(c) (42 U.S.C. § 2000e-8(c)), and administrative procedure. Removing the obligation may require the EEOC to engage in new rulemaking to revise or remove current EEOC regulations.
Pregnant Workers Fairness Act
The EEOC published a statement earlier this year communicating Acting Chair Lucas’ intention to reconsider portions of the Pregnant Workers Fairness Act (PWFA) Final Regulations (Final Rule). Look for the Commission to remove certain conditions such as menstruation, infertility, menopause, and abortion from the list of conditions that require accommodation.
Acting Chair Lucas has been vocal about her support for the PWFA but has voiced strong disagreement with the Commission’s interpretation in the EEOC’s Final Rule of the phrase “pregnancy, childbirth, or related medical conditions.” According to Acting Chair Lucas:
[T]he PWFA was a tremendous, bipartisan legislative achievement. Pregnant women in the workplace deserve regulations that implement the Act’s provisions in a clear and reliable way. It is unfortunate that the elements of the final rule serving this purpose are inextricably tied to a needlessly expansive foundation that does not.
Lucas explained that this foundation built by the EEOC, that is, the expansive way the Final Rule interprets the phrase “pregnancy, childbirth, or related medical conditions,” arguably requires accommodation of “virtually every condition, circumstance, or procedure that relates to any aspect of the female reproductive system.” In Lucas’ view, the PWFA’s focus is on accommodating the actual pregnancy and childbirth of an individual worker.
Menstruation, infertility, menopause, and the like are not caused or exacerbated by a particular pregnancy or childbirth—but rather by the functioning, or ill-functioning, of the female worker’s underlying reproductive system—and so are not subject to accommodation under the PWFA.
The EEOC also is faced with a May 21, 2025, order from the U.S. District Court for the Western District of Louisiana vacating a portion of the EEOC’s Final Rule interpreting the PWFA as requiring employers to accommodate what the court refers to as “elective abortions” and ordering the EEOC to revise the PWFA Final Rule. The EEOC has been unable to revise the rule without a quorum.

Artificial Intelligence Bias – Harper v. Sirius XM Challenges Algorithmic Discrimination in Hiring

On August 4, 2025, Plaintiff Arshon Harper (“Harper”) filed a class action complaint in the Eastern District of Michigan against Sirius XM Radio, LLC (“Sirius”) asserting claims of both unintentional and intentional racial discrimination under Title VII of the Civil Rights Act.
Harper alleges that Sirius’ use of a commercial AI hiring tool that screens and analyzes resumes resulted in racial discrimination against him and other similarly situated African American applicants.
In his complaint, Harper claims that he applied to approximately 150 job positions, for which Sirius rejected him from all but one, despite allegedly meeting or exceeding the job qualifications. Sirius’ use of AI to screen applications, according to Harper, disproportionately rejected him and other African American applicants by relying on certain inputs—such as educational institutions, employment history, and zip codes—as proxies for race. Similarly, Harper alleges that the AI screening tool improperly removed his resume from further consideration due to factors that were unrelated to his qualifications and Sirius’ business needs.
Harper’s lawsuit is not the first to allege that an employer’s use of workplace AI constitutes unlawful discrimination. One of the more noteworthy cases is Mobley v. Workday, pending in the Northern District of California. In that case, Mobley alleges that he applied to over 100 jobs using Workday’s applicant‑screening AI platform and was rejected each time, despite meeting the job requirements. Like Harper, Mobley claims that the AI tool improperly generated its decisions to reject his applications based on protected categories. In May 2025, the Court certified a preliminary collective action for Mobley’s age-discrimination claims.
Both Harper and Mobley raise critical legal questions about the role of workplace AI tools, potentially biased and discriminatory AI, and employer liability. These cases serve as reminders that employers are subject to liability under existing federal and state anti-discrimination statutes when they rely on AI, in whole or in part, to make recruiting and hiring decisions or decisions affecting employees’ terms and conditions of employment. Therefore, when employers use AI, they should audit, monitor, and validate the fairness of their selection processes to ensure compliance with the law.
Plaintiffs will likely continue filing cases similar to Harper and Mobley as employers adopt and implement workplace AI tools. Also, employers’ obligations will likely expand as an increasing number of states are passing laws regulating the use of AI workplace tools. Accordingly, employers should take the following proactive steps to minimize exposure to legal and reputational risk:
1. Assess Best Use Cases for the Company.
Identify areas where AI can increase efficiency with minimal risk, rather than areas that involve core human judgments in the recruiting and hiring process.
2. Conduct Due Diligence on Third-Party AI Vendors.
Vet third-party vendors by reviewing how the AI model was trained and what datasets were used. Ask how the vendor deals with potentially discriminatory decision making by the model.
3. Develop a Comprehensive Plan for AI Implementation.
Ensure that a system for checking AI work product is in place to catch any problematic outputs by the AI model.
4. Conduct Internal Workplace AI Audits and Assessments.
Conduct regular audits of all workplace AI to ensure the tools are functioning as intended and are not causing disparities among protected categories.
5. Ensure Compliance with Federal, State, and Local Laws, Regulations, and Guidance.
Regularly check for updates on the ever-changing legal landscape in the AI space and consult with counsel on best practices to remain compliant with the law.
6. Monitor Outcomes and Review for Bias.
Keep detailed records of AI outputs and conduct statistical analyses to identify potential bias; a minimal sketch of one such check follows this list.
7. Establish Safeguards and Opt-Out Alternatives.
Provide prospective employees with clear notice of the use of AI in the hiring process and allow them to opt out of the use of AI with respect to their application.
8. Use AI as a Support Tool, Not the Final Decision Maker.
Ensure that a human employee always has the final say in employment decisions like recruiting, hiring, firing, or promotions, and does not rely solely on the decision of an AI tool.
9. Create Company Policies.
Provide policies that clarify for employees when the use of AI tools is appropriate and the best practices when using them.
10. Verify Proper Human Oversight.
Train employees who oversee AI tools on how to check the AI’s outputs and identify potential legal risk from these outputs.
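As a minimal illustration of step 6, the sketch below compares selection rates between two applicant groups with a two-proportion z-test, one common monitoring signal. The numbers are hypothetical, and a real audit program should use methodology vetted by counsel and a statistician (and weigh practical significance, not just p-values).

```python
# Hypothetical monitoring sketch: two-proportion z-test on selection rates.
# Data are placeholders; this is one signal, not a complete bias audit.
from math import sqrt, erf

def two_proportion_z(selected_a, total_a, selected_b, total_b):
    """Return (z statistic, two-sided p-value) for a difference in selection rates."""
    p_a, p_b = selected_a / total_a, selected_b / total_b
    pooled = (selected_a + selected_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical quarterly outcomes from an AI screening tool.
z, p = two_proportion_z(selected_a=120, total_a=400, selected_b=80, total_b=400)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p-values warrant closer review
```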

AI Police Surveillance Bias – The “Minority Report” Impacting Constitutional Rights

One of my favorite Steven Spielberg movies is the 2002 dystopian thriller Minority Report, in which “precogs” working for the “Precrime” unit predict murders before they happen, allowing arrests for crimes not yet committed. Recently, Pennsylvania attorneys at the Philadelphia Bench-Bar Conference raised an alarm that AI surveillance could soon create such a world – mass rights violations due to biased facial recognition, unregulated predictive policing, and “automation bias,” the dangerous tendency to trust computer conclusions over human judgment.
The comparison may no longer be hyperbole. Although we haven’t created mutant psychics, we’ve built AI surveillance claiming to predict crimes, locations, and violence timing. Unlike Spielberg’s film, where technology worked relatively well, real-world predictive policing has been documented to have bias, opacity, and constitutional issues that alarm organizations considering these tools.
Facial Recognition Bias: The Documented 40-to-1 Accuracy Gap
One Philadelphia criminal defense attorney at the conference emphasized that the core issue with AI in law enforcement isn’t the technology itself but “the physical person developing the algorithm” and “the physical person putting his or her biases in the program.” The data confirms this concern with devastating precision.
The landmark 2018 “Gender Shades” study by Joy Buolamwini of MIT Media Lab and Timnit Gebru (then at Microsoft Research) found that commercial facial recognition systems show error rates of just 0.8% for light-skinned men but 34.7% for darker-skinned women — a 40-fold disparity. A 2019 National Institute of Standards and Technology (NIST) report, which tested 189 facial recognition algorithms from 99 developers, found that African American and Asian faces were between 10 and 100 times more likely to be misidentified than white male faces.
Another panelist highlighted that gait recognition and other biometric identification tools display “reduced accuracy in identifying Black, female and elderly people.” The technical limitations extend beyond demographics: gait recognition systems struggle with variations in clothing, occlusion, viewing angles, and lighting conditions — exactly the real-world circumstances law enforcement officers encounter.
Automation Bias in Criminal Justice: Why Police Trust Algorithms Over Evidence
Panelists also warned about “automation bias,” describing how “people are just deferring to computers,” assuming AI-generated analysis is inherently superior to human reasoning. Research confirms this tendency, with one 2012 study finding that fingerprint examiners were influenced by the order in which computer systems presented potential matches.
The consequences are devastating. At least eight Americans have been wrongfully arrested after facial recognition misidentifications, with police in some cases treating software suggestions as definitive facts — one report described an uncorroborated AI result as a “100% match,” while another said officers used the software to “immediately and unquestionably” identify a suspect. In most cases, basic police work, such as checking alibis or comparing tattoos, would have eliminated these individuals before arrest.
Mass Surveillance Infrastructure: Body Cameras, Ring Doorbells, and Real-Time AI Analysis
Minority Report depicted a surveillance state where retinal scanners tracked citizens through shopping malls and personalized advertisements called out to them by name. But author Philip K. Dick — who wrote the 1956 short story that inspired the film — couldn’t have imagined the actual police surveillance infrastructure: body cameras on every officer, Ring doorbells on every porch, and AI systems analyzing it all in real time. (In fact, consumer-facing companies like Ring have partnered with law enforcement to provide access to doorbell cameras, effectively turning residents’ home security devices into mass surveillance infrastructure – some say without homeowners’ meaningful consent.) Unlike Spielberg’s film, where the Precrime system operated under federal oversight with clearly defined rules, real-world deployment is happening in a regulatory vacuum, with vendors selling capabilities to police departments before policymakers understand the civil liberties implications.
These and other examples reveal a fundamental flaw in predictive policing that Minority Report never addressed: the AI systems cannot distinguish between future perpetrators and future victims. Current algorithms struggle to differentiate the two, and research suggests they would require “a 1,000-fold increase in predictive power” before they could reliably pinpoint crime. Like the film’s Precrime system that operated on the precogs’ visions without questioning their accuracy, real-world police departments are deploying predictive policing tools without proof that they perform better than traditional police work.
What Organizations Must Do Now: AI Surveillance Compliance Requirements
For any entity developing, deploying, or enabling AI surveillance systems in law enforcement contexts, three immediate actions are critical:
Mandate rigorous bias testing before deployment. Document facial recognition error rates across demographic groups. If your system shows disparate accuracy rates similar to those documented in the Gender Shades study — where darker-skinned individuals face error rates forty times higher — you’re exposing yourself to civil rights liability and constitutional challenges under Fourth and Fourteenth Amendment protections.
Require human verification of all consequential decisions. AI-generated results should never serve as the sole basis for arrests, searches, or other rights-affecting actions. Traditional investigative methods — alibi checks, physical evidence comparison, witness interviews — must occur before acting on algorithmic suggestions to comply with probable cause requirements.
Implement transparency and disclosure requirements. Police departments should maintain public inventories of AI tools used in criminal investigations and disclose AI use in police reports to ensure prosecutors can meet their constitutional obligations under Brady v. Maryland to share this information with criminal defendants.
Bottom Line: AI Surveillance Legal Risk
Minority Report ends with the dismantling of Precrime after the conspiracy is exposed — a moral conclusion showing that no amount of security justifies sacrificing individual freedom and due process. Twenty-three years later, law enforcement agencies are making the opposite choice. Police departments worldwide are treating Spielberg’s cautionary tale as an implementation manual, deploying the very systems Dick and Spielberg warned against. Argentina recently announced an “Applied Artificial Intelligence for Security Unit” specifically to “use machine learning algorithms to analyze historical crime data to predict future crimes.” Researchers at the University of Chicago claimed 90% accuracy in predicting crimes a week before they happen. The UK’s West Midlands Police researched systems using “a combination of AI and statistics” to predict violent crimes.
As the Philadelphia lawyers emphasized, the problem isn’t the technology — it’s the people programming it and the legal framework (or lack thereof) governing deployment. Without rigorous bias testing, mandatory human oversight, and transparency requirements, AI surveillance will do precisely what Precrime did in the film: create the appearance of safety while systematically violating the constitutional rights of those the algorithm flags as “dangerous” based on opaque criteria and historical prejudices embedded in training data.