Sometimes the most revealing AI regulations aren’t the ones that say “you must” — they’re the ones that say “you must not.” 

We often focus on the rules for developing, deploying, and procuring AI. But what may be more telling is where the rules stop entirely. Not the “how-to” of compliance, but the “you must not” of prohibition. These hard lines, where legislators draw boundaries around algorithmic authority, reveal an emerging consensus about where algorithmic decision-making creates unacceptable risks.

The EU’s Forbidden Zone: Where Algorithms Fear to Tread

Article 5 of the EU AI Act (enforceable since February 2025) bans AI practices presenting “unacceptable risk,” regardless of safeguards or oversight. These are not regulatory speed bumps; they are solid walls. The bans target manipulative or surveillance-heavy AI:

  1. Subliminal or purposefully manipulative techniques that materially distort behavior and cause significant harm.

  2. Exploitation of vulnerabilities tied to age, disability, or social and economic circumstances.

  3. Social scoring that leads to detrimental or disproportionate treatment.

  4. Predictive policing that assesses an individual’s likelihood of offending based solely on profiling or personality traits.

  5. Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.

  6. Emotion recognition in workplaces and educational institutions (outside medical and safety uses).

  7. Biometric categorization that infers sensitive attributes such as race, political opinions, or sexual orientation.

  8. Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions.

These prohibitions share a common thread: they challenge human autonomy by bypassing deliberation (subliminal tactics, vulnerability exploitation) or enabling comprehensive surveillance (social scoring, biometric ID).

The American Patchwork: When Algorithms Can’t Make the Call

US jurisdictions target algorithmic decision-making in employment with specific restrictions:

  1. New York City’s Local Law 144 (enforced since 2023) requires independent bias audits of automated employment decision tools, publication of audit results, and notice to candidates.

  2. Illinois’s Artificial Intelligence Video Interview Act (2020) requires notice, an explanation of how the AI works, and consent before AI analyzes video interviews.

  3. Maryland’s HB 1202 (2020) requires applicant consent before facial recognition is used in job interviews.

The pattern: transparency and accountability in AI-assisted hiring, not outright bans, with a focus on preventing opacity and disparate impact.

State-Level Comprehensive Frameworks

Colorado’s AI Act (SB 24-205, taking effect in 2026) is the first comprehensive state framework: developers and deployers of high-risk AI systems must use reasonable care to protect consumers from algorithmic discrimination, backed by impact assessments, consumer notices, and appeal rights. Texas and Utah have since enacted narrower AI statutes of their own.

Credit and Financial Services

Preexisting laws apply to AI-driven credit decisions:

  1. The Equal Credit Opportunity Act and Regulation B require adverse action notices stating specific, accurate reasons for denial, even when the decision comes from a complex model (CFPB Circular 2022-03).

  2. The Fair Credit Reporting Act governs algorithmic scoring built on consumer report data, including notice and dispute rights.

The Housing Context

The Fair Housing Act (42 U.S.C. § 3601 et seq.) supports disparate impact liability for AI used in tenant screening, mortgage underwriting, and property valuations, as affirmed by the Supreme Court’s 2015 Inclusive Communities decision. HUD’s September 2025 withdrawal of its disparate impact guidance, including the 2016 post-Inclusive Communities guidance and the 2024 guidance on AI in housing advertising, signals a dramatic enforcement shift toward intentional discrimination claims only. The statute and the Inclusive Communities precedent still stand; it is the enforcement approach, not the law, that has changed.

Healthcare and Insurance

While housing regulators grapple with enforcement priorities, the healthcare sector is charting a clearer path forward. CMS has made clear that Medicare Advantage plans may use algorithms to inform coverage decisions but cannot deny care based on an algorithm alone; qualified clinicians must review medical necessity determinations. California’s SB 1120 (2024) writes the same principle into state law for utilization review, and the NAIC’s model bulletin on insurers’ use of AI, adopted by a growing number of states, requires documented governance and risk-management programs. The direction is consistent: algorithms may inform, but licensed humans decide.

What the Boundaries Reveal

These regulatory frameworks do not ban AI capability outright; instead, they establish boundaries built on four requirements (illustrated in the sketch after this list):

  1. Transparency: Disclosing use and explaining outcomes.

  2. Human Oversight: Preserving decision-making authority, not just involvement.

  3. Contestability: Enabling challenges/appeals of algorithmic decisions.

  4. Accountability: Mandating bias audits, impact assessments, and risk management.
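To make these requirements concrete, here is a minimal Python sketch of how a deployer might encode all four into an automated-decision pipeline. It is an illustration under assumed names, not a schema any statute prescribes: DecisionRecord, the review callback, and the example strings are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

# Hypothetical illustration: field names are not drawn from any statute.
@dataclass
class DecisionRecord:
    subject_id: str
    model_version: str             # accountability: which system decided
    ai_disclosed: bool             # transparency: subject told AI was used
    reasons: list[str]             # transparency: explainable outcome
    human_reviewer: Optional[str]  # oversight: who held final authority
    appeal_channel: str            # contestability: how to challenge
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def decide(subject_id: str, model_score: float,
           review: Callable[[float], tuple[bool, str]]) -> DecisionRecord:
    """Route every consequential decision through a human gate.

    `review` is a human-in-the-loop callback with authority to
    override the model; it returns (approved, reviewer_id).
    """
    approved, reviewer = review(model_score)  # oversight, not a rubber stamp
    return DecisionRecord(
        subject_id=subject_id,
        model_version="screening-model-v3",  # hypothetical system name
        ai_disclosed=True,                   # disclosed to the subject up front
        reasons=[f"model score {model_score:.2f}",
                 "approved on human review" if approved
                 else "declined on human review"],
        human_reviewer=reviewer,
        appeal_channel="appeals@example.com",  # documented challenge path
    )
```

A periodic bias-audit job could then aggregate these records by outcome and demographic group, producing exactly the kind of evidence that NYC-style audit mandates anticipate.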

Practical Governance Implications

For AI governance frameworks, these boundaries translate into concrete practices:

  1. Inventory every deployed system and map it against prohibited-practice lists: EU Article 5 first, then sector-specific rules (see the sketch after this list).

  2. Classify systems by the decisions they influence; hiring, credit, housing, and health trigger the strictest obligations.

  3. Build transparency, oversight, contestability, and accountability into the architecture, not just into policy documents.

  4. Keep audit trails, bias-audit results, and impact assessments ready before regulators or plaintiffs ask for them.
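One way to operationalize the first two items is a machine-readable inventory keyed to risk tier. The tiers below loosely echo the EU Act’s risk categories, but the names, systems, and gating logic are a hypothetical sketch, not the Act’s formal taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g., social scoring, workplace emotion AI
    HIGH = "high"              # hiring, credit, housing, health decisions
    LIMITED = "limited"        # disclosure duties apply
    MINIMAL = "minimal"

# Hypothetical inventory: system names and tier assignments are illustrative.
AI_INVENTORY = {
    "resume-screener":  RiskTier.HIGH,        # bias-audit obligations attach
    "emotion-detector": RiskTier.PROHIBITED,  # workplace use banned in the EU
    "support-chatbot":  RiskTier.LIMITED,     # must disclose it is AI
}

def gate_deployment(system: str) -> bool:
    """Block prohibited systems; clear only those below high risk."""
    tier = AI_INVENTORY.get(system, RiskTier.HIGH)  # unknown? assume high risk
    if tier is RiskTier.PROHIBITED:
        raise RuntimeError(f"{system}: prohibited practice, do not deploy")
    return tier is not RiskTier.HIGH  # high risk needs human sign-off first
```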

The Compliance Question

Evaluate AI implementations by asking (a checklist version follows below):

  1. Would this use fall into a prohibited category anywhere we operate (manipulation, social scoring, biometric surveillance)?

  2. Do affected people know AI is involved, and can we explain outcomes in specific terms?

  3. Does a human hold genuine authority to override the system, or merely sit in the loop?

  4. Can the subject contest the decision through a documented appeal path?

  5. Could we produce audit evidence, such as bias tests and impact assessments, on demand?
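Teams that prefer executable checklists to documents could wire these questions into a pre-deployment check. This is a hypothetical sketch; the check keys and descriptions are labels invented for this example, not regulatory language.

```python
# Hypothetical pre-deployment checklist keyed to the questions above.
COMPLIANCE_CHECKS = {
    "not_prohibited": "No manipulation, social scoring, or biometric ID use",
    "disclosed":      "Affected people are told AI is involved",
    "explainable":    "Outcomes can be explained in specific terms",
    "human_override": "A human can overrule the system, not just observe it",
    "contestable":    "A documented appeal path exists",
    "audit_ready":    "Bias tests and impact assessments are on file",
}

def readiness_gaps(answers: dict[str, bool]) -> list[str]:
    """Return descriptions of unmet checks; an empty list means ready."""
    return [desc for key, desc in COMPLIANCE_CHECKS.items()
            if not answers.get(key, False)]

# Example: a system with no human override and no appeal path yet.
print(readiness_gaps({"not_prohibited": True, "disclosed": True,
                      "explainable": True, "audit_ready": True}))
# Prints the "human_override" and "contestable" descriptions as gaps.
```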

Looking Forward

As of October 2025, states like New York (AI companion safeguards) and California (finalized AI discrimination regulations) add layers, while federal efforts such as the nonbinding Blueprint for an AI Bill of Rights lag behind. Successful organizations will be those that hardwire human agency and accountability into AI architecture, ensuring compliance as these laws evolve. The boundaries are being drawn now, and crossing them, even inadvertently, could prove costly.
