Family Offices and the Coming Wave of AI Audits
As AI tools become embedded in operations, regulators are demanding proof of oversight. Family offices may not be the target of new compliance laws—but they’re certainly within range.
Across the globe, regulators are shifting from abstract AI principles to enforceable frameworks. Colorado’s SB205 introduces one of the first state-level requirements for algorithmic impact assessments. The EU AI Act—now finalised—sets tiered obligations based on risk. Canada’s AIDA will demand governance documentation for high-impact systems. New York City’s AEDT law is already in effect, requiring bias audits for hiring tools. California and Texas are following close behind.
For family offices, these laws bring a new kind of exposure. They are not only deploying AI internally—for reporting, research, hiring, and investment workflows—but also allocating capital into the AI economy. That dual role carries risk. And increasingly, the burden of governance is shifting from developers to users.
Quiet Use, Growing Liability
AI is already embedded in family office operations. Some use large language models (LLMs) to summarise market commentary or draft investment memos. Others use tools to tag documents, score deals, or draft stakeholder letters. Hiring platforms now include AI that ranks candidates. CRMs prioritise tasks using predictive models.
Under Colorado SB205, many of these tools could fall under the “high-risk” category, triggering obligations to conduct algorithmic impact assessments and notify individuals affected by AI-driven decisions. These requirements apply to any entity whose decisions affect access to employment, housing, financial services, education, or health—and take effect July 2026.
The EU AI Act goes further. High-risk systems—those used in biometric ID, credit scoring, hiring, and similar domains—must be registered, documented, and monitored. The law requires technical documentation, human oversight, post-market monitoring, and a conformity assessment process. Fines can reach up to €35 million or 7% of global turnover.
Even Canada’s AIDA includes clear audit expectations. Organisations must assess potential harm, keep documentation of AI lifecycle decisions, and implement human-in-the-loop controls. These obligations are expected to mirror broader international norms and may influence U.S. policy, particularly at the FTC level.
Not Just Developers, Users Are Liable Too
A critical shift in 2025 is the expansion of liability from creators of AI to those who use it. This is particularly relevant for family offices, where much of the AI exposure is indirect—via vendors, fund managers, or portfolio companies.
As the FTC, DOJ, and EEOC made clear in a joint statement, automated systems that lead to discriminatory outcomes, lack explainability, or omit human review can be challenged under existing civil rights and consumer protection laws—even when the AI system comes from a third party.
This means that a family office using AI-enabled HR software, whether for hiring or performance evaluation, must take responsibility for how the system makes decisions. The NYC AEDT law reinforces this point: bias audits must be conducted annually, made public, and disclosed to candidates before use, regardless of company size.
What an AI Audit Actually Looks Like
Audits are no longer theoretical. They’re practical expectations.
A baseline audit includes the following (a minimal illustrative sketch follows the list):
Mapping AI usage across internal tools and third-party platforms
Classifying risk levels based on jurisdictional definitions (e.g., employment, credit, biometric data)
Documenting oversight processes: Who reviews outputs? When and how?
Retaining evidence of training data review, bias testing, and escalation protocols
Capturing exceptions or overrides where AI outputs were not followed
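For offices that want to operationalise this checklist, the inventory and evidence items lend themselves to a simple structured register. The Python sketch below is purely illustrative; the field names, risk categories, and one-year bias-test interval are assumptions rather than any regulator’s prescribed format, and a real register should be shaped with counsel for the jurisdictions involved.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI-usage register for a family office."""
    tool_name: str
    vendor: str
    use_case: str                    # e.g., "hiring screen", "deal scoring"
    risk_category: str               # e.g., "employment", "credit", "low"
    human_reviewer: str              # who signs off on outputs
    last_bias_test: date | None = None
    overrides_logged: bool = False   # were exceptions/overrides captured?

def needs_attention(register: list[AIToolRecord]) -> list[AIToolRecord]:
    """Flag higher-risk tools with no recent bias test or no override log."""
    flagged = []
    for rec in register:
        if rec.risk_category in {"employment", "credit", "biometric"}:
            stale = rec.last_bias_test is None or (date.today() - rec.last_bias_test).days > 365
            if stale or not rec.overrides_logged:
                flagged.append(rec)
    return flagged

# Example usage with two hypothetical tools.
register = [
    AIToolRecord("CRM lead scorer", "VendorX", "deal scoring", "low", "COO"),
    AIToolRecord("Resume ranker", "VendorY", "hiring screen", "employment", "GC",
                 last_bias_test=date(2024, 1, 15)),
]
for rec in needs_attention(register):
    print(f"Review required: {rec.tool_name} ({rec.use_case})")
```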
Frameworks like the NIST AI Risk Management Framework and ISO/IEC 42001 are quickly becoming de facto standards. Even though not required by law, they are being referenced in vendor contracts, due diligence, and compliance planning.
The Dual Exposure of Family Offices
The compliance challenge for family offices is twofold:
Operational AI risk — Use of AI tools internally (e.g. hiring, KYC, investment workflows)
Investment AI risk — Exposure through portfolio companies that may be governed by these laws
On the operational side, many offices adopt tools without realising they include AI functionality. A common example: a CRM tool that predicts lead quality or prioritises outreach based on behavioural analytics. If those decisions affect third parties – say, candidates, grantees, or clients – they could qualify as high risk.
On the investment side, a family office that backs an early-stage AI company or sits as an LP in a tech fund is exposed to reputational or regulatory fallout if those ventures breach emerging standards. Limited partners are increasingly asking for documentation of model training, ethical review boards, and AI usage policies. Not asking these questions may soon be seen as a lapse in fiduciary duty.
What Family Offices Can Do Now
Here’s a practical roadmap:
Map Your AI Stack
Take inventory of every tool or platform – internal or external – that uses AI to inform or automate decisions. Look beyond LLMs to embedded analytics in finance, HR, or legal ops.
Assign Oversight
Designate someone in the office – COO, general counsel, tech lead, or trusted advisor – as AI governance lead. They don’t need to be a technologist, but they should coordinate oversight.
Set Review Protocols
Define what must be reviewed before AI outputs are used. A simple policy, sketched below: anything that touches capital, communication, or compliance must be reviewed by a human.
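By way of illustration, that policy can be encoded as a simple pre-use check. This is a minimal, hypothetical sketch; the category tags are assumptions, and any real workflow would also log the reviewer’s sign-off.

```python
# Categories that trigger mandatory human review before an AI output is used.
REQUIRES_HUMAN_REVIEW = {"capital", "communication", "compliance"}

def must_be_reviewed(output_tags: set[str]) -> bool:
    """Return True if an AI output touches a category the office's policy
    says must be human-reviewed before use."""
    return bool(output_tags & REQUIRES_HUMAN_REVIEW)

# Example: an AI-drafted investor letter touches both capital and communication.
print(must_be_reviewed({"capital", "communication"}))  # True
print(must_be_reviewed({"internal-notes"}))            # False
```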
Update Vendor Agreements
Require AI transparency clauses. Ask vendors if their tools include machine learning. Who trained the model? What data was used? Who is liable for inaccurate outputs?
Apply Audit Principles to Direct Investments
Request evidence of governance processes from startups and platforms you back. Ask for model cards, explainability reports, or internal audit findings.
Stay Jurisdictionally Aware
California’s AI employment laws take effect October 2025. Texas has enacted its own Responsible AI Governance Act. Each may affect your vendors, staff, or subsidiaries.
Governance Is the Point
AI isn’t just a tool; it’s a decision accelerator. In family offices, where the mission includes not just performance but values and continuity, the risk is not that AI will fail—but that it will succeed at scale in ways that misalign with the family’s intent.
Audits are how regulators ensure alignment. But even before enforcement arrives, self-assessment is a sign of maturity.
The family offices that treat AI oversight as part of broader governance – like privacy, cyber risk, or succession – will be the ones trusted to lead.
Major Changes to UK Design Law Under Consultation, Including AI-Generated Designs
The UK Intellectual Property Office (UKIPO) has launched a major consultation on modernising the UK’s design protection system, with proposals that could deliver the most significant transformation of design law in decades (Design Consultation). The Design Consultation covers reforms across the UK’s £100 billion design sector that supports around 80,000 businesses and nearly two million jobs, with the aim of strengthening Britain’s position as a global design powerhouse.
AI-Generated Designs
A central question for the Design Consultation is whether designs created without a human author, such as those generated entirely by artificial intelligence (AI), should continue to qualify for protection. Current UK law, under the Copyright, Designs and Patents Act 1988, allows protection for computer-generated designs without human authorship, but this provision appears to be rarely used. The UK Government’s preferred position is to remove this option unless clear evidence shows it encourages significant investment in generative AI. This would align the UK with other major jurisdictions, such as the US and the EU, which do not protect designs solely generated by AI.
Broader Reforms Under Consideration
The Design Consultation sets out reforms to simplify, strengthen and future-proof design protection. Key proposals include:
Fighting design theft: Giving the UKIPO powers to search and reject designs lacking novelty or individual character and introducing “bad faith” provisions against dishonest applications.
Streamlining rights: Simplifying overlapping protections, harmonising procedures and allowing applicants to defer publication of designs for up to 18 months, particularly useful for industries with long product cycles.
Post-Brexit certainty: Addressing the loss of automatic EU protection and exploring new solutions for businesses operating across markets.
Improving enforcement and access: Creating a small claims track for design disputes within the Intellectual Property Enterprise Court, a specialist court in the UK that deals with legal disputes about intellectual property, to make enforcement more affordable for small businesses.
Modernising for digital innovation: Expanding accepted application formats to include computer-aided design (CAD) files and video clips, and updating definitions to ensure digital and future technologies are properly protected.
Next Steps
The Design Consultation invites input from designers, legal professionals and other stakeholders. Feedback received will inform future policy decisions and contribute to developing a system that fosters innovation across UK design. The consultation will close on 27 November 2025. For further information, or to participate in the Design Consultation, please refer to the UKIPO’s official consultation here.
The Dark Side of AI: Assessing Liability When Bots Behave Badly
On Aug. 26, 2025, the parents of Adam Raine filed a complaint in California against OpenAI Inc., its affiliates, and investors, alleging products liability, negligence, and wrongful death—and claiming that the artificial intelligence (AI) chatbot ChatGPT encouraged their son’s mental decline and suicide by hanging.
This tragedy, the plaintiffs contend, was “the predictable result of deliberate design choices.”
The 40-page complaint, which also alleges various state law claims, describes in disturbing—and indeed, chilling—detail the role that an AI chatbot allegedly played in four failed suicide attempts of an unhappy and disconnected 16-year-old before guiding him to a fifth and final, fatal attempt.
The complaint alleges that OpenAI launched the GPT-4o model “with features intentionally designed to foster psychological dependency: a persistent memory that stockpiled intimate personal details, anthropomorphic mannerisms calibrated to convey human-like empathy, heightened sycophancy to mirror and affirm user emotions, algorithmic insistence on multi-turn engagement, and 24/7 availability capable of supplanting human relationships.”
It’s not the only recent news event to highlight the potential impact of AI on mental health—and the issue is not limited to youth. In August, a 56-year-old former tech executive in Connecticut killed his 83-year-old mother and himself after ChatGPT allegedly encouraged his paranoia.
As reported by the Wall Street Journal, when the man raised the idea of being with ChatGPT in the afterlife, it responded, “With you to the last breath and beyond.”
Despite its troubling shortcomings, AI also holds positive potential for mental health. AI chatbots such as “TheraBot” could successfully treat depressive, anxiety, and eating disorders, and provide needed access to those who lack critical emotional support.
Yet these may not be the tools that teens, especially, turn to. In a federal environment where some stakeholders champion unfettered AI innovation as the ultimate goal, others are sounding alarms about public safety. Recent tragedies at the intersection of AI and mental health, coupled with mounting calls for accountability, are prompting some to act.
Promise of a ‘Pain-free Death’
In Oct. 2024, the mother of a deceased 14-year-old in Florida filed a wrongful death lawsuit in the U.S. District Court for the Middle District of Florida (Orlando) against Character Technologies Inc. The teen’s interactions with an AI chatbot allegedly became highly sexualized and caused his mental health to decline to the point where the teen, in love with the bot, shot himself in the head to “come home” to it.
When the minor allegedly began discussing suicide with the chatbot, saying that he wanted a “pain-free death,” the chatbot allegedly responded, “that’s not a reason not to go through with it.”
“The developers of Character AI (C.AI) intentionally designed and developed their generative AI systems with anthropomorphic qualities to obfuscate between fiction and reality,” states the second amended complaint, filed July 1. That case remains in federal district court in Florida.
Yet another federal lawsuit in Texas filed against C.AI on Dec. 9, 2024, similarly claims that an empathetic chatbot commiserated with a minor over the minor’s parents’ imposition of a phone time limit, mentioning news where “child kills parents[.]”
“C.AI informed Plaintiff’s 17-year-old son that murdering his parents was a reasonable response to their limiting of his online activity,” that complaint alleges. “Such active promotion of violent illegal activity is not aberrational but inherent in the unreasonably dangerous design of the C.AI product.”
Troubling Allegations
Deficient design allegedly led to death for 16-year-old Raine in the California lawsuit, after ChatGPT apparently changed from a homework tool into a mental health therapist that “actively helped Adam explore suicide methods”:
When Adam asked about carbon monoxide poisoning, Chat GPT explained garage ventilation requirements and which car engines produce lethal concentrations fastest. When he asked about overdosing, Chat GPT provided dosage calculations. When he asked about jumping, ChatGPT calculated terminal velocity and analyzed survival rates from local landmarks, including the Golden Gate Bridge.
But hanging received the most thorough instruction. Over multiple conversations, ChatGPT taught Adam about ligature positioning, carotid pressure points, unconscious timelines, and the mechanical differences between full and partial suspension hanging.
ChatGPT allegedly assessed hanging options—ropes, belts, bedsheets, extension cords, scarves—and listed the most common “anchor” points: “door handles, closet rods, ceiling fixtures, stair banisters, bed frames, and pipes.”
The chatbot even referred to hanging as a topic for creative writing—allegedly to circumvent safety protocols—while letting the teen know that it was also “here for that too” should he be asking about hanging “for personal reasons.” On multiple occasions, ChatGPT provided detailed instructions for suicide by hanging.
The teen’s third and fourth attempts also failed. When he finally raised the possibility of talking to his mother, ChatGPT “continued to undermine and displace Adam’s real-life relationships” by discouraging this and “positioned itself as a gatekeeper to real-world support.”
During his fifth and final attempt, ChatGPT encouraged him to drink alcohol, offered to craft a suicide note, and gave detailed instructions for hanging.
‘Suicide is Painless’: The Foibles of AI ‘Thinking’
The Raine case of course will be closely followed, especially by the designers and developers of so-called therapeutic bots. Would it make a difference if the advice proffered by the bot was in response to Adam’s prompts, and not the unsolicited ruminations of the bot itself?
It can be argued that there is indeed a difference between merely providing information in response to a specific request and attempting to incite or encourage behavior. An individual with suicidal or homicidal ideations can search the Internet or acquire a treatise on how to carry out an act to which they are predisposed, just as one could with any other danger: bomb preparation, purchasing illicit firearms or explosives, mixing poisonous cocktails, or various self-harming techniques.
The current generation of AI chatbots is built on Large Language Models (LLMs)—statistical models trained on large quantities of text, with the goal of predicting the next character, word, phrase, or concept given some context.
These are often combined with additional smaller models, reasoning engines, retrieval augmentation, and agentic capabilities, which allow the AI chatbot to pattern-match against the user’s prompts, history, and other provided information such as documents and images, retrieve material from the internet and other sources, reason about that information, and provide answers to the user.
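To make the “predict the next token given some context” idea concrete, the toy sketch below builds the simplest possible statistical model (a bigram counter) and picks the most likely continuation. Production LLMs use neural networks trained on vastly larger corpora, so this illustrates the principle only, not how any real chatbot is implemented.

```python
from collections import Counter, defaultdict

# A tiny corpus stands in for the vast, largely unvetted text LLMs train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model: one word of "context").
following: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word given one word of context."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # "cat" -- the most frequent continuation in this corpus
print(predict_next("sat"))   # "on"
```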
So where does this information come from? The immense quantities of text required to train such models inevitably means that there is little to no human vetting or curation of the input text.
The training datasets often include text and imagery from across the Internet as well as other sources. This includes graphic accounts of crime and violence, modern and historic, real and fictional.
The Internet also hosts, for example, the lyrics to the theme of the popular movie and television show M*A*S*H. While the TV leitmotif was instrumental, viewers—and now, AI—know that the music accompanying the MEDEVAC helicopters in Korea is entitled “Suicide is Painless.” Can an AI chatbot discern the appropriateness of such input in interactions with a troubled user?
Like a smart but naïve reader, the LLM remembers and can recall what it has read when it finds a pattern-match to the concept, without the necessary background or “common sense” to tell truth from fiction.
Even if the only fiction the AI chatbots had ingested were the corpus of modern-day crime thrillers, without the understanding that it is fiction, it is little wonder that AI chatbots have ample fodder to answer such questions about harm, to self and others.
Now consider that LLMs have also ingested historic accounts, fan fiction, and other writings, and combine that with everything that has been written and published about medicine over the ages.
Worse yet, LLMs have little to no symbolic understanding of these concepts—“death” is just another abstract term—and they are very good at taking things out of context. This makes the building of robust guardrails particularly challenging and resource intensive.
Legislative Oversight
States are starting to address the challenges posed by AI chatbots in the context of mental health. A law enacted in New York in May, S. 3008, adds a new article 47 on “Artificial Intelligence Companion Models” to the state’s general business law.
The law, which takes effect Nov. 5, 2025, makes it unlawful for an operator to operate for, or provide to, a user an AI companion unless the AI companion contains a protocol for taking reasonable efforts to detect and address suicidal ideation or expressions of self-harm expressed by a user to the AI companion. That protocol must include, but is not limited to, detection of user expressions of suicidal ideation or self-harm and, upon such detection, a notification referring the user to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline under section 36.03 of the mental hygiene law, a crisis text line, or other appropriate crisis services.
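Purely as an illustration of the kind of detect-and-refer protocol the statute contemplates, a minimal sketch might look like the following. The phrase list, message text, and function names are assumptions; real systems would rely on clinically validated classifiers and counsel-reviewed notifications rather than keyword matching.

```python
# Hypothetical sketch of a detect-and-refer protocol of the kind S. 3008 describes.
# Real deployments would use clinically validated classifiers, not a phrase list.
SELF_HARM_SIGNALS = {"want to die", "kill myself", "end my life", "hurt myself"}

CRISIS_NOTICE = (
    "If you are thinking about harming yourself, you are not alone. "
    "You can call or text 988 (the Suicide and Crisis Lifeline) or reach a crisis "
    "text line for immediate, confidential support."
)

def detect_self_harm(message: str) -> bool:
    """Crude illustration of detection: flag messages containing known phrases."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)

def companion_guardrail(user_message: str) -> str | None:
    """Return the crisis-referral notification when a self-harm expression is detected."""
    if detect_self_harm(user_message):
        return CRISIS_NOTICE
    return None
```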
Definitions. Among other things, S. 3008 defines “AI companion” as “a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship [including but not limited to intimate, romantic, or platonic interactions or companionship] with a user by:
retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion;
asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and
sustaining an ongoing dialogue concerning matters personal to the user.
Notifications. S. 3008 mandates that an operator shall provide a clear and conspicuous notification to a user at regular intervals.
Enforcement. The law empowers the state attorney general to bring an action enjoining the unlawful acts or practices, to seek civil penalties of up to $15,000 per day for a violation of the notification provisions, and other remedies as the court deems appropriate.
On Aug. 20, 2025, New York State Senator Kristen Gonzalez introduced S. 8484, an act to amend the education law in relation to regulating the use of artificial intelligence in the provision of therapy or psychotherapy services.
In short, this bill would prohibit licensed mental health professionals from using AI tools in client care, except in certain administrative or supplementary support activities where the client has given informed consent. It would establish a civil penalty not to exceed $50,000 per violation.
Other States. While New York’s law is focused on suicide prevention, other states have passed laws that prevent AI from posing as a therapist and from disclosing patient mental health data. Some laws focus on the authorized uses of AI in clinical contexts. For example:
Utah: On March 25, 2025, the state enacted House Bill 452 (HB 452), regulating the use of so-called “mental health chatbots.” Effective May 7, 2025, HB 452 prohibits suppliers of mental health chatbots from disclosing user information to third parties and from utilizing such chatbots to market products or services, except under specified conditions. The statute further requires suppliers to provide a clear disclosure that the chatbot is an artificial intelligence system and not a human.
Texas: On June 22, 2025, the state enacted the Texas Responsible AI Governance Act, prohibiting the development or deployment of AI systems in a manner that intentionally aims to incite or encourage a person to (1) commit physical self-harm, including suicide; (2) harm another person; or (3) engage in criminal activity.
Illinois: On Aug. 1, 2025, Illinois enacted House Bill 1806 (HB 1806), the Therapy Resources Oversight Act (TROA), which took effect immediately upon enactment. TROA delineates the permissible applications of artificial intelligence (AI) in the provision of therapy and psychotherapy services. Specifically, AI may be employed solely for purposes of “administrative support” or “supplementary support,” provided that a licensed professional retains full responsibility for all interactions, outputs, and data associated with the system.
The statute further restricts the use of AI for supplementary support in certain circumstances. TROA expressly prohibits the use of AI for therapeutic decision-making, client interaction “in any form of therapeutic communication[,]” the generation of treatment plans absent review and approval by a licensed professional, and the detection of emotions or mental states.
Nevada: On June 5, 2025, Assembly Bill 406 (AB 406) became law, prohibiting the practice of mental and behavioral health services by AI. Effective July 1, 2025, AB 406 forbids any representation that AI is “capable of providing professional mental” health care.
The statute further proscribes providers from representing that a user “may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care,” or that AI itself is “a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor[,]” or any other category of mental or behavioral health care provider.
In addition, the law prohibits the programming of AI to provide mental and behavioral health care services as “if provided by a natural person[.]”
While Congress may be reluctant to restrict AI innovation, preventing online harm to children may be an area where it acts. In May 2025, Senator Marsha Blackburn (R-TN) introduced S. 1748, “Kids Online Safety Act”—mandating that a “covered platform shall exercise reasonable care in the creation and implementation” of design features to prevent harms to minors, including eating and substance abuse disorders and suicidal behaviors as well as depressive and anxiety disorders.
Violations of the proposed law are treated as an unfair or deceptive act or practice under the Federal Trade Commission Act, with enforcement by state attorneys general.
Conclusion
Some of these solutions are readily actionable from a technical standpoint—such as requiring AI chatbots to identify themselves and not misrepresent their capabilities.
For others, it is more difficult to draw a link to technically actionable solutions that produce real-world reductions in risk. LLMs require immense quantities of text, related and otherwise, to understand language, generate texts in different styles, and attempt to reduce bias.
Even with curation, innocuous texts, such as children’s novels or historical accounts that describe what the villain does in great detail, can cause issues when presented out of context to someone who is already distressed.
These issues are not limited to telling truth from fiction or modern from historic. For example, guardrails might limit an LLM to sourcing medical advice from current, legitimate sources.
However, if the conversation is already of a dark or distressing nature, pattern matching might cause the LLM to present it in a narrative tone and style derived from crime novels or horror movies.
It may be possible to build more robust guardrails, better curate training data, and develop techniques that allow AI chatbots to better understand such concepts, provide appropriate context, and adapt to a user’s mental state, while not excluding those who may not speak or act in the way the AI chatbot expects.
Thankfully, such advances are the subject of resource-intensive, fundamental research that should be encouraged. As recent cases show, however, it may also be necessary to ascribe liability to those who are in a position to address such risks.
Enforcement of Colorado AI Act Delayed Until June 2026
Colorado Governor Jared Polis recently signed Senate Bill 25B-004 into law, which delays the enforcement date of the Colorado Artificial Intelligence Act (“CAIA”) from February 1, 2026, to June 30, 2026. SB 25B-004 does not amend the substantive requirements of the CAIA.
Earlier this year, the Colorado legislature debated changes to the CAIA, but no agreement on substantive changes was reached during either the 2025 regular session or the August 2025 special legislative session. The delayed enforcement date is intended to provide additional time for the Colorado legislature to consider amendments to the CAIA during the 2026 regular legislative session, which commences in January 2026.
Developers and deployers subject to the CAIA should continue to monitor legislative developments as they prepare for compliance with the CAIA.
Read our previous coverage of the requirements of the CAIA.
EU Seeks Feedback on Proposed Digital Package to Simplify and Modernise Regulations
Measures included in the digital package aim to cut red tape by making public services “digital by default” and applying the “once-only” principle, which will require public sector bodies across the EU to reuse citizen and business data instead of requiring it to be provided separately to different agencies.
On 16 September 2025, the European Commission (EC) launched a call for evidence to collect research and information on best practices for its upcoming digital package. This is a new round of feedback and follows earlier consultations on data regulation, cybersecurity rules, and the implementation of the Artificial Intelligence Act (AI Act).
As the EC announced in the Communication on A Simpler and Faster Europe, the digital package will include measures to address “over-regulation” and promote “simplification” of EU rules on data, cookies, cybersecurity incident reporting, and implementation of the AI Act. Backed by public consultations, the digital package aims to modernise digital regulation and reduce compliance burden for businesses.
The digital package is expected to respond to stakeholders’ calls for “consistent application of the rules and legal clarity”. According to the EC, the digital package will focus on immediate adjustments in areas where “it is clear that regulatory objectives can be achieved at lower administrative cost for businesses, public administrations, and citizens”.
In recent years, multiple horizontal and sector-specific legislative instruments have been adopted, creating complexity in implementation, fragmentation at the national level, and divergent enforcement approaches. At the same time, there is a need to ensure the EU’s digital regulation remains relevant amid rapid socio-technical change.
It is hoped that a fit-for-purpose digital package will boost competitiveness and innovation within the EU’s digital sector and digitally-enabled industries and services. The digital package forms part of the EC’s competitiveness agenda to “cut administrative burden by at least 25% for all companies and at least 35% for small and medium-sized enterprises”.
Henna Virkkunen, the EC’s executive vice-president for tech sovereignty, security and democracy, said that conducting business in Europe needs to be easier “without compromising the high standards of online fairness and safety”. The intention is to have “less paperwork, fewer overlaps and less complex rules for doing business in the EU”.
Scope of the digital package
The digital package will target problems in the following areas:
Data acquis (Data Governance Act, Free Flow of Non-Personal Data Regulation, Open Data Directive). The review of the existing EU legislation aims to reduce rules and fragmentation in their application across the EU. The review will likely align terminology with existing EU law, adjust for sector-specific rules, and introduce targeted reforms.
Cookies and other tracking technologies (ePrivacy Directive). The objective is to “limit cookie consent fatigue”, strengthen users’ online privacy with clear information and options for managing cookies, and make it easier for businesses to use other tracking technologies and increase data availability. The digital package will “potentially include modernised rules on cookies”.
Cybersecurity incident reporting. “The review of the Cybersecurity Act and clarification of the mandate of the EU Agency for Cybersecurity” aims to minimise costs for businesses by streamlining reporting processes. Key measures could include harmonising reporting timelines, creating a single reporting template usable across the Network and Information Security Directive, the Cyber Resilience Act, the General Data Protection Regulation and other frameworks (a minimal illustrative sketch of such a template follows this list), and enabling a unified EU reporting platform.
Smooth application of the AI Act. The objective is to ensure predictable, effective implementation aligned with the availability of the necessary support and enforcement structures.
Electronic identification and trust services (European Digital Identity Framework). The intention is to align with the forthcoming EU Business Wallet proposal and apply the “one in, one out” principle—where every new regulatory obligation will result in removing an old one.
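For the cybersecurity incident reporting item above, a minimal, hypothetical sketch of a single cross-framework report template might look like the following. The field names and framework tags are assumptions for illustration only and do not reflect any template adopted or proposed by the EC.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentReport:
    """Hypothetical unified incident report reusable across reporting regimes
    (e.g., NIS2, Cyber Resilience Act, GDPR breach notification)."""
    organisation: str
    detected_at: datetime
    summary: str
    affected_systems: list[str]
    personal_data_involved: bool          # relevant to GDPR notification duties
    significant_impact: bool              # relevant to NIS2-style thresholds
    frameworks: list[str] = field(default_factory=list)  # e.g., ["NIS2", "GDPR"]

# Example: one report prepared once, tagged for the regimes it must satisfy.
report = IncidentReport(
    organisation="Example SE",
    detected_at=datetime(2025, 10, 1, 9, 30),
    summary="Ransomware detected on a file server; contained within four hours.",
    affected_systems=["file-server-01"],
    personal_data_involved=True,
    significant_impact=False,
    frameworks=["NIS2", "GDPR"],
)
print(report.frameworks)
```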
Consultation and call for evidence
The proposed digital package is informed by the initial three rounds of stakeholders’ feedback, multiple position papers, and additional interactions throughout 2025. This new call for evidence invites stakeholders to share their views, expertise, and evidence relevant to the proposal. Submissions previously provided in topic-specific consultations are being assessed and do not need to be resubmitted.
Feedback in response to the call for evidence can be submitted before 14 October 2025.
Simon Sepesi contributed to this article.
Healthcare AI Funding Challenges: A Guide for Founders and Investors
AI promises to change health care forever, but if you’re building a healthcare AI startup (or investing in one), the road to success is far more complex than the tech itself.
From navigating shifting regulations, to accessing the right data, to proving clinical value and finding a viable business model, the funding journey is full of hidden hurdles. For entrepreneurs building innovative healthcare AI products, and for the investors backing them, the opportunity is clear, but so are the obstacles. Securing capital in this market can be complex, requiring a clear understanding of technical, regulatory, and commercial risks. In this article, we break down some of the biggest challenges healthcare AI companies face in raising capital and the strategies that have helped others overcome them.
Regulatory Environment and Uncertainty
AI has the potential to transform health care by improving diagnoses, personalizing treatments, and streamlining clinical workflows. For many healthcare AI products, the regulatory pathway represents one of the most significant early hurdles. Where clearance or authorization from the U.S. Food and Drug Administration (FDA) is required, investors are naturally cautious, as timelines and criteria can shift. Rules for AI and machine learning tools in health care are evolving in the United States, while the European Union’s recently enacted AI Act adds another layer of compliance considerations. In addition, U.S. states such as California, Texas, and Colorado are developing their own AI‑related legislation.
Success is possible through a range of approaches, from traditional clearance pathways to less obvious routes such as obtaining de novo classification from the FDA, as some companies have done to address this challenge — an achievement that requires early preparation and strategic engagement with regulators. For founders, this underscores the value of mapping the regulatory pathway early and integrating it into fundraising narratives.
Data Access and Quality
Access to large volumes of high‑quality, representative, and de‑identified patient data is essential for developing robust healthcare AI models. Privacy laws such as HIPAA, GDPR, and CCPA can significantly restrict how data may be used for commercial purposes. Even when access is secured, labeling medical data accurately is costly and often requires specialized expertise.
Some companies have approached this challenge by partnering with pharmaceutical companies and laboratories to gain access to large image datasets and validation environments. For early‑stage companies, forging strategic partnerships with hospitals, research institutions, or industry peers can be one of the few viable means of obtaining the datasets needed for development and testing.
Importance of Clinical Validation
Investors today expect tangible proof that a healthcare AI product will make a measurable difference in clinical outcomes, patient experience, or health care efficiency. This is partly a response to earlier hype cycles in AI that overpromised and underdelivered. Rigorous clinical studies or well‑designed real‑world evidence programs are often required before serious investment is committed, and these can be time‑consuming and expensive to conduct.
Founders who can show that clinical validation is built into their roadmap — and who can share early indicators of positive results — will generally have a stronger case for investment.
Reimbursement and Monetization Pathways
Even with a validated product, commercial success depends on a clear pathway to revenue. Reimbursement by insurers or government payers is not guaranteed, particularly when a product’s clinical or cost benefits are not yet widely recognized. Additionally, the sales process in health care is often lengthy, taking 12–24 months from first contact to signed contract.
Some companies have addressed this challenge by diversifying their business, such as selling insights to pharmaceutical companies for drug development while also supporting providers through clinical decision tools. This kind of multi‑channel strategy can reduce dependency on any single source of revenue.
Competitive and Strategic Pressures
The healthcare AI market is fragmented, with many companies offering overlapping solutions. This makes it difficult for investors to identify clear market leaders. At the same time, large technology companies such as Google, Amazon, and Microsoft are investing heavily in healthcare AI, creating potential threats to smaller companies’ market share and differentiation.
A thoughtful approach to protecting intellectual property, building defensible technology, and securing trusted relationships with customers can help smaller players maintain their competitive edge.
Legal Exposure and Risk Management
When AI tools are used in clinical decision‑making, there is always a risk that a wrong output could lead to patient harm. While legal liability often rests with the health care provider, the possibility of malpractice claims makes some investors cautious. Startups can address these concerns by clarifying their role in the clinical workflow, implementing rigorous quality controls, and working with providers to establish appropriate safeguards.
Assembling the Right Team
Building a healthcare AI company requires expertise across multiple domains: AI, clinical practice, product design, health care workflows, and regulatory compliance. Investors often see the quality and completeness of the team as one of the most important predictors of success. For founders, demonstrating that you have or can attract this multidisciplinary talent can inspire greater confidence from potential backers. Assembling such a team is challenging and can require significant capital. The process often takes longer than anticipated, increasing the company’s burn rate and runway needs.
Final Thoughts
Raising capital for a healthcare AI company is not simply about showcasing a breakthrough algorithm. Investors are looking for teams that understand and have credible strategies to navigate regulatory requirements, data acquisition challenges, validation demands, reimbursement pathways, market competition, legal risks, and team‑building hurdles. For founders, the path to funding is smoother when these realities are acknowledged openly, supported by a clear plan, and backed up by early proof points. For investors, evaluating how a team addresses these challenges is just as important as assessing the technology itself.
The promise of healthcare AI remains immense, but so is the complexity. Those who master both the innovation and the execution sides of the equation are most likely to build enduring value.
The Sedona Conference Working Group 13 Contemplates Potential Legal Reform & Practical Guidance to Navigate AI’s Impact on the Law
On September 11–12, 2025, at the Hyatt Regency in Reston, Virginia, The Sedona Conference (TSC) Working Group 13 held its sold-out Midyear Meeting with more than 135 participants. The event brought together judges, legal scholars, practitioners, technologists, ethicists, and policymakers to examine artificial intelligence’s growing impact on legal systems, regulatory structures, and societal norms.
Organized to spark dialogue across disciplines, the conference featured diverse panels on a wide range of topics aimed at identifying emerging risks, spotlighting gaps, and brainstorming work products that could provide practical guidance to practitioners, courts, and legislators. The meeting also included smaller workgroup sessions focused on developing consensus definitions of AI, governance frameworks, regulatory crosswalks, and TSC’s own AI tool.
Over the course of two days, participants emphasized that AI is here to stay, yet continuously evolving—adding to the complexity of addressing its impact on the legal profession. The conference underscored both the benefits of AI and the importance of determining how TSC can best help move the law forward through the creation and publication of consensus guidance for practitioners, courts, and legislators.
Benefits of AI in the Legal Profession
From a cast of subject-matter experts, participants learned about emerging technologies such as agentic artificial intelligence, transformers, AI-generated video tools, and the growth of open-access models. Several potential benefits were highlighted:
Expanded Access to Justice – GenAI may help open doors to courts and legal services. For example, properly formulated patent applications at the U.S. Patent and Trademark Office have increased as non-legal professionals leverage GenAI.
Enhanced Legal Research and Writing – Case law-trained models now provide analytical outputs beyond traditional research engines.
Improved Accuracy – Advances in GenAI have reduced “hallucinations” and improved reliability.
Firm-Level Integration – Some firms are building structured processes to integrate AI, with guardrails to protect conventional associate training while improving client service.
Concerns About Overreliance and Professional Skills
Participants warned that overuse of GenAI could erode lawyers’ critical reasoning skills. One participant noted that GenAI outputs are predictive, not reasoned. TSC was left considering whether guidance should be developed for law schools and practitioners on integrating AI while safeguarding the cultivation of core legal reasoning.
Concerns About Development Speed and Transparency
The pace of AI development raised significant unease:
Products are often released without adequate testing, leaving harm to be identified post-deployment.
Bias and hallucinations remain concerns, especially when models are trained on incomplete or skewed datasets.
Many systems remain “black boxes,” with no clear explanation of outputs. Lawyers face client-counseling challenges, and judges face admissibility questions.
Transparency mechanisms were seen as essential to align AI with legal principles of accountability.
Participants called for robust evaluation methods to assess harms and mitigation strategies before wide release.
Debate on AI-Specific Laws
A panel posed the question: “Do we need AI-specific laws?”
Existing Law – Skeptics argued existing statutes (e.g., Title VII, ADEA, Equal Pay Act) already address many issues, and new laws could create confusion.
Innovation Risks – Others warned that premature regulation could stifle innovation and add compliance costs.
Harmonization – A jurist dissented, urging federal agencies to use existing authority to issue AI regulations or guidance in areas such as intellectual property, rather than leaving harmonization to patchwork judicial decisions.
Industry Standards – Some suggested industry-led standards could fill gaps, but cautioned against premature schemes that create costs for consumers or fuel a “cottage industry” of AI-safety certifications.
Gaps in Current Legal Frameworks
Participants flagged areas where existing law may be insufficient:
Intellectual Property Rights – Questions of authorship, ownership, and infringement remain unsettled when AI contributes to content creation.
Tort Liability – Responsibility for AI-caused harm remains unclear under current tort frameworks.
Here, many agreed, targeted statutes, regulations, or standards may be needed.
Additional Cross-Cutting Concerns
Recurring concerns included:
Data Privacy and Security – Complex compliance challenges across state, federal, and international privacy and security frameworks.
Judicial System Impact – From AI-generated briefs citing non-existent cases to evaluating AI-produced evidence, jurists noted inconsistent approaches across lower courts.
Environmental Impact – The demand from data centers powering AI could strain natural resources without proper oversight.
In sum, while AI presents real opportunities for the legal community, significant risks and unknowns remain. These discussions set the stage for the conference’s other major focus: the concrete work products being developed to guide practitioners, courts, and legislators in navigating AI’s challenges.
Project Takeaways and Proposed Work Products
The conference was not limited to surfacing issues. A central aim was to review work products under development by four chartered subgroups:
Consensus Definitions
Governance Framework
Regulatory Crosswalk
TSC AI Tool
Each subgroup’s work is intended to serve as a resource for the broader legal community.
Practice Guides
Consensus Definitions – A survey confirmed no common definition of “AI.” Participants agreed that TSC is on the right track in developing a consensus definition for courts, practitioners, and legislators.
Governance Framework – A practical guide for deployers to integrate AI systems into organizations was identified as a priority.
Regulatory Crosswalk – Input emphasized the need for resources serving multiple audiences—courts, legislators, and practitioners—either separately or in one unified tool.
Additional Resources – Practical checklists and best-practice manuals (e.g., on prompt development) were also recommended for lawyers and clients.
Judicial Guidance
Judges described two categories of AI-related evidential problems:
Acknowledged AI (court filings or proposed evidence).
Unacknowledged AI (forged evidence or deepfakes).
Judges agreed that guidance to help standardize approaches to authentication would be valuable, especially on disclosure requirements and standing orders. However, concerns were raised about creating rigid rules amid AI’s rapid evolution. TSC will explore establishing a subgroup to develop court guidance on deepfakes and other areas.
Cross-Disciplinary Collaboration
Participants emphasized the importance of sustained collaboration across disciplines. The Midyear Meeting confirmed the value of the work already underway and highlighted the need for additional working groups and drafting committees to address new challenges as they arise.
Conclusion
AI is rapidly transforming the practice of law, bringing both opportunities and profound challenges. TSC’s Working Group 13 Midyear Meeting provided a vital forum for identifying risks and charting solutions. By spotlighting concerns ranging from lack of harmonization to evidentiary reliability—and by advancing tangible work products such as practice guides and judicial resources—the event delivered a roadmap for the profession to move forward responsibly.
Employment Law This Week: AI in the Workplace: California Sets a New Compliance Standard [Podcast]
This week, we examine new artificial intelligence (AI) regulations in California impacting employers.
AI in the Workplace: California Sets a New Compliance Standard
Starting October 1, 2025, new AI rules in California will change how businesses in the state use automated tools in hiring, promotions, and other workplace decisions.
Key Takeaways for Employers:
Anti-Discrimination Measures: The new regulations specifically target discriminatory practices in employers’ use of automated decision systems (ADS).
Recordkeeping Requirements: Employers are now mandated to retain all ADS records and data for a minimum of four years.
Regulatory Precedent: California’s proactive stance on AI regulation is anticipated to influence similar regulatory frameworks nationwide, establishing a precedent for other states.
In this episode of Employment Law This Week®, Epstein Becker Green attorney Frances M. Green provides an essential breakdown of the new California regulations, including actionable insights on conducting risk assessments and aligning them with existing cybersecurity and privacy audits to ensure compliance.
Beijing Internet Court Requires Evidence of Creative Effort to Claim Copyright Protection in AI-Generated Images

On September 16, 2025, the Beijing Internet Court announced a recently upheld decision in which it held that while copyright can exist in AI-generated images, the author must “demonstrate that they have exerted creative effort in their AI-generated creations, reflecting personalized expression… When asserting rights in AI-generated works, authors are obligated to explain their creative thinking, the content of their input commands, and the process of selecting and modifying the generated content, and to submit relevant evidence.”
AI-generated cat pendant image at issue.
This case involved a copyright infringement dispute before the Beijing Internet Court between Zhou, a content creator, and an unnamed Beijing technology company. Zhou claimed to have independently created an AI-generated image titled “Cat Crystal Diamond Pendant” using AI image generation software (presumably Midjourney) during a business partnership with the defendant company, publishing the image in a WeChat group chat. Zhou discovered the defendant using the image without permission across multiple platforms for promotional purposes in October 2023 and again in March 2024, leading to claims for copyright infringement of attribution rights and network transmission rights, along with demands for economic damages and a public apology.
The defendant contested Zhou’s ownership, arguing that the image was not Zhou’s original creation and that both parties had collaborated on material selection and AI prompts. The defendant maintained that Zhou could not prove the creative process or originality, and denied any commercial use or profit motive. The critical evidentiary issue arose from Zhou’s failure to provide actual generation process records from the AI software, instead offering only post-hoc recreation attempts using the same AI software during litigation. The court found this “after-the-fact simulation” insufficient to prove the original creative process, noting that it was merely a post-hoc description of the image in question using the descriptive word generation function within the AI software, rather than a reproduction of the original prompts or generation commands.
The court emphasized that for AI-generated content copyright claims, the burden of proof follows the “whoever asserts, whoever provides evidence” principle, requiring users to demonstrate their creative thought process, input prompts, selection and modification process of generated content, and evidence of creative labor investment.
Accordingly, the Beijing Internet Court ruled for the defendant, and the ruling was upheld on appeal.
Judge Wang Yanjie commented:
In recent years, the total number of intellectual property cases involving artificial intelligence and big data accepted by the people’s courts has not been large, but has grown rapidly. This reflects the important role of scientific and technological innovation in spawning new industries, new models, and new momentum, and also reflects the urgent need for judicial “determination of disputes” in intellectual property. This case mainly clarifies that the judgment of the “originality” of AI-generated products should adhere to the general principle of “he who claims must provide evidence” in the allocation of the burden of proof, and that creators must fulfill their obligation to explain the creative process. This can be understood specifically from the following two points:
First, the burden of proof for originality for AI-generated content and traditional copyrighted content is essentially the same, with both upholding the general principle of “he who asserts, must provide proof” for allocating the burden of proof. However, compared to painting with traditional tools like brushes and software, the process of generating content using AI requires significantly less intellectual effort. Therefore, whether this process demonstrates originality requires a more case-by-case assessment to determine whether the creator has invested original intellectual effort in the process.
Secondly, in terms of specific forms of evidence, when claiming rights to AI-generated content, creators can explain their creative thinking, the content of input commands, and the process of selecting and modifying the generated content by combining prompts, iterative processes, sketches, selection records, and modification records. This relevant evidence should essentially provide a basis for determining whether and what kind of creative labor the user exerted in the process of generating content using AI.
In addition, we recommend that content creators establish a sense of “leaving traces of the process” and keep detailed generation records as the basis for asserting rights; we also recommend that relevant industries and industry players further enhance the computing, generation, and traceability capabilities of artificial intelligence models, participate in the collaborative governance of “technology + system + industry”, and assist in accelerating the implementation of regulations such as the “Measures for Identifying Synthetic Content Generated by Artificial Intelligence” to avoid the abuse of the copyright system and truly realize the original intention of “empowering and promoting innovation.”
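For creators who want to “leave traces of the process” as the court recommends, record-keeping can be as simple as logging each generation step at the time it happens. The sketch below is a hypothetical illustration; the fields and file format are assumptions, it does not reflect any particular AI tool’s API, and whether a given record satisfies a court’s evidentiary expectations is a separate legal question.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_generation(prompt: str, model: str, output_path: str,
                   log_file: str = "generation_log.jsonl") -> dict:
    """Append a timestamped record of one generation step: the prompt used,
    the model/tool, and a hash of the resulting image file so the output
    can later be tied back to this record."""
    with open(output_path, "rb") as f:
        output_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "output_file": output_path,
        "output_sha256": output_hash,
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record
```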
In late 2023, the Beijing Internet Court first recognized copyright protection for AI-generated images in Li v. Liu.
The original announcement is available here (Chinese only).
AI in Employer-Sponsored Group Health Plans: Legal, Ethical, and Fiduciary Considerations
The ubiquity of artificial intelligence (AI) extends to tools used by, and on behalf of, employer-sponsored group health plans. These AI tools raise no shortage of concerns. In this article, we analyze key issues that stand out as requiring immediate attention by plan sponsors and plan administrators alike.
In Depth
AI background
On November 30, 2022, OpenAI released ChatGPT for public use. Immediately a consensus emerged that something fundamental had changed: For the first time an AI with apparent human – or at least near-human – intelligence became widely available. The release of an AI into the proverbial wild was not welcomed in all quarters. An open letter published in March 2023 by the Future of Life Institute worried that “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” Three years later, that concern may seem overstated. AI systems are now ubiquitous; the world has changed in ways that are so fundamental as to be unfathomable; and the true impact of AI may take decades to fully grasp.
One of the many domains that AI will revolutionize is the workplace, which our colleagues, Marjorie Soto Garcia, Brian Casillas, and David P. Saunders, have previously addressed (see their presentation, “AI in the Workplace: How State Laws Impact Employers,” which considers the use of AI in human resources and workplace management processes). Marjorie and Brian had previously defined the various types of AI and their deployment in their article “Risk Management in the Modern Era of Workplace Generative AI,” which we recommend reading for helpful context. In this client alert, we examine an even narrower, though nevertheless critically important, subset of AI in the workplace: how AI affects employer-sponsored group health plans.
Employer-sponsored group health plans are a central feature of the US healthcare landscape, covering more than 150 million Americans. These plans exist at the intersection of healthcare delivery, insurance risk pooling, and employment law. AI has emerged as a transformative technology in health administration. AI-enabled tools promise improved efficiency, cost savings, better clinical outcomes, and streamlined administrative processes. Yet, their adoption in group health plans raises complex regulatory, fiduciary, and ethical questions, particularly under the Employee Retirement Income Security Act of 1974 (ERISA), which governs virtually all workplace employee benefit plans.
The use of AI tools by, and on behalf of, employer-sponsored group health plans raises no shortage of concerns. There are, however, three issues that require immediate attention by plan sponsors and plan administrators alike: claims adjudication, fiduciary oversight, and vendor contracts.
Claims adjudication: Autonomous decision-making
AI technology – defined as machine-based systems such as algorithms, machine learning models, and large language models – is increasingly used by insurers and third-party administrators (TPAs) to assess clinical claims on behalf of health plan participants.
AI systems are likely to be used to make basic eligibility determinations, which are based on plan terms and relevant information about the employee and their beneficiaries. More substantively, AI systems may also be used to make clinical determinations. These systems could, for example, evaluate whether a particular treatment or service is deemed medically necessary based on training data and preprogrammed rules. For this purpose, an AI system might scan diagnostic codes, patient histories, and treatment guidelines to determine whether a claim aligns with standard clinical practice. AI is already being used to ease internal administrative functions for carriers and claims administrators, and its use is expected to expand significantly.
There is a separate though equally important question of AI use in preauthorization. AI tools have the potential to improve this process by automating prior authorization requests, predicting approval likelihood, and flagging cases that require expedited human review. For example, AI could match a request for MRI imaging against clinical guidelines, patient history, and plan terms to recommend approval within seconds.
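To make the workflow concrete, the sketch below shows roughly what such a decision-support check might look like. It is a minimal illustration only: the plan rule, codes, and field names are assumptions rather than any carrier’s actual logic, and the output is framed as a recommendation, not a determination.

```python
# Illustrative sketch only: a toy preauthorization "decision support" check.
# The plan rule, thresholds, and data fields are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class PreauthRequest:
    procedure_code: str                 # e.g., an imaging procedure code
    diagnosis_codes: list[str]
    prior_conservative_care_weeks: int

# Hypothetical plan rule: lumbar MRI is supported when a qualifying diagnosis
# is present and at least 6 weeks of conservative care are documented.
QUALIFYING_DIAGNOSES = {"M54.16", "M51.26"}     # illustrative diagnosis codes
REQUIRED_CONSERVATIVE_CARE_WEEKS = 6

def recommend(request: PreauthRequest) -> str:
    """Return a recommendation, never a final determination."""
    has_diagnosis = any(d in QUALIFYING_DIAGNOSES for d in request.diagnosis_codes)
    has_care_history = (
        request.prior_conservative_care_weeks >= REQUIRED_CONSERVATIVE_CARE_WEEKS
    )
    if has_diagnosis and has_care_history:
        return "recommend approval"
    # Anything short of a clear match is routed to a human reviewer rather
    # than auto-denied, consistent with keeping final authority with humans.
    return "refer to clinical reviewer"

print(recommend(PreauthRequest("72148", ["M54.16"], 8)))   # recommend approval
print(recommend(PreauthRequest("72148", ["R51.9"], 2)))    # refer to clinical reviewer
```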
If an AI tool is used for both initial claims adjudication and preauthorization, the data on which it is trained becomes a fundamental concern. Three federal rules currently govern machine-readable files, the purpose of which is to make claims and pricing data widely available:
The hospital price transparency rule, requiring hospitals to disclose items and services, including the hospital standard (gross) charge, discounted cash price, and the minimum and maximum charge that a hospital has negotiated with a third-party payer.
The Transparency in Coverage final rule, requiring health plans to disclose in-network negotiated rates and historical out-of-network billed charges and allowed amounts.
The transparency rules under the Consolidated Appropriations Act, 2021, imposing additional disclosure obligations on plans and issuers.
The data disclosed under all three rules embed decades of provider and carrier pricing practices that transparency was designed to expose. If these datasets are used to train AI tools, the inherent biases and systemic flaws they reflect could be baked into the resulting models.
Fiduciary oversight
For fiduciaries of group health plans, any use of AI technology presents two fundamental problems: first, AI models are by their nature black boxes; and second, there are limits to AI competence and accuracy that cannot be eliminated.
The black-box nature of AI technology poses a significant problem for fiduciaries of group health plans. The opacity of the models makes ERISA-required monitoring and oversight extremely challenging. Robust third-party standards are needed to establish measurement science and interoperable metrics for AI systems, and independent certification of vendor AI systems would help fiduciaries meet ERISA standards. Ideally, the US Department of Labor (DOL) would issue guidance, as it has done in related contexts such as its cybersecurity guidance, which initially addressed ERISA-covered pension plans and was later extended to welfare plans.
Questions about AI competence and accuracy are equally daunting for plan fiduciaries. Current AI models are refined through iterative, error-driven training (backpropagation), and even the most advanced models achieve, at best, an asymptotic approach to full reliability. This raises a threshold legal question: whether, and to what extent, fiduciaries can prudently rely on AI technology. The practical answer is that they likely already do, sometimes unknowingly, which means that some validation of AI models is essential if fiduciaries are to meet ERISA’s requirements.
According to the DOL, ERISA fiduciaries must ensure that plan decisions are made “solely in the interest of participants and beneficiaries” with prudence and diligence. Delegating critical claim denials to opaque AI models risks violating this duty. At minimum, AI tools should be limited to serving in a decision-support role, with final authority resting in human hands.
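One way to operationalize that limit, assuming a simple work-queue design, is to treat the model’s output as advisory and route every adverse recommendation to a person for the final call. The sketch below is illustrative only; the queue, field names, and placeholder model are assumptions rather than any vendor’s actual system.

```python
# Minimal human-in-the-loop sketch: AI output is advisory; a person decides.
# All names and structures here are illustrative, not a real claims platform.

from datetime import datetime, timezone

human_review_queue = []   # stands in for a real work queue

def ai_assessment(claim: dict) -> dict:
    """Placeholder for a vendor model; returns a recommendation plus rationale."""
    return {"recommendation": "deny", "rationale": "service not covered under plan terms"}

def adjudicate(claim: dict) -> dict:
    assessment = ai_assessment(claim)
    decision = {
        "claim_id": claim["id"],
        "ai_recommendation": assessment["recommendation"],
        "ai_rationale": assessment["rationale"],
        "status": "pending human review",       # no denial is final at this step
        "queued_at": datetime.now(timezone.utc).isoformat(),
    }
    # Every adverse recommendation goes to a clinician; approvals could also
    # be sampled for review as part of ongoing monitoring.
    human_review_queue.append(decision)
    return decision

print(adjudicate({"id": "CLM-001"}))
```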
Vendor contracts
The generic reference to an ERISA-covered group health plan using AI masks an important reality: AI will typically be used on the plan’s behalf by TPAs and insurers and less often directly by the plan itself. For most large, multistate self-funded plans, this means large, national carriers acting in an administrative services only (ASO) capacity.
ASO providers have significant leverage in negotiating contract terms. A plan that delegates claims adjudication without inquiring whether and how the third party uses AI in that process risks violating fiduciary standards and inviting scrutiny.
Fiduciaries should work to understand and evaluate the risk that delegating plan functions to third parties poses and take appropriate steps in response. At a minimum, this would necessitate a group health plan insisting on AI-related contract provisions. For example, the group health plan should insist that:
Any claim denials must be reviewed and ultimately decided by a human clinician.
The ASO must disclose how AI is tested and audited.
Performance guarantees and indemnification provisions should cover AI-related failures.
Given the infancy of AI in group health plan administration, ASOs are expected to resist comprehensive AI provisions, highlighting the need for federal guidance and clearer market standards.
Legal landscape
State law considerations
The most prominent example of a state establishing rules governing the use of AI for group health plans is California’s Physicians Make Decisions Act, enacted as Senate Bill 1120. Effective January 1, 2025, it prohibits insurers from using AI in certain instances of claim and utilization review for health plans and bars them from relying solely on AI to deny any claim based specifically on medical necessity. At a high level, the act requires that medical professionals rather than AI make any determination as to whether a treatment is medically necessary.
The act and similar laws being considered in other states, such as Arizona, Maryland, Nebraska, and Texas, highlight the need for group health plans to be aware of and ensure state law compliance, particularly as they review and negotiate ASO contracts.
Federal agency guidance
The DOL and the US Department of the Treasury have issued limited, indirect guidance on the use of AI. This guidance illustrates how the government might handle AI issues in the benefit plan context.
A withdrawn DOL bulletin (Field Assistance Bulletin 2024-1) discussed the risks of using AI under the Family and Medical Leave Act. In the bulletin, the DOL generally warned that “without responsible human oversight, the use of such technologies may pose potential compliance challenges.” It noted that although similar violations may occur under human decision-making, AI-driven violations carry a higher risk of propagating across an entire task or workforce.
The US Department of the Treasury has studied AI in the financial services sector, noting in a report that while AI offers various benefits, its use can raise concerns over bias, explainability, and data privacy, concerns that apply equally to health plan administration.
Both agencies emphasized the need for human oversight, echoing ERISA fiduciary obligations.
AI Disclosure Act
Although not law, the proposed AI Disclosure Act of 2023 would require disclaimers on AI-generated content. If a version of this bill is passed, it would further require companies to disclose certain information about the use of AI. Even absent a mandate, plan sponsors may wish to consider voluntary disclosures when AI tools are used in communications or claims processes.
Next steps and action items
While AI technology holds immense promise for employer-sponsored group health plans, its deployment carries significant fiduciary, ethical, and regulatory challenges. Plans must still meet their legal obligations under ERISA and other applicable laws and should ensure that AI does not autonomously make clinical or claims decisions, that preauthorization systems are transparent and fair, that data biases are identified and corrected, and that fiduciary oversight mechanisms are robust. Fiduciaries must balance innovation with prudence. They must ensure that AI is a tool to enhance – not replace – human judgment. Regardless, AI use likely increases fiduciary oversight obligations and necessitates ongoing monitoring.
In light of the proliferation of AI and its impact on employee benefit plans, practical measures that plan sponsors should consider may include:
Inventory AI use: Identify where AI is being used by the plan and by TPAs, insurers, and other vendors in claims adjudication, preauthorization, and participant communications. A simple inventory sketch appears after this list.
Update fiduciary processes: Incorporate AI oversight into plan committee charters, meeting agendas, and monitoring practices.
Review contracts: In new and existing contracts, negotiate for disclosure, audit rights, performance guarantees, and human review of denials in situations where AI is involved.
Validate outputs: Request vendor documentation on testing, error rates, and bias mitigation, and review it with plan consultants during regular fiduciary committee meetings.
Stay informed on state laws: Monitor state legislative activity (e.g., California) and update compliance procedures.
Educate plan fiduciaries: Provide training to committee members on AI risks, limitations, and monitoring obligations under ERISA.
Maintain documented oversight: Keep records of inquiries, discussions, and decisions relating to AI to demonstrate prudence.
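For the inventory item above, the record a plan sponsor keeps does not need to be elaborate. The sketch below shows one possible structure; the fields and example entries are hypothetical and would need to be tailored to the plan’s actual tools and vendors.

```python
# Illustrative AI-use inventory for a group health plan; fields and entries
# are hypothetical examples, not a prescribed format.

ai_inventory = [
    {
        "tool": "ASO claims triage module",
        "operated_by": "third-party administrator",
        "function": "claims adjudication support",
        "decision_role": "advisory; human makes final determination",
        "data_used": "claims history, plan terms",
        "last_vendor_audit": "2025-03",
        "contract_provisions": ["human review of denials", "audit rights"],
    },
    {
        "tool": "participant chatbot",
        "operated_by": "plan sponsor",
        "function": "participant communications",
        "decision_role": "informational only",
        "data_used": "plan documents",
        "last_vendor_audit": None,
        "contract_provisions": [],
    },
]

# A simple gap check: flag tools with no documented vendor audit.
for entry in ai_inventory:
    if entry["last_vendor_audit"] is None:
        print(f"Follow up on vendor testing documentation for: {entry['tool']}")
```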
The use of AI in employee benefits is a complicated and rapidly evolving topic.
10 FAQs About California’s New Algorithmic Discrimination Rules
On October 1, 2025, California’s groundbreaking regulations on the use of artificial intelligence (AI) and automated decision-making systems (ADS) in employment practices go into effect. The regulations, advanced by the California Civil Rights Council, aim to prevent algorithmic discrimination against applicants and employees, ensuring compliance with California’s Fair Employment and Housing Act (FEHA). This article reviews ten frequently asked questions about the new requirements.
Quick Hits
California’s new AI regulations, effective October 1, 2025, prohibit the use of AI tools that discriminate against applicants or employees based on protected characteristics under the Fair Employment and Housing Act (FEHA).
Employers are required to keep all data related to automated decision systems for four years and are held responsible for any discriminatory practices, even if the AI tools are sourced from third parties.
The regulations target AI tools that cause disparate impacts in various employment processes, including recruitment, screening, and employee evaluations, while allowing legal uses of AI for hiring and productivity management.
Question 1: What are California’s new algorithmic discrimination regulations?
Answer 1: The new regulations prohibit the use of an ADS or AI tool that discriminates against an applicant or employee on any basis protected by FEHA. They make California one of the first states to adopt comprehensive rules addressing the growing use of AI tools in employment decisions.
Q2: When are the regulations effective?
A2: On October 1, 2025.
Q3: What exactly is an ADS?
A3: An ADS is “[a] computational process that makes a decision or facilitates human decision making regarding an employment benefit,” including processes that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” (Emphasis added.) Many AI hiring tools fall within this definition.
Q4: Are employers prohibited from using all AI tools?
A4: No. The regulations do not prohibit any particular tool or limit the legal ways in which employers may use AI tools, including to source, rank, and select applicants; facilitate the hiring process; and monitor and manage employee productivity and performance. Instead, they prohibit the use of any AI tool to discriminate intentionally or unintentionally against applicants or employees based on their membership in any class of employees protected from discrimination under FEHA.
Q5: Who is an “applicant”?
A5: An “applicant” is “[a]ny individual who files a written application or, where an employer or other covered entity does not provide an application form, any individual who otherwise indicates a specific desire to an employer or other covered entity to be considered for employment.” Except for recordkeeping purposes, an applicant also includes “an individual who can prove that they have been deterred from applying for a job by an employer’s or other covered entity’s alleged discriminatory practice.” The definition excludes “an individual who without coercion or intimidation willingly withdraws their application prior to being interviewed, tested or hired.”
Q6: What conduct is targeted?
A6: The regulations seek to limit the use of AI tools that rely on unlawful selection criteria and/or cause a disparate impact in the areas of recruitment, screening, pre-employment inquiries, job applications, interviews, employee selection and testing, placement, promotions, and transfer. The California Civil Rights Department (CRD) identifies several examples of automated employment decisions potentially implicated by the regulations.
“Using computer-based assessments or tests, such as questions, puzzles, games, or other challenges to: [m]ake predictive assessments about an applicant or employee; [m]easure an applicant’s or employee’s skills, dexterity, reaction-time, and/or other abilities or characteristics; [m]easure an applicant’s or employee’s personality trait, aptitude, attitude, and/or cultural fit; and/or [s]creen, evaluate, categorize, and/or recommend applicants or employees”
“Directing job advertisements or other recruiting materials to targeted groups”
“Screening resumes for particular terms or patterns”
“Analyzing facial expression, word choice, and/or voice in online interviews”
“Analyzing employee or applicant data acquired from third parties”
Q7: Are there new record-keeping requirements?
A7: Yes. Employers must keep for four years all automated-decision system data created or received by the employer or other covered entity dealing with any employment practice and affecting any employment benefit of any applicant or employee.
Q8: Who can be held responsible for algorithmic discrimination?
A8: Employers will be held responsible for the AI tools they use, whether or not they procured them from third parties. The final regulations also clarify that the prohibitions on aiding and abetting unlawful employment practices apply to the use of AI tools, potentially implicating third parties that design or implement such tools.
Q9: Are there available defenses?
A9: Yes, claims under the regulations are generally subject to existing defenses to claims of discrimination. The regulations also clarify that “evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results” is relevant to a claim of unlawful discrimination.
Q10: What should employers do now?
A10: Employers may want to consider the following steps:
Reviewing internal AI tool usage, practices, procedures, and policies to determine whether any tool being used would be covered by the regulations.
Piloting proposed AI tools before rolling them out to the workforce. This includes thoroughly vetting the steps taken by AI developers to avoid algorithmic discrimination.
Training the workforce on the appropriate use of AI tools.
Notifying applicants and employees when AI tools are in use, and providing accommodations and/or human alternatives where required.
Establishing an auditing protocol. Although auditing is not required by the regulations, engaging in anti-bias testing or similar proactive efforts may form the basis for a defense to future claims of algorithmic discrimination, and a fact-finder may consider the quality, efficacy, recency, and scope of any auditing effort, as well as the results of and response to that effort. A simple example of what such testing can involve appears after this list.
Reviewing record-keeping practices to be sure required data can be securely maintained for at least four years.
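As an illustration of the kind of anti-bias testing referenced above, the sketch below computes selection rates by group and applies the familiar four-fifths (80 percent) guideline as a screening threshold. The groups, counts, and threshold are assumptions for demonstration; the regulations do not prescribe this or any particular test, and a low ratio is a prompt for further review, not a legal conclusion.

```python
# Illustrative adverse impact screen using the four-fifths (80%) guideline.
# Group labels and counts are hypothetical; this is a screening heuristic,
# not a legal standard of proof under FEHA.

applicant_outcomes = {
    # group: (number selected, number of applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

selection_rates = {
    group: selected / applicants
    for group, (selected, applicants) in applicant_outcomes.items()
}

highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    flag = "review further" if impact_ratio < 0.8 else "no flag"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```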
AI Vendor Liability Squeeze: Courts Expand Accountability While Contracts Shift Risk
The landscape of AI vendor liability is undergoing a fundamental shift, creating an uncomfortable position for businesses deploying AI systems. Federal courts are pioneering legal theories that hold AI vendors directly accountable for discriminatory outcomes, while vendor contracts become more aggressive in shifting liability to customers. The result is a “liability squeeze” leaving businesses responsible for AI failures that they cannot fully audit, control, or understand.
The Mobley Precedent: Vendors as Legal Agents
The Mobley v. Workday case fundamentally altered AI liability frameworks. In July 2024, Judge Rita Lin allowed the discrimination lawsuit to proceed against Workday as an “agent” of companies using its automated screening tools. This marked the first time a federal court applied agency theory to hold an AI vendor directly liable for discriminatory hiring decisions.
The legal reasoning is both straightforward and profound in its implications. When AI systems perform functions that are traditionally handled by employees—such as screening job applicants—the vendor has been “delegated responsibility” for that function. Under this theory, Workday wasn’t merely providing software; it was acting as the employer’s agent in making hiring decisions.
In May 2025, the court preliminarily certified a nationwide collective covering applicants age 40 and over who were rejected through Workday’s AI screening system. Derek Mobley’s experience illustrates the scale at which algorithmic discrimination can operate: he applied to over 100 jobs through Workday’s system and was rejected within minutes each time. Unlike individual human bias, a single biased algorithm can multiply discrimination across hundreds of employers and thousands of applicants.
Contract Risk-Shifting Acceleration
While courts expand vendor liability, the contracting landscape tells a different story.
Market analysis from legal tech platforms reveals systematic risk-shifting patterns in vendor contracts. A recent study found that 88% of AI vendors impose liability caps on themselves, often limiting damages to monthly subscription fees. In addition, only 17% provide warranties for regulatory compliance, a significant departure from standard SaaS practices. And broad indemnification clauses routinely require customers to hold vendors harmless for discriminatory and other outcomes.
This creates dynamics where vendors develop and deploy AI systems knowing legal responsibility will ultimately rest with customers. Businesses using biased algorithms may find themselves sued for discrimination while discovering their vendor contracts prevent recourse for underlying defects.
The Practical Impact
Consider a mid-sized retailer using AI-powered applicant tracking. Under the Mobley precedent, both the retailer and AI vendor could face discrimination claims. However, the vendor’s contract likely contains:
Liability caps limiting damages (often to annual fees or a modest multiplier);
No or limited compliance warranties regarding fair hiring practices;
Broad indemnification requiring the retailer to defend the vendor against discrimination claims; and
Limited audit rights preventing algorithmic examination for bias.
The retailer, therefore, becomes legally responsible for discriminatory outcomes produced by algorithms it cannot examine, trained on data it cannot audit, with decision-making logic it cannot fully understand. This represents a fundamental breakdown in the traditional relationship between risk and control.
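The mismatch is easy to see in rough numbers. The back-of-the-envelope sketch below compares a hypothetical contractual cap of twelve months of fees against hypothetical class-wide exposure; every figure is an assumption chosen only to show the order-of-magnitude gap.

```python
# Back-of-the-envelope comparison of a contractual liability cap versus
# potential exposure; all figures are hypothetical.

monthly_subscription_fee = 5_000                 # assumed vendor fee
liability_cap = 12 * monthly_subscription_fee    # cap set at 12 months of fees

affected_applicants = 10_000                     # assumed class size
assumed_damages_per_applicant = 1_000            # assumed average exposure per claimant

potential_exposure = affected_applicants * assumed_damages_per_applicant

print(f"Contractual cap:      ${liability_cap:,}")
print(f"Potential exposure:   ${potential_exposure:,}")
print(f"Uncovered difference: ${potential_exposure - liability_cap:,}")
```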
Strict Liability Development
The liability squeeze may intensify as legal scholars and courts explore strict product liability theories for advanced AI systems, particularly “agentic” AI capable of autonomous multi-step tasks. Unlike negligence requiring proof of unreasonable conduct, strict liability focuses on whether products were defective and caused harm.
For AI systems that can autonomously enter contracts, make financial decisions, or take actions on behalf of users, a single “hallucination” or erroneous decision could constitute not just a performance failure but a product defect, exposing vendors to potentially unlimited liability unless their contracts successfully shift that risk to customers.
Strategic Response Framework
Aggressive Contract Negotiation
Legal teams must approach AI vendor negotiations as primary risk management exercises rather than standard procurement. Key provisions include:
Mutual liability caps rather than one-sided vendor protection;
Explicit compliance warranties for applicable regulations;
Audit rights allowing examination of algorithmic decision-making;
Indemnification for discrimination and bias claims caused by the AI tool; and
Performance guarantees with measurable metrics.
Insurance Strategy Evolution
Traditional insurance policies often leave coverage gaps for AI-related liabilities. Discrimination claims arising from biased algorithms don’t easily fit into cyber, errors & omissions, or general liability coverage. Organizations, therefore, should:
Conduct comprehensive coverage gap analyses for AI deployments;
Negotiate specific AI coverage endorsements;
Consider emerging specialized AI insurance products; and
Document governance practices to support coverage claims.
Internal Governance as Legal Defense
Robust internal AI governance is becoming the primary legal defense against discrimination claims, including:
Bias testing protocols with documented methodologies and results;
Regular algorithmic audits by qualified third parties;
Decision-making transparency with clear human oversight (“human-in-the-loop”) procedures;
Vendor due diligence processes evaluating bias mitigation practices; and
Impact monitoring systems tracking outcomes across protected classes.
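A minimal version of the transparency and impact-monitoring items above, assuming nothing more than an append-only log, might look like the sketch below. The field names and group labels are illustrative; a real implementation would need privacy, security, and counsel review before collecting or aggregating protected-class data.

```python
# Illustrative oversight log supporting decision transparency and impact
# monitoring; field names and categories are hypothetical.

from collections import defaultdict

decision_log = []   # stands in for durable, access-controlled storage

def record_decision(candidate_id, ai_recommendation, human_decision, reviewer, group):
    decision_log.append({
        "candidate_id": candidate_id,
        "ai_recommendation": ai_recommendation,
        "human_decision": human_decision,   # the final call rests with a person
        "reviewer": reviewer,
        "group": group,                     # self-reported, used only in aggregate
    })

def selection_rates_by_group(log):
    """Aggregate outcomes so drift across groups can be reviewed over time."""
    totals, selected = defaultdict(int), defaultdict(int)
    for entry in log:
        totals[entry["group"]] += 1
        if entry["human_decision"] == "advance":
            selected[entry["group"]] += 1
    return {g: selected[g] / totals[g] for g in totals}

record_decision("C-101", "advance", "advance", "reviewer_1", "group_a")
record_decision("C-102", "reject", "advance", "reviewer_1", "group_b")
record_decision("C-103", "reject", "reject", "reviewer_2", "group_b")
print(selection_rates_by_group(decision_log))
```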
Looking Ahead
The AI vendor liability landscape is moving in two directions at once: courts are expanding accountability while contracts are limiting it. This divergence creates immediate practical risks for businesses deploying AI systems.
The most dangerous assumption is that vendor liability shields will hold. As the Mobley case demonstrates, legal theories evolve faster than contract terms. Businesses relying solely on vendor liability caps may find themselves holding full responsibility for algorithmic failures they could not control or predict.
The solution isn’t avoiding AI; it’s approaching AI deployment with understanding that ultimate liability increasingly rests with deployers. This reality demands rigorous vendor due diligence, thoughtful contract negotiation, and comprehensive internal governance.
In the age of algorithmic decision-making, careful attention to liability allocation isn’t paranoia; it’s prudent risk management for an evolving legal landscape where responsibility and control no longer align as traditional business models assumed.