Navigating FLSA Overtime Exemptions in AI-Integrated Positions
Over the past two years, the use of artificial intelligence (AI) by employees—especially within white-collar professions—has surged, with nearly twice as many workers now relying on AI tools for a portion of their daily tasks.1 This rapid growth is transforming routine workflows and the very contours of job responsibilities, as professionals leverage AI not only for analysis and reporting, but also for decision support and the automation of complex processes. While responsible AI use in the workplace may mean more efficiency and accuracy, it can also complicate employee classification under the Fair Labor Standards Act (FLSA). This alert considers how the integration of AI into an employee’s job duties may impact a position’s FLSA overtime exemption status.
FLSA Overtime Exemptions
All U.S. employees must be classified as either “exempt” or “non-exempt” from the overtime pay requirements of the FLSA.2 If an employee is “non-exempt,” they are entitled to minimum wage and to overtime pay, at one and one-half times their regular rate of pay, for hours worked in excess of 40 in a workweek. To qualify as “exempt,” employees must satisfy the FLSA’s salary threshold3 and meet certain tests regarding their job duties. For example:
The administrative exemption requires, among other things, the exercise of independent judgment and discretion over matters of significance;
The executive exemption requires, among other things, authority over personnel decisions, such as hiring, supervision, discipline, promotion, and termination; and
The learned professional exemption requires, among other things, advanced knowledge and consistent use of discretion and judgment.
At a high level, the common thread in the FLSA’s job duties tests is that employees must exercise a certain degree of discretion and judgment to be classified as “exempt.”
Integration of AI
AI threatens to replace independent judgment, discretion, and other key elements of decision making, which could cause historically “exempt” jobs to fail the FLSA’s job duties tests. The result: increased risk of misclassification claims and resulting damages in the form of unpaid overtime, back pay, liquidated damages, attorney fees, and related penalties.
Consider, for example, a department supervisor or general manager. This type of position generally is responsible for supervising a team—i.e., hiring, assigning daily tasks, handling employee evaluations, and making termination decisions. However, with the implementation of AI-powered office management systems that automatically allocate workloads and generate staffing recommendations based on historical performance and real-time metrics, the manager’s traditional duties may now be automated. Although the employee may still hold a managerial title, the actual authority and discretion formerly exercised over personnel and operational decisions may be significantly reduced, shifting the role toward simply monitoring AI outputs and rubber-stamping automated decisions.
At a more granular level, consider a professional, such as a laboratory scientist—a position historically classified as exempt because its primary duties require the consistent use of independent discretion and judgment in the interpretation of complex diagnostic data. If the employer integrates a sophisticated AI diagnostic platform capable of rapidly analyzing lab results and flagging potential anomalies, the employee no longer begins by manually reviewing the raw data; instead, they start their work by reviewing the AI’s preliminary findings. Even with AI integration, the employee may still need to validate the results, flag inconsistencies, or override recommendations based on clinical context. The AI may, for example, misinterpret rare disease markers or fail to recognize certain test result nuances due to a lack of sufficient data. In this situation, the human employee still plays a role, but, increasingly, the work may resemble that of an overseer or editor rather than a primary analyst: someone following a checklist, rather than independently deciding which actions should be taken.
This shift raises a key question: if the employee’s duties become more about confirming or lightly editing AI-generated findings rather than using deep expertise to interpret data from scratch, does the role still qualify for an FLSA exemption? As with the difference between a journalist rewriting press releases and a writer producing original content, a lab scientist or general manager primarily validating AI outputs rather than exercising independent judgment may begin to resemble a “non-exempt” employee under the FLSA. Thus, as AI assumes functions once performed by humans, employers must ask: are employees still making meaningful decisions, or merely reviewing or abiding by AI-generated outputs?
Take-Aways
As courts and regulatory bodies evaluate exemption status based on the employee’s actual duties—not just job titles or descriptions—employers must consider whether AI systems have fundamentally altered the nature of an employee’s role. Perhaps the FLSA overtime exemptions will be adjusted to account for the changes AI is bringing to the workplace, but, until then, it is essential that employers:
Conduct FLSA classification audits for AI-integrated roles, considering whether the role requires the use of judgment that is real, substantial, and on matters of significance, and whether the employee has the actual authority to interpret or override AI outputs, not merely to monitor them.
Update job descriptions to clearly define judgment-based responsibilities, not just technical tasks.
Monitor role evolution to determine when and if reclassification is required as AI capabilities increase and human oversight potentially diminishes.
Train human resources and legal teams to recognize how AI may affect FLSA exemption status and to adjust compliance strategies accordingly.
1 See AI Use at Work Has Nearly Doubled in Two Years (June 2025), available at https://www.gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx
2 Please note that this client alert does not consider state and local laws that may be applicable. Many state and local governments have their own wage and hour rules, which may impose different (and often stricter) tests to determine whether an employee qualifies as “exempt” from minimum wage and overtime pay.
3 To qualify for the FLSA executive, administrative, or professional exemption, the employee must be paid on a salary basis at a rate at least equal to the federal salary standard, currently US$684 per week (which equates to an annual salary of US$35,568). To be paid on a salary basis, the employee must regularly receive each pay period a predetermined amount constituting all or part of their compensation. State and local law may impose higher compensation minimums.
European Commission Issues Draft Guidance on Serious Incident Reporting under EU AI Act
On September 26, 2025, the European Commission (EC) published draft guidance on serious incident reporting requirements for high-risk AI systems under the EU AI Act. For organizations developing or deploying AI systems that may fall within the Act’s high-risk AI system scope, understanding these new reporting obligations is essential for compliance planning.
Key Takeaways
The Commission published a draft incident reporting template and guidance document on September 26, 2025.
Providers of high-risk AI systems will be required to report “serious incidents” to national authorities.
Reporting timelines vary from two (2) to fifteen (15) days depending on the severity and type of incident.
Public consultation is open until November 7, 2025.
Understanding the Incident Reporting Framework
Article 73 of the EU AI Act establishes a tiered reporting system for serious incidents involving high-risk AI systems. While these requirements will not take effect until August 2026, the newly released draft guidance offers valuable insights into the Commission’s expectations.
The reporting framework serves multiple purposes: creating an early warning system for harmful patterns, establishing clear accountability for providers and users, enabling timely corrective measures, and fostering transparency to build public trust in AI technologies.
What Qualifies as a “Serious Incident”?
Under Article 3(49) of the Act, a serious incident occurs when an AI system incident or malfunction directly or indirectly leads to:
Death of a person, or serious harm to a person’s health;
Serious and irreversible disruption of critical infrastructure;
Infringement of fundamental rights obligations under EU law; or
Serious harm to property or the environment.
What is particularly important in the draft guidance is its emphasis on both direct and indirect causation. An AI system providing incorrect medical analysis that leads to patient harm through subsequent physician decisions would qualify as an indirect serious incident. This means organizations must account for downstream effects in their risk management frameworks.
Intersection with Existing Reporting Regimes
For clients managing multiple compliance frameworks, the guidance provides welcome clarification on overlapping reporting obligations. High-risk AI systems already subject to equivalent reporting obligations under other EU laws (such as NIS2, DORA or CER) generally need only report fundamental rights violations under the AI Act.
This reflects the Commission’s attempt to minimize duplicative reporting burdens, though the practical implementation still requires careful cross-functional coordination between AI governance, legal, and compliance teams.
Practical Implications for Organizations
Organizations should begin mapping their AI systems against the high-risk criteria and preparing internal processes for incident detection, investigation, and reporting. Key considerations include:
Establishing clear incident response protocols;
Implementing monitoring systems to detect potential serious incidents;
Developing investigation procedures that preserve evidence;
Creating cross-functional teams to manage reporting obligations; and
Updating risk assessments to account for serious incident scenarios.
Next Steps
We encourage clients to participate in the public consultation, which remains open until November 7, 2025. The Commission is particularly seeking feedback and examples regarding the interplay with other reporting regimes.
Organizations should also begin reviewing their AI governance frameworks to ensure they can effectively implement these reporting requirements when they become applicable in August 2026.
McDermott+ Check-Up: September 26, 2025
THIS WEEK’S DOSE
A government shutdown looms. No progress was made this week to fund the government beyond the September 30, 2025, deadline.
HHS takes action on autism causes and treatments. The US Department of Health and Human Services (HHS) released findings on a link between acetaminophen use during pregnancy and childhood autism.
GAO releases reports on provider consolidation and urban hospital closures. One report examines the impact of provider consolidation on quality, cost, and access, and the other reviews causes of urban hospital closures.
OSTP requests information on AI regulatory reform. The Office of Science and Technology Policy (OSTP) request follows the administration’s July 2025 artificial intelligence (AI) action plan.
Federal court vacates Biden-era RADV rule. The 2023 rule modified how CMS recovers overpayments from Medicare Advantage plans after risk adjustment data verification (RADV) audits.
CONGRESS
A government shutdown looms. After the Republican-led bill to fund the government through November 21, 2025, passed the House but failed in the Senate last week, both chambers have been out of session all week for a scheduled recess. As the stalemate between the two parties stretched into this week, President Trump initially agreed to meet with Democratic leaders Schumer (NY) and Jeffries (NY) on Thursday to potentially negotiate an agreement on the short-term continuing resolution (CR), but he later cancelled the meeting after reported pushback from Republican leaders Thune (SD) and Johnson (LA).
Democratic leadership continues to state that they will not vote for a CR unless it addresses policies to protect healthcare access, such as repealing Medicaid provisions in the One Big Beautiful Bill Act (OBBBA) or making the Marketplace enhanced advanced premium tax credits (APTCs) permanent. While some moderate Republicans support a short-term APTC extension, or some modified version of one, others are firmly opposed. This week, some Republicans also began to discuss a need to address Hyde protections regarding abortion if enhanced APTCs were to be extended. This issue is a non-starter for Democrats, who have noted that Hyde protections are already included in the Affordable Care Act.
The Senate will return from recess on Monday. The House is not currently scheduled to be back in session until after the fiscal year (FY) deadline (although House Minority Leader Jeffries has called Democrats back to Capitol Hill early next week), making a government shutdown of some duration very likely as of this moment. Some healthcare programs and policies, known as extenders, will also expire on September 30, 2025, if a funding bill does not pass. These include Medicare telehealth flexibilities, the hospital at home waiver, and community health center funding. Federal agencies began work this week on contingency plans in case of a shutdown, but public HHS contingency plans haven’t been updated since 2024. On Wednesday, the Office of Management and Budget sent a memo directing agencies to create a list of workers to be laid off in the case of a shutdown, increasing existing concerns about how a shutdown would play out.
ADMINISTRATION
HHS takes action on autism causes and treatments. Following a press conference, HHS released a fact sheet outlining interagency findings and action plans to address increasing rates of autism:
HHS reported a correlation between acetaminophen use in pregnancy and increased risk of autism in children, and the US Food and Drug Administration (FDA) announced its intention to change the safety label for acetaminophen.
FDA released a formal notice to physicians informing them of the correlation between acetaminophen and autism and asking them to consider minimizing its use for pregnant patients, while also acknowledging that acetaminophen is “the safest over-the-counter alternative in pregnancy” compared to other over-the-counter painkillers and fever reducers.
FDA announced its initiative to update labeling for leucovorin calcium tablets, used to treat cerebral folate deficiency, to authorize access for children with speech-related deficits associated with autism.
The National Institutes of Health Autism Data Science Initiative announced 13 new awards totaling more than $50 million to research autism prevalence, etiology, and treatment.
The HHS press release can be found here.
GAO releases reports on provider consolidation and urban hospital closures. The US Government Accountability Office (GAO) issued a report showing increased physician consolidation with hospital systems, corporate entities, and private equity firms, and finding that all 10 of the largest health insurers have acquired physician practices or management services organizations in recent years. While studies reviewed by GAO found that such consolidation generally led to higher spending in traditional Medicare, they showed no change in the quality of care provided. The GAO report was required by the FY 2023 Labor-HHS appropriations report.
GAO also issued a report detailing the factors contributing to closures of urban hospitals, including financial declines, aging physical infrastructure, declining volume of inpatient admissions, challenges operating independently without the support of a multihospital system, poor management practices, and separate ownership interests. Sen. Grassley (R-IA), Rep. Sewell (D-AL), and Rep. Sanchez (D-CA) requested the study.
OSTP requests information on AI regulatory reform. The request for information notes that “the realization of the benefits from AI applications cannot be done through complete de-regulation, but require policy frameworks, both regulatory and nonregulatory” to foster innovation while protecting the public interest. OSTP seeks information on federal regulations that hinder AI development, deployment, or adoption, particularly rules that were established before current AI capabilities were anticipated. OSTP invites responses to one or more of the following questions:
What AI activities, innovations, or deployments are currently inhibited, delayed, or otherwise constrained because of federal statutes, regulations, or policies?
What specific federal statutes, regulations, or policies present barriers to AI development, deployment, or adoption in your sector?
Where existing policy frameworks are not appropriate for AI applications, what administrative tools (e.g., waivers, exemptions, experimental authorities) are available but underutilized?
Where specific statutory or regulatory regimes are structurally incompatible with AI applications, what modifications would be necessary to enable lawful deployment while preserving regulatory objectives?
Where barriers arise from a lack of clarity or interpretive guidance on how existing rules cover AI activities, what forms of clarification (e.g., standards, guidance documents, interpretive rules) would be most effective?
Are there barriers that arise from organizational factors that impact how federal statutes, regulations, or policies are used or not used? How might federal action appropriately address them?
Comments are due October 27, 2025.
COURTS
Federal court vacates Biden-era RADV rule. The US District Court for the Northern District of Texas sided with plaintiffs to overturn the February 2023 final rule, which eliminated an adjuster used in RADV audits. Plaintiffs claimed the rule was arbitrary and capricious and that CMS abused its discretion by applying the new policy retroactively. It is unclear whether the Trump Administration will appeal the ruling and what impact it will have on the May 2025 announcement to audit all MA contracts annually and expedite remaining RADV audits.
QUICK HITS
USDA terminates future Household Food Security Reports. The US Department of Agriculture (USDA) announced it will terminate future Household Food Security Reports, a 30-year study initiated under the Clinton administration to support expanded Supplemental Nutrition Assistance Program eligibility and benefit allotments.
USDA swears in Ben Carson as health advisor. Ben Carson, MD, former US Department of Housing and Urban Development secretary and a retired pediatric neurosurgeon, will serve as USDA’s national advisor for nutrition, health, and housing. He will advise both President Trump and USDA Secretary Rollins on nutrition and rural healthcare matters and will join Secretary Rollins on the Make America Healthy Again (MAHA) Commission.
CMS approves temporary Georgia work requirements waiver extension. The Centers for Medicare & Medicaid Services (CMS) approved an extension of Georgia’s Section 1115 waiver, which provides Medicaid coverage to the expansion population as long as beneficiaries meet specified work requirements, through December 31, 2026. CMS stated it will work with Georgia to ensure compliance with the OBBBA work requirements provision, which is effective January 1, 2027. This approval follows a recent GAO report on Georgia’s waiver, requesting that CMS implement recommendations to improve federal oversight of administrative costs.
CMS increases supplemental benefit information listed on Medicare Plan Finder. For contract year 2026, CMS will expand the display of Medicare Advantage (MA) supplemental benefits on the MA Medicare Plan Finder (MPF). CMS’s notice follows last week’s final rule requiring plans to submit provider directory information for the MPF and CMS’s announcement of a special enrollment period for 2026 for individuals who enrolled in a plan through the MPF and later learned that the provider directory information was incorrect.
Senate Democrats urge CMS to halt WISeR Model. In a letter, 18 Democratic senators urged CMS to halt the Wasteful and Inappropriate Service Reduction (WISeR) Model pending a full analysis of the program’s impact on patient access, including input from beneficiaries, advocates, providers, and suppliers. The senators expressed concerns that the model isn’t voluntary and relies on AI.
ACF expands supplemental nutrition funding for Head Start. The Administration for Children and Families (ACF) announced a $61.9 million investment in supplemental nutrition funding for more than 290 Head Start programs across the country. The funding follows the recently published MAHA strategy, which aims to promote access to nutritional foods for children and families. The full list of grantees and award amounts can be found here.
SAMHSA provides more than $1.5 billion in substance use grants. The Substance Abuse and Mental Health Services Administration (SAMHSA) announced $1.5 billion in FY 2025 funding for State Opioid Response (SOR) and Tribal Opioid Response grants. The agency also announced $45 million in supplemental funding for SOR recipients to focus on sober or recovery housing for young adults following the “Ending Crime and Disorder on America’s Streets” executive order.
US Department of Commerce initiates investigation into medical device imports. The investigation will focus on the national security impact of imports of personal protective equipment, medical consumables, and medical equipment and devices. The investigation could lead to the imposition of, or increase in, tariffs on such imports.
BIPARTISAN LEGISLATION SPOTLIGHT
A bipartisan group of five representatives and 19 senators reintroduced the Safe Step Act (H.R. 5509/S.2903). The legislation, which was introduced in previous Congresses, would require group health plans to implement an exemptions process for medication step therapy protocols.
NEXT WEEK’S DIAGNOSIS
Any congressional action next week will likely be focused on government funding; no healthcare hearings are currently scheduled. The Senate is scheduled to return to session on Monday. At present, the House is not scheduled to be in session next week, although that is subject to change should the government shut down. If a funding deal is not reached, the government will shut down on October 1, 2025. The parameters of government agency operations during a shutdown remain unclear as we await publicly available contingency staffing plans.
Are You Ready? – New York DFS Cybersecurity Regulation Approaches Its Final Compliance Phase
Are you operating as a financial services business? Are you aware of the new cybersecurity rules that will soon apply to New York–regulated financial firms? If you are a financial services business and are unaware of the upcoming compliance date for New York’s cybersecurity requirements, please mark your calendar. On November 1, 2025, the final phase of compliance under the New York Department of Financial Services’ (DFS) 23 NYCRR Part 500 (Cybersecurity Regulation) will take effect. These requirements stem from the second amendment (Second Amendment) to the Cybersecurity Regulation, which was originally adopted in 2017; the Second Amendment’s requirements have been rolling out in phases since it was finalized in November 2023.
Compliance is mandatory for firms licensed or supervised by DFS. The Cybersecurity Regulation applies broadly to financial services companies regulated by DFS, including those engaged in banking, insurance, mortgage lending and servicing, money transmission, and virtual currency (i.e., crypto) activities, among others. With ransomware, extortion, and third-party breaches continuing to rise, DFS has made it clear that cybersecurity is now a core compliance obligation.
From 2017 to Today
The DFS Cybersecurity Regulation was first issued in 2017, making New York the first state in the country to impose binding cybersecurity standards across the financial services sector operating within its jurisdiction. Early requirements included appointing a Chief Information Security Officer (CISO), adopting multi-factor authentication (MFA) requirements, performing regular risk assessments, maintaining records of audit trails, and reporting cybersecurity events within 72 hours of discovery.
Since then, the cyber threat landscape has changed significantly, with ransomware, supply-chain compromises, and cloud vulnerabilities putting pressure on financial firms. In response, DFS adopted the Second Amendment to the 2017 Cybersecurity Regulation, effective November 1, 2023. The amendment introduced stricter governance, technical, and reporting standards, with staggered compliance dates designed to give firms time to adapt. The final compliance phase, which starts on November 1, 2025, will impose the most significant cybersecurity requirements to date.
For additional background on annual compliance submissions under the Cybersecurity Regulation, see Katten’s 2025 advisory.
Background on the Cybersecurity Regulation
The Cybersecurity Regulation is structured as a risk-based framework, not a one-size-fits-all checklist. Each covered entity must establish a cybersecurity program that protects sensitive information, detects and responds to threats, and ensures business continuity. Programs must be led by a CISO, overseen by the organization’s board or other senior governing body, and supported by written policies covering areas such as access controls, vendor oversight, data retention, incident response, and disaster recovery.
Transparency is a central feature of the regulation. Companies must notify DFS promptly of significant cybersecurity incidents and submit an annual certification of material compliance or an acknowledgment of noncompliance with a remediation plan. The Second Amendment builds on this foundation by raising governance expectations, strengthening technical requirements, and tailoring obligations for larger institutions now classified as “Class A companies.”
The Road to Phase 3: November 1, 2025
DFS designed the Second Amendment to roll out in phases over two years. Early deadlines required faster incident reporting, annual certifications, and updated risk assessments, while later milestones strengthened governance, encryption policies, and business continuity planning. More recent obligations added advanced technical safeguards such as automated scanning, privileged access management, and centralized monitoring.
All of this leads to phase 3, the final compliance deadline on November 1, 2025, when every major element of the amended regulation becomes fully enforceable. By this date, covered entities must show that critical safeguards are operational and producing evidence of effectiveness, including the following:
Expanded MFA (500.12). Broader MFA coverage across users and systems, with exceptions only where the CISO has approved documented compensating controls.
Comprehensive asset inventory (500.13). A detailed, validated catalog of all information system assets, including ownership, location, sensitivity, support lifecycle, and recovery objectives.
Continuous vulnerability management (500.5). Automated scans, manual reviews, and prompt remediation prioritized by risk.
Strict access privilege controls (500.7). Annual reviews, enforcement of least privilege, rapid deprovisioning, and Privileged Access Management tools for Class A companies.
Enhanced monitoring and training (500.14). Logging of user activity, malware filtering, annual social-engineering training, and, for Class A companies, advanced tools such as endpoint detection and centralized logging.
Phase 3 is not about drafting policies. It is about proving consistent, auditable implementation. DFS examiners will expect to review logs, scan results (i.e., the output of automated reviews that detect potential vulnerabilities and configuration or compliance weaknesses), board minutes, training records, and remediation documentation. Firms that wait until the deadline to implement these safeguards risk being caught unprepared when DFS demands proof.
What This Means for Firms
The DFS Cybersecurity Regulation has become one of the most detailed and enforceable cybersecurity frameworks in the United States. It elevates cybersecurity from an information technology function to a core governance and compliance discipline. Boards and executives will need to show not only that they have policies, but that those policies are actively implemented and enforced. The annual certification process places personal accountability squarely on leadership.
Cybersecurity programs must demonstrate operational maturity, with asset inventories, vulnerability scans, access controls, vendor oversight, training, and incident response all generating clear, auditable evidence of ongoing use. Smaller firms must also watch exemption thresholds carefully, as even modest growth in employees, revenue, or assets can disqualify them from relief. If that happens, they will have only 180 days to comply fully.
If you are unsure whether your firm qualifies as a Class A company, whether your existing program can withstand stricter scrutiny, or how to prepare your next compliance certifications, now is the time to act. The November 1, 2025, deadline is the turning point for firms with New York-based operations subject to the Cybersecurity Regulation. Preparing early can make all the difference between regulatory scrutiny and regulatory confidence.
AI in Civil Defense Litigation: A Powerful Tool… When Used Correctly
The rise of sophisticated artificial intelligence (AI) is transforming numerous industries, and the legal field is no exception. From legal research and document review to trial, AI offers the promise of increased efficiency and of hyper-comprehensive, novel insights. However, AI tools in legal settings are not immune to errors. While the allure of AI-driven efficiency in tasks such as discovery, predictive coding, and case analysis is strong, legal professionals must remain vigilant about the technology’s potential pitfalls and verify all AI-based work product. It is crucial to approach the integration of AI in critical areas such as civil defense litigation with a clear understanding of both its potential and its inherent limitations.
The Promise of AI in Civil Defense:
● Enhanced Efficiency: AI can process vast amounts of data far more quickly than a human lawyer alone, accelerating tasks and streamlining costs (Impact of AI on Law firms). One of the most profound impacts of AI in civil defense litigation is the exponential increase in operational efficiency across the legal workflow. This enhancement goes far beyond simple document review to touch every facet of case preparation and management. By automating repetitive and time-consuming tasks, AI frees up lawyers to focus their expertise where it matters most: strategic legal thinking, complex problem-solving, and client-facing work.
For example, in the initial stages of a complex case involving copious amounts of electronically stored information, AI-powered eDiscovery tools can be a game-changer. Instead of having a team of associates and paralegals manually tag, categorize, and summarize thousands—or even millions—of documents, AI can perform this task in a fraction of the time. These tools can automatically extract key entities, such as names, dates, and organizations, and analyze documents for relevance using advanced machine learning algorithms. The result is a highly prioritized and organized dataset, allowing the legal team to immediately begin analyzing the most pertinent information rather than being bogged down by preliminary sorting.
Beyond eDiscovery, AI tools streamline the entire legal research process. While traditional research involves extensive manual searches through statutes, case law, and regulations, AI-powered platforms can rapidly analyze vast databases of verified legal precedents. These systems can not only find on-point cases but also identify patterns in judicial decisions, anticipate counterarguments, and suggest relevant points of law that might be missed by human researchers. This capability not only reduces the time spent on research but also elevates the quality and comprehensiveness of the legal arguments developed.
Furthermore, AI-driven case management systems help automate routine administrative tasks that consume valuable time. These systems can automatically generate document chronologies, track deadlines, manage filing schedules, and monitor case developments, including new filings in similar cases. This automation ensures that no critical task is overlooked and that the legal team has a real-time, centralized view of the case status. For clients, this translates into quicker turnaround times and more proactive case management. Ultimately, by offloading the grunt work to AI, lawyers are empowered to act as strategic advisors rather than information processors, enhancing their productivity and delivering better outcomes.
● Deeper Insights: Building on the ability of AI to process information at a scale and speed impossible for humans, its most valuable contribution to civil defense is the discovery of truly deeper, hidden insights. This goes far beyond simple summarization and is more akin to an investigative superpower, revealing connections and patterns that form the foundation of a robust defense.
For instance, in a product liability case involving extensive consumer feedback and warranty claim data, an AI can analyze millions of data points to identify a subtle, previously unrecognized pattern. It might detect that a specific batch of a product component, manufactured during a certain week, has a disproportionately high failure rate, but only when used under a particular, rare environmental condition. This kind of latent insight could be easily missed by human reviewers, who might be overwhelmed by the sheer volume of data or biased towards more obvious patterns. By uncovering this, AI enables the defense to shift its strategy from a broad attack on the product to a focused, defensible position concerning a specific and limited manufacturing defect.
Similarly, in complex financial litigation, AI’s pattern recognition capabilities can reveal sophisticated fraud schemes. By analyzing transaction data, communications, and public filings, an AI might flag a network of seemingly unrelated transactions and communications between various shell companies and individuals, revealing a coordinated attempt to misrepresent assets. These connections might be invisible to a human eye but are laid bare by an AI’s ability to process and cross-reference information at a micro and macro level simultaneously. This is a level of forensic analysis that can fundamentally change the course of a lawsuit.
Moreover, AI can serve as a “co-counsel” by testing hypothetical case theories against a vast database of legal precedents and court records. A lawyer can query the system with a novel legal theory, and the AI can analyze how similar arguments have fared in the past across various jurisdictions, with different judges, or under specific legal interpretations. This provides data-driven support for or against a particular legal strategy, allowing the defense team to round-table a case with unprecedented informational backing. This level of insight moves beyond intuition and experience, grounding legal strategy in quantifiable data and offering a significant competitive advantage.
● Predictive Capabilities: Beyond predicting static outcomes, AI’s predictive capabilities extend to forecasting various dynamic elements of the litigation process, fundamentally altering the strategic planning of a civil defense case. It moves the practice from relying solely on an attorney’s intuition and jurisdictional experience to a more data-driven, probabilistic approach.
For example, AI can analyze vast datasets of court filings, judicial histories, and party behaviors to generate motion-specific forecasts. When considering a motion to dismiss, for instance, AI can analyze how a particular judge has ruled on similar motions in the past, considering factors such as the type of legal issue, the procedural stage, and even the opposing counsel’s past track record. A lawyer can receive a data-backed probability of success, allowing them to make a more calculated decision about whether to pursue the motion and how to frame their arguments for maximum impact.
Similarly, AI can predict the likely trajectory and timeline of a case. By analyzing historical case data within a specific jurisdiction and under similar circumstances, AI can provide a more accurate estimate of how long discovery might last, when a motion for summary judgment is likely to be filed, and the probability of reaching a settlement at different stages. This capability allows defense lawyers to provide their clients with more realistic expectations regarding costs, timelines, and potential outcomes, strengthening the lawyer-client relationship. It also enables more efficient resource allocation, as firms can prioritize cases with a higher probability of success or those requiring more intensive attention.
Another powerful application of predictive AI is in settlement strategy. AI models can forecast potential settlement values and probabilities by analyzing past settlements in comparable cases, taking into account factors like jurisdiction, damages, verdicts and opposing counsel’s history. This provides attorneys with a data-driven negotiation framework, enabling them to approach settlement talks with a clearer understanding of potential outcomes and risks. By quantifying the risks of proceeding to trial versus accepting a settlement, AI empowers legal teams to advocate more effectively for their clients’ best interests and pursue resolutions that offer the most certain and beneficial outcome.
● Cost Reduction: In addition to streamlining laborious tasks like document review, AI offers significant cost savings through more strategic applications throughout the litigation lifecycle. While the billable hour has long been the industry standard, AI’s ability to compress the time required for certain tasks is forcing a reevaluation of billing models. Forward-thinking firms can use AI to offer competitive pricing and a reduced overall cost per case.
Furthermore, AI-powered predictive analytics can help defense teams and their clients make more informed decisions about case strategy and settlement negotiations. By analyzing historical case data, including verdicts, settlement amounts, and opposing counsel’s track record, AI can provide a data-driven risk assessment. This reduces financial uncertainty for clients and allows legal teams to allocate resources more effectively, pursuing settlement when the data suggests a high probability of an expensive, prolonged trial, or preparing for trial with greater confidence when the data is favorable.
Another area of substantial cost reduction is in legal invoice review and matter management. For in-house legal departments and insurers, AI-driven billing software can automatically audit invoices from outside counsel, flagging entries that violate billing guidelines, checking for duplicate charges, or identifying unusual billing patterns. This moves beyond simple compliance checking to offer sophisticated spend-management, where AI identifies areas of inefficiency and helps negotiate better rates and fee structures. The result is not only more accurate billing but also a more predictable and controlled legal budget for the client.
Finally, AI is enabling legal teams to “insource” more work, reducing reliance on expensive external vendors for routine tasks. For example, instead of outsourcing high-volume, low-complexity document review or due diligence to a third party, AI tools can enable in-house legal teams to handle this work internally at a fraction of the cost. This empowers legal departments to manage larger workloads without a proportional increase in staff, allowing legal professionals to focus their expertise on high-value, strategic work that truly requires their unique judgment.
The Inherent Limitations:
Despite these benefits, it’s essential to acknowledge the limitations of AI in its application to litigation, especially with the use of open, public-facing LLMs.
● The “Hallucination” Factor: Just as an open-source large language model (LLM) can sometimes generate incorrect or nonsensical information, closed-system AI legal tools can produce inaccurate summaries, misinterpret case law, or even fabricate non-existent information. A common source of such incorrect outcomes is human error, such as incorrect prompting or improper use of the AI tools. It is imperative for any lawyer who is integrating AI into their practice to be fully trained on the use of the software, including query generation and the capabilities of the system being implemented. Most notably, verification of all AI output is not merely prudent but ethically required, to ensure all work is factually and legally accurate.
● Lack of Nuance and Contextual Understanding: Legal arguments often hinge on subtle nuances, contextual understanding, and human judgment. AI, while adept at processing data, may struggle to grasp these complexities in the same way an experienced legal professional can. Here again, it is imperative that the lawyer query the system with accurately framed legal prompts and verify its output. While AI is a powerful tool for streamlining litigation, it still requires a lawyer in the driver’s seat.
● Ethical Considerations: The use of AI in litigation introduces novel ethical challenges, with client confidentiality and data privacy being paramount. Firms must ensure any AI system, whether proprietary or third-party, has robust data encryption and security protocols to protect sensitive client information. Lawyers are ethically obligated to discuss the use of such technology with their clients, obtaining informed consent for any potential sharing of confidential data with AI vendors. Additionally, legal professionals must vet AI systems to understand how client data is handled, ensuring compliance with professional conduct rules and upholding the duty of confidentiality.
Another critical ethical concern is algorithmic bias, which can be inadvertently embedded in AI systems trained on biased historical data. This can lead to unfair outcomes and risks violating ethical duties related to fairness and due process. Legal teams must favor transparent AI systems that allow for auditing and validation of their decision-making processes. This requires a proactive approach to continually monitor for and mitigate bias, ensuring that the technology does not perpetuate or amplify existing societal prejudices, especially in sensitive areas like sentencing or case outcome predictions.
Ultimately, AI serves as a tool to support, not replace, a lawyer’s professional judgment. Competence requires legal professionals to understand the capabilities and limitations of AI, thoroughly verifying any AI-generated output, such as legal research or document drafts. Over-reliance on AI can lead to errors, including the “hallucination” of legal citations, which can result in professional sanctions. Firms should establish clear internal policies on AI use, provide ongoing education for staff, and maintain human oversight to ensure that technology is used responsibly and ethically, safeguarding the integrity of legal practice and providing competent client representation.
Human Oversight Remains Crucial:
The integration of AI into civil defense litigation should be viewed as a powerful tool to augment, not replace, the expertise and judgment of human legal professionals. Lawyers must maintain control and critical oversight of AI-generated insights, and AI-generated results should always be carefully reviewed and verified by experienced attorneys. Understanding the technology’s limitations is key to effective implementation, and vigilance is required to identify potential errors and biases in AI output, necessitating verification of all results. To that end, legal professionals require comprehensive training on the effective use and critical evaluation of AI tools. Some state bar associations have released guidance on the use of AI tools in the legal profession, and a growing number of state and federal courts are requiring attorneys to disclose or monitor the use of AI in their courtrooms.
Conclusion:
AI holds immense potential to revolutionize civil defense litigation, offering significant gains in efficiency, insight, and cost reduction. However, a cautious and critical approach, emphasizing human oversight and a deep understanding of the technology’s use and limitations, is essential to ensure that AI serves to enhance your practice rather than inadvertently undermine it. The future of legal practice will likely evolve toward an increasingly seamless, collaborative partnership between human expertise and artificial intelligence, where the strengths of each are leveraged responsibly and ethically.
We Get Privacy for Work — Episode 9: The Explosion in BIPA Litigation [Podcast] [Video]
Transcript
Introduction
From timekeeping technologies to dash cams, the Illinois Biometric Information Privacy Act (BIPA) is now being used to challenge a number and variety of time-saving programs and tools.
On this episode of We get privacy for work, we discuss the factors leading to the rise in BIPA cases.
Today’s hosts are co-leaders of the Privacy, AI and Cybersecurity Group.
Damon Silver and Joe Lazzarotti are principals in the New York City and Tampa offices of Jackson Lewis, respectively. They are joined by Jody Mason and Jason Selvey, principals in the Chicago office.
Damon, Joe, Jody, and Jason, the question on everyone’s mind today is: What BIPA compliance risks should you consider before adopting new technologies, and how will that impact your organization?
Content
Joseph Lazzarotti
Principal, Tampa
Welcome to the We get privacy podcast. I’m Joe Lazzarotti, and I’m joined by my co-host, Damon Silver. Damon and I co-lead the Privacy, Data, and Cybersecurity Group at Jackson Lewis. In that role, we receive a variety of questions every day from our clients, all of which boil down to the core question of how do we handle our data safely? In other words, how do we leverage all the great things data can do for our organizations without running headfirst into a wall of legal risk, and how can we manage that risk without unnecessarily hindering our business operations?
Damon Silver
Principal, New York City
On each episode of the podcast, Joe and I talk through a common question that we’re getting from our clients. We talk it through in the same way that we would with our clients, meaning with a focus on the practical. What are the legal risks? What options are available to manage those risks, and what should we be mindful of from an execution perspective?
Our question today is what’s going on with all this BIPA litigation? To answer that question, we’re bringing on two special guests, Jody Mason and Jason Selvey, who sit in the firm’s Chicago office and spearhead our group’s BIPA litigation practice.
Jody, welcome to the podcast. To get us started, could you just share a little background about BIPA? How did we get to where we are today with all these claims?
Jody Mason
Principal, Chicago
I am actually going to punt that one to Jason. Jason, if you want to give us background and an intro about the statute.
Jason Selvey
Principal, Chicago
First of all, thanks for having me. How did we get here? Bad luck. Let’s start with that. Let’s say the Illinois legislature is not doing us any favors. Really, the statute is actually a number of years old – 17 years old. About seven or so years ago, some enterprising plaintiff’s attorneys all of a sudden looked at the statutes and said, this looks like a good statute to use, since it’s amenable to class actions. All of a sudden, we were the envy of the nation with a tidal wave of class actions. There have been thousands of them. There have only been a couple that have gone to trial, but at the same time, it’s been quite a few. That’s how we are where we are today.
Things are still developing even now. We can talk more about it, but there are still cases that are defining the law and defining what the covered data is. We’re still on the road here, so to speak. That’s how we got to where we are today.
Silver
Thanks, Jason. We’re level setting for the audience, so that’s a helpful recap of what BIPA is and how the statute has evolved and been seized upon by the plaintiffs’ bar.
When we’re talking about biometric information that is covered by the statute, what are we talking about? Can we get into some of the specifics around the types of data that might trigger the application of the law?
Mason
BIPA actually regulates two different types of data. These are both terms of art under the statute – it covers biometric identifiers and biometric information. They sound similar, but they’re actually two different things. Biometric identifiers are based on a body part. It’s a retina or iris scan, fingerprint, voiceprint, or a scan of hand or face geometry. Biometric information is data derived from a biometric identifier, regardless of how it’s captured, converted, stored, or shared, that is used to identify an individual. That’s really the key part that we see a lot of litigation about – used to identify an individual. What does that mean? As you can imagine, there’s been a fair amount of litigation over that issue. There continues to be a lot of litigation about what that means and about the terms themselves. The statute doesn’t define voiceprint or scan of hand or face geometry, so what does that mean? The courts are still grappling with those issues. We expect that it will continue to be the subject of litigation as these cases continue to progress.
Lazzarotti
Jody, what kinds of use cases are you seeing that the litigation that you are handling is coming from? I imagine there are a lot of similar types of uses of what is generally referred to as biometric data information. What are you guys seeing in terms of use cases?
Mason
There are a lot of different types of technologies that are being targeted. In the employment context, cases are brought by employees or temporary workers. By and large, the most prevalent type of case that we’re seeing involves timekeeping technology. A time clock that allows people to clock in or out by scanning a finger, a face, an eye, or a hand.
That’s certainly not the only type of case we see under the statute. We’ve seen litigation involving dash cam technology. We’ve seen cases involving point-of-sale systems and security access systems. Somebody scanning a finger or a body part to access a part of a building, a key storage system, or a medication dispensing cabinet. There have been cases involving video game avatars, online photo storage, and theme park entry. We’ve seen vending machines and test-taking software.
The plaintiffs’ bar has really tried to be creative and push the envelope in terms of the types of technology that they target. As these technologies become more ubiquitous, they’re spurring more and more of this type of litigation.
Lazzarotti
One question about that. I presented at a conference a couple of days ago, and there was an interesting discussion. A lot of problems that some employers are having are identifying remote IT workers who are trying to get these jobs fraudulently. One of the techniques is, can we ask an applicant to show their driver’s license to verify that that’s who they say they are? I believe the Illinois statute may exclude photographs, but are you seeing any issues like that, where something that might be excluded could be included? Have you seen that issue come up where you use a photograph or something?
Mason
Yes, there is a fine line. Like I said, the scan of face geometry is not defined in the statute. We don’t have a lot of guidance as to what that means. Yes, you are right that photographs are exempted from coverage under the statute. The question becomes, when is something just a photograph, and when is something being done to the photograph to scan facial geometry? That line is not always as clear as you might expect it to be. It’s been the subject of litigation as to when something is covered and when it is exempted or outside the scope of the statute. Certainly, something to be mindful of.
We’ve seen a lot of litigation, for example, targeting financial services companies where somebody provides perhaps a photo ID that is then compared against a live photo or somebody’s taking a selfie. Is this the same person in the driver’s license as the person who’s sitting here on the camera in front of me? That is something that has been targeted in litigation because it’s that act of comparing those two photos that the plaintiff’s bar has argued involves facial geometry scans.
Silver
Jason, beyond the fact that various companies out there are using different types of biometric technologies for timekeeping or access control, including for vending machines, that’s definitely one I was unaware of. What is it that these companies are doing wrong? What are the core elements of the claims that are being brought under BIPA?
Selvey
I’d say the first question is whether or not it’s covered data, of course. Let’s say that it is covered data; under those circumstances, you’ve got the big three. The big three are what we see in almost all the cases.
The first one is Section 15A, which I’m going to shorten because it’s long and confusing. It’s having a retention and destruction policy for your covered biometric data. You must destroy it once it’s no longer needed, or within three years, whichever comes earlier. That’s a common one.
The next one is the 15B subsection claim, which is really where it is. That’s the consent and information disclosure claim. You have to get consent before collecting or capturing any of that data. There are several terms for it within BIPA, like knowing and written consent. Also, you have to make certain disclosures, such as how you are going to collect it, what you are going to use it for, and things like that. There are multiple requirements. That is an easy thing to home in on and easy to, at least in the past, have issues with.
Then, the last one is the disclosure of the biometric data. Here, I sometimes think the plaintiffs are a little wishy-washy, and they don’t know what they might be alleging, but here it is. The most common thing is to go to your vendor or somebody who’s keeping that biometric data for you, and say that you have to get consent for that too. It says it in the statute. Let’s say that you don’t. Then in that case, you’ve got another claim.
Now, there are a couple of others that are just not as common. For example, safeguarding the data appropriately. You’d think that might actually be common, but it would involve questions of fact for the plaintiffs; that’s more complicated, so why get into that? Those are what we see most.
Then, the thing we really see that is also important is the damages under the statute and what’s being requested. The statute provides for actual damages. Let’s throw those to the side because we’re not aware of a single case where there has been any actual harm, like someone having their identity stolen or anything like that. It comes down to the statutory damages, which are $5,000 for a reckless or intentional violation and $1,000 for a negligent violation, plus attorney’s fees and costs. That’s really where and why the plaintiff’s bar is interested, of course. That’s why they bring those claims in the first place.
Silver
Just a quick follow-up on that, Jason. On the topic of consent specifically, are you finding that most defendants didn’t get consent at all, or that their consent was defective? What does the fact pattern typically look like?
Selvey
That depends on the era. When these cases first came down, it was very common for someone to say, what are you talking about? As time has gone by, and Jody would agree, companies have become more and more compliant. There are plenty of times now where we get a case, say to the plaintiff’s attorney, look at this, and they’re out of there. The kinds of things you see now, though, are often consents that are defective. Actually, I’m not going to say defective. I’m going to say less specific, perhaps, where you might see a claim based on language we wouldn’t agree with, but something like that. These days, there are a good number of companies that are really doing it right. It’s easier to get examples from other companies to start with.
Mason
Damon, on that point, we’re also seeing that, as more and more companies have become compliant, courts are having to grapple with the question of post-use consent and what its effect is. We don’t have clear guidance from courts on what it means when a company gets consent after someone has been using a system for some period of time, but that’s something that will be working its way through the courts as well.
Silver
Just to clarify, Jody: these are claims saying that maybe as of June 1st of this year you were obtaining compliant consent, but prior to that you were not, so the claim is based on the prior violations.
Mason
Yes, the statute suggests, although it’s not entirely clear, that consent should be obtained prior to collection. If there was no consent in place prior to the first collection or disclosure of data, that’s where we see a claim brought a lot of times.
Silver
Are you seeing that when a company goes to remedy the fact that it either wasn’t getting consent at all, or that its consent might not have been as specific as it could have been, that remediation itself prompts claims because it puts the issue on people’s radar?
Mason
That’s certainly something that we do see periodically. We always recommend that companies work closely with counsel with respect to policies and consents.
Lazzarotti
Do you have any other trends you’re seeing in the litigation, Jody, that listeners may find interesting or helpful as they think about or implement different types of technologies?
Mason
BIPA continues to be the subject of a lot of litigation, including a lot of appellate review, and we don’t expect that to change. Two big issues are on the radar right now that are working their way through the courts and that we expect will probably be decided within the next year. One concerns the scope of the healthcare exemption under BIPA.
BIPA expressly excludes from the definitions of biometric identifiers and biometric information any information that is collected, used, or stored for healthcare treatment, payment, or operations under HIPAA. The Illinois Supreme Court decided a case at the end of 2023 called Mosby v. Ingalls Memorial Hospital, which involved healthcare workers who were scanning their fingers to access a medication dispensing cabinet. The Illinois Supreme Court held that any collection of data in connection with that medication dispensing cabinet falls within the scope of the healthcare exemption; it is simply excluded from coverage under the statute. The big question the courts are now grappling with is what happens when the system at issue is a timekeeping system used by healthcare workers: is data generated from that type of system also excluded from the scope of the statute? That issue is currently up on appellate review, so we’re watching it closely.
The other issue that has been a big subject of attention from the courts over the past year is the effect of a clarification to the statute that the Illinois General Assembly enacted about a year ago, in August of 2024, in response to the Illinois Supreme Court’s decision in Cothron v. White Castle System, Inc. The General Assembly essentially said that when the same entity collects the same biometric identifier or biometric information from the same individual using the same mechanism of collection, that is, at most, a single violation of the statute for which there can be, at most, one recovery. The big question right now is whether that clarification applies to cases that were pending at the time it was enacted, or only prospectively to future cases. Obviously, that could have a very large impact on pending cases because it could take off the table the possibility of what we call per-scan damages. The U.S. Court of Appeals for the Seventh Circuit has recently decided to take that issue up, and we’re all waiting to see what happens.
Silver
Jason, would you mind, at a very high level, for those who maybe haven’t had the pleasure of going through one of these cases, taking us through what they look like, starting from the filing? Is it usually multiple filings by different plaintiff firms, or just one filing? Then, do we usually move to dismiss or answer? What does the case usually look like, and how does it play out?
Selvey
I’ll say this: maybe in the earlier days, we saw more of that, like two quick filings for the same thing. We don’t see that as much anymore; now, it’s usually just one filing, typically brought by one of the roughly 10 firms that bring almost all of these cases. Maybe you won’t be so surprised, but once the complaint comes in, I have never been in an area of law where there has been such a chance to be creative. We do everything from filing a demand for a bill of particulars just to slow them down and make them give us all this information about their damages claim and so forth, to moving to dismiss on different grounds, like railway preemption, whether it’s a railroad, an airline, or something like that, or knocking plaintiffs out because they’re union members. We try to be at our most creative because we’re really in a fight here. At a high level, we fight and fight, which wears them out. We stay cases when it makes sense and we can do so.
In the end, where do these cases end up? Most do end in settlement, but not always. You can sometimes win that motion to dismiss, which, of course, is a good feeling. Like I said, only a couple have gone to trial. That’s really the way they go. Again, it pays to be creative in these cases.
Silver
Are you seeing any that go to a summary judgment motion, or is it typically either won on a motion to dismiss or resolved at some point during discovery?
Selvey
It’s not usually that we’re afraid of summary judgment; it’s that we’re so busy slowing things down and making sure we get a good result. I have filed motions for summary judgment before, and they certainly do happen. At the same time, some of the defenses we raise may not look very different at summary judgment. For example, is this consent good enough? You might find additional evidence later, so summary judgment may look different, but it may be exactly the same. We do get there, and we certainly get to other motion practice along the way.
Lazzarotti
Have you guys seen any developments in other states that are raising any similar types of claims at this point?
Selvey
The one that’s most like BIPA, or maybe not just like it but quite a bit like it, is Texas. It’s also a little older; I don’t remember the year. The biggest difference between Texas and Illinois is the destruction timeline. In Illinois, the data has to be destroyed once it’s no longer needed, essentially, or within three years, whichever is earlier; in Texas, it’s one year. Washington also has one that’s more about the use of biometric data for commercial purposes. The key thing about Texas and Washington is that there’s no private right of action. That’s why you haven’t had a massive movement of plaintiff’s attorneys to Texas or to Washington.
One of the big ones is Colorado, which has a new biometric law that took effect on July 1st of this year. It uses some of the same words as BIPA, some not. An important feature is that it does allow employers to require, as a condition of employment, that employees give permission to use their biometrics for a discrete number of purposes. For us, the one we see most often on that list is timekeeping, although not everything we see in practice is on that list. There are some other requirements as well; for example, you have to have a crisis response plan, and there are a couple of others. For years we’ve seen New York toying with doing one itself, but that’s it for now. I’m sure we’ll see more as time goes on.
Silver
Jody, to help us land this plane and wrap up, could you talk a little bit about what companies can do to get out in front of these claims and put themselves in a better position?
Mason
The first thing, Damon, that I would recommend companies do is really just take stock of the technologies they’re using and be aware of whether any technologies implemented company-wide could potentially implicate BIPA or similar statutes in the states where the company operates. Even if a company has a policy and consent in place, it’s a good idea to have those policies and consents reviewed periodically with counsel. As Jason mentioned, we’re seeing new laws enacted, the case law continues to develop, and we’re seeing how plaintiffs are bringing these types of claims and how they’re framing the issues, so a periodic review is worthwhile. Then, look at what you’re doing to implement those policies and make sure you’re doing what you say you’re going to do: following the policies and getting the consents you need to be getting. You can have the greatest policies in the world in place, but if you’re not following them, you could still have a potential issue.
Certainly, if you operate in Colorado, you want to make sure to update your policies to account for the new statute that’s in place. The big one, I would say, is that in a perfect world a company has a process in place for reviewing new technologies before they’re implemented. That’s really key: legal review, with either inside or outside counsel, to say, this is the technology we’re thinking of using, here’s how we’re planning to use it, does it implicate BIPA or any other privacy laws in the states where we’re using it, and what do we need to do to get compliant, ideally before the technology is implemented? If you do those things, knock on wood, you will stay out of the crosshairs of the plaintiff’s bar.
Lazzarotti
Thank you guys so much for coming on. This has really been helpful, I think, for us and for everyone. Damon, it’s always a pleasure presenting with you.
Law Firm Ordered to Reimburse $24,000 in Legal Fees for AI Generated Cite Hallucinations
In the ongoing saga of lawyers sanctioned for AI-generated hallucinated citations in pleadings, FIFA (and other defendants) in an antitrust lawsuit filed by the Puerto Rico Soccer League recently obtained an order from Chief U.S. District Judge Raul M. Arias-Marxuach requiring counsel for the plaintiff, the now-defunct league, to pay FIFA and the other defendants $24,000 in attorney’s fees and costs “for filing briefs that appeared to contain errors hallucinated by artificial intelligence.” Puerto Rico Soccer League NFP, Corp. v. Federacion Puertoriquena de Futbol, No. 23-1203 (D.P.R. Sept. 23, 2025).
The judge noted that the motions filed by the Puerto Rico Soccer League included at least 55 erroneous citations, “requiring hours of work on the court’s end to check the accuracy of each citation.” Plaintiffs’ counsel denied using generative AI, but the judge questioned that assertion given “the sheer number of inaccurate or nonexistent citations.” The judge noted that the citations violated Rule 11 of the Federal Rules of Civil Procedure and applicable ethical rules.
The ordered sanctions are another reminder to lawyers to check and recheck all cases cited in any filed pleading in order to comply with Rule 11.
New Long Beach Ordinance Requires Human Cashiers at All Times
On August 21, 2025, the mayor of the City of Long Beach approved an ordinance (Ordinance No. ORD-25-0010) requiring food and drug retail establishments to employ at least one human cashier at all times and to assign at least one employee to supervise every three self-checkout lanes. The ordinance took effect on September 21, 2025.
Quick Hits
Effective September 21, 2025, the City of Long Beach Ordinance No. ORD-25-0010 imposes new staffing requirements for self-checkout registers in food and drug retail establishments.
Covered establishments with self-checkout registers must employ at least one human cashier at all times.
Covered establishments must assign at least one employee to supervise every three self-checkout lines.
The ordinance added Chapter 5.93 to the Long Beach Municipal Code. It requires “drug retail” and “food retail” establishments with self-service checkout stations to provide at least one non-self-service checkout station staffed with an employee to provide human assistance for scanning, bagging, and accepting payments for purchases.
Covered establishments also must advertise and enforce a fifteen-item limit for self-checkout purchases and prohibit purchases of items that require proof of ID or items with special theft-deterrent measures that require employee intervention prior to purchase, such as removing a surveillance tag or opening a locked cabinet.
Additionally, self-checkout stations must be in a location that enables observation and surveillance from employees and local law enforcement.
Covered establishments must staff at least one employee to supervise self-checkout operations. At least one employee must oversee every three self-checkout stations (1:3 ratio). The assigned supervisor cannot have any other work responsibility that could interfere with the supervisor’s ability to maintain direct visual inspection of self-service checkout operations.
The ordinance applies to large grocery stores and defines a “food retail establishment” as a store that is either “over fifteen thousand (15,000) square feet in size and sells primarily household foodstuff” or “over eighty-five thousand (85,000) square feet and with ten percent (10%) of their sales floor area dedicated to the sale of” groceries.
In contrast, the ordinance’s definition of a “drug retail establishment” does not focus on size and applies to any “retail store that sells a variety of prescription and nonprescription medicines and miscellaneous items ….”
Enforcement
The ordinance delegates enforcement to private litigation and creates a private right of action for customers and employees to sue establishments in California courts.
Courts may award civil penalties for each violation in the amount of $100 “for each employee” of the establishment. The civil penalty increases by an additional $100 per employee for each day that the violation is not fixed, up to $1,000 per employee for each day the violation remains unfixed. Courts may also award attorneys’ fees and costs.
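For illustration only, the short Python sketch below models one possible reading of that penalty structure; the headcount, cure period, and assumption that penalties accrue cumulatively each day are hypothetical, and the ordinance text and any court’s interpretation control.

```python
# Illustrative only: one reading of the ordinance's penalty structure.
# Assumes the daily penalty starts at $100 per employee, grows by $100 per employee
# for each additional day uncured, caps at $1,000 per employee per day, and accrues daily.
def daily_penalty(day: int, employees: int) -> int:
    """Penalty assessed for a given day of noncompliance (day 1, 2, 3, ...)."""
    return min(100 * day, 1000) * employees

def total_exposure(days_uncured: int, employees: int) -> int:
    """Cumulative exposure if the violation remains unfixed for the given number of days."""
    return sum(daily_penalty(day, employees) for day in range(1, days_uncured + 1))

# Hypothetical example: a 20-employee store leaves a violation unfixed for 5 days.
print(total_exposure(5, 20))  # 30000 -> $30,000, before attorneys' fees and costs
```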
The ordinance prohibits retaliation against any employee seeking to enforce its requirements.
Notice Requirements
All covered establishments must prominently post signage in a location accessible to customers that includes a link or a QR code to the City of Long Beach’s website regarding the ordinance, a summary of the public’s rights, and the enforcement options available.
Next Steps
Covered Long Beach drug and food retailers may wish to review the ordinance to ensure compliance. Other employers may consider remaining vigilant and up-to-date as these issues evolve.
Seasonal Hiring Concerns: How Pay Transparency, Privacy, and AI Laws Still Apply
In September and October of each year, many businesses hire seasonal workers to prepare for the upcoming holiday season, particularly in the retail and hospitality industries. Employers that are staffing up must exercise caution to avoid hiring mistakes and missteps concerning requirements related to pay transparency, privacy protections, background checks, and the use of artificial intelligence (AI) in the recruitment process and hiring.
Quick Hits
It may be a mistake to assume that state and federal laws governing pay transparency, privacy, background checks, and the use of AI do not apply to seasonal hires.
The variation in state laws can complicate compliance efforts for multistate employers.
Job Listings
The requirements of pay transparency laws can vary significantly by state and locality, often requiring employers to disclose a wage range in all job postings, regardless of whether a position is seasonal or permanent. Currently, fourteen states and the District of Columbia have enacted such pay transparency laws.
Data Privacy
It is important for employers to adhere to the relevant state privacy laws. While some states’ privacy laws do not cover the data collected by employers on job applicants and employees, California’s law does. In California, employers must provide a privacy notice at the point of data collection of job applicant data; allow individuals to access and correct their data; and permit individuals to opt out of the sharing or sale of their data.
Background Checks and Obtaining Required Documentation
Although the demands of the season might create pressure to “fast-track” the hiring process, neglecting to conduct compliant background checks can increase legal risk—ranging from potential negligent hiring liability to exposure related to the laws governing what can and cannot be asked during the hiring process. Moving too fast could also lead to unintended violations of immigration laws or child labor laws, especially if proper documentation is not obtained.
Generally, employers must obtain consent before conducting a criminal background check on a job applicant. Several states and cities prohibit employers from asking about criminal histories on initial job applications. In such jurisdictions, employers may be required to wait until after extending a job offer to initiate a background check.
Artificial Intelligence
Employers occasionally leverage AI to streamline the hiring process by screening job applications, evaluating skills through gamified tests, and scheduling follow-up interviews. This approach can prove to be an efficient method for recruiting a sufficient workforce during the busy season.
As companies increasingly rely on AI, including algorithms and automated decision-making tools to facilitate hiring decisions, they may inadvertently risk violating state and federal antidiscrimination laws, particularly under Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, and the Americans with Disabilities Act. Certain state laws require human involvement and oversight in hiring decisions to prevent discrimination based on age, race, gender, disability, or other protected characteristics. These laws typically apply whether the job opening is seasonal or permanent.
Next Steps
Amid the flurry of seasonal hiring, employers must not presume that laws governing pay transparency, data privacy, background checks, and the use of artificial intelligence are inapplicable to seasonal employees. It is prudent for employers to review and update their hiring policies and practices, ensuring that managers are thoroughly trained on recent changes to state laws. This diligence can help to mitigate the risk of claims or enforcement related to discrimination, immigration violations, child labor violations, negligent hiring, or other liability.
AI Hallucinations are Creating Real-World Risks for Businesses
We all know by now what an AI hallucination is, at least generally – output that sounds right but in reality, is not. They can be amusing – like when Google’s Bard (now Gemini) chatbot confidently claimed during a promotional video and live demo that the James Webb Space Telescope took the first picture of an exoplanet – a fact it simply invented, since the first such image was actually taken by a different telescope years earlier. This was not so amusing to Google’s management and shareholders – Google’s parent company, Alphabet, lost roughly $100 billion in market capitalization, with its stock plunging about 8–9% immediately after the demo.
These are not mere technical glitches; they represent cracks in an AI’s reliability. Studies report that some of the newest, most powerful AI models are generating more errors, not fewer, even as their fluency and capabilities improve.1 For businesses and society, the dangers and consequences of AI hallucinations are increasingly evident in real-world incidents – from embarrassing chatbot mistakes to costly legal liabilities.
Businesses are paying more attention to hallucinations and the damage they can cause.
What Are AI Hallucinations?
In simple terms, an AI hallucination occurs when an AI model outputs information that sounds plausible but is false, or otherwise not grounded in reality. The “hallucination” metaphor is apt because, much like a person seeing something that is not there, the AI is perceiving patterns or answers that do not actually exist in reality.
Why do these hallucinations happen? It is because generative AI models are designed to predict plausible outputs rather than verify truth. An LLM generates text by statistically predicting the next word or sentence based on patterns learned from vast training data. This process optimizes for fluency (the answer sounds right and is well-drafted) and relevance to the prompt – not for accuracy. As one AI engineer explains, the model’s primary goal is to continue the sequence of words in a way that looks right – “regardless of whether the output aligns with reality or the question’s context”.2
Thus, if the model has not learned a correct answer or the prompt is outside its knowledge, it will often improvise, stitching together snippets of information that sound credible but may be entirely wrong. The enormous scope of training data (much of it unverified internet content) also means the model has absorbed countless inaccuracies and biases, which it can regurgitate or recombine into new falsehoods. Today’s AI lacks a built-in fact-checking mechanism – it has no grounded understanding of truth versus fiction.
Even if it were trained only on accurate information, an AI can still recombine facts in incorrect ways due to the probabilistic nature of its text generation. Instead of truly understanding information, AI systems statistically digest vast amounts of text and recombine words based on patterns learned from training data, without awareness of context or factual grounding. This “knowledge-blind” generation is why AI outputs can sound authoritative yet be completely erroneous, catching users off guard.
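To make that mechanism concrete, here is a deliberately simplified Python sketch; the toy vocabulary and weights are invented for illustration, but the point holds: the model samples a plausible continuation, and nothing in the loop checks whether the chosen words are true.

```python
import random

# Toy next-word distribution a model might have learned for a given prompt.
# The weights reflect how often each continuation appeared in training text,
# not whether the continuation is factually correct.
candidates = {
    "an exoplanet": 0.55,        # fluent and plausible, but may be false for this prompt
    "distant galaxies": 0.30,
    "the early universe": 0.15,
}

def sample_next(dist: dict, temperature: float = 1.0) -> str:
    """Pick a continuation; lower temperature concentrates on the most likely option."""
    words = list(dist)
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next(candidates))  # optimizes for plausibility; there is no truth check
```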
Why and How AI Hallucinations Can Be Costly to Businesses
When AI systems hallucinate information, the consequences can range from mild inconvenience to serious harm. What makes these false outputs especially dangerous is that they are often delivered in a very confident, coherent manner, making it hard for users to tell fact from fiction.
Erosion of Trust and Brand Reputation. AI hallucinations can severely undermine trust in a company and its products and services. Customers rarely distinguish between “the AI made a mistake” and “your company gave me false information.” Either way, the company’s credibility is on the line. A single high-profile mistake can shatter hard-won trust. The Google Bard incident referenced above highlights how public hallucinations can translate into huge financial and reputational costs. Executives note that if an AI-powered service gives even one piece of false advice or a fake citation, years of customer trust can evaporate instantly. In one case, an airline’s chatbot provided incorrect policy information. When the truth emerged, the company faced legal consequences and had to disable the bot, damaging customer trust and confidence.3 The defense (in court and public opinion) that it was the AI’s fault did not work.
Hallucinations are Worse than Human Error. A mistake from AI can be more damaging than human error. Consumers generally find human mistakes more understandable and forgivable than AI-generated errors because they empathize with human fallibility and expect higher accuracy from AI systems. AI hallucinations seem arbitrary, lack accountability and empathy, and diminish consumers’ sense of control, amplifying frustration and eroding trust. Ultimately, the perception of a company relying on faulty AI is more unsettling to consumers than its employees’ being fallible.
Misinformation and Poor Decision-Making. Hallucinated outputs can lead employees and their companies to make bad decisions with real consequences. For instance, consider a financial services scenario. If an AI assistant gives an out-of-date interest rate or a miscalculated risk assessment, a client or banker acting on that bad info could lose money or violate compliance rules.
In the public sector, a striking example occurred in New York City, where a municipal chatbot meant to help citizens gave out advice that was wrong and actually illegal – it suggested actions that would inadvertently break city and federal laws. Had users followed such guidance (on topics from food safety to public health), they could have faced fines or other penalties.4
Financial Losses and Hidden Costs. The direct cost of an AI hallucination can be substantial. In the airline incident mentioned above, beyond the refund itself, the airline incurred legal fees, bad press, and a hit to customer goodwill. If a chatbot gave faulty investment advice or incorrect compliance guidance, the financial fallout could be even larger.
Aside from such direct losses, hallucinations introduce hidden costs. Any time an AI produces an error, humans must catch and fix it. For example, software developers using code-generation AI have found that hallucinated code (bugs, wrong APIs, etc.) can nullify productivity gains – they spend extra time debugging AI-written code, sometimes more than if they wrote it themselves. Companies also must invest in oversight mechanisms (human review, testing, etc.), effectively paying a “tax” on AI outputs to ensure quality. All these overheads mean that if hallucinations are frequent, the purported efficiency gains of AI are eroded or even reversed.
Legal Liability and Compliance Risks. When AI systems supply false information in regulated or high-stakes fields, it can land organizations in legal trouble. Lawyers are by no means immune.
The website “AI Hallucination Cases Database,” curated by legal scholar Damien Charlotin, catalogs a growing number of judicial decisions highlighting instances of AI-generated hallucinated legal content, including fabricated citations, false quotations, and misrepresented precedents.5 As of a recent update, the database lists over 200 cases globally, and over 125 in the U.S. alone.6 Such episodes can constitute professional misconduct and have led to real penalties.
Beyond the courtroom, defamation and misinformation generated by AI present a growing liability concern. ChatGPT notably fabricated a false accusation of bribery about an Australian mayor, nearly triggering a defamation lawsuit against its maker, OpenAI. (The mayor was actually a whistleblower in that case, not a perpetrator.)7
Regulators are keenly aware of these risks, and organizations may face regulatory action if AI-driven mistakes cause consumer harm. At minimum, companies face the risk of lawsuits, sanctions, or regulatory penalties when someone relies on AI output that turns out to be false and damaging. The legal principle is clear – if your AI acts as an agent of your business, you likely bear responsibility for what it tells people.
Safety and Personal Injury Risks. If an AI system responsible for controlling physical processes – such as autonomous vehicle navigation, drone operations, or robotic surgery – hallucinates false signals or misinterprets sensor data, it can cause serious accidents, physical harm, or even fatalities. Although such incidents are rare, as fully generative AI isn’t broadly deployed in safety-critical areas, the potential for catastrophic consequences remains significant. Customer-support AI could hallucinate incorrect guidance, instructing users to take unsafe actions like incorrectly handling hazardous machinery, improperly mixing dangerous chemicals, or attempting unsafe repairs. Similarly, AI-powered healthcare assistants that hallucinate incorrect medical advice or medication dosages could lead directly to patient injury or death.
Impact on the Future of AI Use
Caution in Adoption. Many organizations remain cautious about integrating AI into critical processes until the hallucination problem is better controlled. Surveys in sectors like finance show that accuracy and data integrity concerns rank among the top barriers to AI adoption. Leaders know that a single AI error in a high-stakes context (e.g., giving wrong compliance info, or misreporting data to a client) could have outsized fallout. Thus, for the near term, we can expect AI to be used only in a limited or heavily supervised capacity for mission-critical tasks.
In healthcare, AI diagnostic suggestions will likely require sign-off by a medical professional rather than being fully automated. This necessary practice of keeping a “human in the loop” reflects a broad realization: until AI can reliably perform a particular task without hallucinating, full automation is risky. The flip side is that organizations embracing AI must budget for ongoing oversight costs, which could slow down AI-driven efficiency gains.
Trust and User Acceptance. The persistence of hallucinations threatens to undermine public trust in AI at a critical juncture. If customers, clients, and users come to see AI outputs as unreliable, they will be less inclined to use these tools for important matters. After experiencing chatbots giving wrong answers or bizarre responses, users often revert to traditional search engines or human advisors for reliable information. Awareness of hallucinations has prompted positive behavior in some user groups – studies indicate that knowing an AI can be wrong leads users to verify information more diligently, which is a good digital literacy skill to develop.8
Technical Arms Race for Reliability. Hallucinations have sparked intense research and development efforts, essentially an arms race to make AI more “truthful” and reality-based. Major AI labs are exploring various techniques – reinforcement learning from human feedback (to penalize false outputs), integrating real-time knowledge retrieval, better architectures that “know when they don’t know,” and more. Sam Altman, the CEO of OpenAI, has expressed optimism that progress will come – predicting that the problem will be “much, much better” in a year or two, to the point we might not talk about it as much.
Regulatory and Legal Environment. Notably, the prevalence of AI hallucinations is drawing attention from lawmakers and regulators, which will influence how AI can be used. We are already seeing requirements and proposals in U.S. laws and elsewhere mandating transparency in AI-generated content (to prevent the spread of AI-fueled misinformation). In the future, companies are more likely to have to implement specific safeguards (or face liability).
All of this is shaping corporate strategies – businesses need to have compliance frameworks for AI, much as they do for data privacy or cybersecurity. Hallucinations are forcing the maturation of AI governance. Companies that manage the risk well – through policies, technology, and training – will be able to leverage AI more freely, whereas those that do not will either be blindsided by incidents or be kept on the sidelines by regulators and public distrust.
Managing and Mitigating Hallucination Risks
1. Assume Errors Until Proven Otherwise. Companies should cultivate a mindset that fluency (a well-drafted and good-sounding AI response) does not equal accuracy. Employees should treat every confident AI output as potentially incorrect unless it has been verified. This principle should be baked into company culture and training. By default, AI users should always double-check critical facts that an AI provides, just as we would verify a surprising or important answer or analysis from a human junior employee.
2. Implement Human-in-the-Loop Oversight. Human review and validation are the most reliable backstop against AI hallucinations. Companies deploying AI chatbots or content generators should ensure that, for any customer-facing or high-stakes output, a qualified person is in the loop beforehand or closely monitoring after the fact. For example (with the emphasis on “qualified”), an AI-written draft of a legal contract needs to be reviewed by an experienced attorney. An AI customer service agent might handle simple FAQs autonomously, but hand off to a human agent for anything beyond a low-risk threshold.
3. Use Retrieval and Verified Data Sources. One technical remedy that has proven effective is Retrieval-Augmented Generation (RAG): essentially, connecting the AI model to a source of truth. Instead of relying solely on the AI’s internal learned knowledge (which might be based on generic, incomplete, and/or outdated data), the system is designed to fetch relevant information from trusted databases or documents and incorporate that into its answer. For example, it is now common for companies to equip employee-facing chatbots to pull policy details from the official policy database when answering questions, ensuring that they quote the actual policy text rather than a possibly misremembered summary from general knowledge.
Likewise, an AI could be set up to retrieve the latest pricing data or medical guidelines from a verified repository when those topics come up. By grounding responses in up-to-date, vetted data, the AI is far less likely to hallucinate. Many enterprise AI platforms now offer plugins or architecture for retrieval-based Q&A. Organizations should use domain-specific models when accuracy is paramount – a smaller model trained only on authoritative data in your field may be more reliable than a large general model that might wander off-topic.
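As a rough illustration of the idea (not any particular vendor’s product), a retrieval step might look like the following Python sketch, where the document store, keyword-overlap scoring, and prompt template are all invented placeholders:

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground the prompt in trusted text.
POLICY_DOCS = [
    "Refund requests must be submitted within 30 days of purchase.",
    "Employees must book travel through the approved corporate portal.",
]

def retrieve(question: str) -> str:
    """Naive keyword retrieval: return the stored passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(POLICY_DOCS, key=lambda text: len(q_words & set(text.lower().split())))

def build_prompt(question: str) -> str:
    source = retrieve(question)
    # Grounding instruction: answer only from the quoted source, otherwise admit uncertainty.
    return (
        "Answer using ONLY the source below. If the answer is not in the source, say you don't know.\n"
        f"Source: {source}\n"
        f"Question: {question}"
    )

print(build_prompt("Within how many days must refund requests be submitted?"))
# The model now quotes the actual policy text instead of relying on whatever it remembers.
```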
4. Implement Guardrails to Address the “Sycophant” Problem. Companies should implement guardrails to keep AI systems within safe and accurate bounds. Guardrails can include input filters, output rules, and context constraints. Early generative AI LLMs were trained to provide a response at almost any “cost.” More recently, models are being trained to decline questions that are out of scope (replying “I’m sorry, I can’t help with that”) rather than guessing and hallucinating. This also addresses the “AI sycophant” problem, where models tend to produce answers aimed at pleasing or aligning with the user’s expectations, even if those answers are incorrect or misleading.
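A minimal sketch of an output guardrail of this kind might look like the following, where the approved topic list is hypothetical and the model call is a stand-in rather than a real API:

```python
# Illustrative scope guardrail: refuse out-of-scope questions instead of guessing.
ALLOWED_TOPICS = ("benefits", "timekeeping", "pto", "expense")   # hypothetical approved scope
REFUSAL = "I'm sorry, I can't help with that. Please contact HR."

def generate_answer(question: str) -> str:
    # Stand-in for the real model call; a production system would invoke its LLM here.
    return f"[model answer about: {question}]"

def answer(question: str) -> str:
    q = question.lower()
    if not any(topic in q for topic in ALLOWED_TOPICS):
        return REFUSAL                      # out of scope: decline rather than improvise
    return generate_answer(question)

print(answer("How do I record PTO in the timekeeping system?"))
print(answer("Which stocks should I buy this quarter?"))        # triggers the refusal
```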
5. Use Automated Fact-Checkers. Some solutions will scan the AI’s output against a knowledge base and flag or block answers that introduce facts not found in the sources (identifying ungrounded content). Technically, setting a lower “temperature” on the model (i.e., making it less random) can also force it to stick to safer, more predictable phrasing, reducing creative flourishes that could be incorrect.
However, remember the inherent friction between the original goal of training AI to produce engaging, human-like, creative responses and the need to ensure it remains strictly factual and reliable. Efforts to minimize hallucinations often mean sacrificing some of the conversational fluidity and inventive qualities that make AI feel less robotic.
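A toy version of the ungrounded-content check described above could be sketched as follows; real fact-checking tools are far more sophisticated, and the overlap threshold and sample text here are invented purely for illustration:

```python
# Illustrative post-hoc groundedness check: flag sentences with little overlap with the source.
def ungrounded_sentences(answer: str, source: str, min_overlap: int = 3) -> list:
    source_words = set(source.lower().split())
    flagged = []
    for sentence in answer.split(". "):
        overlap = len(set(sentence.lower().split()) & source_words)
        if overlap < min_overlap:
            flagged.append(sentence)        # introduces content not found in the source
    return flagged

source = "Refund requests must be submitted within 30 days of purchase."
draft = ("Refund requests must be submitted within 30 days of purchase. "
         "Shipping is always free worldwide.")
print(ungrounded_sentences(draft, source))  # flags the invented shipping claim
```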
6. Educate and Alert Users. Transparency with end-users is important to managing the risks of hallucinations. If employees know that an AI’s answer might be incorrect, they can approach it more cautiously. Companies should provide disclaimers or contextual hints in AI interfaces – e.g., a message like “This response was generated by AI and may not be 100% accurate. Please verify critical information.”
Beyond disclaimers, user education is important. Companies should train employees (and even customers and clients via guides) on how to use AI tools responsibly. This includes teaching them how to spot potential hallucinations (e.g., wildly specific claims with no source, or inconsistent details) and encouraging a practice of cross-verifying with reliable sources. Savvy, critical users are an excellent last line of defense against the spread of AI-generated falsehoods.
7. Monitor, Audit, and Quickly Correct Mistakes. Despite all prevention measures, some hallucinations will slip through. Companies need to have a plan for detecting and handling them fast, efficiently, and effectively. Businesses should solicit feedback from employees and other users. When a mistake is discovered, act transparently and swiftly to correct it. Owning up to errors not only helps fix the immediate issue but can protect your credibility long-term – users are more forgiving if you demonstrate honesty and improvement.
8. Promote AI–Human Collaboration, Not Replacement. Companies should frame their use of AI as a tool to augment human workers, not replace their judgment. When employees understand that the AI is there to assist and speed things up – but not to make infallible decisions – they are more likely to use it appropriately. Encourage a workflow where AI handles the drudge work (first drafts, basic Q&A, data summarization) and humans handle the final validation, strategic thinking, and creative judgment.
This plays to the strengths of each – AI provides efficiency and breadth of knowledge, while humans provide scrutiny, ethics, and common sense. By making AI a collaborative partner, the organization benefits from AI productivity without surrendering control to it. The goal should be effective collaboration with AI, not dependence on AI.
Conclusion
AI hallucinations present tangible and significant risks, including reputational damage, financial costs, legal liabilities, and even physical harm. Businesses adopting AI must recognize these challenges and prioritize effective risk management strategies to mitigate them. Successfully addressing hallucinations involves clear guardrails, continuous human oversight, technical solutions like retrieval-augmented generation, and proactive user education. Companies that thoughtfully implement these measures will be best positioned to harness the powerful benefits of AI while safeguarding against its inherent risks.
Employment Bills Await Governor’s Signature in California
California lawmakers passed several bills during the 2025 legislative session that would impose new compliance obligations on employers. Here is an overview of employment bills the California State Legislature has sent to the governor for approval.
Quick Hits
Before the latest legislative session ended on September 13, 2025, California lawmakers passed several bills impacting employers, including measures to expand pay equity reporting requirements, limit workplace surveillance, and require transparency in the use of artificial intelligence (AI) or automated decision systems in the workplace.
Governor Gavin Newsom must sign or veto bills passed during the legislative session by October 13, 2025, or they will automatically become law.
California Employment Bills Signed or Sent to the Governor for Signature
Bill
Summary
Current Status
AB 250
Window to reactivate sexual assault claims. The bill would establish a two-year window, from January 1, 2026, to December 31, 2027, to revive civil claims for sexual assault, even if the statute of limitations has expired.
Enrolled and presented to the governor on September 22, 2025
AB 692
“Stay or pay” clauses. This bill would ban employment contracts that require the worker to repay an employer, training provider, or debt collector for a debt (such as training costs), if the worker’s employment ends.
Enrolled September 15, 2025
AB 858
Post-emergency reinstatement rights. This bill would expand a COVID-era reinstatement rights law to cover employees laid off due to any state- or locally declared emergency. The bill would cover certain airport service and hospitality providers, building service providers, hotels, private clubs, and event centers.
Enrolled September 15, 2025
AB 1326
Right to wear masks. The bill would prohibit employers from preventing employees from wearing face masks, unless wearing face masks would be a safety hazard. It would give employers the right to require employees to remove their face coverings briefly at the worksite for identification purposes.
Enrolled and presented to the governor on September 16, 2025
SB 7
Disclosure of automated decision systems. This bill would require employers to provide written notice that an “automated decision system” (ADS) is in use in the workplace for the purpose of making employment-related decisions. The bill would require employers to notify employees subject to discipline or employment termination based on a decision made by an ADS and give those employees the right to appeal the decision. Further, employers would be required to provide notice to job applicants if an ADS will be used in the hiring process.
Enrolled September 17, 2025
SB 261
Posting of Labor Commissioner awards. The bill would mandate that the Labor Commissioner’s Office post to its website any unsatisfied awards against employers. The bill would impose a penalty equal to three times the award if the award remains unsatisfied after 180 days.
Enrolled and presented to the governor on September 16, 2025
SB 294
“Workplace Know Your Rights Act.” This bill would mandate that employers post a state agency poster and thereafter provide a stand-alone written notice to each employee addressing independent contractor misclassification protections, heat illness prevention, workers’ compensation, paid sick days, protections against unfair immigration-related practices, the right to notice of federal immigration inspections, the right to organize a union in the workplace, and constitutional rights when interacting with law enforcement at the workplace. Further, the bill would require employers to notify an employee’s emergency contact in the event that the employee is arrested or detained while at work.
Enrolled September 17, 2025
SB 355
Judgment debtor employers. This bill would require an employer, within sixty days of a final judgment requiring payment to an employee or to the state, to provide documentation to the state Labor Commissioner showing that the judgment is fully satisfied, that a specified bond has been posted, or that the employer has entered into an agreement to pay the judgment in installments and is in compliance with that agreement.
Enrolled and presented to the governor on September 16, 2025
SB 464
Pay data reporting. The bill would require employers to store pay equity data, including demographic information related to race, ethnicity, or gender, separately from personnel records. It also would create a civil penalty for employers that fail to submit pay data reports to the California Civil Rights Department. Beginning in May 2027, public employers (currently exempt from the requirements) with one hundred or more employees would be required to submit annual pay data reports to the Civil Rights Department.
Enrolled and presented to the governor on September 22, 2025
SB 513
Maintaining personnel records. This bill would require employers to include these items in education or training records: the name of the employee, the name of the training provider, the duration and date of the training, the core competencies or skills in the training, and the resulting certification or qualification.
Enrolled and presented to the governor on September 9, 2025
SB 590
Expanding paid family leave. This bill would expand eligibility for benefits under the state’s paid family leave program to include individuals who take time off work to care for seriously ill designated persons. The bill defines “designated person” to mean “any care recipient related by blood or whose association with the individual is the equivalent of a family relationship.”
Enrolled and presented to the governor on September 22, 2025
SB 642
Pay equity. The bill would set the state statute of limitations for civil actions brought under Section 1197.5 of the California Labor Code at three years after the last discriminatory pay act occurred and expand the look-back period for relief to ten years. The bill would clarify that “pay scale,” for the purposes of job-posting disclosures, means “the salary or hourly wage range that the employer reasonably expects to pay for the position upon hire.”
Enrolled and presented to the governor on September 17, 2025
SB 648
Tip theft. The legislation authorizes the Labor Commissioner to investigate and issue citations or file civil lawsuits for gratuities taken or withheld in violation of the California Labor Code.
Signed into law by the governor on July 30, 2025.
SB 809
Reimbursement of work expenses. The bill would require employers to reimburse employees for the use, upkeep, and depreciation of a truck, tractor, trailer, or other commercial vehicle the employee owned and used for work.
Enrolled September 18, 2025
“Enrolled” refers to when the final version of a bill has been approved by both the Senate and the Assembly, proofread for accuracy, and certified by the legislative officers before being sent to the governor for approval or veto.
AI-yi-yi: Fake Cases, Real Consequences: A Cautionary Tale for AI in the Courtroom
A California court recently threw the book at some lawyers for relying on artificial intelligence (“AI”) to generate what turned out to be fabricated citations and misstated authorities—that is, so-called “AI hallucinations.”
In Noland v. Land of the Free, L.P., a leasing agent/sales representative alleged no fewer than 25 claims, including violations of California’s wage and hour laws and wrongful termination. The trial court granted summary judgment in favor of the company, finding no dispute that Noland had been an independent contractor and not an employee (as he had alleged). The Court of Appeal affirmed summary judgment for the company, noting this “straightforward” case “under normal circumstances, would not warrant publication.”
However, the Court decided to “publish this opinion as a warning” because plaintiff’s counsel had used generative AI to draft their appellate briefs and “nearly all of the legal quotations in plaintiff’s opening brief, and many of the quotations in plaintiff’s reply brief, [were] fabricated.” The Court detailed how 21 of 23 citations contained quotations or topics that did not appear in the cases cited and that some of the cases did not exist at all. As a result, the Court imposed a $10,000 sanction against plaintiff’s counsel, directed him to serve a copy of the opinion on his client and ordered that a copy of the Court’s opinion be forwarded to the State Bar.
The Court didn’t stop there. In addition to criticizing plaintiff’s counsel for not checking the cases AI had provided, the Court noted that the sanctions were not to be paid to defense counsel or its client because they “did not alert the court to the fabricated citations and appear to have become aware of the issue only when the court issued its order to show cause.”
As many courts before Noland have observed, lawyers have ethical duties of candor and competence that encompass the use of technology, including AI. Noland serves as a powerful warning to practitioners across the Golden State (and beyond) that three years into the generative AI era, when an attorney cites authorities dreamed up by AI, courts will no longer tolerate protestations of ignorance about how these tools work.
As for the future of AI, the potential benefits for litigants and lawyers (and everyone else in today’s society) are breathtaking, but before settling on a final answer, don’t forget the old adage: “Trust, but Verify!”