The State of Employment Law: States Begin to Pass Artificial Intelligence Bias Laws

In this series, we will explore some of the ways states vary from one another in their employment laws.
This week, California enacted new regulations under the Fair Employment and Housing Act addressing artificial intelligence (AI). While employers may lawfully use automated decision systems (ADS) to screen resumes, evaluate applicants’ abilities, and rate interview performance, they are prohibited from using ADS to discriminate against applicants based on any protected characteristic. All California employers will be required to retain ADS-related records for at least four years, and records must include the criteria for any ADS use and the results of any AI analysis.
California is not the only state that has enacted AI-related hiring laws. The Illinois AI Video Interview Act requires employers that record video interviews and analyze them using AI to notify each such applicant before the interview, provide each applicant with information regarding how the AI evaluation works, and obtain consent for the recording. The Colorado Artificial Intelligence Act requires employers to: (1) use reasonable care to protect applicants from known or reasonably foreseeable risks of discriminatory treatment through the use of algorithms and (2) develop a risk management policy and conduct annual impact assessments addressing any known or potential risks of discrimination. Several other states have proposed similar AI-related nondiscrimination laws, so more states will likely join California, Colorado, and Illinois soon.
While employers almost certainly are not using AI tools to discriminate against applicants intentionally, AI tools (just like humans) can have unintended biases that disproportionately impact applicants based on their disability, race, national origin, sex, or any other protected characteristic. Even if an employer’s state does not have an AI bias law, employers should regularly monitor any AI tool they implement to ensure that it is unbiased.

European Commission Opens Consultation on EU AI Act Serious Incident Guidance

On September 26, 2025, the European Commission initiated a public consultation on draft guidance and a draft reporting template for serious artificial intelligence (“AI”) incidents under Regulation (EU) 2024/1689 (the “AI Act”).
The guidance and template are designed to help providers of high-risk AI systems prepare for new mandatory reporting requirements detailed in Article 73 of the AI Act. These obligations, which take effect from August 2026, require providers to notify national authorities of serious incidents involving their AI systems. According to the European Commission, the aim of Article 73 of the AI Act is to facilitate early detection of risks, enhance accountability, enable swift intervention, and foster public confidence in AI technologies.
Key aspects of the draft guidance include:

Clarification of terms: The guidance defines key concepts related to serious AI incidents and reporting obligations.
Practical examples: Scenarios are provided to illustrate when and how incidents should be reported. For example, an incident or malfunction may include misclassifications, significant drops in accuracy, AI system downtime or unexpected behaviors.
Reporting obligations and timelines: The guidance sets out the different obligations that apply to different actors, including providers and deployers of high-risk AI systems, providers of general-purpose AI (GPAI) models with systemic risk, market surveillance and national competent authorities, the European Commission, and the AI Board.
Relationship to other laws: The guidance explains how these AI-specific rules interact with broader legal frameworks and reporting obligations, such as the Critical Entities Resilience Directive, the NIS2 Directive and the Digital Operational Resilience Act.
International alignment of reporting regimes: The guidance seeks consistency with global efforts such as the AI Incidents Monitor and Common Reporting Framework of the Organisation for Economic Co-operation and Development (“OECD”).

Healthcare AI in the United States — Navigating Regulatory Evolution, Market Dynamics, and Emerging Challenges in an Era of Rapid Innovation

The use of artificial intelligence (AI) tools in healthcare continues to evolve at an unprecedented pace, fundamentally reshaping how medical care is delivered, managed, and regulated across the United States. As 2025 progresses, the convergence of technological innovation, regulatory adaptation (or lack thereof), and market shifts has created remarkable opportunities and complex challenges for healthcare providers, technology developers, and federal and state legislators and regulatory bodies alike.
The rapid proliferation of AI-enabled medical devices represents perhaps the most visible manifestation of this transformation. With nearly 800 AI- and machine learning (ML)-enabled medical devices authorized for marketing by the US Food and Drug Administration (FDA) in the five-year period ending September 2024, the regulatory apparatus has been forced to adapt traditional frameworks designed for static devices to accommodate dynamic, continuously learning algorithms that evolve after deployment. This fundamental shift has prompted new approaches to oversight, such as the development of predetermined change control plans (PCCPs) that allow manufacturers to modify their systems within predefined parameters and without requiring additional premarket submissions.
Regulatory Frameworks Under Pressure
The regulatory environment governing healthcare AI reflects the broader challenges facing federal agencies as they attempt to balance innovation and patient safety. The FDA’s approach to AI-enabled software as a medical device (SaMD) has evolved significantly, culminating in the January publication of comprehensive draft guidance addressing life cycle management and marketing submission recommendations for AI-enabled device software functions. This guidance represents a critical milestone in establishing clear regulatory pathways for AI and ML systems that challenge traditional notions of device stability and predictability.
The traditional FDA paradigm of medical device regulation was not designed for adaptive AI and ML technologies. This creates unique challenges for continuously learning algorithms that may evolve after initial market authorization. The FDA’s January 2021 AI/ML-based SaMD Action Plan outlined five key actions based on the total product life cycle approach: tailoring regulatory frameworks with PCCPs, harmonizing good ML practices, developing patient-centric approaches, supporting bias elimination methods, and piloting real-world performance monitoring.
However, the regulatory landscape remains fragmented and uncertain. The rescission of the Biden administration’s Executive Order (EO) 14110, “Safe, Secure, and Trustworthy Artificial Intelligence,” by the Trump administration and the current administration’s issuance of its own EO on AI, “Removing Barriers to American Leadership in Artificial Intelligence,” in January have created additional uncertainty regarding federal AI governance priorities. While the Biden administration’s EO has been rescinded, its influence persists through agency actions already underway, including the April 2024 Affordable Care Act (ACA) Section 1557 final rule on nondiscrimination in health programs run by the US Department of Health and Human Services (HHS) and the final rule on algorithm transparency in the Office for Civil Rights. Consequently, enforcement priorities and future regulatory development remain uncertain.
State-level regulatory activity has attempted to fill some of these gaps, with 45 states introducing AI-related legislation during the 2024 session. California Assembly Bill 3030, which specifically regulates generative AI (gen AI) use in healthcare, exemplifies the growing trend toward state-specific requirements that healthcare organizations must navigate alongside federal regulations. This patchwork of state and federal requirements creates particularly acute challenges for healthcare AI developers and users operating across multiple jurisdictions.
Data Privacy and Security: The HIPAA Challenge
One of the most pressing concerns facing healthcare AI deployment involves the intersection of AI capabilities and healthcare data privacy requirements. The Health Insurance Portability and Accountability Act (HIPAA) was enacted long before the emergence of modern AI systems, creating significant compliance challenges as healthcare providers increasingly rely on AI tools for clinical documentation, decision support, and administrative functions.
The use of AI-powered transcription and documentation tools has emerged as a particular area of concern. Healthcare providers utilizing AI systems for automated note-taking during patient encounters face potential HIPAA violations if proper safeguards are not implemented. These systems often require access to comprehensive patient information to function effectively, yet traditional HIPAA standards, including the minimum necessary rule, require that AI tools access and use only the protected health information (PHI) strictly necessary for their purpose.
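By way of illustration only, a “minimum necessary” design for such a documentation tool might resemble the sketch below; the field names and the downstream service call are hypothetical placeholders rather than any specific product’s interface.

```python
# Hypothetical illustration of a "minimum necessary" PHI filter applied before
# an encounter record is sent to an AI documentation tool. Field names and the
# send_to_ai_scribe() call are placeholders, not a real vendor API.

ALLOWED_FIELDS = {"encounter_id", "chief_complaint", "clinical_notes", "medications"}

def minimize_phi(encounter: dict) -> dict:
    """Return only the fields the AI tool needs, dropping everything else
    (e.g., SSN, home address, insurance identifiers)."""
    return {k: v for k, v in encounter.items() if k in ALLOWED_FIELDS}

def send_to_ai_scribe(encounter: dict) -> None:
    # Placeholder for the call to a (hypothetical) AI transcription service,
    # which would occur only under a signed BAA and over an encrypted channel.
    payload = minimize_phi(encounter)
    print(f"Sending {len(payload)} fields; withheld {len(encounter) - len(payload)} fields.")

if __name__ == "__main__":
    record = {
        "encounter_id": "E-1001",
        "chief_complaint": "shortness of breath",
        "clinical_notes": "...",
        "medications": ["albuterol"],
        "ssn": "XXX-XX-XXXX",           # never needed for documentation support
        "home_address": "...",
        "insurance_member_id": "...",
    }
    send_to_ai_scribe(record)
```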
The proposed HHS regulations issued in January attempt to address some of these concerns by requiring covered entities to include AI tools in their risk analysis and risk management compliance activities. These requirements mandate that organizations conduct vulnerability scanning at least every six months and penetration testing annually, recognizing that AI systems introduce new vectors for potential data breaches and unauthorized access.
Business associate agreements (BAAs) have become increasingly complex as organizations attempt to address AI-specific risks. Healthcare organizations must ensure that AI vendors processing PHI operate under robust BAAs that specify permissible data uses and required safeguards and that account for AI-specific risks related to algorithm updates, data retention policies, and security measures for ML processes.
Algorithmic Bias and Health Equity Concerns
The potential for algorithmic bias in healthcare AI systems has emerged as one of the most significant ethical and legal challenges facing the industry. A 2024 review of 692 AI- and ML-enabled FDA-approved medical devices revealed troubling gaps in demographic representation, with only 3.6% of approved devices reporting race and ethnicity data, 99.1% providing no socioeconomic information, and 81.6% failing to report study subject ages.
These data gaps have profound implications for health equity, as AI systems trained on nonrepresentative datasets may perpetuate or exacerbate existing healthcare disparities. Training data quality and representativeness significantly — and inevitably — impact AI system performance across diverse patient populations. The challenge is particularly acute given the rapid changes in federal enforcement priorities regarding diversity, equity, and inclusion (DEI) initiatives.
While the April 2024 ACA Section 1557 final rule regarding HHS programs established requirements for healthcare entities to ensure AI systems do not discriminate against protected classes, the current administration’s opposition to DEI initiatives has created uncertainty about enforcement mechanisms and compliance expectations. Given the rapid turnabout in executive branch policy toward DEI and antidiscrimination initiatives, it remains to be seen how federal healthcare AI regulations with respect to bias and fairness will be affected.
Healthcare organizations are increasingly implementing systematic bias testing and mitigation strategies throughout the AI life cycle, focusing on validating the technology, promoting health equity, ensuring algorithmic transparency, engaging patient communities, identifying fairness issues and trade-offs, and maintaining accountability for equitable outcomes. AI system developers have, until recently, faced increasing regulatory pressure to ensure training datasets adequately represent diverse patient populations. And most healthcare AI developers and practitioners continue to maintain that relevant characteristics, including age, gender, sex, race, and ethnicity, should be appropriately represented and tracked in clinical studies to ensure that results can be reasonably generalized to the intended-use populations.
However, these efforts often occur without clear regulatory guidance or standardized methodologies for bias detection and remediation. Special attention must be paid to protecting vulnerable populations, including pediatric patients, elderly individuals, racial and ethnic minorities, and individuals with disabilities.
Professional Liability and Standards of Care
The integration of AI into clinical practice has created novel questions about professional liability and standards of care that existing legal frameworks struggle to address. Traditional medical malpractice analysis relies on established standards of care, but the rapid evolution of AI capabilities makes it difficult to determine what constitutes appropriate use of algorithmic recommendations in clinical decision-making.
Healthcare AI liability generally operates within established medical malpractice frameworks that require the establishment of four key elements: duty of care, breach of that duty, causation, and damages. When AI systems are involved in patient care, determining these elements becomes more complex. While a physician must exercise the skill and knowledge normally possessed by other physicians, AI integration creates uncertainty about what constitutes reasonable care.
The Federation of State Medical Boards’ April 2024 recommendations to hold clinicians liable for AI technology-related medical errors represent an attempt to clarify professional responsibilities in an era of algorithm-assisted care. However, these recommendations raise complex questions about causation, particularly when multiple factors contribute to patient outcomes and AI systems provide recommendations that healthcare providers may accept, modify, or reject based on their clinical judgment.
When algorithms influence or drive medical decisions, determining responsibility for adverse outcomes presents novel legal challenges not fully addressed in existing liability frameworks. Courts must evaluate whether AI system recommendations served as a proximate cause of patient harm as well as the impacts of the healthcare provider’s independent medical judgment and other contributing factors.
Documentation requirements have become increasingly important, as healthcare providers must maintain detailed records of AI system use, including the specific recommendations provided, the clinical reasoning for accepting or rejecting algorithmic guidance, and any modifications made to AI-generated suggestions. These documentation practices are essential for defending against potential malpractice claims while ensuring that healthcare providers can demonstrate appropriate clinical judgment and professional accountability.
AI-related malpractice cases may require expert witnesses with specialized knowledge of medical practice and existing AI technology capabilities and limitations. Such experts should have the experience necessary to evaluate whether healthcare providers used AI systems in an appropriate manner and whether algorithmic recommendations met relevant standards. Plaintiffs in AI-related malpractice cases face challenges proving that AI system errors directly caused patient harm, particularly when healthcare providers retained decision-making authority.
Market Dynamics and Investment Trends
Despite regulatory uncertainties, venture capital investment in healthcare AI remains robust, with billions of dollars allocated to startups and established companies developing innovative solutions. However, investment patterns have become more selective, focusing on solutions that demonstrate clear clinical value and regulatory compliance rather than pursuing speculative technologies without proven benefits.
The American Hospital Association’s early 2025 survey of digital health industry leaders revealed cautious optimism, with 81% expressing positive or cautiously optimistic outlooks for investment prospects and 79% indicating plans to pursue new investment capital over the next 12 months. This suggests continued confidence in the long-term potential of healthcare AI despite near-term regulatory and economic uncertainties.
Clinical workflow optimization solutions, value-based care enablement platforms, and revenue cycle management technologies have attracted significant funding, reflecting healthcare organizations’ focus on addressing immediate operational challenges while building foundations for more advanced AI applications. The increasing integration of AI into these core healthcare functions demonstrates the technology’s evolution from experimental applications to essential operational tools.
Major technology corporations are driving significant innovation in healthcare AI through substantial research and development investments. Companies such as Google Health, Microsoft Healthcare, Amazon Web Services, and IBM Watson Health continue to develop foundational AI platforms and tools. Large health systems and academic medical centers lead healthcare AI adoption through dedicated innovation centers, research partnerships, and pilot programs, often serving as testing grounds for emerging AI technologies.
Pharmaceutical companies increasingly integrate AI throughout drug development pipelines, from target identification and molecular design to clinical trial optimization and regulatory submissions. These investments aim to reduce development costs and timelines while improving success rates for new therapeutic approvals.
Large healthcare technology companies increasingly acquire specialized AI startups to integrate innovative capabilities into comprehensive healthcare platforms. These acquisitions accelerate technology deployment while providing startups with the resources necessary for large-scale implementation and regulatory compliance.
Emerging Technologies and Integration Challenges
The rapid advancement of gen AI technologies has introduced new regulatory and practical challenges for healthcare organizations. As of late 2023, the FDA had not approved any devices relying on purely gen AI architectures, creating uncertainty about the regulatory pathways for these increasingly sophisticated technologies. Gen AI’s ability to create synthetic content, including medical images and clinical text, requires new approaches to validation and oversight that traditional medical device frameworks may not adequately address.
The distinction between clinical decision support tools and medical devices remains an ongoing area of regulatory clarification. Software that provides information to healthcare providers for clinical decision-making may or may not constitute a medical device depending on the specific functionality and level of interpretation provided.
Healthcare AI systems must provide sufficient transparency to enable healthcare providers to understand system recommendations and limitations. The FDA emphasizes the importance of explainable AI that allows clinicians to understand the reasoning behind algorithmic recommendations. AI systems must provide understandable explanations for their recommendations, which healthcare providers in turn use to communicate with patients.
The integration of AI with emerging technologies such as robotics, virtual reality, and internet of medical things (IoMT) devices creates additional complexity for healthcare organizations attempting to navigate regulatory requirements and clinical implementation challenges. These convergent technologies offer significant potential benefits but also introduce new risks related to cybersecurity, data privacy, and clinical safety that existing regulatory frameworks struggle to address comprehensively.
AI-enabled remote monitoring systems utilize wearable devices, IoMT sensors, and mobile health applications to continuously track patients’ vital signs, medication adherence, and disease progression. These technologies enable early intervention for deteriorating conditions and support chronic disease management outside traditional healthcare settings, but they face unique regulatory challenges related to device performance, user training, and clinical oversight.
Cybersecurity and Infrastructure Considerations
Healthcare data remains a prime target for cybersecurity threats, with data breaches involving 500 or more healthcare records reaching near-record numbers in 2024, continuing an alarming upward trend. Healthcare records command a high value on black markets, and the critical nature of healthcare operations makes organizations more likely to pay ransoms.
The integration of AI systems, which often require access to vast amounts of patient data, further complicates the security landscape and creates new vulnerabilities that organizations must address through robust security frameworks. Healthcare organizations face substantial challenges integrating AI tools into existing clinical workflows and electronic health record systems. Technical interoperability issues, user training requirements, and change management processes require significant investment and coordination across multiple departments and stakeholders.
The Consolidated Appropriations Act of 2023’s requirement for cybersecurity information in premarket submissions for “cyber devices” represents an important step in addressing these concerns, but the rapid pace of AI innovation often outstrips the development of adequate security measures. Medical device manufacturers must now include cybersecurity information in premarket submissions for AI-enabled devices that connect to networks or process electronic data.
Healthcare organizations must implement comprehensive cybersecurity programs that address not only technical vulnerabilities but also the human factors that frequently contribute to data breaches. Strong technical safeguards must be implemented when using de-identified data for AI training, including access controls, encryption, audit logging, and secure computing environments, and should address both intentional and accidental reidentification risks throughout the AI development process.
A significant concern is the lack of a private right of action for individuals affected by healthcare data breaches, leaving many patients with limited recourse when their sensitive information is compromised. While many states have enacted laws more stringent than federal legislation, enforcement resources may be stretched thin.
Human Oversight and Professional Standards
In most federal and state regulatory schemes, ultimate responsibility for healthcare AI systems is assigned to the people and organizations that implement them rather than to the AI systems themselves. Healthcare providers must maintain ultimate authority for clinical decisions even when using AI-powered decision support tools. Healthcare AI applications must require meaningful human involvement in decision-making processes rather than defaulting to fully automated systems.
AI systems must provide healthcare providers with clear, easily accessible mechanisms to override algorithmic recommendations when clinical judgment suggests alternative approaches. Healthcare providers using AI systems must be provided with the tools to achieve system competency through ongoing training and education programs. At the organization level, hospitals and health systems must implement robust quality assurance programs that monitor AI system performance and healthcare provider usage patterns.
Medical schools and residency programs are beginning to incorporate AI literacy into their curricula, while professional societies are developing guidelines for the responsible use of these tools in clinical practice. For digital health developers, these shifts underscore the importance of designing AI systems that complement clinical workflows and support physician decision-making rather than attempting to automate complex clinical judgments.
The rapid advancement of AI in healthcare is reshaping certain medical specialties, particularly those that rely heavily on image interpretation and pattern recognition, such as radiology, pathology, and dermatology. As AI systems demonstrate increasing accuracy in reading X-rays, magnetic resonance images, and other diagnostic images, some medical students and physicians are reconsidering their specialization choices. This trend reflects broader concerns about the potential for AI to displace certain aspects of physician work, though most experts emphasize that AI tools should augment rather than replace clinical judgment.
Conclusion: Balancing Innovation and Responsibility
The healthcare AI landscape in the United States reflects the broader challenges of regulating rapidly evolving technologies while promoting innovation and protecting patient welfare. Despite regulatory uncertainties and implementation challenges, the fundamental value proposition of AI in healthcare remains compelling, offering the potential to improve diagnostic accuracy, enhance clinical efficiency, reduce costs, and expand access to specialized care.
Success in this environment requires healthcare organizations, technology developers, and regulatory bodies to maintain vigilance regarding compliance obligations while advocating for regulatory frameworks that protect patients without unnecessarily hindering innovation. Organizations that can navigate the complex and evolving regulatory environment while delivering demonstrable clinical value will continue to find opportunities for growth and impact in this dynamic sector.
The path forward demands a collaborative approach that brings together clinical expertise, technological innovation, regulatory insight, and ethical review. As 2025 progresses (and beyond), the healthcare AI community must work together to realize the technology’s full potential while maintaining the trust and confidence of patients, providers, and the broader healthcare system. This balanced approach will be essential to ensuring that AI fulfills its promise as a transformative force in American healthcare delivery.

Mississippi College School of Law Becomes First in Southeast to Require AI Training for All Students

Last week, Mississippi College School of Law (MC Law) announced that it would become the first law school in Mississippi, and the first in the Southeast, to require all students to complete an AI certification course prior to graduation. As AI is increasingly integrated into the legal profession, MC Law joins Case Western Reserve University School of Law as the second law school nationwide to establish a mandatory AI certification program. 
“Whether our students plan to be litigators or transactional attorneys, their future employers will expect familiarity with these AI tools. We want the firms hiring our students to be confident that every MC Law grad is competent in AI technologies,” MC Law Dean John Anderson told The National Law Review (NLR).
The program is being designed and taught by Oliver Roberts, an adjunct professor at Washington University School of Law in St. Louis, Co-Director of the WashU Law AI Collaborative, and Founder and CEO of Wickard.ai.[1]
“Mississippi is uniquely positioned to lead in the AI revolution, and we’re proud to make that a reality through this historic and innovative partnership with MC Law,” said Roberts, who also taught the nation’s first required AI course at Case Western earlier this year.
Reflecting Mississippi’s growing AI leadership, Mississippi Governor Tate Reeves recently awarded MC Law a $723,000 grant to establish the Center for AI Policy and Technology Leadership (CAPTL), a joint initiative between the law school and the MC School of Business. CAPTL leverages MC Law’s faculty AI expertise to inform lawmakers, legislative staff, and their outside advisors considering AI-related legislative initiatives, and helps track cutting-edge AI policy developments around the U.S.
MC Law’s inaugural Introduction to AI and the Law program reflects the law school’s latest efforts in AI leadership. Launching this spring, the program combines classroom AI instruction with practical AI tool training. According to Anderson, this groundbreaking program will span roughly 12–14 hours across four sessions, covering core AI concepts, prompting techniques, ethical considerations, and the evolving regulatory landscape. It will culminate in a certification assessment. 
Anderson sees the program as part of a larger Mississippi AI ecosystem with MC Law “perfectly situated” by “the state capitol, the supreme court, the governor’s mansion, and our state’s largest law firms.” MC Law plans to expand CAPTL’s programming to educate and provide AI resources to state and regional leaders. 
MC Law’s decision to expand its AI programming comes amid a growing trend of law schools adding AI education to their curricula. This new initiative suggests these mandates may soon become standard practice in legal education.
“Prospective law students want to know they will be prepared for the practice of law in the digital age, and employers will demand it. When other law schools see what we are doing, I expect they will follow suit,” Anderson added.
AI is quickly being integrated into the legal profession, just as it is in other industries. For law firms, AI tools are simplifying administrative tasks, drafting contracts, researching case law, and assisting with other higher-level work. The next generation of lawyers will either be AI natives or risk becoming Luddites.
More and more law schools recognize that AI literacy must be part of their curricula, and it will inevitably become a key metric in program rankings. The National Law Review, alongside established leaders in tech and legal education, plans to provide the first rankings of law school AI programs in the Fall of 2026.[2] The NLR Law School AI program rankings will be designed to help faculty and administrators evaluate and benchmark their programs. Perhaps more importantly, they will be a resource for applicants to evaluate how law schools are incorporating AI literacy into their curricula. The NLR and its partners will share more on their planned ranking system – metrics, data sources, and law school participation – in the coming months.
Employers will demand that new hires have the tools to practice law in the digital age. Being able to compare law schools, and prospective hires, on an “apples-to-apples” basis calls for a ranking system. The emergence of independent organizations, like the NLR, to assess an institution’s commitment to incorporate essential AI skills will be an important step for the legal profession in this time of rapid adoption of AI. 
There is no doubt that AI literacy is going to be a key metric in rankings moving forward. Law schools, like MC Law, that are at the vanguard of this digital transformation will be noticed by both law school applicants and employers.

[1] Mr. Roberts also serves on the editorial board of The National Law Review’s “AI and the Law” newsletter and several other of its AI-themed properties.
[2] Academic institutions, researchers, or law students who are interested in participating in the design or data collection stage of the NLR Law School AI program ranking system may contact Gary Chodes at [email protected].

Exorcising AI Myths: What’s Really Haunting Your Business – Part 1

Artificial intelligence isn’t a novelty reserved for tech firms; most businesses have been using AI tools for years without realizing it.
Businesses have long leveraged AI capabilities to organize massive document repositories, identify complex patterns in data, and automate manual tasks such as approvals, scheduling, and workflow management. The technology has quietly revolutionized back-office operations across industries, from financial services using AI for fraud detection to healthcare organizations employing it for diagnostic support and patient scheduling. 
Today’s excitement centers on generative AI (GenAI). These sophisticated models can generate text, images, or code from plain-language prompts. Hybrid options combine machine learning and natural-language processing to create remarkably capable chatbots and content generators. However, because these models predict one word at a time based on probability patterns, they can stray off-course and produce convincing but wrong answers, especially when trained on ‘unclean’ data or limited datasets.
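A toy example of that word-by-word probability mechanic helps show why fluent output is not the same as verified output; the vocabulary and probabilities below are invented, and no real model is this simple.

```python
# Toy illustration of probabilistic next-word selection. The vocabulary and
# probabilities are invented; real models work over learned distributions of
# tokens, but the failure mode is the same: a fluent continuation is chosen
# because it is statistically likely, not because it has been verified as true.
import random

next_word_probs = {
    "the court held that the statute": [("applies", 0.55), ("was", 0.30), ("expired", 0.15)],
}

def continue_prompt(prompt: str) -> str:
    words, weights = zip(*next_word_probs[prompt])
    choice = random.choices(words, weights=weights, k=1)[0]
    return f"{prompt} {choice}"

print(continue_prompt("the court held that the statute"))
# Any of the three continuations can be produced; none is checked against a source.
```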
Businesses today must understand both the promises and the pitfalls, implementing appropriate guardrails and verification processes to harness AI’s power while mitigating its risks.
Key AI Concepts Every Business Leader Should Know

Artificial Intelligence (AI): A machine-based system that makes predictions, recommendations, or decisions to meet human-defined objectives. AI encompasses everything from simple rule-based systems to complex neural networks that can learn and adapt over time. 
Machine Learning (ML): Techniques for training AI algorithms to improve performance based on data, allowing systems to learn patterns and make decisions without explicit programming for every scenario. 
Large Language Model (LLM): A deep-learning model trained on vast text sets to capture patterns in natural language, enabling sophisticated text generation and understanding capabilities. 
Generative AI (GenAI): Models that emulate the structure of input data to produce new content, from marketing copy to technical documentation to creative designs. 
Agentic AI: AI systems that can act autonomously with limited supervision to achieve user goals, representing the next frontier in AI capability.
Prompt and Output: The user’s input to an AI system and the AI’s response, where the quality of the prompt significantly influences the quality of the output.
Hallucination: Fabricated or inaccurate information that appears plausible, a critical risk that requires robust verification processes.

AI models are only as good as their training data and implementation architecture. Poor data quality leads to unreliable outputs, while biased training sets can perpetuate or amplify discrimination. 
Businesses should insist on ‘sandboxed’ and ‘gated’ tools (secure environments isolated from other users and data) to protect sensitive information. Additionally, businesses should consider retrieval-augmented generation (RAG) systems that augment prompts with trusted internal information to improve accuracy and relevance. 
By customizing AI tools and their underlying architecture, businesses can combine the innovative potential of generative AI with their own trusted data sources. This strategic blend delivers both creative power and accuracy, significantly reducing false outputs while enhancing practical value for the business.
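As a rough sketch of the RAG pattern described above, the example below retrieves trusted internal text and prepends it to the prompt before generation; the document store, keyword scoring, and prompt format are simplified stand-ins rather than any particular vendor’s API.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern: trusted
# internal documents are retrieved first and prepended to the prompt so the
# model answers from vetted material rather than from memory alone. The
# document store, scoring, and prompt template are simplified placeholders.

INTERNAL_DOCS = {
    "travel-policy": "Employees must book international travel 21 days in advance.",
    "expense-policy": "Meal reimbursement is capped at $75 per day.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap; real systems use vector search."""
    q_terms = set(question.lower().split())
    scored = sorted(
        INTERNAL_DOCS.values(),
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the daily meal reimbursement cap?"))
```

The design point is simply that the model is asked to answer from vetted material supplied at query time, rather than from whatever it absorbed during training.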
Why Companies Adopt AI (And Why Some Don’t)
The most common reasons for avoiding GenAI include lack of organizational priority, concerns about data use and privacy, and distrust of output quality. Many executives worry about regulatory compliance, potential liability issues, and the challenge of integrating AI into existing workflows. Some businesses fear that AI adoption might displace workers or create dependency on technology they don’t fully understand. These concerns are legitimate and deserve careful consideration in any AI strategy. 
However, among businesses that have embraced AI, most users employ GenAI daily to weekly, with adoption rates accelerating rapidly. Increased efficiency tops the list of reported benefits, with companies seeing productivity gains across diverse functions. Improved communication follows closely, as AI helps craft clearer messages, translate complex technical concepts for diverse audiences, and facilitate cross-functional collaboration. 
Businesses also report significant cost savings through automation of routine tasks, enhanced accuracy in data analysis and decision-making, and better strategic insights from pattern recognition in large datasets. The key takeaway: AI can dramatically boost productivity and competitive advantage, but businesses need comprehensive education programs and robust governance frameworks to realize these benefits while managing risks. 
Five Strategic Essentials for Business Leaders

Identify AI opportunities strategically: Look beyond obvious automation targets to find transformative applications. Examine repetitive tasks, data-heavy processes, and decision bottlenecks where AI can improve both productivity and outcomes. Consider customer service enhancement, predictive maintenance, demand forecasting, and personalization opportunities. Map your value chain to identify where AI could create competitive advantages, focusing on areas with clear ROI potential and manageable implementation complexity. 
Develop comprehensive governance and usage policies: Establish clear policies that encourage innovation while mitigating legal, ethical, and business risks. Your governance framework should address data privacy, algorithmic bias, transparency requirements, and accountability structures. Include specific guidelines for different use cases, from customer-facing applications to internal productivity tools. Regular policy reviews ensure your framework evolves with technology and regulatory changes.
Improve departmental efficiency systematically: Adopt AI solutions that streamline workflows and automate routine tasks across all departments. Start with pilot programs in receptive departments, measure results carefully, and scale successful implementations. Focus on augmenting human capabilities rather than replacing workers, emphasizing how AI can free employees for higher-value activities that require creativity, empathy, and strategic thinking.
Protect intellectual property rigorously: Ensure that AI use respects your organization’s IP while avoiding infringement of others’ rights. Implement clear protocols for data handling, establish ownership rights for AI-generated content, and maintain audit trails for AI-assisted work. Consider how AI tools might inadvertently expose proprietary information and implement appropriate safeguards.
Address AI risks in contracts proactively: Include comprehensive AI-related terms in vendor agreements to manage data privacy, cybersecurity, and liability concerns. Negotiate clear provisions regarding data usage, model training rights, indemnification for AI errors, and compliance with evolving regulations. Establish performance standards and remedies for AI system failures. 

Successfully implementing these strategic essentials requires commitment from leadership, investment in employee education, and a willingness to iterate as you learn. Businesses that approach AI adoption thoughtfully, with clear strategies and appropriate safeguards, position themselves to thrive in an increasingly AI-driven business landscape.

Attorney Sanctioned $10,000 For Citing Nonexistent, AI-Generated Legal Authority

Noland v. Land of the Free, L.P., 2025 WL 2629868 (Cal. Ct. App. 2025)
Sylvia Noland asserted 25 causes of action against her former employer, including claims for wrongful termination, PAGA and other Labor Code violations, breach of contract, and intentional infliction of emotional distress. The employer filed a successful motion for summary judgment, which resulted in dismissal of the case. In the appellate briefs filed by plaintiff’s counsel, “nearly all of the quotations… ha[d] been fabricated.” In addition, a few of the cases purportedly relied upon by the appellant-employee “d[id] not exist at all.” Plaintiff’s counsel acknowledged to the Court that he had relied on AI “to support citation of legal issues” and that the fabricated quotes were AI-generated; counsel further asserted “he had not been aware that generative AI frequently fabricates or hallucinates legal sources and, thus, he did not ‘manually verify [the quotations] against more reliable sources.’” The Court of Appeal declined to permit the filing of revised briefs and concluded that counsel’s reliance on fabricated legal authority rendered the appeal frivolous and violative of the California Rules of Court. Because the employer’s counsel did not alert the Court to the existence of the fabricated citations and apparently learned of same from the Court, the Court ordered plaintiff’s counsel to pay $10,000 in sanctions to the clerk of the Court rather than to the employer or its counsel.

California’s New AI Laws: What Just Changed for Your Business

California just passed comprehensive AI safety legislation, enacting 18 new laws that affect everything from deepfakes to data privacy to hiring practices. If you do business in California — or use AI tools — here’s what you need to know now.
The State That Wouldn’t Wait
While Washington debates federal AI regulation, California has already written the rulebook. This week, Governor Gavin Newsom signed a sweeping package of 18 AI bills into law, making California the first US state to establish comprehensive governance over artificial intelligence.
The timing matters. With recent federal efforts to preempt state-level AI regulation now stalled, California’s move sets a precedent that other states are already racing to follow. As with its early efforts in the privacy space (through the California Consumer Privacy Act of 2018), California’s AI rules are quickly becoming everyone’s AI rules.
The Flagship: California’s AI Safety Law
The centerpiece of this legislative package is the Transparency in Frontier Artificial Intelligence Act (TFAIA), formerly Senate Bill 53. This landmark law targets the developers of the most powerful AI systems and establishes California as the first state to directly regulate AI safety. It also builds on the recommendations from the Joint California Policy Working Group on AI Frontier Models.
What TFAIA Requires
Developers of “frontier” AI models must now:

Publish their safety plans: Companies must disclose how they’re incorporating national and international safety standards into their development processes.
Report critical incidents: Both companies and the public can now report serious safety events to the California Office of Emergency Services.
Protect whistleblowers: Employees who raise concerns about health and safety risks from AI systems gain legal protection.
Support public AI research: Through a new consortium called CalCompute, California is building a public computing cluster for developing “safe, ethical, equitable, and sustainable” AI.

The Industry Pushback — and What Got Weakened
The tech industry lobbied hard, and it shows. The final version of TFAIA is considerably softer than earlier drafts:
Incident Reporting Narrowed: Companies are only required to report events that result in physical harm. Financial damage, privacy breaches, or other non-physical harms? These aren’t covered under mandatory reporting.
Penalties Slashed: The maximum fine for a first-time violation — even one causing $1 billion in damage or contributing to 50+ deaths — dropped from $10 million to just $1 million. Critics note that this creates a troubling cost-benefit calculation for large tech companies, which has arguably played out in other areas.
The message? For billion-dollar corporations, safety violations may be just another line item in the budget.
The Broader Package: 18 Laws Reshaping AI Use
Beyond TFAIA, California’s new laws, many of which took effect in January 2025, create compliance obligations across multiple industries. For instance:
1) Deepfakes and Election Integrity
California is taking direct aim at AI-generated deception:

Criminal penalties for deepfake pornography: Creating or distributing non-consensual intimate images using AI is now a crime (SB 926).
Election protections: Laws like AB 2655 and AB 2355 require platforms to label or block “materially deceptive” election content, particularly AI-generated videos or audio that could damage candidates or mislead voters.

Real-world impact: Political campaigns and content platforms must now implement detection and labeling systems before the 2026 election cycle.
2) Your AI Data Is Now Personal Information
Here’s a change that affects everyone: AI-generated data about you is now officially “personal information” under California’s Consumer Privacy Act (AB 1008).
What does this mean practically?

AI systems that create profiles, predictions, or inferences about you must now treat that output data with the same protections as traditional personal information.
You gain new rights to access, delete, and control AI-generated data about yourself.
Neural data — information about your brain activity — gets even stronger protection as “sensitive personal information” (SB 1223).

3) The Workplace: No More AI Autopilot
New regulations from California’s Civil Rights Department, effective October 1, 2025, fundamentally change how AI can be used in employment:
The Core Rule: Employers can’t use automated decision systems (ADS) that discriminate based on protected categories under the Fair Employment and Housing Act.
The Requirement: Companies should conduct bias audits of their AI tools used for hiring, promotion, and evaluation.
The Shift: This moves liability away from proving intent to discriminate and toward demonstrating impact. If your AI tool produces discriminatory outcomes — even unintentionally — you’re exposed to legal risk. This is not dissimilar to recent shifts in the children’s privacy law landscape that impose specific constructive knowledge standards.
Practical example: That resume-screening AI you’re using? You need documentation showing you’ve tested it for bias against protected groups. No audit? You’re rolling the dice.
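What such a test can look like, in its simplest form, is a comparison of selection rates across groups, often summarized with the EEOC’s “four-fifths” rule of thumb. The sketch below is illustrative only, with invented numbers; a real audit would cover more metrics, more groups, and a documented methodology.

```python
# Illustrative-only sketch of one metric commonly used in bias audits: the
# "four-fifths rule" comparison of selection rates across groups. The numbers
# are invented, and a real audit would examine many more metrics and groups.

applicants = {"Group A": 200, "Group B": 150}
advanced   = {"Group A": 80,  "Group B": 42}   # candidates the tool advanced

rates = {g: advanced[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```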
4) Healthcare: Keeping Humans in the Loop
California’s new healthcare AI laws establish a critical principle: algorithms can’t make final medical decisions.
Under SB 1120, AI systems are prohibited from independently determining medical necessity in insurance utilization reviews. A physician must make the final call.
Why this matters: This protects patients from algorithmic denials while still allowing AI to assist with analysis and recommendations. It’s a model other states are already adopting.
What This Means for Your Business
If You’re a Tech Company
Immediate action items:

Review your AI systems against the new compliance requirements.
Document your safety practices and bias testing procedures.
Establish whistleblower protection policies.
Prepare for increased scrutiny from California regulators.

Strategic consideration: California’s strictest-in-the-nation rules often become de facto national standards. Building for California compliance now may save costly adjustments later.
If You Use AI Tools
Questions to ask your vendors:

Have you conducted bias audits on this system?
What happens if your AI produces a discriminatory outcome?
Do your contracts shift all liability to us?
How do you handle California’s new data privacy requirements?

Red flag: Vendors that can’t answer these questions clearly, or whose contracts dump all AI-related liability onto you, pose significant risk.
If You’re in Healthcare
Priority actions:

Review all AI-assisted utilization review processes to ensure physician oversight.
Train staff on new disclosure requirements for AI in patient interactions.
Document human review procedures for all AI-driven medical decisions.

This analysis is current as of September 30, 2025.

Behind the 97%: Claims of AI “Universality” Among Lawyers May Be Premature

A recent, widely circulated report, “Benchmarking Humans & AI in Contract Drafting” (Guo, Rodrigues, Al Mamari, Udeshi, and Astbury, September 2025), made headlines with a striking claim: 97% of lawyers use generative AI for legal work. While it is no secret that AI has become increasingly prevalent in legal practice, this figure suggests a true watershed moment in which the technology has essentially reached “universality.” But when we explore the underlying methodology, a number of important concerns emerge that cast doubt on the now oft-quoted 97% figure and the attendant claims of “universality.”
The study, based on research conducted in early 2025, was primarily a benchmarking exercise that compared the outputs of 13 AI tools against a group of in-house lawyers across 30 drafting tasks. This portion of the study, which found that top-performing AI tools matched or even outperformed human lawyers on first-draft reliability while lagging slightly on usefulness, is valuable. It provides concrete findings regarding the areas of law where AI outperforms, or still falls short of, human attorneys.
However, the widely cited “97% of lawyers use AI” statistic comes from a survey of just 72 respondents. Given that recent estimates place the total number of lawyers in this country at over a million, a sample of fewer than 100 may not be representative of the broader population.
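For rough context on sampling error alone (assuming, for illustration, that roughly 70 of the 72 respondents answered yes), a standard confidence-interval calculation gives a 95% range of roughly 90% to 99%. That range reflects random sampling error only and says nothing about the selection bias discussed below, which cannot be quantified from the reported data.

```python
# Back-of-the-envelope illustration of sampling uncertainty for a proportion of
# roughly 70 "yes" responses out of 72 (about 97%). This captures random
# sampling error only; it says nothing about self-selection bias, which cannot
# be bounded from the reported data.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return center - margin, center + margin

low, high = wilson_interval(70, 72)
print(f"95% interval: {low:.1%} to {high:.1%}")   # roughly 90% to 99%
```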
The report did not disclose the demographic profile of the law firms/lawyers participating (e.g., the size of the law firm, corporate vs. consumer practice, law firm vs. in-house) or the recruitment methods used to source the 72 respondents included in the study. This left key questions about the reliability of the bold 97% statistic, especially given the small sample size. When contacted by The National Law Review (NLR), the report’s authors stated that the responses were obtained through “direct outreach, LinkedIn, and a practice-community network.” The authors further acknowledged “a risk of selection bias,” noting that lawyers already engaged with AI may have been more willing to respond.
In an email response to NLR, the authors reported their study included a diverse cohort of lawyers spanning 24 jurisdictions, and that a more comprehensive demographic overview of the study participants would be publicly released in the coming weeks, which may provide helpful nuance. Until then, however, a conclusion that AI use among lawyers has reached universality should be treated with healthy skepticism. In an era of frequent claims about technology now surpassing once-thought unreachable benchmarks (e.g., the passing of the Turing Test, reaching AGI, reaching superintelligence, and quantum supremacy), any new claim of this ilk should be based on a strong statistical foundation using best-in-class experimental methodologies. For claims to be accepted, the conclusion should be supported by multiple studies by different research groups and be of sufficient scale. Importantly, the results for any experiment should be demonstrated to be replicable.
The authors also acknowledged that the number of lawyers contacted for the survey was not recorded, pointing to the possibility of self-selection bias (i.e., a disproportionate number of lawyers who do not use AI may have simply chosen to opt out of the study for any number of reasons, including not wanting to appear out of date or due to embarrassment).
Moreover, “universality” itself has multiple dimensions. Does it refer to a lawyer who has tried a generative AI tool once, one who relies on it daily, or one who integrates it into substantive client work? The concept of “universality” has little value without a clear definition of what constitutes use.
While these limitations weaken the claims of universality, they do not diminish the value of the benchmarking work presented in the report. The report offers important and timely findings regarding how AI compares to humans in terms of reliability, usefulness, and workflow support. Still, adoption numbers should be treated with caution. While the study may offer interesting insights into how AI is being used by early adopters, without larger and more representative samples, adoption figures risk overstating the pace of change.

Declawing the CAT: SEC Trims Back Audit Trail Obligations

The Securities and Exchange Commission (SEC) issued a conditional exemptive order that grants targeted relief from several requirements under the national market system plan governing the consolidated audit trail (CAT), SEC Rule 613 of Regulation NMS, and SEC Rule 17a-1 under the Securities Exchange Act of 1934, as amended. The relief reduces CAT’s overall operational complexity and costs while preserving its surveillance role. The order shows the SEC’s effort to recalibrate the CAT’s balance between transparency and efficiency.

Adjustments to Lifecycle Linkages. The order introduces one of its most significant changes through new requirements for lifecycle linkages. Previously, the Financial Industry Regulatory Authority (FINRA) and the securities exchanges had to submit interim lifecycle linkages by T+1 at 9 p.m. and final linkages by T+5 at 8 a.m. Under the exemptive relief, FINRA and the securities exchanges must now provide only the final linkage report by T+5 at 8 a.m. However, regulators may still direct FINRA CAT to generate interim linkages if they need them before the T+5 deadline.
Changes to Late Data Processing. The SEC also modified how the CAT handles late-reported data. The SEC ended the “Full Replay” process, which was formerly used to rebuild late records into complete order lifecycles. Instead, the “Enhanced Late to the Lifecycle” process will continue on a quarterly basis for trade dates within the past three years, but FINRA CAT will not have to reprocess older data. Regulators can still request a Full Replay for specific trade dates, which FINRA CAT must run for the prior week’s data. FINRA and the securities exchanges must continue to notify users of any change to how late data is handled. 
Modifications to Query Tools and Data Retention. The order also narrows certain obligations related to the online targeted query tools (OTQT). FINRA CAT no longer must maintain all OTQT functions, but it must keep core features such as monitoring, logging, and reporting for direct queries and bulk data extracts. The order provides a two-month transition period to prevent disruption before any decommissioned OTQT functionality takes effect.
Data Storage and Retention. The SEC also approved new data retention standards. FINRA CAT may now delete data older than five years instead of six and move data older than three years into cold storage. It may delete interim operational data after fifteen days and purge options market quotes after one year.

Implications for Market Participants
The SEC’s exemptive order demonstrates a willingness to address persistent industry concerns about the CAT’s scope and cost.[1] The exemptive relief refines operational requirements to make the system more practical and manageable and marks another phase in the SEC’s ongoing supervision of the CAT.
Market participants should be prepared for further changes to CAT requirements. The SEC’s recent order explicitly notes that additional reforms remain necessary to respond to recent judicial and regulatory developments appropriately.

Employment Tip of the Month – October 2025

Q: Can employers safely use artificial intelligence (AI) in the hiring process?
A: While some states and jurisdictions do not have any direct law or statute regulating the use of AI during hiring, employers should be concerned about the potential bias in the output from AI used in employment decisions and take steps to ensure that they are not in violation of discrimination and privacy laws.
Background on Use of AI
The adoption of AI has accelerated rapidly in recent years, changing the way we work across many industries. From automating routine tasks to assisting in making complex decisions, AI has brought exciting possibilities. That said, this rapid advancement and expansion has raised serious concerns regarding AI in the employment context, particularly around algorithmic bias, workplace surveillance, and job displacement. As AI becomes more integrated into workplace management, it’s imperative for employers to stay ahead of the evolving legal standards to ensure compliance, protect employee rights, and manage risk around accountability and fairness.
Public officials and legislatures are increasingly focused on the potential risks and benefits that come with using AI technology. In the employment realm specifically, laws affecting employers include those that regulate the use of automated employment decision tools (AEDTs) in the hiring process. Examples of this legislation can be seen in New York Local Law 144 and Illinois H.B. 3773. New York Local Law 144 was the first of its type to create obligations for employers when AI is used for employment purposes. Illinois followed by enacting H.B. 3773, making it the second state to pass broad legislation on the use of AI in the employment context.
More legislation is on the way in 2025, generally falling into two categories. The first would require employers to provide notice that an automated decision-making system is being used to make employment-related decisions. The second would limit the purposes for which, and the manner in which, an automated decision system may be used. This article highlights proposed, enacted, and failed legislation and offers takeaways on what to watch moving forward.
Currently Enacted Legislation
New York has followed a broad trend toward bringing transparency to the use of automated decision systems, including AI, in employment and other areas through two pieces of legislation. New York Local Law 144, which took effect on January 1, 2023, prohibits employers and employment agencies from using an AEDT in New York City unless the tool has undergone a bias audit and the required notices have been provided. If employers or employment agencies use an AEDT to substantially assist them in assessing or screening candidates at any point in the hiring or promotion process, they must comply with the law’s requirements. The second piece of legislation, New York S.B. 822, effective July 1, 2025, amended existing law on AI and employment as it applies to state agencies and prohibits the use of AI to affect employees’ existing rights under a collective bargaining agreement.
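To make the bias-audit concept more concrete, the sketch below computes selection rates and impact ratios by group, the type of metric commonly used in such audits (and reflected in the familiar four-fifths rule of thumb). It is a hypothetical Python illustration, not the methodology mandated by Local Law 144 or any other law; the applicant data and the 0.8 threshold are assumptions for demonstration.

```python
from collections import Counter

# Hypothetical outcomes from an automated screening tool; the group labels,
# counts, and 0.8 threshold are illustrative assumptions only.
applicants = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in applicants)
selected = Counter(group for group, advanced in applicants if advanced)

# Selection rate per group, then impact ratio relative to the highest-rate group.
rates = {group: selected[group] / totals[group] for group in totals}
highest_rate = max(rates.values())
impact_ratios = {group: rate / highest_rate for group, rate in rates.items()}

for group, ratio in impact_ratios.items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```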
Illinois joined New York in passing legislation to regulate AI and the risks associated with its use in the employment context. H.B. 3773 amended the Illinois Human Rights Act (IHRA) and affects any employer who currently uses, or intends to use, AI, including generative AI, to make decisions around recruitment, hiring, promotions, training, discharge, or any other term or condition of employment. The amendment prohibits employers from using AI in ways that may lead to discriminatory outcomes based on characteristics protected under IHRA. Additionally, employers are required to give notice if they are using AI in this realm. The Illinois Department of Human Rights and Illinois Human Rights Commission will enforce the law, with remedies possibly including back pay, reinstatement, emotional distress damages, and attorneys’ fees. This goes into effect January 1, 2026.
Additional legislation has been enacted in Colorado, Maryland, California, and other states. The Colorado AI Act, which takes effect on February 1, 2026, regulates the use of high-risk AI systems by imposing compliance obligations on the developers of those systems and the businesses that deploy them. Because the Act expressly covers the employment context, Colorado employers are subject to the law. On June 30, 2025, California adopted regulations under the Fair Employment and Housing Act addressing the use of AEDTs in employment decisions, effective October 1, 2025. Maryland has also enacted legislation, section 3-717, that forbids the use of facial recognition services to create a facial template during an applicant’s interview without a waiver signed by the applicant.
Failed Legislation
Despite the influx of enacted legislation regarding AI in employment, many proposals have failed to become law.

In Connecticut, a bill that would have implemented AI protections for employees and limited employers’ use of electronic monitoring failed.
In Texas, three separate bills failed to pass. One related to AI training programs and would have imposed requirements on the developers of those systems; another would have prohibited state agencies from using an automated employment decision tool to assess a job applicant’s fitness for a position unless the applicant was notified and provided with information and any bias was mitigated.
In Georgia, a bill that would have prohibited surveillance-based price discrimination and wage discrimination ultimately failed.
Lastly, the Nevada legislature proposed a bill to require AI companies to maintain policies to protect against bias, generation of hate speech, bullying, and more. The bill would have imposed requirements on employers, landlords, financial institutions, and insurers to uphold these standards.

Even legislation that reached the final stage of the process has had difficulty being passed. For example, Virginia Governor Glenn Youngkin vetoed an AI bill on March 24, 2025, that would have regulated how employers used automation in the hiring process. Specifically, the bill would have regulated both creators and users of AI technology across multiple use cases, including employment. Youngkin stated that he vetoed the bill out of fear that it would erode Virginia’s progress in attracting AI innovators and tech startups.
States remain keenly interested in regulating these emerging AI tools but have yet to align on the best approach. Much of the legislation that fails lacks the specificity and detail that the enacted laws on this topic contain.
Proposed and Pending Legislation
Despite the frequent failure of legislation regarding AI in the employment context, an array of bills is pending throughout the United States in 2025, with three overlapping themes.

The first theme is legislation requiring employers to provide notice that an automated decision-making system is being used to make employment-related decisions (see California Senate Bill 7, Illinois Senate Bill 2203, Vermont House Bill 262, Pennsylvania House Bill 594, and New York Senate Bills 4349 and 185).
The second is legislation limiting the purposes for which, and the manner in which, an automated decision system may be used to make such decisions (see California Senate Bill 7, Colorado House Bill 1009, Massachusetts House Bill 77, and New York A.B. 3779 and 1952).
The third is legislation that would allow bargaining between the state and its employees over matters related to the use of AI (see Washington Senate Bill 5422). States such as New York aim to expand measures holding employers accountable for AI-driven employment decisions, while others, including Massachusetts, hope to develop a comprehensive legal framework for the issues that arise at the intersection of AI and employment.

What This Means for Employers
The influx of legislation makes it imperative that employers pay careful attention and strengthen their AI compliance practices. In doing so, employers should focus on the following points.
Be Transparent: Job candidates and employees should be informed when AI tools are used in their selection process or evaluations. On the flip side, employers may want to ask for confirmation that candidates did not use AI to produce application materials.
Prepare for Accommodations with AI Use: Have accommodation plans in place should a candidate seek a disability accommodation, particularly recognizing that many laws and federal regulations instruct employers to provide an alternative to the AI tool.
Develop AI Use Policies: In crafting policies, employers should consider how their employees may use AI along with how employers want them to use the technology. Policies should have usage guidelines and best practices.
Check and Audit Vendors: Employers should be deliberate in selecting AI vendors that ensure their systems are unbiased, can be audited, and can properly address reasonable accommodations in the recruiting and hiring process. If possible, employers should (1) obtain representations that the AI tools, as used in workplace contexts, are legally compliant and (2) negotiate indemnification protections from AI vendors and secure their cooperation in defending against related claims.
Validate Results: Employers should ensure a diverse applicant pool before applying AI and consider engaging an industrial-organizational psychologist to conduct validation research. Validate the tool’s results and compare them to the results human decision-makers have obtained.
Stay Informed and Stay Tuned to Legal Shifts: It is important to stay up to date on existing and pending legislation related to AI to ensure AI tools are consistent with federal, state, and local law, and to update policies and practices consistent with legal developments.
Retain Human Oversight: Ensure critical decisions aren’t made solely by automated tools. Train HR teams on when to override algorithmic rankings, and audit results for desired (and non-discriminatory) outcomes.
Avoid Litigation Regarding AI Workplace Tools: To avoid legal entanglement, businesses should carefully review any AI tools used for employment functions, potentially turning to both technical and employment law experts for independent audits to ensure the tools are not biased and do not otherwise violate applicable employment laws. This is recommended even in jurisdictions with no AI-specific discrimination laws, given the rapid adoption of AI workplace tools and the potential for liability under existing, non-AI-specific employment laws.
Finally, consulting with an attorney about employment best practices can help employers navigate compliance with all applicable federal, state, and local laws.

How Legal Practices Can Use AI to Vastly Improve the Client Experience

Businesses across all industries are transforming their approach to the client experience with AI – and law practices are no exception.
“Although traditionally reluctant to adopt new technologies, law firms are already making significant investments in internal and third-party approaches to generative AI,” law professors Catherine Gage O’Grady from the University of Arizona and Casey O’Grady of Harvard University wrote in a study published this year. They anticipate that legal practices “will start to move toward agentic systems that combine human lawyers with AI agents.”
Law firm adoption of these new AI technologies for client work can create substantial opportunities, but it can also expose firms to significant pitfalls. Here’s a roadmap.
Unify the Customer Experience
There are myriad uses for AI technologies in the practice of law. One of the most important things for law firms is to deploy AI technology to improve the underlying client experience (CX). Research demonstrates that even clients who obtain positive legal outcomes and are happy with the quality of their legal work are still often disappointed in their overall relationship because the CX is poor. 
“Surveys reveal that at least 50% of successful clients report dissatisfaction with their attorneys, not due to incompetence or negligence, but because of poor communication,” Attorney Journals reported. “In a study of 44 successful clients, 60% cited communication issues as their primary concern. A significant report by the International Bar Association involving 219 senior counsels found that poor communication was the leading reason clients terminated their attorney-client relationships. This issue spans all demographics and practice areas.”
To be as accessible as possible, attorneys need to operate across as many channels as possible – cell phone, video conferencing, text, email, client portals, chat, DMs, LinkedIn, WhatsApp, etc. While this “omnichannel experience” may be essential, it is inherently disorganized and can lead to a scattered trail of information about clients and their important legal matters – and client communications will suffer.
Even the greatest lawyers can’t be expected to remember everything that was said, planned, and promised, particularly when balancing a packed roster of many clients. That’s why today’s legal practices would often benefit from a unified customer experience. This type of approach helps “ensure seamless customer journeys, enabling businesses to meet the evolving expectations of consumers,” a recent study in the American Journal of Economic and Management Business explains.
Unified customer experience management (UCXM) platforms transcribe calls, pull together information from every communication channel, and use natural language processing (NLP) to create useful summaries and key insights that attorneys can easily spot at a glance.
A growing number of law firms are adopting UCXMs. For example, The Law Offices of Lee Arter deployed a UCXM solution from an industry-leading tech provider and saw strong results. “Our aim is to be accessible and easy for people to get a hold of us,” explained attorney Eric DeBellis. The California consumer law firm “embraced” a UCXM platform “and we love it,” he added.
Technology providers increasingly see their legal-sector clients gravitating toward these newer, connectivity-enhancing technologies. According to Jamal Khan, head of the Helix Center for Applied AI & Robotics (CNXN): “We continue to see strong interest from law firms in adopting AI tools to improve the way they communicate with clients. Whether it is analyzing conversations or helping surface insights more quickly, these solutions are becoming important to delivering a higher quality client experience.”
Ensure Access to All Pertinent Data
These tools go beyond just combining client interactions into a single record. They wrap in key documents and schedules, creating a holistic view of each individual. In a new study on AI contract review in the cloud, Gargi Sharma and Himani Sharma of Manipal University in India note that these technologies can extract and classify crucial information from legal documents. 
“This consequently makes the search for important provisions much easier,” they write. Also, “The artificial intelligence recognizes disparities, risks, and improprieties that may pose a financial or legal risk by cross-referencing with predefined regulations, such as identifying inconsistencies, hazards, and non-standard terms that might be financially or legally risky.” 
For all this to work, firms must ensure that these platforms have access to all the necessary data. Any AI tool is only as good as the information it ingests; data silos can stand in the way of an otherwise well-planned UCXM implementation.
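As a simplified illustration of the clause-screening idea described above, the sketch below checks whether a clause on a given topic matches an expected standard form and flags departures for review. It is a toy example with hypothetical checklist entries; production contract-review tools rely on trained NLP models rather than keyword rules like these.

```python
import re

# Toy checklist of expected "standard" wording per clause topic; the topics and
# patterns are illustrative assumptions, not any vendor's actual rules.
CHECKLIST = {
    "limitation of liability": re.compile(r"shall not exceed", re.IGNORECASE),
    "governing law": re.compile(r"laws of the state of", re.IGNORECASE),
}


def screen_clause(topic: str, clause: str) -> str:
    """Flag a clause whose wording departs from the expected standard form for its topic."""
    pattern = CHECKLIST.get(topic)
    if pattern is None:
        return "unknown topic"
    return "standard" if pattern.search(clause) else "non-standard - review"


print(screen_clause("limitation of liability",
                    "Liability for indirect damages is unlimited."))               # non-standard - review
print(screen_clause("governing law",
                    "This Agreement is governed by the laws of the State of Delaware."))  # standard
```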
Keep Humans in Charge
These new UCXM tools can help catch details that human reviewers at law firms may overlook, a huge potential benefit. Also, as AI-infused tech becomes increasingly ingrained in everyday life, clients may warm up to the idea of interacting regularly with AI agents employed by their legal counsel. 
But none of this progress – in terms of the onward march of AI and technology in the legal profession – means disregarding the accountability of the lawyer directly to the client. Ken Withers, executive director of The Sedona Conference, is developing guidance on the implementation of AI by lawyers and the courts. He notes that Rule 1.4 of the ABA’s Model Rules of Professional Conduct requires regular communication and consultation with the client, and Rule 1.1 on “competence” addresses technological proficiency in running a legal practice. 
“UCXM appears to be a potentially low-risk, high-reward application of AI to the practice of law,” Withers says. “But the lawyer’s ultimate responsibility is to make sure client work is done professionally, which means the lawyer must be assured that the technology performs as represented.” He adds, “It’s distinctly possible that in the near future, as this application of AI proves itself and is accepted by the profession, legal malpractice insurance carriers may insist on it.”
While tools such as UCXM solutions can operate without risking client privacy or confidentiality, it’s up to humans to oversee these systems and make sure they are used as intended. For example, attorneys must still independently verify and approve anything that is sent out publicly or filed in court. Adopting AI does not require replacing people; it can mean freeing people to focus on higher-level tasks. 
Josh Gablin, an attorney specializing in AI and ethics, furthers this theme. “Many AI tools are now available for lawyers that make their work more efficient,” he says. “But lawyers must still abide by existing ethical obligations, such as keeping the client informed about their matter and explaining it sufficiently for the client to make informed decisions. The risk of poor communication with clients and a possible breakdown in the attorney-client relationship can be exacerbated by the modern hodgepodge of disjointed communication methods. Fortunately, new AI tools, such as a unified UX platform focused on enabling organized communication, are potential game-changers in this area.”
By harnessing these technologies, lawyers can deliver clients the personalized experiences they seek. Clients feel the difference – and so do prospects. 
In a FindLaw survey of more than 2,000 people who had legal needs, more than half (56%) said they take action within a week or less. “Prompt responsiveness is key when it comes to capturing prospective clients, so incorporating web chat and call answering services on your website ensures that no lead is left unattended,” the survey reports. “These tools enable your firm to manage inquiries outside of regular business hours, facilitating quick connections with prospects while reducing time spent on pre-screening tasks.” 
Today’s clients have higher expectations than ever. They don’t just compare their experience with your law firm to their experiences with other firms; they compare it to the best customer experiences they’ve had with any kind of business. The bar is high. But with the power of AI, legal practices can clear it.

DISCLAIMER:
The views and opinions expressed in this article are those of the author or those quoted herein, and not necessarily those of The National Law Review (NLR).
PRODUCT DISCLAIMER:
The National Law Review (NLR) does not endorse or recommend any commercial products, processes, or services. References to any specific commercial products in this article are for informational purposes only.

Landmark Patent Appeal Decision Strengthens Protection for AI and Machine Learning Innovations

A significant Patent Trial and Appeal Board (PTAB) decision authored by the US Patent and Trademark Office (USPTO) leadership, including the new USPTO Director John A. Squires, signals the importance of artificial intelligence (AI) innovations to the US economy and paves the way for patenting of AI and machine learning (ML) technologies.

On September 26, the PTAB’s Appeals Review Panel (ARP) issued a decision addressing the patent eligibility of claims in a patent application filed by Google’s DeepMind Technologies and directed to methods for training ML models. The decision vacates a prior ground of rejection under 35 U.S.C. § 101, holding that the claims at issue, when considered as a whole, are not directed to a patent-ineligible abstract idea but instead reflect a technological improvement in the field of machine learning.
Claimed Invention
The patent application at the center of this decision concerns a computer-implemented method for training an ML model on multiple tasks. The method involves determining the importance of model parameters to a first task, then training the model on a second, different task while protecting performance on the first task. This is achieved by adjusting model parameters to optimize an objective function that incorporates a penalty term based on the previously determined importance measures.
The specification highlights several technical advantages, including reduced storage requirements, lower system complexity, and the ability to preserve performance across sequential tasks, which addresses the well-known challenge of “catastrophic forgetting” in continual learning systems.
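For readers curious what an importance-weighted penalty of this kind can look like, the sketch below adds a quadratic penalty, scaled by per-parameter importance estimates from the first task, to the loss on the second task, in the style of elastic weight consolidation. It is a generic illustration of the technique described, not a reproduction of the claims; the function, parameter names, and scaling factor are assumptions.

```python
import torch

# Illustrative only: a quadratic, importance-weighted penalty that discourages the model
# from drifting away from parameters learned on the first task while training on a second.
def penalized_loss(model, task_b_loss, anchor_params, importance, lam=100.0):
    """task_b_loss: loss computed on the new (second) task.
    anchor_params / importance: per-parameter tensors saved after training on the first
    task (hypothetical names), keyed by parameter name."""
    penalty = 0.0
    for name, param in model.named_parameters():
        # Penalize movement away from first-task parameters in proportion to
        # how important each parameter was estimated to be for the first task.
        penalty = penalty + (importance[name] * (param - anchor_params[name]) ** 2).sum()
    return task_b_loss + (lam / 2.0) * penalty
```

In approaches of this kind, the importance values are often estimated from squared gradients of the first-task loss (a diagonal Fisher approximation), though other importance measures fit the same overall structure.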
Procedural History
The Board initially affirmed the rejection of all pending claims under 35 U.S.C. § 103 and introduced a new ground of rejection under § 101, finding the claims directed to an abstract idea, specifically, a mathematical calculation. The applicant sought rehearing, arguing that the claims provided a technical improvement in ML technology.
Legal Framework
The ARP applied the two-step framework established in Alice Corp. v. CLS Bank and further articulated in the Manual of Patent Examining Procedure § 2106. The analysis first considers whether the claims are directed to a judicial exception (such as an abstract idea) and, if so, whether additional elements integrate the exception into a practical application.
Key Findings

The ARP agreed that the claims recite an abstract idea in the form of a mathematical calculation. However, the analysis did not end there.
Upon review, the ARP found that the claims, when considered as a whole, integrate the abstract idea into a practical application by improving the operation of the ML model itself.
The decision emphasized that the claims address technical challenges in continual learning, such as preserving knowledge from previous tasks and reducing storage needs, which constitute improvements to computer technology.
This decision underscores that claims directed to ML methods may be patent-eligible when they provide a specific technological improvement, rather than merely reciting abstract mathematical concepts or generic computer implementation.
The decision also highlights the continued relevance of §§ 102, 103, and 112 as the primary statutory tools for assessing the scope and validity of patent claims, rather than relying on § 101 to categorically exclude innovations in AI and ML.

Implications
While the decision addresses the legal framework for patent eligibility of AI inventions, its broader significance lies in the clear policy direction set by the USPTO’s leadership regarding the patentability of AI and ML inventions. With this decision, the USPTO signals strong support for protecting AI innovations through the patent system.
This approach encourages inventors and companies to pursue patent protection for advances in AI and ML, fostering increased investment and growth in these fields. The decision reflects a commitment to ensuring that US patent policy keeps pace with technological developments, maintaining America’s leadership in AI by providing robust incentives for innovation and reducing uncertainty around the eligibility of AI-related inventions. As a result, the USPTO is likely to see a continued rise in AI patent filings, reinforcing the nation’s position at the forefront of emerging technologies.