AI Adoption Surges Among S&P 500 Companies—But So Do the Risks

According to Cybersecurity Dive, artificial intelligence is no longer experimental technology: more than 70% of S&P 500 companies now identify AI as a material risk in their public disclosures, per a recent report from The Conference Board. In 2023, that figure was just 12%.
The article reports that major companies are no longer just testing AI in isolated pilots; they’re embedding it across core business systems including product design, logistics, credit modeling, and customer-facing interfaces. At the same time, these companies acknowledge in their public disclosures that they confront significant security and privacy challenges, among others.

Reputational Risk: Reputational risk leads the list, with more than a third of companies worried about potential brand damage. This concern centers on scenarios like service breakdowns, mishandling of consumer privacy, or customer-facing AI tools that fail to meet expectations.
Cybersecurity Risk: One in five S&P 500 companies explicitly cite cybersecurity concerns related to AI deployment. According to Cybersecurity Dive, AI technology expands the attack surface, creating new vulnerabilities that malicious actors can exploit. Compounding these risks, companies face dual exposure—both from their own AI implementations and from third-party AI applications.
Regulatory Risk: Companies are also navigating a rapidly shifting legal landscape as state and federal governments scramble to establish guardrails while supporting continued innovation.

Perhaps one of the biggest drivers of these risks is a lack of governance. PwC’s 2025 Annual Corporate Directors Survey reveals that only 35% of corporate boards have formally integrated AI into their oversight responsibilities, a clear indication that governance structures are struggling to keep pace with technological deployment.
Not surprisingly, innovation is moving considerably faster than governance, and that gap is contributing to the risks identified by much of the S&P 500. Small and mid-sized companies are likely in a similar position. Strengthening governance, such as through sensible risk assessments, robust security frameworks, and training, may help narrow that gap.

USPTO’s Automated Search Pilot Program: Early Prior Art Insights—Promises and Pitfalls for Patent Applicants

The United States Patent and Trademark Office (USPTO) is rolling out a new Automated Search Pilot Program, offering applicants a first-of-its-kind opportunity to receive a pre-examination, AI-generated prior art search report. The program’s stated goals are to improve prosecution efficiency and the quality of patent examination by providing an Automated Search Results Notice (ASRN) before an examiner reviews the case. The ASRN is intended to provide an earlier communication regarding potential prior art issues and could bring about significant changes in how utility filings are prosecuted and strategized.
How it Works
The USPTO’s internal AI tool will generate the ASRN by searching the application text against multiple U.S. and foreign databases, ranking up to ten documents for relevancy. Shortly after pre-examination processing, the ASRN is issued to the applicant, providing insight into the potentially relevant prior art uncovered—but requiring no response.
This will give the applicant an opportunity to assess prior art issues before substantive examination. Applicants are not required to respond to the ASRN, but may opt to file a preliminary amendment to place the application in better condition for examination. Applicants may also request deferral of examination or file a petition for express abandonment and seek a refund of certain fees if prosecution is no longer desired.
How to Participate
Applicants filing original, noncontinuing, utility patent applications between October 20, 2025, and April 20, 2026, can participate in the program by submitting a petition (Form PTO/SB/470) and the then-current petition fee set forth in 37 CFR 1.17(f) with the application filing. The application must be filed electronically using the USPTO’s Patent Center, and the application must conform to the USPTO requirements for DOCX submission upon filing. Finally, the applicant must be enrolled in the Patent Center Electronic Office (e-Office) Action Program to participate.
International applications entering the national stage in the US, plant applications, design applications, reissue applications and continuation applications are not eligible.
Potential Benefits
The clearest benefit of this program is the opportunity for applicants to see potentially relevant prior art before substantive examination proceeds. For patent owners operating in crowded technology spaces, this may mean fewer surprises and an earlier ability to refine claim strategy. Receiving an automated prior art report enables applicants to refocus the claims before substantive examination, potentially heading off costly additional cycles of prosecution. It also creates a pathway for quick decision-making: if the ASRN reveals insurmountable prior art, applicants may opt for express abandonment and seek a refund of the search and any excess claims fees.
What Applicants Need to Know
The ASRN is limited to a maximum of 10 references, listed in order of relevance as ranked by the AI tool. If the AI tool actually uncovers the best art, the top handful should be sufficient. But whether the AI tool finds the best references, or at least references as good as those the examiner will find, remains to be seen.
At this stage, AI technology is still nascent and not without error. As of this writing, I have tested several AI search tools and have been somewhat disappointed with the results. While these tools may uncover some nuggets, they are often hit-or-miss, returning irrelevant references and making poorly reasoned combinations of prior art that fail to meet legal standards for combinability. They can be useful, but they still require a fair amount of human oversight to achieve good results. Thus, the references returned with the ASRN may be of mixed or questionable value. I plan to test this pilot program myself to see whether there is value in the preliminary results.
Examiners often possess deep expertise in their assigned art units and have accumulated personal collections of highly potent art over their lengthy tenures. These “go-to” references are often uncovered based on human intuition and years of hard work, rather than searchable metadata or textual similarity. Thus, the AI system generating the ASRN may overlook these highly relevant references. Consequently, applicants may still encounter more significant or challenging prior art in the first Office Action than what appears in the automated report.
Tactical and Strategic Considerations
Any preliminary amendment to address results of the automated search should be submitted as soon as possible to reduce the likelihood that the amendment interferes with preparation of the first Office Action. Teams should establish protocols for prompt ASRN review and clear criteria for preliminary amendments, abandonment, or examination deferral.
As with any new process, in-house counsel and portfolio managers should weigh the strategic fit before participating and monitor outcomes as real-world experience with the program accumulates. Teams contemplating the pilot should weigh whether the value of obtaining an early prior art report aligns with their portfolio management goals. For example, in fast-moving sectors where quick strategic pivots matter, the program may offer a real advantage by streamlining prosecution. While there may be additional cost associated with reviewing the references and filing a preliminary amendment, that cost may be offset by fewer rounds of Office Actions before allowance.
Early search results from the ASRN may also assist with foreign filing decisions, which often must be made before receiving a substantive Action in the U.S. If the ASRN reveals prior art that poses a major hurdle, applicants may decide to forgo foreign filings and avoid the accompanying expenses. Absent such insight, applicants are often forced to make these costly decisions without knowing the prior-art landscape, simply to preserve their priority date. Conversely, when proceeding with foreign filings, the ASRN may enable applicants to refine their claims from the outset, increasing the chances of more efficient prosecution abroad.
Final Thoughts
The USPTO’s Automated Search Pilot Program offers the potential for up-front insight into relevant prior art, allowing applicants to make more informed early decisions and streamline prosecution. While it may bring opportunities for greater efficiency and cost savings, its true value will depend on the relevance of the AI-located references. If the Program lives up to its promise, it will allow applicants to make informed decisions up front and avoid unnecessary rounds of examination. But with a limited reference count and the uncertainties of an AI-generated prior-art search, whether the Program translates into real benefits will not be known until applicants have had a chance to road-test it.
 

USPTO Automated Search Pilot Program: Enhancing Early Prior Art Assessment

The United States Patent and Trademark Office (USPTO) announced an Automated Search Pilot Program in the Federal Register on October 8, 2025. The initiative evaluates the effects of providing AI-generated search results prior to substantive examination of original, noncontinuing, nonprovisional utility patent applications. By issuing an Automated Search Results Notice (ASRN) to participants, the program enables applicants to identify potential prior art concerns earlier, informing strategic decisions during prosecution.
The intent is to assess:

The influence of pre-examination search reports on applicant behavior.
The feasibility of scaling ASRN generation.
Data to guide future enhancements.

The pilot targets at least 1,600 applications across Technology Centers examining utility patents.
Eligible applications are original utility filings under 35 U.S.C. 111(a), submitted electronically via Patent Center in DOCX format between October 20, 2025, and April 20, 2026 (or until 200 applications per relevant Technology Center have been accepted).
Participants must enroll in the Patent Center e-Office Action Program.  To join, file a petition using Form PTO/SB/470 with the petition fee under 37 CFR 1.17(f) on the application filing date. Granted petitions receive an ASRN, and ineligible ones will be dismissed without appeal.
An internal AI tool will conduct a search using the application’s Cooperative Patent Classification (CPC), specification, claims, and abstract. It will query public databases, including U.S. patents, pre-grant publications, and foreign texts, and automatically rank up to 10 relevant documents. The Patent Office adds that its models will be trained on unbiased public patent data, ensuring confidentiality under 35 U.S.C. 122(a).
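The USPTO has not published the internals of this tool, but the general shape of the task, scoring candidate documents against the application text and keeping a short ranked list, can be illustrated with a toy sketch. The snippet below uses simple TF-IDF cosine similarity with invented document numbers and placeholder text; it is an assumption-laden illustration only, not the agency’s actual system, which is presumably far more sophisticated.

```python
# Toy illustration of text-based relevance ranking (NOT the USPTO's tool):
# score candidate references against the application text with TF-IDF
# cosine similarity and keep the ten highest-scoring documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

application_text = "specification, claims, and abstract of the application ..."  # placeholder
candidate_refs = {
    "US-0000001-A1": "text of a prior art reference ...",        # invented identifiers
    "EP-0000002-B1": "text of another candidate reference ...",
    # ... in practice, millions of documents drawn from multiple databases
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([application_text, *candidate_refs.values()])

# Similarity of each candidate (rows 1..n) to the application text (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
ranked = sorted(zip(candidate_refs, scores), key=lambda item: item[1], reverse=True)

for doc_id, score in ranked[:10]:
    print(f"{doc_id}: relevance {score:.3f}")
```

However the agency’s model actually works, the input/output shape described in the notice is the same: application text in, a ranked shortlist of up to ten candidate references out.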
The ASRN will list citations with a Patent Public Search query string for retrieval. No response is required, but applicants will have options including preliminary amendments, examination deferral, or express abandonment.
Examiners will consider ASRN documents as standard prior art, potentially citing them in Office Actions. This may also result in fewer citations by applicants, which may reduce costs in view of the new prior art submission fees the USPTO implemented this year.
The program is intended to expedite prosecution, improve examination quality, and refine AI applications in patent processing.  The USPTO will solicit participant feedback and adhere to GAO pilot design principles, including objective-setting and outcome analysis.

New Development – Taiwan’s Executive Yuan Has Passed the Draft Bill of the Basic Act on Artificial Intelligence

This is a follow-up to our legal alert concerning the draft bill of the Basic Act on AI (人工智慧基本法草案; the Bill). On 28 August 2025, Taiwan’s Executive Yuan passed the draft Bill, aiming to strike a balance between emerging AI technologies and ethical conduct. The Bill itself is available here. 
Recent Updates
This Bill is a slightly revised version of the preliminary draft announced by the National Science and Technology Council in July 2024. We have summarized the key differences below:

The Executive Yuan has indicated that no new governmental agency will be created to regulate AI developments. Instead, the Ministry of Digital Affairs (MoDA) will be designated to coordinate with other agencies to develop detailed laws and regulations.
MoDA is obligated to set regulations restricting or prohibiting AI applications that could cause damage to the lives, bodies, freedom, or property of citizens, or to social order, national security, or the ecological environment, or that could violate relevant laws and regulations by causing or generating conflicts of interest, bias, discrimination, false advertising, or misleading or falsified information. In other words, rather than relying solely on governance and preventive measures, MoDA must take affirmative action to promulgate regulations to that end. The Bill also specifies national security as a basis for this obligation.
Research and development (R&D) exemption clause: the government should define the criteria for attributing responsibility and establish relevant relief, compensation, or insurance regulations for high-risk AI applications. To avoid affecting the freedom of academic and industrial R&D, activities conducted before the application of AI shall not be subject to the regulations related to application accountability. However, this exemption does not apply to actual environmental testing or to the use of R&D achievements to provide products or services.

Conclusion
The Executive Yuan is taking a framework-oriented path, focusing on core principles rather than imposing immediate strict regulations. The Bill will serve as the foundation for all the laws, regulations, and directives for governing AI technology in the future. For the next step, Taiwan’s Executive Yuan will submit the Bill to the Legislative Yuan for further review.
Companies in the AI industry should closely monitor any guidance and regulations published by the Legislative Yuan or other agencies and consider necessary adaptations to fully align their operations with the expectations of Taiwan’s government.

Novel Lawsuits Allege AI Chatbots Encouraged Minors’ Suicides, Mental Health Trauma: Considerations for Stakeholders

In the wake of a lawsuit filed in federal district court in California in August—alleging that an artificial intelligence (AI) chatbot encouraged a 16-year-old boy to commit suicide—a similar suit filed in September is now claiming that an AI chatbot is responsible for the death of a 13-year-old girl.
It’s the latest development illustrating a growing tension between AI’s promise to improve access to mental health support and the alleged perils of unhealthy reliance on AI chatbots by vulnerable individuals. This tension is evident in recent reports that some users, particularly minors, are becoming addicted to AI chatbots, causing them to sever ties with supportive adults, lose touch with reality and, in the worst cases, engage in self-harm or harm to others.
While not yet reflected in diagnostic manuals, experts are recognizing the phenomenon of “AI psychosis”—distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. According to Psychology Today, the term describes cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Evidence indicates that AI psychosis can develop in people with or without a preexisting mental health issue, although the former is more common.
A recent article in Modern Healthcare reported that the increased scrutiny of AI chatbots is not preventing digital health companies from investing in AI development to meet the rising demand for mental health tools. Yet the issue of AI and mental health encompasses not only minors, developers, and investors but also health care providers, therapists, and employers in all industries, including health care. On October 1, 2025, a coalition of leaders from academia, health care, tech, and employee benefits announced the formation of an AI in Mental Health Safety & Ethics Council, a cross-disciplinary team advancing the development of universal standards for the safe, ethical, and effective use of AI in mental health care. Existing lawsuits from parents are demonstrating various avenues for liability in a broad range of contexts, and the seriousness of those lawsuits may prompt Congress to act. In this post, we explore some of the many unfolding developments.
The Lawsuits: Three Examples
Cynthia Montoya and William Peralta’s lawsuit, filed in the U.S. District Court for the District of Colorado on September 15, alleges that defendants including Character Technologies, Inc. marketed a product that ultimately caused their daughter to commit suicide by hanging within months of opening a C.AI account. They allege claims including strict product liability (defective design); strict liability (failure to warn); negligence per se (child sexual abuse, sexual solicitation, and obscenity); negligence (defective design); negligence (failure to warn); wrongful death and survivorship; unjust enrichment; and violations of the Colorado Consumer Protection Act.
Matthew and Maria Raine’s lawsuit, filed in California Superior Court, County of San Francisco, on August 26, alleges that defendants including OpenAI, Inc. created a product, ChatGPT, that helped their 16-year-old son commit suicide by hanging. The Raines allege claims including strict liability (design defect and failure to warn); negligence (design defect and failure to warn); violation of California’s Business and Professions Code, Unfair Competition Law, and California Penal Code (criminalizing aiding, advising, or encouraging another to commit suicide); and wrongful death and survivorship.
Megan Garcia filed suit in U.S. District Court for the Middle District of Florida (Orlando) in October 2024 against Character Technologies Inc. and others, claiming that her son’s interactions with an AI chatbot caused his mental health to decline to the point where the teen committed suicide to “come home” to the bot. An amended complaint filed in July 2025 alleges strict product liability (defective design); strict liability (failure to warn); aiding and abetting; negligence per se (sexual abuse and sexual solicitation); negligence (defective design); negligence (failure to warn); wrongful death and survivorship; unjust enrichment; and violations of Florida’s Deceptive and Unfair Trade Practices Act.
Congressional Scrutiny
The Montoya/Peralta lawsuit appeared the same week as a September 16, 2025, hearing of the U.S. Senate Judiciary Committee on “Examining the Harm of AI Chatbots.” The panel included Matthew Raine and Megan Garcia as well as “Jane Doe,” a mother from Texas who filed suit in December 2024 alleging that her son used a chatbot suggesting that “killing us, his parents, would be an understandable response to our efforts [to limit] his screen time.”
Senator Josh Hawley (R-MO), who chairs the U.S. Senate Subcommittee on Crime and Counterterrorism and who conducted the hearing, took the issue seriously:
The testimony that you are going to hear today is not pleasant. But it is the truth and it’s time that the country heard the truth. About what these companies are doing, about what these chatbots are engaged in, about the harms that are being inflicted upon our children, and for one reason only. I can state it in one word, profit.
Representatives from certain companies that develop AI chatbots reportedly declined the invitation to appear at the congressional hearing or to send a response.
Potential FDA and FTC Oversight
On September 11, 2025, the Food and Drug Administration (FDA) announced that a November 6 meeting of its Digital Health Advisory Committee would focus on “Generative AI-enabled Digital Mental Health Medical Devices.” FDA is establishing a docket for public comment on this meeting; comments received on or before October 17, 2025, will be provided to the committee.
Although FDA has reviewed and authorized certain digital therapeutics, generative AI products currently on the market have generally not been subject to FDA premarket review and are not subject to quality system regulations governing product design and production, or postmarket surveillance requirements. Were FDA to change the playing field for these products, it could have a major impact on access to these products in the U.S. market, producing substantial headwinds (e.g., barriers to market entry) or tailwinds (e.g., enhancing consumer trust, and competitive benefits for FDA-cleared products), depending on your point of view.
All stakeholders (practitioners, software developers and innovators, investors, and the public at large) should be paying close attention to FDA developments and considering how to effectively advocate for their points of view. Innovators also should be thinking about how to future-proof themselves against major disruptions due to (very likely) regulatory changes by, for example, building datasets substantiating product value to individuals, implementing procedures and processes to mitigate risks being introduced through product design, and adopting strategies to identify and address emergent safety concerns. If products’ regulatory status is called into doubt or clearly changes in the future, these steps can help innovators be prepared to address their products with FDA if they are contacted.
The FTC announced its own inquiry on September 11, issuing orders to seven companies providing consumer-facing AI chatbots to provide information on how those companies measure, test, and monitor potentially negative impacts of this technology on children and teens. The inquiry “seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.”
The timing here is not coincidental. FDA and FTC routinely coordinate on enforcement of laws concerning consumer (nonprescription) products and will likely be considering how to most efficiently implement changes to regulation.
Federal Legislative Efforts
Federal legislators recently introduced bills to prevent harm to minors’ mental health due to AI chatbots; these proposals highlight enforcement by the Federal Trade Commission (FTC) and the state attorneys general. Key federal bills include:

S. 2714: The CHAT Act would require AI chatbots to implement age verification measures and also to establish certain protections for minor users. The legislation includes, among other things, a requirement of verifiable parental consent before allowing a minor to access and use the companion AI chatbot; immediate notice to the parent of any interaction involving suicidal ideation; and blocked access to any companion AI chatbot that engages in sexually explicit communication. Notice would be required every 60 minutes that the user is not engaging with a human. A covered entity—defined as “any person that owns, operates, or otherwise makes available a companion AI chatbot to individuals in the United States” — would be required to monitor companion AI chatbot interactions for suicidal ideation. Violations of S. 2714 would be enforced by the FTC or through civil actions by the attorneys general of the states.
H.R. 5360: This legislation would direct the FTC to develop and make available to the public educational resources for parents, educators, and minors with respect to the safe and responsible use of AI chatbots by minors.

State Legislative Efforts
States including Utah, California, Illinois, and New York have already undertaken legislative efforts relating to AI and mental health, seeking to impose obligations on developers and clarifying permissible applications of AI in mental health therapy (see a summary by EBG colleagues here). New York’s S. 3008, “Artificial Intelligence Companion Models,” takes effect November 5. It defines “AI companion” as an AI “designed to simulate a sustained human or human-like relationship with a user” that facilitates “ongoing engagement” and asks “unprompted or unsolicited emotion-based questions” about “matters personal to the user.” The bill also defines “human relationships” as those that are “intimate, romantic or platonic interactions or companionship.” The AI companion must have a protocol for detecting “user expressions of suicidal ideation or self harm,” and it must notify the user of a suicide prevention and behavioral health crisis hotline. The AI must also provide notifications at the beginning of any interaction, and throughout the interaction—at least every three hours—that state that the user is not communicating with a human.
On September 22, 2025, the California legislature presented SB 243, Companion Chatbots, which would amend the Business and Professions Code, to the governor for signature. If signed, the law will take effect July 1, 2027. It closely tracks New York’s law: it requires the AI to notify the user every three hours that they are not communicating with a human, and it also requires protocols to detect suicidal ideation. Interestingly, this law provides a private right of action for injunctive relief, damages of up to $1,000 per violation, and attorney’s fees and costs.
Illinois HB 1806, the Therapy Resources Oversight Act, took effect on August 1, 2025. It is designed to ensure that therapy or psychotherapy services are delivered by qualified, licensed, or certified professionals and to protect consumers from unlicensed or unqualified providers, including unregulated AI systems. AI use by a licensed professional is permitted when assisting in providing “supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs and data systems.” The new law prohibits an individual, corporation, or entity from providing, advertising, or offering therapy or psychotherapy services, including through Internet-based AI, unless the services are conducted by a licensed professional. A proposed law in New York, S. 8484, would also prohibit licensed mental health professionals from using AI tools in client care, except in administrative or supplemental support activities where the client has given informed consent.
Other comprehensive state laws relating to AI and consumer protection, such as the impending law in Colorado, may also be implicated in the context of AI chatbots and mental health.
Takeaways for the Health Care Industry (Including Health Care Employers)
The issues surrounding AI mental health chatbots, potential liability, and increasing probability of regulatory actions continue to develop quickly—against a federal backdrop of fostering AI innovation. Developers and investors should already be following the cases and laws in this area. Health care providers and social workers should familiarize themselves with the specific laws that could affect them as practitioners, and with chatbot apps they recommend or use, as well as data protection issues. We add here that more employers are offering mental health chatbots to employees, which could raise liability concerns:

Risk of misdiagnosis or inappropriate treatment. If the bot’s algorithms are flawed or its responses inadequate, and an employee suffers harm, the employer could face claims of negligence for selecting or deploying an inadequate therapeutic tool. Courts may find that employers assumed a duty of care by offering what employees reasonably perceived as mental health treatment.
Privacy and data security. Employees may disclose sensitive information about mental health conditions, trauma, substance use, or other protected health information. If this data is breached or used inappropriately, employers could face lawsuits under the Health Insurance Portability and Accountability Act, state privacy statutes, or disability discrimination laws like the Americans with Disabilities Act.
Practice of medicine. Employers must consider whether they are practicing medicine without proper licensing or credentials, which could trigger regulatory action or professional liability claims—especially if the bots cross the line from general wellness into clinical mental health treatment.
Voluntary consent. Employees may feel coerced into using these bots, particularly if participation is tied to health insurance benefits or workplace wellness incentives.

The issues concerning the safety and security of wellness bots and various therapeutic AI modalities continue to evolve.

Senate Bill 53 – A Move Toward Transparency and Reporting Requirements for Large Developers of AI Models

California’s Governor signed Senate Bill (SB) 53, which creates a comprehensive regulatory framework for advanced AI. The law takes effect January 1, 2026.
SB 53 is designed to govern what it calls “frontier” AI models, which are large, cutting-edge systems built by major developers with substantial resources. The central aim of the law is to balance these competing realities: encouraging innovation while protecting public safety and property from catastrophic risks.
Overview
Although SB 53 is directed at the developers of advanced AI systems, the law’s requirements will inevitably affect the businesses that depend on these technologies.
Under the law, developers must now publish a “frontier AI framework” that explains their risk management practices. They are also required to release transparency reports before introducing new or substantially updated models. These reports must contain detailed risk assessments, strategies for mitigation, and independent third-party evaluations.
In addition, the law imposes strict incident reporting obligations. If a developer discovers a “critical safety incident,” it must be reported within fifteen days. If the incident presents an imminent risk of death or serious injury, the report must be submitted within twenty-four hours. For employers, this means that AI vendors may need to temporarily adjust or even suspend services to comply with these requirements, potentially affecting continuity of business operations.
The legislation also imposes serious financial consequences for non-compliance. Any violation can result in civil penalties of up to one million dollars per instance, enforced by the Attorney General. Businesses that rely on AI providers should therefore be prepared for the possibility that vendors who fail to meet these standards could face enforcement actions that disrupt their services.
SB 53 introduces several important legal definitions that determine when these rules apply. A “frontier model” is defined by the amount of computing power used in its training, while “catastrophic risk” refers to scenarios that could cause mass casualties or more than one billion dollars in property damage. The law also identifies “critical safety incidents” as failures or risks that could threaten life or cause serious harm. These definitions are critical because they establish the threshold at which developers, and by extension their customers, are subject to the most stringent requirements.
Whistleblower Protections
Another significant component of SB 53 is its treatment of whistleblowers. The law protects employees who raise concerns about catastrophic risks or violations. These protections include strong anti-retaliation measures, anonymous reporting options, and the ability for employees to seek injunctive relief in court. For employers, this provision highlights the need to create internal reporting systems that allow employees to express concerns about AI use safely and without fear of reprisal. Failing to do so may not only undermine compliance but also damage workplace trust.
Preemption
The law is written to ensure that its provisions apply consistently across California. It preempts local regulations that might otherwise conflict and includes clauses to ensure broad application and resilience against legal challenges. In practice, this means employers can expect uniform rules across the state rather than a patchwork of differing local ordinances.

AI Hiring Targeted by Class Action and Proposed Legislation

As in almost every area, Artificial Intelligence (AI) is rapidly changing the way that employees are recruited and hired. AI resume screening and candidate matching tools promise to make the hiring process more effective, efficient, and economical. A survey showed that 70 percent of employers plan to use AI in the hiring process by the end of 2025. However, claims of discrimination and unfairness threaten to derail, or at least slow down, AI’s momentum.
In May 2025, in Mobley v. Workday, Inc., a federal court for the Northern District of California conditionally certified a nationwide collective of older workers based upon the allegation that Workday’s algorithmic screening tools disparately impacted older workers. In other words, the tools allegedly screened out older workers at a higher rate before a human being reviewed the candidates. The court noted that over one billion applications may have been rejected using Workday. Very recently, the court ordered Workday to turn over its list of employer clients who used Workday’s HiredScore tool to score, sort, rank, or screen candidates. The list is to be used to determine who can be included in the collective action. While Workday is thus far the only defendant in Mobley, there is certainly a risk that employers will become defendants in this case or in other actions. Given the ubiquity and growth of AI, many employers are potential defendants.
Meanwhile, Americans are wary about the use of AI in hiring, and several states have proposed laws to limit or control its usage. For example, California has issued regulations effective October 1, 2025, making clear that AI bias is included within its discrimination statutes and strongly encouraging company audits. Colorado passed a sweeping law requiring transparency notices plus appeal rights for workers negatively affected by AI tools, as well as assessments and monitoring from developers of AI technology and the businesses and government entities deploying those tools. However, in August, the Colorado legislature voted to delay the effective date of the Act until June 30, 2026. Part of the motivation for the delay, and the pause on the proposed legislation in other states, is the Trump Administration’s recent AI Action Plan, which includes threats to federal funding and possible preemption by the Federal Communications Commission for overly restrictive state AI laws.
The law lags behind the speed of this revolutionary technology. However, employers must be aware of the risk and take steps to address it, including:

Understanding screening technology and the criteria used
Ensuring that human beings are the ultimate decision-makers and exercise oversight
Requiring validation of the AI screening tools and conducting bias audits
Closely assessing vendor contracts to include representations and warranties, as well as appropriate allocation of liability.

FDA Seeks Stakeholder Feedback on AI-Enabled Medical Device Performance

The U.S. Food and Drug Administration (FDA) is offering an opportunity for stakeholders to provide feedback to “advance a broader discussion among the AI healthcare ecosystem.” Specifically, the FDA has issued a Request for Public Comment to gather insights on evaluating the real-world performance of AI-enabled medical devices, including generative AI technologies. 
This initiative, detailed in Docket No. FDA-2025-N-4203, aims to ensure these devices remain safe and effective post-deployment by further evaluating challenges like data drift that may impact the accuracy and reliability of predictive models. Comments are due by December 1, 2025.
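“Data drift” refers to shifts in the input data a deployed model sees relative to the data on which it was validated. Purely as a generic illustration (not an FDA-endorsed method, and with an assumed alerting threshold), one simple way to flag a shifted input distribution is a two-sample statistical test against a validation baseline:

```python
# Minimal, generic drift check: compare a production feature distribution
# to the validation baseline with a two-sample Kolmogorov-Smirnov test.
# Illustrative only; the test choice and threshold are assumptions, not FDA guidance.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Baseline: e.g., a lab-value distribution from the validation dataset.
baseline = rng.normal(loc=1.0, scale=0.2, size=5_000)

# Simulated production data whose mean has drifted upward.
production = rng.normal(loc=1.15, scale=0.2, size=5_000)

stat, p_value = ks_2samp(baseline, production)
ALERT_P = 0.01  # assumed alerting threshold

if p_value < ALERT_P:
    print(f"Possible drift (KS={stat:.3f}, p={p_value:.2e}): trigger a deeper performance review.")
else:
    print("No significant shift detected in this feature.")
```

Monitoring of this kind is only one piece of the lifecycle questions the FDA is asking; the docket also covers metrics, data sources, response protocols, and human-AI interaction, as summarized below.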
The FDA is seeking input from industry stakeholders on practical approaches to monitor and assess AI-enabled medical devices through a series of targeted questions. 
Key topics include:

Performance Metrics: Metrics used to measure safety, effectiveness, and reliability in real-world use, metric definitions, and evaluation timeframes.
Real World Evaluation Methods: Tools and processes for proactive post-deployment monitoring, balancing human and automated review approaches, and technical and operational infrastructure used to support real-world device performance evaluation.
Postmarket Data Sources and Quality Management: Use of electronic health records, device logs, and patient feedback for performance evaluation, addressing data quality and interoperability challenges.
Monitoring Triggers and Response Protocols: Criteria for initiating intensive evaluations and responding to performance degradation in real-world settings.
Human-AI Interaction: Impact of clinical usage patterns and user training on device performance, including communication strategies that have proven most effective as systems evolve.
Additional Best Practices: Additional considerations and factors that were important in the development and implementation of real-world validation systems, implementation barriers, and strategies for patient privacy.

The FDA is allowing stakeholders to choose whether to address only specific questions or topics relevant to their business, expertise, or experience. This feedback will inform future FDA strategies to maintain the safety and effectiveness of AI-enabled medical devices throughout their lifecycle. 

Bracewell Explains – Speed to Power – Using Associated Natural Gas to Power Data Centers [Video]

The growing energy demands of AI and data centers are challenging the capacity and reliability of traditional power grids. In response, developers are exploring innovative, behind-the-meter power generation solutions.
One such solution involves using associated natural gas, a byproduct of crude oil production, to fuel power generation co-located with data centers. This approach is gaining traction in regions like the Permian Basin, where it offers a new revenue stream for gas producers facing pipeline constraints and provides a reliable power source for data center operators.
A critical component of these projects is the Power Purchase Agreement (PPA), which governs the long-term sale of electricity from the power generation facility to the end-user, such as a data center. Crafting a robust PPA requires careful consideration of numerous factors, including pricing structures, performance guarantees, and the allocation of risks associated with fuel supply and infrastructure.

California Court of Appeal Warns Against Attorney Misuse of Artificial Intelligence

On Sept. 12, 2025, the California Court of Appeal issued the first explicit “warning” from a California court to attorneys who use AI as part of their legal practice. In Sylvia Noland v. Land of the Free, L.P., et al., an otherwise unremarkable and straightforward employment-related appeal, the court discovered that much of the legal authority relied on in plaintiff’s opening and reply briefs was fabricated. The court’s published opinion aimed to address a much broader issue that has become increasingly relevant in the legal profession: the reliability and verification of legal authority generated by AI tools.
After reviewing each of the cases plaintiff’s counsel cited, the court discovered that much of the quoted language did not appear in the cited cases, certain cases cited did not discuss the topics they were meant to support, and a handful of the cases cited did not exist at all. The court determined that the AI tools plaintiff’s counsel used must have created fake legal authority—coined “AI hallucinations”—which plaintiff’s counsel failed to realize because he did not carefully review the AI’s output. The concept of AI hallucinations has been the subject of increasing discussion in federal courts and courts of other jurisdictions.
The court therefore made clear: “no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI or any other source—that the attorney responsible for submitting the pleading has not personally read and verified.” A failure to do so constitutes a violation of the basic duty that counsel owes to their client and to the court.
Because the court had to spend excessive time attempting to locate fabricated legal authority, which created an unnecessary burden, the court imposed a $10,000 monetary sanction on plaintiff’s counsel to be paid to the clerk of court. Further, the court ordered plaintiff’s counsel to serve a copy of the court’s opinion on his client, and the clerk to serve a copy on the California State Bar.
While AI has the potential to enhance the practice of law, and is often encouraged by many clients, attorneys must carefully heed the warning of the Court of Appeal, or risk sanctions and potential disciplinary action from the state bar. This opinion may also serve as guidance to counsel to ensure that the authority cited by opposing counsel is legitimate—and to promptly report to the court if it is not—to safeguard judicial resources.

AI at the Frontier: What California’s SB-53 Means for Large AI Model Developers

On September 29, 2025, Governor Gavin Newsom signed SB 53, the Transparency in Frontier Artificial Intelligence Act (“the Act”), into law, establishing a regulatory framework for developers of advanced artificial intelligence (AI) systems. The law imposes new transparency, reporting, and risk management requirements on entities developing high-capacity AI models. It is the first of its kind in the United States. Although several states, including California, Colorado, Texas, and Utah, have passed consumer AI laws, SB 53 is focused on the safety of the development and use of large AI platforms. According to Newsom in his signing message to the California State Senate, the Act “will establish state-level oversight of the use, assessment, and governance of advanced artificial intelligence (AI) systems…[to] strengthen California’s ability to monitor, evaluate, and respond to critical safety incidents associated with these advanced systems, empowering the state to act quickly to protect public safety, cybersecurity, and national security.”
Newsom highlighted that “California is the birthplace of modern technology and innovation” and “is home to many of the world’s top AI researchers and developers.” This allows for “a unique opportunity to provide a blueprint for well-balanced AI policies beyond our borders-especially in the absence of a comprehensive federal AI policy framework and national AI safety standards.”
Although the Biden administration issued an Executive Order in October of 2023 designed to start the discussion and development of guardrails around using AI in the United States, President Trump gutted the AI EO on his first day in office in January of 2025, without providing any meaningful replacement. Since then, there has been nothing from the White House except encouragement for AI developers to move fast and furiously. As a result, states are recognizing the risk of AI for consumers, cybersecurity, and national intelligence and, as usual, California is leading the way in addressing these risks.
Newsom noted in his message to the California State Senate that, in the event “the federal government or Congress adopt national AI standards that maintain or exceed the protections in this bill, subsequent action will be necessary to provide alignment between policy frameworks—ensuring businesses are not subject to duplicative or conflicting requirements across jurisdictions.” A summary of the substance of the bill is outlined below.
Who is Covered?
The Act is meant to cover only certain powerful artificial intelligence models. The Act defines AI models generally as computer systems that can make decisions or generate responses based on the information they receive. Such systems can operate with varying levels of independence and are designed to affect real-world or digital environments, such as controlling devices, answering questions, or creating content. The Act defines several specific types of AI models and AI developers:

A foundation model is a general-purpose AI model trained on broad datasets and adaptable to a wide range of tasks.
A frontier model is a foundation model trained using more than 10²⁶ integer or floating-point operations. That is roughly 100 septillion computational steps, a threshold that only the largest and most complex AI models reach (see the sketch after this list).
A frontier developer is an entity that initiates or conducts training of a frontier model.
A large frontier developer is a frontier developer that, together with its affiliates, has annual gross revenues exceeding $500 million.
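The Act does not prescribe how training compute is to be counted, but a rough rule of thumb used in the AI community for dense transformer models, on the order of 6 × parameters × training tokens (an assumption used here only to convey scale, not anything found in SB 53), gives a sense of how large a training run must be before it crosses 10²⁶ operations:

```python
# Rough illustration of the 10^26-operation threshold in SB 53.
# The "6 * parameters * tokens" estimate for dense transformer training
# compute is a community rule of thumb (an assumption), not part of the Act.

THRESHOLD_OPS = 1e26  # SB 53's frontier-model compute threshold

def approx_training_ops(params: float, tokens: float) -> float:
    """Very rough training-compute estimate for a dense transformer."""
    return 6 * params * tokens

# Hypothetical training runs (not real models):
examples = {
    "7B parameters on 2T tokens":   approx_training_ops(7e9, 2e12),    # ~8.4e22
    "70B parameters on 15T tokens": approx_training_ops(70e9, 15e12),  # ~6.3e24
    "1T parameters on 20T tokens":  approx_training_ops(1e12, 20e12),  # ~1.2e26
}

for name, ops in examples.items():
    status = "over" if ops > THRESHOLD_OPS else "under"
    print(f"{name}: ~{ops:.1e} operations ({status} the 1e26 threshold)")
```

Under this back-of-the-envelope math, only training runs at the very top end of today’s scale approach the statutory threshold, which is consistent with the Act’s stated focus on the largest developers.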

The Act applies to frontier developers. The law is designed to target developers with significant resources and influence over high-capacity AI systems and is not meant to cover smaller or less computationally intensive projects.
Key Compliance Requirements

Frontier AI Framework – Large frontier developers must publish and maintain a documented framework outlining how they assess and mitigate catastrophic risks associated with their models. The framework may include risk thresholds and mitigation strategies, cybersecurity practices, and internal governance and third-party evaluations. A catastrophic risk is defined as a foreseeable and material risk that a frontier model could contribute to the death or serious injury of at least 50 people, or cause over $1 billion in property damage, through misuse or malfunction.

Transparency Reporting – Prior to deploying a new or substantially modified frontier model, developers must publish on their websites a report detailing the model capabilities and intended uses, risk assessments and mitigation strategies, and involvement of third-party evaluators. For example, a developer releasing a model capable of generating executable code or scientific analysis must disclose its intended use cases and any safeguards against misuse.
Incident and Risk Reporting – Critical safety incidents must be reported to the Office of Emergency Services (OES) within 15 days. If imminent harm is identified, an appropriate authority, such as a law enforcement agency or public safety agency, must be notified within 24 hours. For instance, if a model autonomously initiates a cyberattack, the developer must notify an appropriate authority within 24 hours. Developers are also encouraged, but not required, to report critical safety incidents pertaining to foundation models that are not frontier models.

Whistleblower Protections
The law prohibits retaliation against employees who report safety concerns or violations. Large developers must notify employees of their whistleblower rights, implement anonymous reporting mechanisms, and provide regular updates to whistleblowers.
Enforcement and Penalties
Noncompliance may result in civil penalties of up to $1 million per violation, enforceable by the California Attorney General. This is a high ceiling for penalties and likely to incentivize proactive compliance and documentation. Penalties may be imposed for failure to publish required documents, false statements about risk, or noncompliance with the developer’s own framework.
CalCompute Initiative
The Act also establishes a consortium to develop CalCompute, a public cloud computing cluster intended to support safe and equitable AI research. A report outlining its framework is due to the California Legislature by January 1, 2027. CalCompute could become a strategic resource for academic and nonprofit developers who seek access to high-performance computing but lack the necessary commercial infrastructure.
Takeaways
The Act introduces a structured compliance regime for high-capacity AI systems. Organizations subject to the Act should begin reviewing their AI development practices, internal governance structures, and incident response protocols.

From E-Commerce to A-Commerce: The Dawn of Agentic AI Payments

Google’s recent announcement of the Agent Payments Protocol (AP2) raises a range of fascinating legal questions. The concept behind AP2 is to create a payment system that allows consumers to instruct AI-empowered agents to search for and make purchases on the consumer’s behalf, using “intent mandates” and “cart mandates” that can provide “a non-repudiable audit trail” to establish “authorization and authenticity, providing a clear foundation for accountability.”
Google is not doing this on its own. AP2 is open source, and Google has partnered with more than 25 payments-related entities (such as American Express, PayPal, Coinbase, Adyen, MasterCard, and WorldPay). This opens the door not only to agent payments but also to agent-to-agent, or A2A, transactions in which both the purchaser and seller operate through what is referred to as “agentic AI.”
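AP2’s actual message formats are defined in Google’s open-source specification; the sketch below is only a hypothetical illustration of the underlying idea, a consumer-signed “cart mandate” whose signature can later be checked as part of an audit trail. The field names are invented, and the Python cryptography package is assumed.

```python
# Hypothetical illustration of a signed "cart mandate" -- NOT the actual
# AP2 message format; the fields below are invented for this sketch.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The consumer's key pair; in practice this would live in a wallet or device.
consumer_key = Ed25519PrivateKey.generate()
consumer_pub = consumer_key.public_key()

# The "cart mandate": what the agent is authorized to buy, and on what terms.
mandate = {
    "mandate_type": "cart",
    "merchant": "example-store.test",
    "items": [{"sku": "SKU-123", "qty": 1, "max_price_usd": 59.99}],
    "expires": "2025-12-31T23:59:59Z",
}
payload = json.dumps(mandate, sort_keys=True).encode()

# The consumer signs the mandate; the signature travels with the order.
signature = consumer_key.sign(payload)

# Later, the merchant (or a dispute resolver) checks the audit trail.
try:
    consumer_pub.verify(signature, payload)
    print("Mandate verified: the purchase was authorized as recorded.")
except InvalidSignature:
    print("Verification failed: the mandate was altered or never authorized.")
```

A valid signature shows that whoever controls the key authorized the purchase as recorded; it says nothing about who actually controls the key, which is the identity-verification gap discussed below.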
David Birch, UK author and commentator on digital financial services, recently noted that AP2 and the payment systems that it will generate “will reshape the very nature of e-commerce.” One fascinating aspect of AP2 is the fact that it is “payment agnostic”—it does not rely solely on the credit/debit card rails or ACH. Even more interesting (given the recent passage of the GENIUS Act), AP2 has integrated with a cryptocurrency protocol which would allow for agentic purchases from crypto wallets. As Birch explains:
The reason for this is obvious. Agents cannot have bank accounts, but they can have smart wallets that they use to store stablecoins and to spend them via AP2. That means an environment for permissionless innovation…

Yes, AP2 is certainly likely to revolutionize how we shop and make payments. But there is a missing piece at the inevitable starting point: how does one confidently and accurately verify the true identity of the purchaser’s and seller’s agents? AP2 transactions are purported to be “non-repudiable,” but that depends on whether the identities of the purchaser’s and seller’s agents have actually been verified. While an immutable blockchain audit trail will help, as many deepfake crypto scams have shown, it is not a foolproof safeguard.
In the meantime, we lawyers have our own concerns. Completing a purchase and sale transaction usually triggers a range of state and federal laws, including the right of the purchaser to dispute the transaction if fraud or unauthorized activity is suspected. Unfortunately, when it comes to high tech solutions that can move money, high tech criminals are sure to be close behind.
For now, applicable agentic AI laws are essentially nonexistent. Until the inevitable machinery of laws and regulations kicks into motion, both purchasers and sellers using agentic AI will have to rely on contractual terms to define the parties’ rights. While we always hope that such terms will be reasonably fair, clear, and conspicuously disclosed, past experience has demonstrated that unscrupulous actors are perfectly happy to mislead their customers if it makes them more money. All the more reason to make sure the agent you’re working with has a good reputation.