China’s Supreme People’s Court Strips Internet Courts of Jurisdiction Over AI Copyright Cases

On October 10, 2025, China’s Supreme People’s Court (SPC) released the Provisions of the Supreme People’s Court on the Jurisdiction of Internet Courts (最高人民法院关于互联网法院案件管辖的规定) effective November 1, 2025. The 2025 Provisions amend the 2018 Provisions by removing Internet Courts’ jurisdiction over five types of cases including “copyright or neighboring rights disputes involving works first published online,” and “disputes arising from infringement of copyright or neighboring rights in works published or disseminated online.” The SPC explained that this will ensure “new, cutting-edge, and complex online cases are heard by the Internet Courts” while “general and traditional online cases are heard by other local courts.”
The Beijing Internet Court in particular has been a leader in copyright jurisprudence for generative artificial intelligence, including Li v. Liu, which recognized copyright in images created using generative AI, and the Cat Crystal Diamond Pendant case, which addressed the amount of creativity required to claim copyright in generative AI works. Perhaps the SPC believes the law in this area is now settled and such cases are no longer cutting edge.
Three other areas removed from Internet Court jurisdiction include “financial loan contract disputes and small loan contract disputes where the execution and performance of the contracts are all completed online,” “product liability disputes arising from defects in products purchased through e-commerce platforms that infringe upon the personal and property rights of others” and “traditional online infringement disputes such as online infringement of reputation rights, general personality rights, and property rights.”
Internet Courts will have jurisdiction over four new areas including “online data ownership, infringement, and contract disputes,” “online personal information protection and privacy disputes,” “online virtual property ownership, infringement, and contract disputes,” and “online unfair competition disputes.”
Internet Courts will retain jurisdiction over the following four categories: “Internet domain name ownership, infringement, and contract disputes,” “disputes arising from the signing or performance of online shopping contracts through e-commerce platforms,” “Internet service contract disputes where the signing and performance are completed online,” and “online public interest litigation cases initiated by the procuratorate.”
A translation of the Provisions follows. Due to geoblocking of the SPC website, the original Provisions and SPC explanation are available on social media here (Chinese only).
Provisions of the Supreme People’s Court on the Jurisdiction of Internet Courts
(Adopted at the 1957th meeting of the Judicial Committee of the Supreme People’s Court on September 15, 2025, and effective November 1, 2025)
These Provisions are formulated in accordance with the Civil Procedure Law of the People’s Republic of China, the Administrative Procedure Law of the People’s Republic of China, and other relevant provisions, and in light of the actual circumstances of adjudication work, in order to strengthen the development of internet courts, optimize and improve their jurisdiction, further leverage their functions and roles in facilitating and benefiting the people, resolving disputes fairly, efficiently, and conveniently, strengthening the law-based governance of cyberspace, and serving to safeguard the healthy development of the digital economy.
Article 1: Internet courts shall have centralized jurisdiction over the following first instance cases within their respective municipal districts that should be accepted by basic level people’s courts:
(1) Network data ownership, infringement, and contract disputes;
(2) Disputes concerning the protection of personal information and privacy rights on the Internet;
(3) Disputes regarding online virtual property ownership, infringement, and contracts;
(4) Disputes involving unfair competition on the Internet;
(5) Internet domain name ownership, infringement, and contract disputes;
(6) Disputes arising from the signing or performance of online shopping contracts through e-commerce platforms;
(7) Disputes over network service contracts where the signing and performance are completed online;
(8) Administrative disputes arising from administrative actions taken by administrative agencies, such as those regarding network data supervision, network personal information protection supervision, network unfair competition supervision, network transaction management, and network information service management;
(9) Internet public interest litigation cases initiated by the procuratorate.
Foreign-related civil cases that meet the requirements of the preceding paragraph, as well as civil cases involving the Hong Kong and Macao Special Administrative Regions and Taiwan, shall be under the jurisdiction of the Internet Courts.
With the approval of the Supreme People’s Court, relevant high people’s courts may designate Internet courts to have jurisdiction over other specific types of online civil and administrative cases.
Article 2: For civil disputes over contracts and other property rights as defined in Article 1 of these Provisions, the parties may lawfully agree upon the jurisdiction of an Internet Court in a place that has an actual connection to the dispute.
Where parties agree in the form of standard clauses that a case is under the jurisdiction of an Internet Court, such agreement shall comply with the provisions of laws and judicial interpretations on standard clauses.
Article 3: Cases where parties appeal judgments or rulings rendered by Internet Courts shall be heard by the Intermediate People’s Court in the location of the Internet Court. Where multiple Intermediate People’s Courts exist in a location, the case shall be heard by an Intermediate People’s Court designated by a Higher People’s Court.
If the appeal case falls within the jurisdiction of a specialized people’s court, it shall be heard by the corresponding specialized people’s court.
Article 4: These Provisions shall come into force on November 1, 2025. Cases already accepted before the implementation of these Provisions shall continue to be heard by the original people’s courts that accepted the cases.
In case of any inconsistency between previously issued judicial interpretations and these Provisions, these Provisions shall prevail.
USPTO Launches Automated Search Pilot Program Using AI for Pre-Examination Prior Art Searches
On October 20, 2025, the U.S. Patent and Trademark Office (USPTO) will launch a new Artificial Intelligence Search Automated Pilot (ASAP!) Program to test the use of artificial intelligence (AI) tools in conducting pre-examination prior art searches for certain utility patent applications.
Under the program[1], the USPTO will use an internal AI system to analyze an application’s Cooperative Patent Classification data, specification, and claims to identify and rank potentially relevant prior art. Participating applicants will receive an Automated Search Results Notice (ASRN)—a listing of up to ten references the AI tool determines to be most relevant, ranked from most to least relevant. The ASRN is not an Office Action and does not require a response, and the references are not made of record, unless later cited by the Examiner or by the applicant in an Information Disclosure Statement. Early identification of relevant references may give applicants an opportunity to assess potential prior art issues and make strategic filing or amendment decisions before examination begins.
Participation requires filing a petition under 37 C.F.R. § 1.182 and paying the petition fee under 37 C.F.R. § 1.17(f)[2] at the time of filing an original, noncontinuing, nonprovisional utility application. The pilot will accept at least 1,600 applications filed between October 20, 2025, and April 20, 2026, or until the program reaches capacity.
The USPTO will use data from the pilot to evaluate:
How early search results influence applicant behavior and prosecution strategy;
The scalability and accuracy of AI-assisted search tools; and
The usefulness of automated pre-examination searches for both applicants and examiners.
Practical Considerations for Applicants
Participation in the ASAP! program may offer some applicants an early look at potentially relevant prior art and an opportunity to refine claim scope or prosecution strategy before examination begins. Strategic responses might include filing a preliminary amendment to preempt likely prior art rejections, preparing evidence for potential affidavit practice, requesting deferral of examination, or filing a petition for express abandonment to obtain partial fee refunds (e.g., search and excess claims fees). Because the ASRN does not carry any obligation to respond, applicants can choose whether to act on the information provided. Moreover, it is possible that some ASAP! applicants, e.g., by prudent use of a preliminary amendment in response to the ASRN, may obtain enhanced allowance rates and/or less lengthy prosecution as compared to others who do not employ the pilot.
However, applicants should carefully weigh these potential benefits against the cost of additional attorney time to review and assess the ASRN and the ASAP! petition fee. Applicants seeking to minimize up-front costs or who prefer to rely on the examiner’s own search may find limited value in early participation. Those with a duty of candor under 37 C.F.R. § 1.56 may need to assess the references on the ASRN for possible submission in an Information Disclosure Statement. Moreover, it is unclear if any preliminary amendment made by an applicant in response to the ASRN could give rise to disclaimer or prosecution history estoppel, despite the guidance in the notice that the ASRN is not a notification of a substantive rejection under 35 U.S.C. § 132.
It also may be tempting to substitute the ASAP! program for a patentability investigation conducted prior to filing, but the ASAP! program does not afford applicants the same advantages. While the petition fees and attorney time to review the references might be lower in cost than a pre-filing patentability evaluation, the latter provides an applicant with an ability to adjust both the application and claims in view of the search results prior to filing. In contrast, an ASAP! petitioner can only adjust claim scope through a preliminary amendment and cannot amend the application in response to the ASRN.
Footnotes
[1] Federal Register notice: Automated Search Pilot Program
[2] Small entity – $180; Large entity – $450
McDermott+ Check-Up: October 10, 2025
THIS WEEK’S DOSE
Government shutdown continues through a second week. No progress was made this week to fund the government, with the Senate failing to pass a continuing resolution.
Senate confirms HHS nominees. The nominations for US Department of Health and Human Services (HHS) personnel were confirmed along party lines.
Senate Aging Committee discusses pharmaceutical supply chains. The committee evaluated ways to strengthen domestic pharmaceutical manufacturing.
Senate Judiciary Committee holds hearing on patent reform. The committee discussed the Patent Eligibility Restoration Act, which would have ramifications for the pharmaceutical industry.
Senate HELP Committee examines AI. The Committee on Health, Education, Labor, and Pensions (HELP) reviewed the potential implications of integrating artificial intelligence (AI) into various sectors.
CDC approves vaccine schedule changes. The Centers for Disease Control and Prevention (CDC) approved changes to the COVID-19 and measles, mumps, rubella, and varicella vaccine schedules.
CONGRESS
Government shutdown continues through a second week. The Senate remained in session, while Speaker Johnson (R-LA) kept the House out of session for another week, maintaining that his chamber had already passed a continuing resolution (CR) to fund the government and that the ball is in the Senate’s court.
As in week one, the Senate continued to vote on the same two stopgap spending bills, both of which continued to fail to advance:
The Republican-led CR would fund the government through November 21, 2025, at current levels and would extend healthcare policies that expired on September 30, 2025. The latest vote was 52 – 42 and followed the same pattern as last week: Sen. Paul (R-KY) joined Democrats in opposition, and Sens. Cortez Masto (D-NV), Fetterman (D-PA), and King (I-ME) joined Republicans in voting for the CR.
The Democratic-led CR would fund the government through October 31, 2025. It would reverse the Medicaid cuts in the One Big Beautiful Bill Act and would permanently extend the Affordable Care Act Marketplace enhanced advanced premium tax credits (APTCs). The latest Senate vote on this CR failed 50 – 45 along party lines once again.
The expiration of the APTCs remains the biggest point of contention between the parties, with Democratic leadership holding firm that they will not vote for a CR unless the enhanced APTCs are extended. While some Republicans have expressed support for extending the enhanced APTCs, and informal negotiations continue among rank-and-file Republican and Democratic senators, Republican leadership has reiterated that they don’t intend to negotiate until the government reopens.
As the shutdown drags on, federal workers and military service members grow closer to the possibility of missing their first paychecks. A draft memo from the Office of Management and Budget this week also argued that federal employees are not guaranteed back pay. These dynamics factor into the partisan rhetoric that continues to dominate Capitol Hill.
Senate confirms HHS nominees. Using its new rule to permit a large swath of nominees to be approved en bloc, the Senate voted 51 – 47 along party lines to confirm more than 100 nominees for posts in the Trump administration, including:
Michael Stuart to be HHS general counsel.
Gustav Chiarello III to be an HHS assistant secretary.
Alex Adams to be HHS assistant secretary for family support.
Brian Christine to be an HHS assistant secretary.
Senate Aging Committee discusses pharmaceutical supply chains. Witnesses at the hearing highlighted the fragility of the US drug supply chain and the lack of domestic active pharmaceutical ingredient sources. They proposed public-private partnerships, US Food and Drug Administration reform, and investment incentives to rebuild pharmaceutical independence. While senators on both sides of the aisle voiced concerns about US dependence on China and India for essential medicines, Republican members focused on national security risks and reshoring pharmaceutical manufacturing through federal purchasing power, tariffs, and country-of-origin labeling. Democratic members focused on strengthening regulatory oversight, improving transparency, and supporting strategic investment in domestic production.
Senate Judiciary Committee holds hearing on patent reform. During the hearing, witnesses agreed that changes should be made to the patent system, especially in light of the growth of AI, but disagreed on whether the Patent Eligibility Restoration Act (PERA) was the best option. The committee members present emphasized that PERA would resolve uncertainty caused by differing legal judgments on patent eligibility. The reforms outlined in PERA would apply to the patent system at large and would impact the pharmaceutical industry.
Senate HELP Committee examines AI. During the hearing, committee members focused on how AI can be adopted in healthcare settings, cybersecurity issues in healthcare data, and ethical issues regarding AI use in healthcare decision-making. Senators from both sides of the aisle expressed concerns about the potential harmful effects AI can have on children and expressed a desire to understand the best way to regulate AI without impeding advancements.
ADMINISTRATION
CDC approves vaccine schedule changes. The CDC adopted the following changes to the COVID-19 and measles, mumps, rubella, and varicella (MMRV) vaccine schedules recommended by the Advisory Committee on Immunization Practices (ACIP) on September 19, 2025:
Removing the blanket recommendation that adults 65 and older get vaccinated for COVID-19, and instead recommending shared clinical decision-making.
Changing the MMRV schedule so that toddlers are immunized against chickenpox with a standalone varicella vaccine rather than the MMRV combination vaccine.
Immunization schedules inform insurance coverage and whether patients need a prescription to receive the vaccines. In a social media post, Acting CDC Director Jim O’Neill called on vaccine manufacturers to also replace the combined measles, mumps, and rubella vaccine with individual monovalent vaccines.
While the updated COVID-19 vaccination schedule removes the blanket recommendation for adults, it lessens restrictions on access to the vaccination during pregnancy. The new guidance applies to all adults with no carveout for pregnant women, effectively undoing an earlier decision by HHS Secretary Kennedy to remove the COVID-19 vaccine from the immunization schedule for healthy pregnant women altogether. Now, CDC advises pregnant women, like other adults, to participate in shared clinical decision-making on whether to receive the COVID-19 vaccination.
Following the CDC’s adoption of these changes, ACIP announced a plan to review the safety and efficacy of the childhood vaccine schedule, including the timing and order of vaccines and the safety of aluminum in vaccines.
QUICK HITS
CDC announces plans to review hepatitis B screenings for pregnant women. CDC will identify and review the barriers that contribute to pregnant women missing hepatitis B screenings, and will recommend a pathway to ensure higher rates of testing. This announcement follows recent ACIP discussions regarding a proposal to delay the first dose of the hepatitis B vaccine for infants of mothers who test negative. The proposal was ultimately tabled.
HHS Office of Inspector General releases two reports. The first report recommends that the Centers for Medicare & Medicaid Services (CMS) consider revising its methodology for determining the nondrug component of Medicare weekly bundled payment rates for opioid-use disorder treatments. The second report offers recommendations to improve accuracy of behavioral health provider network directories in Medicaid and Medicare Advantage managed care plans, and suggests that CMS consider a nationwide directory.
Democratic House committee leaders release letter on expiration of Medicare telehealth, Acute Hospital Care at Home waivers. Energy and Commerce Committee Ranking Member Pallone (D-NJ) and Ways and Means Committee Ranking Member Neal (D-MA) sent a letter to HHS Secretary Kennedy and CMS Administrator Oz expressing concerns about the expiration of Medicare telehealth flexibilities and the Acute Hospital Care at Home waiver, and the subsequent uncertainty and care disruptions presented to Medicare beneficiaries. The letter requests that CMS issue guidance to providers and beneficiaries and exercise maximum regulatory and enforcement flexibility.
Senate HELP Committee chair releases letter on AMA and CPT. Chair Cassidy (R-LA) sent a letter to the American Medical Association (AMA) expressing strong concerns that AMA’s monopoly over the Current Procedural Terminology (CPT) coding system results in higher healthcare costs. The letter requests responses to a series of CPT-related questions by October 20, 2025.
MedPAC meeting cancelled. The Medicare Payment Advisory Commission (MedPAC) monthly meeting, scheduled for October 9 – 10, 2025, was cancelled because of the ongoing government shutdown.
NEXT WEEK’S DIAGNOSIS
The Senate announced the evening of October 9, 2025, that it would go out of session through the holiday weekend and is scheduled to return to session on October 14, 2025. As of publication, the House had not formally announced its schedule, but is expected to remain out of session subject to a 48-hour call to return to Washington. Given this, the shutdown will extend into at least next week.
Decoding Proposed Rule 707: A New Gatekeeper for Expert Testimony?
On June 10, 2025, the U.S. Courts’ Committee on Rules of Practice and Procedure approved a set of proposed amendments to the Federal Rules, including introducing Proposed Federal Rule of Evidence 707, which addresses the admissibility of AI-generated evidence. Proposed Rule 707 would require that machine-generated evidence—when offered without a human expert—meet the same admissibility standards under Rule 702 (the Daubert standard) as expert testimony. The proposed rule states:
Machine-Generated Evidence: When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d). This rule does not apply to the output of simple scientific instruments.
The proposal aims to ensure courts apply consistent reliability standards, whether evidence is generated by a person or a machine, especially as AI tools are increasingly used in areas like forensic analysis and discovery. Any AI output offered into evidence must still meet the standard for expert testimony, meaning it must still (1) assist the trier of fact; (2) be based on sufficient facts or data; (3) be the product of reliable methods and principles; and (4) reflect a reliable application of the principles and methods to the facts. According to the Committee, the purpose of Rule 707 is to prevent the proponent of machine-generated evidence from evading “the reliability requirements of Rule 702 by offering machine output directly, where the output would be subject to Rule 702 if rendered as an opinion by a human expert.”
The public comment period on Proposed Rule 707 is open through February 16, 2026, allowing stakeholders to weigh in before final adoption.
Artificial Intelligence in HR
Legal and Practical Issues Every Employer Should Know
Artificial intelligence (AI) has quickly moved from a futuristic concept into a practical tool used in everyday business. In the human resources (HR) world, AI now drafts job descriptions, scans résumés, conducts video interviews, and even generates performance reviews.
While these tools promise efficiency and cost savings, they also create new risks. Discrimination claims, privacy issues, and liability disputes are all part of the emerging landscape. For employers, the key is to balance efficiency with compliance and to ensure that technology doesn’t undermine fairness or expose the company to avoidable lawsuits.
Increasingly, regulators, courts, and lawmakers are paying attention. This means that employers who rush into AI adoption without a thoughtful compliance strategy may find themselves facing costly litigation or government investigations. The conversation is not only about what AI can do, but what AI should do in the sensitive context of HR.
What Counts as AI in HR Today
AI is being implemented across several HR processes including recruiting, performance management, and compensation.
Recruiting platforms use algorithms to scan résumés, chatbots interact with applicants, and some platforms help companies manage large-scale hiring. According to the Society for Human Resource Management’s “2025 Talent Trends: AI in HR” survey, just over 50% of employers are using AI in recruiting. In performance management, AI can now track productivity, analyze communication styles, and even suggest employee development programs. Some employers are beginning to use AI-driven pay equity audits to identify disparities across departments and levels of seniority. While this can be a step toward compliance with equal pay laws, it only works if the algorithms themselves are transparent and designed with fairness in mind.
While AI can generate information quickly, it can also produce false or misleading results, sometimes referred to as “hallucinations.” Employers who assume that AI’s output is automatically correct risk significant liability. Human oversight is not just best practice; it’s a necessity. The more decisions are automated, the higher the stakes for ensuring a human layer of review.
Employers should be mindful that even routine tasks like job postings or performance evaluations can carry legal implications when AI is involved. As Helen Bloch of the Law Offices of Helen Bloch, P.C. cautions, “If someone is going to use AI, which is inevitable, you have to keep in mind the various laws that apply to each situation.”
Examples of AI-assisted HR tasks to be particularly mindful of include résumé-screening algorithms that prioritize keywords, automated assessments that claim to measure cognitive ability, and chatbots that handle initial applicant inquiries.
Legal Risks and Considerations
Disparate Impact and Discrimination
One of the biggest legal risks associated with using AI in the recruitment process is the concept of disparate impact, a policy or practice that seems neutral on its face but ends up disadvantaging a protected group. A current example is the class action lawsuit Mobley v. Workday, in which plaintiffs argue that software discriminated against job applicants over age 40 in violation of the Age Discrimination in Employment Act (ADEA).
According to Charles Krugel of the Law Offices of Charles Krugel, Mobley is the case to watch as it pertains to AI and discrimination. This type of litigation underscores the need for employers to do their due diligence on AI tools and conduct bias audits before relying on algorithms to make employment decisions.
Disparate impact claims are particularly dangerous because an employer may not even realize its practices are discriminatory until litigation begins. The Equal Employment Opportunity Commission (EEOC) has already issued guidance warning that automated decision-making tools fall under the same anti-discrimination laws as traditional practices. Employers should be aware that claims may be brought not only by rejected applicants but also by government agencies seeking to enforce civil rights laws.
Who’s Responsible When AI Gets It Wrong?
Some employers assume that outsourcing HR to third-party vendors will insulate them from liability. This is a dangerous misconception. Employers remain responsible for compliance with anti-discrimination and privacy laws, regardless of whether the error originated in-house or through an external AI service.
“You’re still going to face consequences if you break the law, whether somebody does it as an authorized agent or that authorized agent is a computer,” notes Max Barack of Garfinkel Group, LLC.
Contracts with vendors should be explicit about risk allocation, and employers should also review their insurance coverage. Employment practices liability insurance (EPLI) may not cover certain AI-related claims unless additional riders are purchased. Employers should also remember that joint liability principles mean that both the vendor and the company can be held accountable. For example, if a recruiter’s AI-powered screening tool is found to discriminate against disabled applicants, both the recruiter and the hiring company may be liable. This makes vendor due diligence and contractual protections more important than ever.
AI Regulation
States are starting to regulate how employers can use AI in hiring. For instance, Illinois’ Artificial Intelligence Video Interview Act requires employers to disclose when AI is being used in video interviews and to obtain applicant consent. In New York, recent laws require permission before a company can use AI-generated likenesses of employees. These rules reflect a broader trend toward transparency and informed consent. Employers must keep up with evolving disclosure requirements, especially when using AI in recruitment, evaluations, or public-facing content. Beyond Illinois and New York, states like Maryland and California are also experimenting with legislation aimed at regulating AI in hiring. Internationally, the European Union has introduced its AI Act, which classifies certain uses of AI in employment as high-risk and subjects them to strict transparency and audit requirements. These developments suggest that the regulatory trend is accelerating, and employers who fail to prepare now may find themselves scrambling to catch up.
The Importance of Vigilance and Adaptability
The use of AI in HR processes is becoming a standard feature of recruitment, evaluation, and workforce management, but the technology brings significant risks if used without proper oversight. AI can be a powerful tool for improving efficiency and fairness, but only if employers use it responsibly and remain vigilant about compliance, transparency, and human judgment. Companies that treat AI as a compliance blind spot risk litigation, regulatory penalties, and reputational harm. Those who take a proactive approach will not only reduce legal risk but also build trust with employees and applicants.
Employers using AI in HR workflows should remember to:
Conduct regular bias audits of AI tools
Require human review of AI-generated outputs
Stay current with federal and state AI-related employment laws
Review and update contracts with vendors for liability protections
Ensure employment practices liability insurance covers AI-related risks
Train HR professionals to identify and respond to AI red flags
Maintain transparency with employees and applicants about AI use
This article was originally published on October 9, 2025 here.
More Sanctions + Inquiries Against Lawyers + Judges for Cite Hallucinations
U.S. District Judge Amit P. Mehta sanctioned an attorney who filed a brief containing erroneous citations in every case cited after the attorney admitted to relying on generative AI to write the brief. The attorney had used the tools Grammarly, ProWriting Aid, and Lexis’ cite-checking tool. The attorney was ordered to pay sanctions, including opposing counsel’s invoice for fees and costs. The court noted that sanctions were necessary because the attorney had acted “recklessly” and shown “singularly egregious conduct” in failing to verify the citations, as the citations to all nine cases cited were erroneous. The court further noted that the lack of verification raised “serious ethical concerns.”
The attorney’s co-counsel were not sanctioned because they indicated they were unaware of the use of generative AI, although they admitted that they did not independently check and verify the citations, and they were questioned by the court.
The sanctioned attorney self-reported the incident to the Pennsylvania Disciplinary Board and filed a motion to withdraw from the case.
This is a hard lesson to learn, and it is not the first time an attorney has been sanctioned by a court for filing hallucinated citations. The message in all of these cases is that attorneys have an ethical obligation to check every cite before filing a pleading with the court, and extreme caution should be taken when using generative AI tools in the brief writing process.
Similarly, Senator Chuck Grassley, Chairman of the U.S. Senate Judiciary Committee, sent letters to two federal judges this week requesting information about their use of generative AI in drafting orders in cases. According to Grassley, original orders entered by the judges in July in separate cases were withdrawn after lawyers noted that factual inaccuracies and other errors were contained in the orders. Grassley noted that lawyers are facing scrutiny over the use of generative AI, and therefore judges should be held to the same or higher standard.
The judges have not responded to date.
Courts and their law clerks may wish to consider the same lessons learned from attorneys’ use of generative AI tools. Proceed with caution.
California Governor Newsom Signs Groundbreaking AI Legislation into Law
On September 29, 2025, California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (SB-53) (the “Act”). The Act sets new transparency and safety requirements, and whistleblower protections, for “frontier” artificial intelligence (“AI”) models. The Act aims to prevent catastrophic risks from the use of frontier models, increase public and government oversight over the technology, and protect employees who report serious problems with the technology. The Act will go into effect on January 1, 2026.
Relevant Definitions
The Act introduces several novel definitions, including:
“AI model”: an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
“Critical safety incident”: any of the following: (1) unauthorized access to, modification of, or exfiltration of the model weights of a foundation model that results in death, bodily injury, or damage to, or loss of, property; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a foundation model causing death or bodily injury; or (4) a foundation model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
“Catastrophic risk”: a foreseeable and material risk that a frontier developer’s development, storage, use or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or more than $1 billion in damage to, or loss of, property arising from a single incident involving a frontier model doing any of the following: (1) providing expert-level assistance in the creation or release of a chemical, biological, radiological or nuclear weapon; (2) engaging in conduct with no meaningful human oversight, intervention or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion or theft, including theft by false pretense; or (3) evading the control of its frontier developer or user.
“Frontier developer”: a person who has trained a frontier model.
“Foundation model”: an AI model that is: (1) trained on a broad data set; (2) designed for generality of output; and (3) adaptable to a wide range of distinctive tasks.
“Frontier model”: a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.
“Frontier AI framework”: documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.
“Large frontier developer”: a frontier developer with more than $500 million in annual gross revenues.
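Taken together, these definitions reduce the Act’s heaviest obligations to two numeric triggers: the training-compute threshold for a “frontier model” and the revenue threshold for a “large frontier developer.” The sketch below shows how those triggers combine (the thresholds come from the Act as summarized above; the function names are our own illustrative shorthand, not statutory language):

```python
# Illustrative sketch of the Act's numeric triggers.
# Thresholds are from the statute as described above; function names are our own.

FRONTIER_COMPUTE_THRESHOLD = 10**26    # integer or floating-point operations
LARGE_DEVELOPER_REVENUE = 500_000_000  # annual gross revenue, USD

def is_frontier_model(training_ops: float) -> bool:
    """A foundation model trained with more than 10^26 operations."""
    return training_ops > FRONTIER_COMPUTE_THRESHOLD

def is_large_frontier_developer(annual_revenue: float) -> bool:
    """A frontier developer with more than $500 million in annual gross revenue."""
    return annual_revenue > LARGE_DEVELOPER_REVENUE

def must_publish_framework(training_ops: float, annual_revenue: float) -> bool:
    """The frontier AI framework duty falls only on *large* frontier developers."""
    return is_frontier_model(training_ops) and is_large_frontier_developer(annual_revenue)

# A model trained with 3e26 ops by a developer with $2B in revenue is covered
print(must_publish_framework(3e26, 2_000_000_000))  # True
```

Note that a developer below the revenue threshold remains a “frontier developer” (with lesser obligations) even though the framework-publication duty does not attach.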
Key Requirements
The Act’s key requirements include:
Safety Framework. A large frontier developer must create and publish a “frontier AI framework” that describes how the large frontier developer:
Incorporates national standards, international standards and industry-consensus best practices into its frontier AI framework.
Defines and assesses thresholds to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk (which may include multiple-tiered thresholds).
Applies mitigations to address the potential for catastrophic risks based on the results of assessments required by the Act.
Reviews assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally.
Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks.
Revisits and updates the frontier AI framework (including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures required under the Act).
Implements cybersecurity practices to secure unreleased “model weights” (i.e., a numerical parameter in a frontier model that is adjusted through training and that helps determine how inputs are transformed into outputs) from unauthorized modification or transfer by internal or external parties.
Identifies and responds to “critical safety incidents.”
Institutes internal governance practices to ensure implementation of the above-described processes.
Assesses and manages catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
Risk Assessments. Before, or concurrently with, deploying a new (or substantially modified) frontier model, a large frontier developer must: (1) conduct an assessment of catastrophic risks (“risk assessment”) in accordance with the developer’s frontier AI framework and (2) clearly and conspicuously publish on its website a transparency report.
The transparency report must include: (1) a link to the frontier developer’s website; (2) a method to communicate with the developer; (3) the frontier model’s release date; (4) the languages supported by the frontier model; (5) the modalities of output supported by the frontier model; (6) the intended uses of the frontier model; (7) any generally applicable restrictions or conditions on uses of the frontier model; (8) a summary of catastrophic risks identified in the developer’s risk assessment; (9) the results of the risk assessment; (10) whether third parties were involved in the risk assessment; and (11) other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
Risk Assessment Submissions to Office of Emergency Services. Every three months (or on another reasonable schedule communicated to the California Governor’s Office of Emergency Services (“Cal OES”)) large frontier developers must provide Cal OES with a summary of any assessment of catastrophic risk from internal use of their frontier models. They must also report any critical safety incidents (e.g., loss of control, unauthorized access to or exfiltration of frontier models) within 15 days of the incident, or within 24 hours if there is imminent risk of death or serious physical injury. Reports are kept confidential and exempt from public records laws to protect trade secrets.
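The tiered reporting clock described above can be made concrete. This sketch computes the outside reporting date for a critical safety incident (the 15-day and 24-hour windows are from the Act as summarized above; the dates and the function name are hypothetical illustrations):

```python
from datetime import datetime, timedelta

def report_deadline(incident_time: datetime, imminent_physical_risk: bool) -> datetime:
    # 24 hours if there is imminent risk of death or serious physical injury,
    # otherwise 15 days from the incident (per the Act as summarized above)
    window = timedelta(hours=24) if imminent_physical_risk else timedelta(days=15)
    return incident_time + window

incident = datetime(2026, 2, 1, 9, 0)  # hypothetical incident timestamp
print(report_deadline(incident, imminent_physical_risk=False))  # 2026-02-16 09:00:00
print(report_deadline(incident, imminent_physical_risk=True))   # 2026-02-02 09:00:00
```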
Government Oversight and CalCompute. The California Government Operations Agency must establish a consortium to develop a public cloud computing cluster to be known as “CalCompute.” CalCompute is to foster safe, ethical, and equitable AI research.
Whistleblower Protections. Frontier developers may not retaliate against employees who disclose information about activities that pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk. Frontier developers must provide clear notices to employees about their rights, and provide a reasonable internal process through which a covered employee may anonymously disclose relevant information. Employees can seek injunctions and attorney’s fees if they experience retaliation.
Enforcement and Penalties for Noncompliance. The Act empowers the California Attorney General to enforce compliance with the Act. Certain violations of the Act may result in civil penalties of up to $1 million per violation.
Any entity developing or training advanced AI models should promptly assess whether they are covered by the Act before the Act’s effective date of January 1, 2026, and take steps as appropriate to comply with the Act’s requirements.
Will You Help The USPTO Test AI-Generated Prior Art Searching?
One of the goals of the new Director of the United States Patent and Trademark Office (USPTO) is to improve patent examination efficiency by leveraging Artificial Intelligence (AI) tools. The Automated Search Pilot Program announced in a Federal Register Notice published October 8, 2025, asks patent applicants to volunteer their applications to help the USPTO assess the impact and feasibility of using AI tools to generate initial prior art search results. Participation does require a fee, but applicants may be willing to participate in order to help the USPTO assess the potential value of AI-generated search results and determine if they might improve examination efficiency and quality.
Program Goals
According to the Notice, the goal of the Automated Search Pilot Program is “to evaluate the impact of sharing the results of an automated search” with applicants prior to examination, as well as to assess the scalability of generating automated search results. The USPTO contemplates that providing “automated search results” to an applicant early in the examination process “will provide the applicant with an earlier communication regarding potential prior art issues in their application,” and offer an opportunity to “place the application in better condition for examination” before it is reviewed by the examiner. The USPTO also expects the automated search to “provide[] a new pathway to identify relevant prior art for patent examiners to improve examination quality and efficiency.”
Participating In The Pilot Program
The Automated Search Pilot Program will begin October 20, 2025, and will only be open to brand new utility patent applications:
Only original, noncontinuing, nonprovisional utility applications filed under 35 U.S.C. 111(a) on or after October 20, 2025, and on or before April 20, 2026, are eligible to participate in the pilot program.
Thus, the following applications will not be included: international applications that have entered the national stage under 35 U.S.C. 371; plant applications; design applications; and reissue applications.
In addition, continuing (i.e., continuation, divisional, or continuation-in-part) applications will not be included.
To participate in the program, applicants must file a petition using a new form (Form PTO/SB/470, titled “CERTIFICATION AND PETITION UNDER 37 CFR 1.182 TO PARTICIPATE IN THE AUTOMATED SEARCH PILOT PROGRAM”) and pay the petition fee set forth in 37 CFR 1.17(f) (currently $450 for a large entity or $180 for a small entity).
As explained in the Notice, the petition must be filed with the application papers (at the time the application is filed), but will not be decided until the application has completed pre-examination processing. Due to the short timeframe of the program, applicants will not be given an opportunity to correct any defects that led to denial of a petition to participate.
The Federal Register Notice sets forth additional participation requirements, which should be reviewed before any petition is filed.
The Automated Search
As explained in the Notice, “[t]he automated search will be conducted using an internal Artificial Intelligence (AI) tool” that uses the CPC classification of the application, specification, claims, and abstract as contextual information. The Notice explains further:
The AI tool will use the contextual information to find similar information in publicly available documents located in a number of databases available to the USPTO, including U.S. Patents, U.S. Pre-Grant Publications (PG-Pubs), and Foreign Image and Text (FIT). The FIT database includes publications from a number of foreign patent authorities. The AI tool will rank the returned documents from most to least relevant.
The Notice provides the following additional information on the AI tool and confidentiality:
The AI models supporting the automated search are trained using publicly available patent data, including text of patents and published applications, patent classifications, document citations, and human-rated similarity. The training data excludes applicant, inventor, and assignee information because this information may introduce potential biases in the model. The USPTO has implemented measures for the AI tool to ensure data security and maintain patent application confidentiality as required by 35 U.S.C. 122(a). See New Artificial Intelligence Functionality in PE2E Search, 1504 OG 359 (November 15, 2022).
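The Notice does not disclose how the tool measures similarity, but ranking candidate documents against the application text and returning them “from most to least relevant” is a standard retrieval pattern. A minimal, purely illustrative sketch of that pattern (bag-of-words cosine similarity; nothing here reflects the USPTO’s actual model, and the document texts are invented):

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Simple bag-of-words term frequencies over lowercased whitespace tokens
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_prior_art(application: str, documents: dict[str, str], top_n: int = 10) -> list[str]:
    """Return up to top_n document IDs, most similar to the application first."""
    app_vec = vectorize(application)
    scored = [(doc_id, cosine(app_vec, vectorize(text)))
              for doc_id, text in documents.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc_id for doc_id, _ in scored[:top_n]]

corpus = {  # hypothetical prior art snippets
    "US-1": "a battery electrode with a lithium coating",
    "US-2": "a method of brewing coffee under pressure",
    "US-3": "lithium battery cell with coated electrode layers",
}
print(rank_prior_art("lithium coated battery electrode", corpus))
# ['US-3', 'US-1', 'US-2']
```

A production system would use trained embeddings and classification signals rather than raw term counts, but the ranking step (score every candidate, sort, truncate to 10) mirrors what the ASRN delivers.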
The Automated Search Results Notice (ASRN)
According to the Notice, the results of the automated search will be the generation of an ASRN that will be sent to the applicant and placed in the application file. Importantly, applicants are not required to respond to the ASRN, but may elect to do so as outlined below. The ASRN will list up to 10 documents returned by the AI tool, “listed in descending order of relevance as determined by the AI tool.” According to the Notice, “[c]opies of the documents cited in the ASRN will not be placed in the file”— applicants will have to obtain copies themselves.
Putting The ASRN To Use
As noted above, applicants are not required to respond to the ASRN, but the USPTO hopes they will do so (when appropriate), such as by:
filing a preliminary amendment
filing a petition for express abandonment under 37 CFR 1.138(d) and requesting a refund of the search fee and any excess claims fees
filing a request for deferral of examination under 37 CFR 1.103(d)
Any of these actions must be pursued promptly, before the examiner commences examination—no specific time period to take action is guaranteed.
According to the Notice, examiners will consider the ASRN documents as they would their own search results, and will not be required to list ASRN documents on Form PTO-892 unless relied upon in a prior art rejection.
While applicants are not required to cite the ASRN documents in an Information Disclosure Statement (why would they be?), the ASRN documents will only be listed on the face of the patent if formally made of record by the examiner on a Form PTO-892 or cited by the applicant in an Information Disclosure Statement. (Yes, the USPTO has found a way to increase the already-high IDS burden!)
Evaluating The Pilot Program
As noted above, the primary goal of the pilot program is “to evaluate the impact of sharing the results of an automated search prior to examination of a patent application.” According to the Notice, the USPTO will specifically evaluate the scalability of generating and mailing the ASRN and the usefulness of the ASRN to applicants. In that regard, “the USPTO anticipates providing an avenue for participants to provide feedback regarding the pilot program.”
Promoting Applicant Participation
The success of the Automated Search Pilot Program will depend on applicant participation. It is too bad participation requires a fee, but applicants may be willing to test it out in order to receive prior art search results earlier in the process, and help the USPTO take this step towards implementing AI-assisted prior art searching.
Will Artificial Intelligence Increase the Prices of Construction Materials, Equipment, and Labor?

By now, you’ve likely seen news discussing how artificial intelligence (AI) is set to change the construction industry (and every other industry, for that matter). Typically, this discussion centers on improving business efficiency and cost savings. Many construction companies are predictably using AI to assist with project estimating, processing submittals and Requests for Information (RFIs), and, yes, contract review.
However, as more players in the construction industry adopt AI, it may lead to some potentially unexpected outcomes for contractors, like higher material, equipment, and labor costs. Earlier this year, several construction companies filed class-action antitrust lawsuits against the largest equipment rental providers in the United States, alleging a conspiracy to artificially inflate equipment rental prices. Specifically, the plaintiffs allege these providers illegally conspired to increase prices by sharing real-time, confidential data through the “Rouse Rental Insights” (RRI) program.
The lawsuits have been consolidated into the matter of In re Construction Equipment Antitrust Litigation (Case No. 1:25-cv-03487) and are pending in the United States District Court for the Northern District of Illinois.
Many large construction equipment rental providers use RRI to share pricing data from individual line items on invoices. The RRI program uses AI to aggregate pricing information and generate a recommended “RRI Price” daily for each class and category of equipment. The RRI Price considers seasonal changes and other market fluctuations to predict the optimal rental price for a given day.
The plaintiffs in the class action contend that by sharing their confidential pricing data with the RRI pricing tool and agreeing to use the AI-driven “RRI Price,” the equipment rental providers have conspired to significantly increase rental prices. The plaintiffs argue that such price increases are harmful because (1) there are relatively few large equipment rental providers; (2) buying (rather than renting) equipment is uneconomical for most contractors; and (3) increases in equipment rental rates do not significantly decrease the demand for equipment rentals.
Below is a graphic contained in the plaintiffs’ complaint showing the growth in the U.S. construction equipment rental industry since 1997, which plaintiffs contend is due in part to their alleged conspiracy:
The class action lawsuit is ongoing, and the results may determine how AI is utilized in the construction industry going forward. If the equipment rental companies successfully defend the use of the RRI Price, other industry players could adopt similar AI pricing models, which could lead to increased prices in other segments of the construction industry.
Your Year-end U.S. Privacy “To Do” List – Don’t Wait Until the Holiday Crush to Become 2026-Ready
The California Consumer Privacy Act (CCPA) requires that privacy notices be updated annually, and that the detailed disclosures it prescribes reflect the 12-month period prior to the effective (posting) date. Interestingly, failure to make annual updates was one of several alleged CCPA violations that resulted in a recent $1.35 million administrative civil penalty by the California Privacy Protection Agency (CPPA) against retailer Tractor Supply Company. In addition, three more state consumer privacy laws go into effect on January 1, 2026, which will require notice and consumer rights intake changes, if applicable. New and amended CCPA regulations will also bring new obligations for businesses starting the first of the year that need to be addressed between now and then. A general checkup is also recommended, with particular attention to enforcement priorities. Here are some things to do in preparation for 2026:
Assess which of the 20 state consumer privacy laws (CPLs) apply to your business, and update notices and rights request processes to identify which apply and address material differences in what each requires.
Consider new or modified data practices initiated in 2025, or under consideration to be introduced in 2026, complete risk assessments on them, and update the privacy notice to reflect at least the preceding 12-month period.
Implement a data processing risk assessment program, or revise the current process to reflect the new CCPA requirements, effective January 1.
Confirm you have contracts in place containing data protection terms required by CCPA and other CPLs with parties that receive (or access) your personal data – an ongoing California enforcement priority. Have these organized by service provider / processor or third party and be prepared to produce them upon regulatory inquiry.
Employers, especially in California, need to address the use of automated decision-making tools. This will become an even more complex and time-urgent matter for California employers if Governor Newsom does not veto SB-7 (the “No Robo-Bosses” Act), which would become effective January 1 and add even further requirements and restrictions on technology-assisted HR decision-making. (Note: An inadequate privacy notice and rights request process for personnel was another basis for the Tractor Supply penalty.)
Review your tracking technologies and cookie banner(s) and preference tool(s) to support a defense to wiretapping (e.g., CIPA) claims and comply with CPL notice and opt-out requirements, including browser privacy control signals, as explained here.
If you process personal data of minors, consumer health data, precise location data, biometric data, or other sensitive personal data, consider the legal requirements and limitations that have been evolving in recent years and the growing application of consumer protection law principles to limit unexpected uses.
Revisit and update your information governance roadmap or project plan and seek budget for 2026 initiatives. This should include:
Preparing for the Colorado AI Act
Preparing for California Automated Decision-making Technology rules (and address Colorado and other CPL Profiling rules)
Preparing for upcoming California cybersecurity audit requirements
Completing data mapping (required by Minnesota’s CPL and the new California cybersecurity regulations)
Many companies go on website code lock in mid-November, and Q4 is a hectic time between year-end financial closings and the holidays, so give yourself enough time to get revisions to notices, policies, and tools updated and published. Update your information governance roadmap for 2026 to reflect new laws, regulations, and enforcement trends and be sure your budget for next year reflects these needs.
AI Adoption Surges Among S&P 500 Companies—But So Do the Risks
According to Cybersecurity Dive, artificial intelligence is no longer experimental technology: more than 70% of S&P 500 companies now identify AI as a material risk in their public disclosures, per a recent report from The Conference Board. In 2023, that figure was just 12%.
The article reports that major companies are no longer just testing AI in isolated pilots; they’re embedding it across core business systems including product design, logistics, credit modeling, and customer-facing interfaces. At the same time, it is important to note, these companies acknowledge confronting significant security and privacy challenges, among others, in their public disclosures.
Reputational Risk: Leading the way is reputational risk, with more than a third of companies worried about potential brand damage. This concern centers on scenarios like service breakdowns, mishandling of consumer privacy, or customer-facing AI tools that fail to meet expectations.
Cybersecurity Risk: One in five S&P 500 companies explicitly cite cybersecurity concerns related to AI deployment. According to Cybersecurity Dive, AI technology expands the attack surface, creating new vulnerabilities that malicious actors can exploit. Compounding these risks, companies face dual exposure—both from their own AI implementations and from third-party AI applications.
Regulatory Risk: Companies are also navigating a rapidly shifting legal landscape as state and federal governments scramble to establish guardrails while supporting continued innovation.
One of the biggest drivers of these risks, perhaps, is a lack of governance. PwC’s 2025 Annual Corporate Director’s Survey reveals that only 35% of corporate boards have formally integrated AI into their oversight responsibilities—a clear indication that governance structures are struggling to keep pace with technological deployment.
Not surprisingly, innovation seems to be moving quite a bit faster than governance. That gap is contributing to various risks identified by most of the S&P 500. Extrapolating that reality, there is a good chance that small and mid-sized companies are in a similar position. Enhancing governance, such as through sensible risk assessment, robust security frameworks, training, etc., may help to narrow that gap.
USPTO’s Automated Search Pilot Program: Early Prior Art Insights—Promises and Pitfalls for Patent Applicants
The United States Patent and Trademark Office (USPTO) is rolling out a new Automated Search Pilot Program, offering applicants a first-of-its-kind opportunity to receive a pre-examination, AI-generated prior art search report. The program’s stated goals are to improve prosecution efficiency and the quality of patent examination by providing an Automated Search Results Notice (ASRN) before an examiner reviews the case. The ASRN is intended to provide an earlier communication regarding potential prior art issues and could bring about significant changes in how utility filings are prosecuted and strategized.
How it Works
The USPTO’s internal AI tool will generate the ASRN by searching the application text against multiple U.S. and foreign databases, ranking up to ten documents for relevancy. Shortly after pre-examination processing, the ASRN is issued to the applicant, providing insight into the potentially relevant prior art uncovered—but requiring no response.
This will give the applicant an opportunity to assess prior art issues before substantive examination. Applicants are not required to respond to the ASRN, but may opt to file a preliminary amendment to place the application in better condition for examination. Applicants may also request deferral of examination or file a petition for express abandonment and seek a refund of certain fees if prosecution is no longer desired.
How to Participate
Applicants filing original, noncontinuing, utility patent applications between October 20, 2025, and April 20, 2026, can participate in the program by submitting a petition (Form PTO/SB/470) and the then-current petition fee set forth in 37 CFR 1.17(f) with the application filing. The application must be filed electronically using the USPTO’s Patent Center, and the application must conform to the USPTO requirements for DOCX submission upon filing. Finally, the applicant must be enrolled in the Patent Center Electronic Office (e-Office) Action Program to participate.
International applications entering the national stage in the U.S., plant applications, design applications, reissue applications, and continuation applications are not eligible.
Potential Benefits
The clearest benefit of this program is the opportunity for applicants to see potentially relevant prior art before substantive examination proceeds. For patent owners operating in crowded technology spaces, this may mean fewer surprises and an earlier ability to refine claim strategy. Receiving an automated prior art report enables applicants to refocus the claims before substantive examination, potentially heading off costly additional cycles of prosecution. It also creates a pathway for quick decision-making: if the ASRN reveals insurmountable prior art, applicants may opt for express abandonment and seek a refund of the search and any excess claims fees.
What Applicants Need to Know
The ASRN is limited to a maximum of 10 references, which will be listed in order of relevance as ranked by the AI tool. Typically, if the AI tool actually uncovers the best art, the top handful should be sufficient. But whether the AI tool finds the best references—or at least those as good as the examiner will find—remains to be seen.
At this stage, AI technology is still nascent and not without error. As of this writing, I have tested several AI search tools and have been somewhat disappointed with the results. While these AI tools may uncover some nuggets, they are often hit-or-miss, returning irrelevant references and making poorly reasoned combinations of prior art that fail to meet legal standards for combinability. These tools can be useful, but they still require a fair amount of human oversight to achieve good results. Thus, the references returned with the ASRN may be of mixed or questionable value. I plan to test this pilot program myself to see whether the preliminary results have value.
Examiners often possess deep expertise in their assigned art units and have accumulated personal collections of highly potent art over their lengthy tenures. These “go-to” references are often uncovered based on human intuition and years of hard work, rather than searchable metadata or textual similarity. Thus, the AI system generating the ASRN may overlook these highly relevant references. Consequently, applicants may still encounter more significant or challenging prior art in the first Office Action than what appears in the automated report.
Tactical and Strategic Considerations
Any preliminary amendment to address results of the automated search should be submitted as soon as possible to reduce the likelihood that the amendment interferes with preparation of the first Office Action. Teams should establish protocols for prompt ASRN review and clear criteria for preliminary amendments, abandonment, or examination deferral.
As with any new process, in-house counsel and portfolio managers should weigh the strategic fit before participation and monitor outcomes as real-world experience with the program accumulates. Teams contemplating the pilot program should weigh whether the value of obtaining an early prior art report aligns with their portfolio management goals. For example, in fast-moving sectors where fast, strategic pivots matter, the program may offer a real advantage by streamlining prosecution. While there may be additional cost associated with reviewing the references and filing a preliminary amendment, that cost may be offset by fewer rounds of Office Actions before allowance.
Early search results from the ASRN may also assist with foreign filing decisions, which often need to be made before receiving a substantive Action in the U.S. If the ASRN reveals prior art that poses a major hurdle, applicants may decide to forgo foreign filings and avoid the accompanying expenses. Absent such insight, applicants are often forced to make these costly decisions without knowing the prior-art landscape, just to preserve their priority date. Conversely, when proceeding with foreign filings, the ASRN may enable you to refine your claims from the outset, increasing your chances of a more efficient prosecution abroad.
Final Thoughts
The USPTO’s Automated Search Pilot Program provides the potential for up-front insight into relevant prior art, allowing applicants to make more informed early decisions and streamline prosecution. While it may bring the opportunity for greater efficiency and cost savings, its true value will depend on the relevance of the AI-located references. If the Program lives up to its promise, it will allow applicants to make informed decisions up front to avoid unnecessary rounds of examination. But with a limited reference count and uncertainties associated with an AI-generated prior-art search, whether the Program translates into real benefits will not be known until we have had a chance to road-test the Program.