AI Chatbots in the Medical Field: Healthcare Hero or HIPAA Nightmare?

Healthcare professionals are finding AI to be a genuine asset for efficient communication and data organization on the job. Clinicians use AI to manage medical records and patient medications and to handle a range of medical writing and data organization tasks. AI can provide clinical-grade language processing and time-saving strategies that simplify ICD-10 coding and help clinicians complete clinical notes faster.
While these advances have been game-changers for workday efficiency, clinicians must be cognizant of the perils of using AI chatbots to communicate with patients. As background, AI chatbots are computer programs designed to simulate conversations with humans. In principle, these tools facilitate communication between patients and healthcare providers by offering continuous access to medical information, automating processes such as appointment scheduling and medication reminders, assessing symptoms, and recommending care and treatment.
When patient medical records and sensitive information are involved, however, how do clinicians strike the balance between using AI chatbots to their benefit and exercising discretion with sensitive patient data to avoid HIPAA violations? Given AI’s numerous data collection mechanisms, including tracking of browsing activity and access to individual device information, what can be done to ensure that patient information is not exposed by even short-lived bugs or breaches? Can AI companies help clinicians ensure that patient confidentiality is preserved?
First, opt-out features and encryption protocols are two ways AI tools already protect user data, but tech companies collaborating with healthcare providers to create HIPAA-compliant AI software would be even more beneficial to the medical field. Second, it is imperative for healthcare professionals to obtain patient consent and anonymize any patient data before enlisting the help of an AI chatbot. Legal safeguards, such as requiring patients to sign releases consenting to the use of their medical records for research, combined with proper anonymization of that data, may mitigate the legal risks associated with HIPAA compliance.
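To make the anonymization step concrete, the sketch below shows one minimal, illustrative approach in Python: scrubbing a few obvious structured identifiers from a free-text note before it is ever pasted into a chatbot. The patterns, labels, and sample note are hypothetical, and simple pattern matching like this is not a substitute for full HIPAA Safe Harbor or Expert Determination de-identification.

```python
import re

# Illustrative PHI scrubbing pass. This is NOT full HIPAA de-identification
# (Safe Harbor covers 18 identifier categories); it only catches a few
# obvious structured identifiers before text is shared with an AI chatbot.
PHI_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(note: str) -> str:
    """Replace obvious structured identifiers with labeled placeholders."""
    for label, pattern in PHI_PATTERNS.items():
        note = pattern.sub(f"[{label} REDACTED]", note)
    return note

if __name__ == "__main__":
    sample = ("Pt Jane Doe, MRN: 00482913, DOB 04/12/1986, "
              "call 555-867-5309 re: metformin refill.")
    print(redact_phi(sample))
    # Free-text names and other identifiers still require NER-based or
    # manual review inside a HIPAA-compliant environment before any text
    # leaves it.
```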
For further assistance in managing the risks associated with AI, healthcare providers can turn to the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) to evaluate risks related to AI systems. NIST, a non-regulatory Federal agency within the U.S. Department of Commerce, published this voluntary guidance to help entities manage the risks of AI systems and promote responsible AI development.
Leveraging the vast capabilities of artificial intelligence, alongside robust data encryption and strict adherence to HIPAA compliance protocols, will enhance the future of healthcare for patients and healthcare providers alike.

WashU Law’s Bold Bid to Become the Global Leader in Legal AI

As AI becomes an integral part of legal practice, law schools nationwide are updating their curricula to prepare students for an AI-driven profession. Washington University in St. Louis School of Law is among the institutions investing heavily in this area, with a stated goal of becoming the global leader in AI education and training. But it’s not without competition.
This past year, WashU Law entered the U.S. News & World Report’s top 14 law schools, surpassing Cornell Law School for the coveted “T14” designation. Building on that momentum, WashU Dean Stefanie Lindquist has made her law school’s AI ambitions clear.
“WashU Law is positioning itself as the nation’s leading law school in artificial intelligence and the law, and our vision is global,” Dean Lindquist told The National Law Review in an exclusive interview.
So far, Dean Lindquist and WashU Law have taken concrete steps to realize their lofty ambitions.
Last week, WashU Law announced a star-studded AI Advisory Board featuring some of the biggest players in legal tech, including Pablo Arredondo (Co-Founder, Casetext & VP, Thomson Reuters), Keith Carlson (CTO, Relativity), Judge Joshua Deahl (D.C. Court of Appeals), John Haddock (Chief Business Officer, Harvey), Hon. Bridget McCormack (CEO & President, American Arbitration Association; former Chief Justice, Michigan Supreme Court), Sara Miro (Director of Knowledge & Innovation, Sullivan & Cromwell LLP), Blake Rooney (CIO, Husch Blackwell), Evan Shenkman (Chief Innovation Officer, Fisher Phillips), Scott Stevenson (CEO & Founder, Spellbook), and Max Junestrand (CEO/Founder, Legora).
The AI Advisory Board’s “expertise and vision will ensure WashU Law continues to lead in preparing lawyers, scholars, and policymakers for the future of law in a rapidly changing world,” said Dean Lindquist.
Last January, WashU Law became one of the first law schools to offer comprehensive AI training for students, faculty, and alumni through a week-long program with Wickard.ai, covering AI fundamentals, legal applications, ethics, and future challenges. Building on that foundation, WashU then launched the WashU Law AI Collaborative, led by lecturer Ryan Durrie and adjunct professor Oliver Roberts, to drive AI policy research, AI education, CLE programming, and events advancing dialogue on AI and the law.
Some of these initiatives include WashU Law’s AI Policy CLE Series, a summer AI & the Future of Law program for aspiring law students, and Legal AI Demo Days, featuring top legal tech companies. Next month, WashU Law will bring its AI programming to Mississippi, where Professor Roberts will lead AI ethics training at the U.S. District Court for the Northern District of Mississippi’s 17th Annual Bench & Bar CLE Conference. Furthermore, in late October, WashU Law will host Legal Tech Week in conjunction with The National Law Review, where leaders in the legal AI industry will introduce some of the most exciting new AI tools available to the legal profession.
Through its AI Collaborative, WashU Law has also expanded its AI influence globally by developing AI partnerships with leading law schools around the world, including the University of Nottingham, Utrecht University, Universidad de La Sabana, Universidad Pontificia Comillas, Fudan University, and the University of Queensland School of Law.
“Through new international partnerships, from Dubai to Europe and beyond, we are extending WashU Law’s reach in the field of AI and legal innovation. These collaborations bring together scholars, practitioners, and technologists worldwide, enabling us to learn and lead on a global stage,” noted Dean Lindquist.
In January 2026, WashU Law is scheduled to host a global AI summit in Dubai, bringing together the top lawyers, regulators, and academics in the AI space, according to Dean Lindquist.
But as WashU Law aims to be the global leader in AI education, it still faces strong competition at home.
In February, Case Western Reserve School of Law became the first U.S. law school to require AI education for first-year law students. This groundbreaking program, designed and delivered by Wickard.ai, provided CWRU law students with hands-on, foundational knowledge in AI and legal practice.
Suffolk Law School has also long been at the forefront of legal technology education. Starting this fall, the law school requires all first-year law students to take a course in Generative AI, and it recently launched an LLM in Legal Innovation and Technology. The law school also brought on adjunct professor Tom Martin, CEO and Founder of LawDroid, to teach courses in generative AI.
The University of Chicago Law School now offers courses in “Generative AI in Legal Practice,” “Editing, Advocacy, and AI,” and “Regulation of AI: Legal and Constitutional Issues.” The University of Pennsylvania Carey Law School is also offering courses in generative AI this fall.
Vanderbilt Law School has also taken significant steps to advance legal AI education. The law school recently launched the Vanderbilt AI Law Lab (VAILL), which serves as a resource hub and training center for students and helps design legal AI tools. Next month, it will be hosting its inaugural Vanderbilt AI Governance Symposium, led by VAILL Co-Director and Professor of Law Mark Williams.
Similarly, the University of Miami School of Law unveiled the Miami Law and AI Lab led by Director Or Cohen-Sasson. The lab is an interdisciplinary effort to bridge the fields of law, technology, and policy, and the lab recently developed an AI-powered Bluebook citation tool.
As more law schools experiment with integrating AI into their curricula, the field remains wide open, with no single institution yet claiming clear dominance. In the coming months, as AI use in legal practice continues to grow, we can expect many more law schools to incorporate AI education into their programs. Schools that move early may gain a competitive edge in attracting prospective students and giving their graduates a distinct advantage in clerkships, summer associate programs, and long-term career placement.

Early Jurisprudence from Beijing on the Intersection of Artificial Intelligence, Copyright and Personality Rights

On September 10, 2025, the Beijing Internet Court released eight Typical Cases Involving Artificial Intelligence (涉人工智能典型案例) “to better serve and safeguard the healthy and orderly development of the artificial intelligence industry.” Typical cases in China serve as educational examples, unify legal interpretation, and guide lower courts and the general public. While not legally binding precedents like those in common law systems, these cases provide authoritative guidance in a civil law system where codified statutes are the primary source of law. Note, though, that these cases come from the Beijing Internet Court and, unless designated as Typical by the Supreme People’s Court (SPC), as in cases #2 and #8 below, may carry limited authority. Nonetheless, the cases provide early insight into Chinese legal thinking on AI and, coming from a court in China’s capital, may be “more equal than others.”
The following can be derived from the cases:

AI-generated works can be protected by copyright and the author is the person entering the prompts into the AI.
Designated Typical by the SPC: Personality rights extend to AI-generated voices that possess sufficient identifiability based on tone, intonation, and pronunciation style that would allow the general public to associate the voice with the specific person. 
Designated Typical by the SPC: Personality rights extend to people’s virtual images and unauthorized creation or use of a person’s AI-generated virtual image constitutes personality rights infringement.
Network platforms using algorithms to detect and remove AI-generated content must provide reasonable explanations for their algorithmic decisions, particularly where requiring the user to supply evidence of human creation would be unreasonable.
Virtual digital persons or avatars demonstrating unique aesthetic choices in design elements constitute artistic works protected by copyright law.

As explained by the Beijing Internet Court:
Case I: Li v. Liu: Infringement of the Right of Authorship and the Right of Network Dissemination—Determination of the Legal Attributes and Ownership of AI-Generated Images
[Basic Facts] On February 24, 2023, plaintiff Li XX used the generative AI model Stable Diffusion to create a photographic close-up of a young girl at dusk. The plaintiff selected the model, input prompt words and reverse prompt words, and set generation parameters, and the model generated an image. The image was then published on the Xiaohongshu platform on February 26, 2023. On March 2, 2023, defendant Liu XX published an article on his registered Baijiahao account, using the image in question as an illustration. The posted image had the plaintiff’s signature watermark removed and provided no source information. Consequently, plaintiff Li filed a lawsuit, requesting an apology and compensation for economic losses. An in-court inspection revealed that, when using the generative AI model Stable Diffusion, changing individual prompt words or parameters caused the model to generate different images.
[Judgment] The court held that, judging by the appearance of the images in question, they are no different from ordinary photographs and paintings, clearly belonging to the realm of art and possessing a certain form of expression. The images in question were generated by the plaintiff using generative artificial intelligence technology. From the moment the plaintiff conceived the images in question to their final selection, the plaintiff exerted considerable intellectual effort, thus satisfying the requirement of “intellectual achievement.” The images in question themselves exhibit discernible differences from prior works, reflecting the plaintiff’s selection and arrangement. The process of adjustment and revision also reflects the plaintiff’s aesthetic choices and individual judgment. In the absence of evidence to the contrary, it can be concluded that the images in question were independently created by the plaintiff, reflecting his or her individual expression, thus satisfying the requirement of “originality.” The images in question are aesthetically significant two-dimensional works of art composed of lines and colors, classified as works of fine art, and protected by copyright law. Regarding the ownership of the rights to the works in question, copyright law stipulates that authors are limited to natural persons, legal persons, or unincorporated organizations. Therefore, an artificial intelligence model itself cannot constitute an author under Chinese copyright law. The plaintiff configured the AI model in question as needed and ultimately selected the images in question. The images in question were directly generated based on the plaintiff’s intellectual input and reflect the plaintiff’s personal expression. Therefore, the plaintiff is the author of the images in question and enjoys the copyright to them. The defendant, without permission, used the images in question as illustrations and posted them on his own account, allowing the public to access them at a time and location of their choosing. This infringed the plaintiff’s right to disseminate the images on the Internet. Furthermore, the defendant removed the signature watermark from the images in question, infringing upon the plaintiff’s right of authorship, and should bear liability for infringement. The verdict ordered the defendant, Liu XX, to apologize and compensate for economic losses. Neither party appealed the verdict, which has since come into effect.
[Typical Significance] This case clarifies that content generated by humans using AI, if it meets the definition of a work, should be considered a work and protected by copyright law. Furthermore, the copyright ownership of AI-generated content can be determined based on the original intellectual contribution of each participant, including the developer, user, and owner. While addressing new rights identification challenges arising from technological change, this case also provides a practical paradigm for institutional adaptation and regulatory responses to the judicial review of AI-generated content, demonstrating its guiding and directional significance. This case was selected as one of the top ten nominated cases for promoting the rule of law in the new era in 2024, one of the ten most influential events in China’s digital economy development and rule of law in 2024, and one of the top ten events in China’s rule of law implementation in 2023.
Case II: Yin XX v. A Certain Intelligent Technology Company, et al., Personal Rights Infringement Dispute—Can the Rights of a Natural Person’s Voice Extend to AI-Generated Voices?
[Basic Facts] Plaintiff Yin XX, a voice actor, discovered that works produced using his voice acting were widely circulated on several well-known apps. After audio screening and source tracing, it was discovered that the voices in these works originated from a text-to-speech product on a platform operated by the first defendant, a certain intelligent technology company. The plaintiff had previously been commissioned by the second defendant, a cultural media company, to record a sound recording, in which the second defendant held the copyright. The second defendant subsequently provided the audio recorded by the plaintiff to the third defendant, a certain software company. The third defendant used only one of the plaintiff’s recordings as source material, subjected it to AI processing, and generated the text-to-speech product in question. The product was then sold on a cloud service platform operated by the fourth defendant, a certain network technology company. The first defendant entered into an online services sales contract with the fifth defendant, a certain technology development company. The fifth defendant placed an order with the third defendant, which included the text-to-speech product in question. The first defendant used an application programming interface (API) to directly access and generate the text-to-speech product for use on its platform without any technical processing. The plaintiff claimed that the defendants’ actions had seriously infringed upon his voice rights and demanded that the first defendant, the intelligent technology company, and the third defendant, the software company, immediately cease the infringement and apologize. The plaintiff also requested that the five defendants compensate him for economic and emotional losses.
[Judgment] The court held that a natural person’s voice, distinguished by its voiceprint, timbre, and frequency, possesses unique, distinctive, and stable characteristics. It can generate or induce thoughts or emotions associated with that person in others, and can publicly reveal an individual’s behavior and identity. The recognizability of a natural person’s voice means that, based on repeated or prolonged listening, the voice’s characteristics can be used to identify a specific natural person. Voices synthesized using artificial intelligence are considered recognizable if they can be associated with that person by the general public or the public in relevant fields based on their timbre, intonation, and pronunciation style. In this case, Defendant No. 3 used only the plaintiff’s personal voice to develop the text-to-speech product in question. Furthermore, court inspection confirmed that the AI voice was highly consistent with the plaintiff’s voice in terms of timbre, intonation, and pronunciation style. This could evoke thoughts or emotions associated with the plaintiff in the average person, allowing the voice to be linked to the plaintiff and, therefore, identified. Defendant No. 2 enjoys copyright and other rights in the sound recording, but this does not include the right to authorize others to use the plaintiff’s voice in an AI-based manner. Defendant No. 2 signed a data agreement with Defendant No. 3, authorizing Defendant No. 3 to use the plaintiff’s voice in an AI-based manner without the plaintiff’s informed consent, lacking any legal basis for such authorization. Therefore, Defendants No. 2 and No. 3’s defense that they had obtained legal authorization from the plaintiff is unsustainable. Defendants No. 2 and No. 3 used the plaintiff’s voice in an AI-based manner without the plaintiff’s permission, constituting an infringement of the plaintiff’s voice rights. Their infringement resulted in the impairment of the plaintiff’s voice rights, and they bear corresponding legal liability. Defendants No. 1, No. 4, and No. 5 were not subjectively at fault and are not liable for damages. Damages were therefore determined based on a comprehensive consideration of the defendants’ infringement, the value of similar products in the market, and the number of views. Verdict: Defendants No. 1 and No. 3 shall issue written apologies to the plaintiff, and Defendants No. 2 and No. 3 shall compensate the plaintiff for economic losses. Neither party appealed the verdict, and the judgment has entered into force.
[Typical Significance] This case clarifies the criteria for determining whether sounds processed by AI technology are protected by voice rights, establishes behavioral boundaries for the application of new business models and technologies, and helps regulate and guide the development of AI technology toward serving the people and promoting good. This case was selected by the Supreme People’s Court as a typical case commemorating the fifth anniversary of the promulgation of the Civil Code, a typical case involving infringement of personal rights through the use of the Internet and information technology, and one of the top ten Chinese media law cases of 2024.
Case III: Li XX v. XX Culture Media Co., Ltd., Internet Infringement Liability Dispute Case—Using AI-synthesized celebrity voices for “selling products” without the rights holder’s permission constitutes infringement, and the merchant commissioning the promotion shall bear joint and several liability.
[Basic Facts] Plaintiff Li holds a certain level of fame and social influence in the fields of education and childcare. In 2024, Plaintiff Li discovered that Defendant XX Culture Media Co., Ltd. was promoting several of its family education books on an online platform store using videos of Plaintiff Li’s public speeches and lectures, accompanied by an AI-synthesized voice that closely resembled the Plaintiff’s voice. Plaintiff argued that the Defendant’s unauthorized use of Plaintiff’s portrait and AI-synthesized voice in promotional products closely associated the Plaintiff’s persona with the target audience of the commercial promotion, misleading consumers into believing the Plaintiff was the spokesperson or promoter for the books. The Defendant exploited the Plaintiff’s persona, professional background, and social influence to attract attention and increase sales, thereby infringing upon the Plaintiff’s portrait and voice rights. As a book seller, the defendant had a commission relationship with the video publisher (a live streamer), and the two jointly completed sales activities. The defendant had the obligation and ability to review the live streamer’s videos and should bear tort liability, including an apology and compensation for losses, for the publication of the video in question.
[Judgment] The court held that the video in question used the plaintiff Li’s portrait and an AI-synthesized voice. This voice was highly consistent with Li’s own voice in timbre, intonation, and pronunciation style. Considering Li’s fame in the education and parenting fields, the video in question promoted family education books, making it easier for viewers to connect the relevant content in the video with Li. It can be determined that a certain range of listeners could establish a one-to-one correspondence between the AI-synthesized voice and the plaintiff himself. Therefore, the voice in question fell within the scope of protection of Li’s voice rights. The promotional video in question extensively used the plaintiff’s portrait and synthesized voice without the plaintiff’s authorization. Therefore, the publication of the video in question constituted an infringement of the plaintiff’s portrait and voice rights. Defendant XX Culture Media Co., Ltd. and the video publisher (a live streamer) entered into a commissioned promotion relationship in accordance with the platform’s rules and service agreements. They jointly published the video in question for the purpose of promoting the defendant’s book and generating corresponding revenue. Furthermore, the defendant, based on the platform’s rules and management authority, possessed the ability to review and manage the video in question. Given that the video in question extensively used the plaintiff’s likeness and a simulated synthesized voice, the defendant should have had a degree of foresight regarding the potential infringement risks posed by the video and exercised reasonable scrutiny to determine whether the video had been authorized by the plaintiff. However, the evidence in the case demonstrates that the defendant failed to exercise due diligence in its review. Therefore, defendant XX Culture Media Co., Ltd. should bear joint and several liability with the video publisher for publishing the infringing video. The judgment ordered defendant XX Culture Media Co., Ltd. to apologize to plaintiff Li and compensate for economic losses and reasonable expenses incurred in defending his rights. The judgment dismissed plaintiff Li’s other claims. Neither party appealed the verdict, and the judgment has entered into force.
[Typical Significance] With the rapid development of generative artificial intelligence technology, “cloned” and misused celebrity voices have become increasingly difficult to distinguish from genuine ones, leading to widespread infringement and significant consumer misinformation. This case clarifies that as long as a natural person’s voice synthesized using AI deep synthesis technology enables the general public or the public in relevant fields to identify a specific natural person based on its timbre, intonation, pronunciation style, and similar features, it is identifiable and should be included in the scope of protection of the natural person’s voice rights. At the same time, in the legal relationship in which merchants entrust video publishers to promote products, the platform merchant, as the entrusting party and actual beneficiary, has a reasonable obligation to review the promotional content published by its affiliated influencers. Merchants cannot be exempted from liability simply on the grounds of “passive cooperation” or “no participation in production.” If they fail to fulfill their duty of care in review, they must bear joint and several liability with the influencers who promote the products. This provides normative guidance for standardizing e-commerce promotion behavior, strengthening the primary responsibilities of merchants, and governing the chaos of AI “voice substitution,” promoting the positive development of artificial intelligence and deep synthesis technology.
Case IV: Liao v. XX Technology and Culture Co., Ltd., Internet Infringement Liability Dispute Case—Unauthorized “AI Face-Swapping” of Videos Containing Others’ Portraits, Constituting an Infringement of Others’ Personal Information Rights
[Basic Facts] The plaintiff, Liao, is a short video blogger specializing in ancient Chinese style, with a large online following. The defendant, XX Technology and Culture Co., Ltd., without his authorization, used a series of videos featuring the plaintiff to create face-swapping templates, uploaded them to the software at issue, and provided them to users for a fee, profiting from the process. The plaintiff claims the defendant’s actions infringe upon his portrait rights and personal information rights and demands a written apology and compensation for economic and emotional damages. The defendant, XX Technology and Culture Co., Ltd., argues that the videos posted on the defendant’s platform have legitimate sources and that the facial features do not belong to the plaintiff, thus not infringing the plaintiff’s portrait rights. Furthermore, the “face-swapping technology” used in the software at issue was actually provided by a third party, and the defendant did not process the plaintiff’s personal information, thus not infringing the plaintiff’s personal information rights. The court found that the face-swapping template videos at issue shared the same makeup, hairstyle, clothing, movements, lighting, and camera transitions as the series of videos created by the plaintiff, but the facial features of the individuals featured were different and did not belong to the plaintiff. The software in question uses a third-party company’s service to implement face-swapping functionality. Users pay a membership fee to unlock all face-swapping features.
[Judgment] The court held that the key to determining whether portrait rights have been infringed lies in recognizability. Recognizability emphasizes that the essence of a portrait is to point to a specific person. While the scope of a portrait centers around the face, it may also include unique body parts, voices, highly recognizable movements, and other elements that can be associated with a specific natural person. In this case, although the defendant used the plaintiff’s video to create a video template, it did not utilize the plaintiff’s portrait. Instead, it replaced the plaintiff’s facial features through technical means. The makeup, hairstyle, clothing, lighting, and camera transitions retained in the template are not inseparable from a specific natural person and are distinct from the natural personality elements of a natural person. The subject that the general public identifies through the replaced video is a third party, not the plaintiff. Furthermore, the defendant’s provision of the video template to users did not vilify, deface, or falsify the plaintiff’s portrait. Therefore, the defendant’s actions did not constitute an infringement of the plaintiff’s portrait rights. However, the defendant collected videos containing the plaintiff’s facial information and replaced the plaintiff’s face in those videos with a photo provided by the defendant. This synthesis process required algorithmically integrating features from the new static image with some facial features and expressions from the original video. This process involved the collection, use, and analysis of the plaintiff’s personal information, constituting the processing of the plaintiff’s personal information. The defendant processed this information without the plaintiff’s consent, thus infringing upon the plaintiff’s personal information rights. If the defendant infringes upon the creative work of others by using videos produced by others without authorization, the relevant rights holder should assert their rights. The judgment ordered the defendant to issue a written apology to the plaintiff and compensate the plaintiff for emotional distress. Neither party appealed the verdict, and the judgment has entered into force.
[Typical Significance] This case, centered on the new business model of “AI face-swapping,” accurately distinguished between portrait rights, personal information rights, and legitimate rights based on labor and creative input in the generation of synthetic AI applications. This approach not only safeguards the legitimate rights and interests of individuals, but also leaves room for the development of AI technology and emerging industries, and provides a valuable opportunity for service providers.
Case V: Tang v. XX Technology Co., Ltd., Internet Service Contract Dispute Case—An online platform using algorithmic tools to detect AI-generated content but failing to provide reasonable and appropriate explanations should be held liable for breach of contract.
[Basic Facts] Plaintiff Tang posted a 200-word text on an online platform operated by defendant XX Technology Co., Ltd., stating, “Working part-time doesn’t make you much money, but it can open up new perspectives… If you’re interested in learning to drive and plan to drive in the future, you can do it during your free time during your vacation… After work, you won’t have much time to get a license.” The platform operated by defendant XX Technology Co., Ltd. classified the content as a violation for “containing AI-generated content without identifying it,” hid it, and banned the user for one day. Plaintiff Tang’s appeal was unsuccessful. He argued that he did not use AI to create the content and that defendant XX Technology Co., Ltd.’s actions constituted a breach of contract. He requested that the court order the defendant to revoke the measures of hiding the content and banning the account for one day, and to delete the violation record from its backend system.
[Judgment] The court held that when internet users create content using AI tools and post it to online platforms, they should label it truthfully in accordance with the principle of good faith. The defendant issued a community announcement requiring creators to proactively use labels when posting content containing AIGC. For content that fails to do so, the platform will take appropriate measures to restrict its circulation and add relevant labels. Posting AI-generated content without such labels constitutes a violation. The plaintiff is a registered user of the platform, and the aforementioned announcement is part of the platform’s service agreement. The defendant has the right to review and address user-posted content as AI-generated and synthetic content in accordance with the agreement. Generally, the plaintiff should provide preliminary evidence, such as manuscripts, originals, source files, and source data, to prove that the content was created by a human. However, in this case, the plaintiff’s responses were created in real time, making it objectively impossible to provide such evidence; it was therefore neither reasonable nor feasible to require the plaintiff to provide it. The defendant concluded that the content in question was AI-generated and synthetic based on the results of its algorithmic tool. The defendant is both the controller and judge of the algorithmic tool, controlling the tool’s operation and review results, and should provide reasonable evidence or explanation for its conclusion. Although the defendant provided the algorithm’s filing information, its relevance to the dispute could not be confirmed. The defendant failed to adequately explain the algorithm’s decision-making basis and results, or to rationally justify its determination that the content in question was AI-generated and synthesized. The defendant’s manual-review standard, which required the content to display strongly human emotional characteristics, also lacked scientific basis, persuasiveness, and credibility. The defendant should therefore bear liability for breach of contract for handling the account in question without factual basis. The court ordered the defendant to repost the content and delete the relevant backend records. The defendant appealed the first-instance judgment but later withdrew the appeal, and the first-instance judgment stood.
[Typical Significance] This case is a valuable exploration of the labeling, platform identification, and governance of AI-generated content within the context of judicial review. On the one hand, it affirms the positive role of online content service platforms in using algorithmic tools to review and process AI-generated content and fulfill their primary responsibility as information content managers. On the other hand, it recognizes the obligation of online content service platforms to adequately explain the results of automated algorithmic decisions in the context of real-time text creation, and establishes a standard for the level of explanation required during judicial review. Through case-by-case adjudication, this case rationally distributes the burden of proof between online content service platforms and users, encourages online content service platforms to improve the recognition and decision-making capabilities of their algorithms, and effectively improves the level of artificial intelligence information content governance.
Case VI: Cheng v. Sun, Online Infringement Liability Dispute Case—Using AI Software to Parody and Deface Others’ Portraits Constitutes Personality Rights Infringement
[Basic Facts] Plaintiff Cheng and defendant Sun were both members of a photography exchange WeChat group. Without Cheng’s consent, defendant Sun used AI software to create an anime-style image of Cheng from her WeChat profile photo, showing her as a scantily clad woman. He then sent the image to the group. Despite repeated attempts by plaintiff Cheng to dissuade him, defendant Sun continued to use the AI software to create anime-style images of Cheng showing her as a scantily clad woman with a distorted figure, and sent them to the plaintiff via private WeChat messages. Plaintiff Cheng believes that the allegedly infringing images, sent by defendant Sun to the group and in private messages, are recognizable as the plaintiff’s own image and contain significant sexual connotations and derogatory qualities, thereby diminishing her public reputation and infringing her portrait rights, reputation rights, and general personality rights. Plaintiff Cheng therefore demands an apology and compensation for emotional and economic losses.
[Judgment] The court held that the allegedly infringing image posted by defendant Sun in the WeChat group was generated by AI using plaintiff Cheng’s WeChat profile picture without authorization. The image closely resembled plaintiff Cheng’s appearance in terms of facial shape, posture, and style. WeChat group members were able to identify the plaintiff as the subject of the allegedly infringing image based on the appearance of the person and the context of the group chat. Therefore, the defendant’s group posting constituted an infringement of the plaintiff’s portrait rights. The plaintiff’s personal portrait displayed through her WeChat profile picture served as an identifier of her online virtual identity. The defendant’s use of AI software to generate the allegedly infringing image transformed the plaintiff’s well-dressed WeChat profile picture into an image revealing her breasts. This triggered inappropriate discussion within the WeChat group targeting the plaintiff and objectively led to vulgarized evaluations of her by others, constituting an infringement of her portrait rights and reputation rights. Furthermore, the defendant Sun used AI software and the plaintiff Cheng’s WeChat profile picture to create an image with wooden legs and even three arms. The figure in the image clearly does not conform to basic human anatomy, and the chest is also exposed. The defendant’s private messaging of these images to the plaintiff inevitably caused psychological humiliation, violated her personal dignity, and constituted an infringement of her general personality rights. The judgment ordered Sun to publicly apologize to Cheng and compensate her for emotional distress. Neither party appealed the verdict, and the judgment has entered into force.
[Typical Significance] In this case, the court found that the unauthorized use of AI software to spoof and vilify another person’s portrait constituted an infringement of that person’s personality rights. The court emphasized that users of generative AI technology must abide by laws and administrative regulations, respect social morality and ethics, respect the legitimate rights and interests of others, and refrain from endangering the physical and mental health of others. The court thereby clarified the behavioral boundaries for ordinary internet users using generative AI technology, which has exemplary significance for strengthening the protection of natural persons’ personality rights in the era of artificial intelligence.
Case VII: A Technology Co., Ltd. and B Technology Co., Ltd. v. Sun XX and X Network Technology Co., Ltd., Copyright Ownership and Infringement Dispute—Original Avatar Images Constitute Works of Art
[Basic Facts] Virtual Digital Humans [avatars] A and B were jointly produced by four entities, including plaintiff A Technology Co., Ltd. and plaintiff B Technology Co., Ltd. Plaintiff A Technology Co., Ltd. is the copyright owner, and plaintiff B Technology Co., Ltd. is the licensee. Virtual Digital Human A has over 4.4 million followers across various platforms and was recognized as one of the eight hottest events of the year in the cultural and tourism industries in 2022. The two plaintiffs claimed that the images of Virtual Humans A and B constitute works of art. Virtual Human A’s image was first published in the first episode of the short drama “Thousand ***,” and Virtual Human B’s image was first published on the Weibo account “Zhi****.” After resigning, Sun XX, an employee of one of the co-creation units, sold models of Virtual Humans A and B on a model website operated by defendant X Network Technology Co., Ltd. without authorization, infringing the two plaintiffs’ rights to reproduce and disseminate the virtual human images. As the platform provider, defendant X Network Technology Co., Ltd. failed to fulfill its supervisory responsibilities and should bear joint and several liability with defendant Sun XX.
[Judgment] The court held that the full-body image of virtual human A and the head image of virtual human B were not directly derived from real people, but were created by a production team. They possess distinct artistic qualities, reflecting the team’s unique aesthetic choices and judgment regarding line, color, and specific image design. They meet the requirements of originality and constitute works of art. The defendant, Sun, published the allegedly infringing model on a model website. The model’s facial features, hairstyle, hair accessories, clothing design, and overall style, particularly the combination of original elements in the copyrighted works, are identical or similar to virtual human A and virtual human B in the copyrighted works. This constitutes substantial similarity and infringes the plaintiffs’ right to disseminate the works through information networks. Taking into account factors such as the specific type of service provided by the platform defendant, the degree of interference with the content in question, whether it directly obtained economic benefits, the fame of the copyrighted works, and the popularity of the copyrighted content, the platform defendant, as a network service provider, did not commit joint infringement. Virtual human figures carry multiple rights and interests; this case determines only the rights and interests in the copyrighted works. The amount of economic compensation in this case was determined by comprehensively considering the type of rights sought to be protected, their market value, the subjective fault of the infringer, the nature and scale of the infringing acts, and the severity of the damages. Verdict: Defendant Sun was ordered to compensate the two plaintiffs for their economic losses. Defendant Sun appealed the first-instance judgment. The second-instance court dismissed the appeal and upheld the original judgment.
[Typical Significance] This case concerns the legal attributes and originality of virtual digital human images. Virtual digital humans consist of two components: external representation and technical core, possessing a digital appearance and human-like functions. Regarding external representation, if a virtual human embodies the production team’s unique aesthetic choices and judgment regarding lines, colors, and specific image design, and thus meets the requirements for originality, it can be considered a work of art and protected by copyright law. This case provides a reference for similar adjudications, contributing to the prosperity of the virtual digital human industry and the development of new-quality productivity.
Case VIII: He v. Artificial Intelligence Technology Co., Ltd., a Case of Online Infringement Liability Dispute—Creating an AI avatar of a natural person without consent constitutes an infringement of personality rights
[Basic Facts] A certain artificial intelligence technology company (hereinafter referred to as the defendant) is the developer and operator of a mobile accounting app. Users can create their own “AI companions” within the software, setting their names and profile pictures and establishing relationships with them (e.g., boyfriend/girlfriend, sibling, mother/son, etc.). The plaintiff, He, a well-known public figure, was set as a companion by numerous users within the software. When users set “He” as a companion, they uploaded numerous portrait images of He as profile pictures and also set relationships. The defendant, through algorithmic deployment, categorized the companion “He” based on these relationships and recommended this character to other users. To make the AI character more human-like, the defendant also implemented a “training” algorithm for the AI character. This involved users uploading interactive content such as text, portraits, and animated expressions, with some users participating in review. The software then screened and categorized the content to create the character data. The software could push relevant “portrait emoticons” and “sultry phrases” to users during conversations with “Ms. He,” based on topic categories and the character’s personality traits, creating a user experience that evokes a genuine interaction with the real person. The plaintiff claims that the defendant’s actions infringe upon her right to name, right to likeness, and general personality rights, and therefore brought the case to court.
[Judgment] The court held that the defendant’s actions constitute an infringement of the plaintiff’s right to name, right to likeness, and general personality rights. Under the functionalities and algorithmic design of the defendant’s software, users used Ms. He’s name and likeness to create virtual characters and interactive corpus materials. This projected an overall image, a composite of Ms. He’s name, likeness, and personality traits, onto the AI character, creating Ms. He’s virtual image. This constitutes a use of Ms. He’s overall personality, including her name and likeness. The defendant’s use of Ms. He’s name and likeness without her permission constitutes an infringement of her rights to name and likeness. At the same time, users can establish virtual identities with the AI character, setting any mutual titles. By creating linguistic material to “tune” the character, users make the AI character closely resemble the real person, allowing them to easily experience a seemingly genuine interaction with Ms. He. This use, without Ms. He’s consent, infringes upon her personal dignity and personal freedom, constituting an infringement of her general personality rights and interests. Furthermore, the services provided by the software in question are fundamentally different from neutral technical services. The defendant, through rule-setting and algorithmic design, organized and encouraged users to generate infringing material, co-creating the virtual avatars with them and incorporating them into user services. The defendant’s product design and algorithmic application encouraged and organized the creation of the virtual avatars in question, directly determining the implementation of the software’s core functionality. The defendant is therefore no longer a neutral technical service provider, but rather bears liability for infringement as a network content service provider. Judgment: The defendant was ordered to publicly apologize to the plaintiff and compensate the plaintiff for emotional and economic losses. The defendant appealed the first-instance judgment but later withdrew the appeal, and the first-instance judgment came into effect.
[Typical Significance] This case clarifies that a natural person’s personality rights extend to their virtual image. The unauthorized creation and use of a natural person’s virtual image constitutes an infringement of that person’s personality rights. Internet service providers that, through algorithmic design, substantially participate in the generation and provision of infringing content should bear tort liability as content service providers. This case is of great significance in strengthening the protection of personality rights and has been selected by the Supreme People’s Court as a “Typical Civil Case on the Judicial Protection of Personality Rights after the Promulgation of the Civil Code” and a reference case for Beijing courts.
The original text can be found here: 北京互联网法院涉人工智能典型案例 (Chinese only).

FTC Queries AI Companies on Safeguards for Children and Pilot Program Uses AI to Authorize Medicare Treatments — AI: The Washington Report

The FTC is demanding documentation from major AI companies, such as Meta, to assess how children interact with AI products and what protections are currently in place, amid concerns over mental health risks and inappropriate chatbot interactions. The FTC has emphasized the dual goal of protecting kids online while also supporting US leadership in AI innovation. The orders, issued under the FTC’s Section 6(b) authority, are part of a broader effort to increase oversight as generative AI becomes more embedded in consumer platforms — especially those frequented by minors. The Commission approved the move unanimously, with individual statements from Commissioners Melissa Holyoak and Mark Meador.
A federal pilot program launching in 2026 will use AI to authorize treatments for Medicare patients in six states, drawing backlash from some lawmakers and experts who fear it could lead to wrongful denials of care; Rep. Greg Landsman (D-OH) called it the “AI death panel.” Both developments highlight bipartisan concerns over AI’s unchecked role in health care and social media, with lawmakers pushing for independent reviews and stricter safeguards. The FTC inquiry centers on how these companies handle issues like age-appropriate design, disclosure of risks to users and parents, compliance with the Children’s Online Privacy Protection Act (COPPA), and enforcement of community guidelines. The FTC also wants to know how companies develop chatbot characters, process user data, and inform users about potential harms and data collection practices.

FTC Will Use Its Statutory Authority to Obtain from AI Companies and Developers Information About Safeguards for Children’s Use of AI
On September 11, the Federal Trade Commission (FTC) announced a formal inquiry requiring several AI companies and developers to provide documents about how children use their products and what safeguards they have implemented. Parents and advocacy groups have pushed the Trump administration and Congress to prioritize the protection of children, as increased use of chatbots and social media platforms has been linked to harmful impacts on children’s mental health.
This follows recent reports revealing that Meta had allowed its chatbots to engage in inappropriate “conversations that are romantic or sensual” with children. Meta is among several major companies asked to provide details on how their chatbots and AI models are trained, how the data of younger users is used, and what next steps they plan for better protecting children and teenagers and managing potential risks. Sen. Josh Hawley (R-MO) launched an investigation into Meta Platforms Inc. over concerns about unregulated use of AI on social media platforms, setting a September 19 deadline for the company to respond. The inquiry also follows a rise in recent lawsuits and reports accusing Meta chatbots of being complicit in exploitation, harms to mental health, and suicides of young people.
Due to the rapid evolution of AI, generative chatbots can “simulate human-like communication and interpersonal relationships with users,” creating a heightened risk that children and teens will develop trusting relationships with chatbots. Companies will additionally be asked about matters such as how they monetize user engagement, develop and approve characters, test for negative impacts, use advertising, and comply with the Children’s Online Privacy Protection Act Rule.
Section 6(b) authorizes the FTC to compel companies to turn over information so that the agency can develop a better understanding of markets and technology. The outcome of this inquiry could lead to changes in how social media platforms and AI technology are developed, such as stricter age verification, content filters, and limits on data collection.
AI to Authorize Treatments for Medicare Patients in New Federal Program
A federal pilot program set to begin in January 2026 will use AI to help determine if certain treatments can be authorized for Medicare patients, prompting Democrats and health experts to raise concerns.
The program, the Wasteful and Inappropriate Service Reduction (WISeR) Model, will run across six states: Arizona, New Jersey, Ohio, Oklahoma, Texas, and Washington, targeting procedures historically vulnerable to fraud. Representatives Frank Pallone (D-NJ) and Greg Landsman (D-OH), whose districts lie in two of the states, have opposed this pilot, expressing concerns that it could lead to “denials of lifesaving care and incentivize companies to restrict care.” Representative Landsman describes it as the “AI death panel.”
At a recent congressional subcommittee hearing on the use of AI in health care, witnesses questioned whether AI improves patient outcomes in prior authorizations, with some asserting it should not be used for prior authorizations at all.
The debate comes amid lawsuits against major insurers for allegedly using AI or algorithms to systematically deny care. Lawmakers are calling for an independent review before moving forward with the pilot.

International Update: Canada and New Zealand

Privacy Commissioner of Canada Publishes Guidance on Biometrics for Public and Private Sector
The Office of the Privacy Commissioner of Canada (“OPC”) has issued updated guidance for both public and private sector organizations on the responsible use of biometric technologies, such as facial recognition and fingerprint scanning. This guidance follows a public consultation held between November 2023 and February 2024, which included input from academia, civil society, businesses, legal associations, and individuals. The guidance emphasizes the need for a clear and appropriate purpose when collecting, using, or disclosing biometric data. Organizations must assess privacy risks, ensure proportionality, and implement safeguards to protect biometric information. The guidance outlines consent requirements, stresses transparency, and calls for accuracy testing of biometric systems.
New Zealand Privacy Commissioner Announces New Biometrics Rules
The New Zealand Privacy Commissioner has introduced a Biometric Processing Privacy Code (the “Code”) that will create specific privacy rules for businesses and organizations using biometric technologies such as facial recognition. The Code aims to balance innovation with the protection of sensitive personal data while ensuring that businesses and organizations using biometric systems do so safely, transparently, and proportionately. Key requirements of the Code include mandatory assessments of whether biometric use is effective and proportionate, implementation of safeguards to reduce privacy risks, and requirements to notify individuals when biometric data is being collected. The Code prohibits intrusive uses, such as predicting emotions or inferring protected characteristics like ethnicity or sex. The Code comes into force on November 3, 2025, with a grace period until August 3, 2026, for existing biometric systems to comply. It carries the same legal weight as the New Zealand Privacy Act Information Privacy Principles and replaces them for biometric-specific applications.

Artificial Intelligence Provisions in the Fiscal Year 2026 House and Senate National Defense Authorization Acts

Both the US House of Representatives and the US Senate have continued to increase their attention to artificial intelligence (AI) issues in the defense sector, most notably by including a number of AI provisions in the House and Senate versions of the National Defense Authorization Act (NDAA) for Fiscal Year 2026 (FY 2026). The Chairman’s Mark of the July 2025 House text notes that the House Armed Services Committee “is aware of the rapidly changing capabilities of [AI] and recognizes its expanding potential for application across the Department of Defense.” The House bill emphasizes the widespread impact of AI across administrative, international, and research functions, while the Senate bill stresses the long-term capabilities of AI, creating opportunities for experimentation, model development, and risk frameworks. The House Armed Services Committee passed its version of the NDAA by a vote of 55-2, and the Senate Armed Services Committee advanced its version on 9 July 2025 after a 26-1 vote. Both chambers will spend the first two weeks of September debating their respective NDAAs on the floor while considering hundreds of amendments. Both bills are likely to pass their respective chambers, though the House vote is expected to be more partisan. The bills will then be reconciled during the conference process before final passage in both chambers and signature into law by the president. Stakeholders should closely monitor this process, as key provisions related to AI could shift during conference negotiations or floor amendments.
Key Takeaways:

Both the House and Senate versions of the FY 2026 NDAA prioritize the adoption and integration of AI across military operations, logistics, and mission-critical applications. 
Both the House and Senate highlight the importance of workforce development through AI education, cybersecurity training, and advanced manufacturing skills, while the Senate also establishes experimental sandbox environments for training and model development.
Although governance is a priority in both bills, the Senate places particular emphasis on standardized frameworks, risk-based security measures, and supply-chain oversight.

AI Education and Training
In Section 822, the House bill establishes a working group to address workforce shortages in advanced manufacturing, including AI, and encourages public-private partnerships to incentivize government and industry participation. The House Armed Services Committee also directs that the Department of Defense’s (DoD) annual cybersecurity training address the unique challenges related to AI (Section 1512). The Senate bill complements this approach in Section 1622 by establishing a task force within the DoD to create an AI sandbox environment for experimentation, training, and model development. This task force is intended to accelerate responsible AI adoption and strengthen public-private partnerships.
AI Governance, Oversight, and Security 
The House bill emphasizes modernization and security in technology policy. Section 1074 outlines a framework to modernize the technology transfer policies of the military departments and to update the National Disclosure Policy, which governs the sharing of classified military information with foreign governments and international organizations. It sets out detailed guidelines on security considerations for information sharing with US allies and partners. The bill further calls for the adoption of industry-recognized frameworks to guide best practices, governance standards, and training requirements that mitigate vulnerabilities specific to AI and machine learning (Section 1531). The bill also directs the DoD to establish requirements for managing “biological data” generated through DoD-funded research in a way that supports the development and use of AI technologies (Section 1521). Although Section 1521 does not explicitly define “biological data,” it instructs the Secretary of Defense to develop a definition of “qualified biological data resource” based on several criteria: (1) the type of biological data generated, (2) the size of the data collection, (3) the amount of federal funding awarded to the research, (4) the sensitivity level of the data, and (5) any other factor the Secretary deems appropriate.
The Senate bill includes several provisions to strengthen AI governance and security. It calls for a standardized model assessment and oversight framework (Section 1623), a Department-wide ontology governance working group to ensure data interoperability (Section 1624), and a steering committee to evaluate the strategic implications of AI (Section 1626). Further, the Senate bill mandates risk-based cybersecurity and physical security requirements for AI systems (Section 1627), prohibits the use of certain foreign-developed AI technologies (Section 1628), and directs the Secretary of Defense to develop digital content provenance standards to safeguard the integrity of AI-generated media (Section 1629). It also includes provisions to create a public-private cybersecurity partnership focused on advanced AI systems (Section 1621) as well as secure digital sandbox environments for testing and experimentation (Section 1622).
Deployment and Operational Methods for AI Research and Development
In Section 1532, the House bill calls for accelerated use of AI in military operations and coordination by launching pilot programs for the Army, Navy, and Air Force. These programs would employ commercial AI solutions to improve ground vehicle maintenance. The DoD is also required to produce up to 12 generative AI tools to support mission-critical areas such as damage assessment, cybersecurity, and mission analysis (Section 1533).
Additionally, Section 328 of the Senate bill directs the Secretary of Defense to integrate commercially available AI tools specifically for logistics tracking, planning, operations, and analytics into at least two exercises during FY26. This section mirrors the House’s focus on incorporating commercial AI into logistics operations to test and evaluate AI tools in operational contexts. 
Unique House AI Initiatives 
In Titles X and XVIII, the House Armed Services Committee makes broad yet firm calls for the survey and accelerated adoption of AI technologies. In Title X, the DoD must evaluate and survey all AI technologies currently in use to identify opportunities to improve accuracy and reduce collateral damage. Additionally, in Title XVIII, the DoD is authorized to accelerate autonomy-enabling software across defense programs using the middle-tier acquisition authorities allowed by Section 3603 of Title X. Lastly, the House bill uniquely targets international cooperation: Section 1202 establishes an emerging technology cooperation program with certain allies to conduct joint research, development, testing, and evaluation in critical areas such as AI, cybersecurity, robotics, quantum, and automation.
Unique Senate AI Initiatives
The Senate NDAA establishes targeted initiatives for AI across national security domains. Section 3118 limits AI research within the National Nuclear Security Administration to support nuclear security missions, while allowing resource sharing with other agencies. Section 1602 directs the commander of United States Cyber Command, in coordination with DoD AI leadership and research offices, to develop a roadmap for industry collaboration on AI-enabled cyberspace operations. This roadmap will guide private sector engagement and the integration of advanced AI into cyber operations. 
Conclusion
The House and Senate Armed Services Committees are not the only congressional bodies focused on advancing AI federal policy. Multiple committees across jurisdictions have actively engaged in this effort, holding hearings and drafting legislation aimed at shaping the future of AI governance. 
In parallel, the Trump administration recently introduced its AI Action Plan along with a robust AI export strategy (see our alerts on the AI Action Plan and AI Export Strategy for further details). We anticipate continued momentum in both Congress and the Executive Branch in the months ahead. 
Our Policy and Regulatory practice team is closely monitoring both legislative and regulatory developments and is ready to help advocate for your policy priorities in this rapidly evolving landscape.

The AI Cyber Arms Race: Zero-Day Exploits in Minutes!

The cybersecurity landscape has fundamentally changed. In 2025, sophisticated threat actors are increasingly weaponizing generative artificial intelligence (GenAI) to supercharge their attack capabilities, creating a significant escalation in the cyber arms race. This isn’t just about new tools; it’s about scaling existing threats to unprecedented speeds and volumes.
Hexstrike-AI: A Tool Turned Weapon
A chilling example is Hexstrike-AI, an “AI-powered offensive security framework” originally designed to help organizations find and fix their own security weaknesses. Its creators intended it as an AI “brain” to orchestrate over 150 specialized AI agents and security tools to test defenses and identify zero-day vulnerabilities. This framework bridges large language models like Claude, GPT, and Copilot with real-world offensive capabilities.
According to a report published earlier this month by cyber-resilience company Check Point, Hexstrike-AI has quickly become a weaponized hacking tool. Within hours of its release, cybercriminals began using it to exploit recent zero-day vulnerabilities, including three major flaws in Citrix NetScaler ADC and Gateway products. Exploiting such complex flaws traditionally required days or weeks of work by highly skilled hackers; Hexstrike-AI reduces the process to less than 10 minutes. Attackers can simply command it to “exploit NetScaler,” and the AI automates the entire process, turning complex hacking into a “simple, automated process” and drastically lowering the skill barrier for sophisticated attacks.
Beyond Hexstrike-AI: GenAI’s Broader Impact & Risks
GenAI isn’t inventing new attacks; it’s unleashing a terrifying surge in the speed and effectiveness of existing threats. Threat actors, now empowered by GenAI, can effortlessly craft hyper-realistic phishing, generate adaptive malware that evades detection, deploy convincing deepfakes for social engineering, and instantly automate reconnaissance.
This new reality means your window to protect against these sophisticated attacks is shrinking – dramatically. Small and medium-sized businesses, often lacking robust security resources, are now squarely in the crosshairs, facing unprecedented risk. 

KEEP HUMANS IN CALL CENTERS ACT?: New Bipartisan Bill Aims to Keep Call Centers in America – But It Could Be an AI Killer

Who wants to talk to AI?
We may soon find out if a new bill proposed by Senators Ruben Gallego (D-AZ) and Jim Justice (R-WV) gains momentum.
The bill is called the “Keep Call Centers in America Act of 2025” and is ostensibly aimed at keeping call centers in America, but it may also be aimed at limiting the impact of outbound AI on live call center personnel.
At a high level, the bill would penalize companies for moving call centers overseas. Such companies would lose the ability to obtain federal loans and grants, and existing loan and grant holders would be penalized. Conversely, companies that keep their call centers in America would be eligible for preferential treatment in certain settings.
Perhaps more importantly, the bill would also require disclosures related to the use of overseas call centers and/or AI when a consumer makes or receives a call involving the call center. The consumer must be empowered to request a live, US-based agent (i.e., not AI) with a single button press or command (such as saying “agent”).
You can read the entire bill here: https://www.congress.gov/bill/119th-congress/senate-bill/2495/text
R.E.A.C.H.’s government affairs team (led by the remarkable Isaac Shloss) is in talks with Gallego’s office, and we will keep an eye on developments.
Have thoughts? Get involved with R.E.A.C.H. and have your voice heard.

FTC to Study AI Chatbot Risks to Children

As reported by The Wall Street Journal and Bloomberg Law, the U.S. Federal Trade Commission (“FTC”) plans to study the impact of AI-powered chatbots on children’s mental health. According to those familiar with the matter, the study will focus on privacy harms and other risks to people who interact with AI chatbots. The FTC reportedly plans to seek information from the nine largest providers of consumer AI chatbots regarding how data collected by the chatbots is stored and the dangers posed by AI chatbot use. The FTC will conduct the study pursuant to its authority under Section 6(b) of the FTC Act, which allows the agency to compel companies to turn over information to help it better understand a particular market or technology.
The FTC has not responded to requests for comment about the study. The study underscores the agency’s stated enforcement focus on children’s privacy.

When ChatGPT Meets the Legal Hold: A Survival Guide for the In-House Counsel Who Didn’t Sign Up for This

You are your company’s in-house legal counsel. It’s 3 PM on a Friday (because of course it is), and you’ve just received notice of impending litigation. Your first thought? “Time to issue a legal hold.” Your second thought, as you watch your colleague casually chatting with Claude about contract drafting? “Oh no… what about all the AI stuff?”
Welcome to 2025, where your legal hold obligations just got an AI-powered upgrade you never signed up for. This isn’t just theoretical hand-wringing. Companies are already being held accountable for incomplete AI-related preservation, and the costs are real — both in terms of litigation exposure and the scramble to retrofit compliance systems that never anticipated chatbots.
The Plot Twist Nobody Saw Coming
Remember when legal holds meant telling people not to delete their emails? The foundational duty to preserve electronically stored information (ESI) when litigation is “reasonably anticipated” remains the cornerstone of legal hold obligations. However, generative AI’s emergence has significantly complicated this well-established framework. Courts are increasingly making clear that AI-generated content, including prompts and outputs, constitutes ESI subject to traditional preservation obligations.
Those were simpler times. Now, every prompt your team types into ChatGPT, every piece of AI-generated marketing copy, and yes, even that time someone asked Perplexity for restaurant recommendations during a business trip — it’s all potentially discoverable ESI.
Or so say several recent court decisions:

In the In re OpenAI, Inc. Copyright Infringement Litigation MDL (SDNY), Magistrate Judge Ona T. Wang ordered OpenAI to preserve and segregate all output log data that would otherwise be deleted (whether deletion would occur by user choice or to satisfy privacy laws). Judge Sidney H. Stein later denied OpenAI’s objection and left the order standing (now on appeal to the Second Circuit). This is the clearest signal yet that courts will prioritize litigation preservation over default deletion settings.
In Tremblay v. OpenAI (N.D. Cal.), the district court issued a sweeping order requiring OpenAI “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” The Tremblay court dropped a truth bomb on us: AI inputs — prompts — can be discoverable. 
And although not AI-specific, recent chat-spoliation rulings (e.g., Google’s chat auto-delete practices) show that judges expect parties to suspend auto-delete once litigation is reasonably anticipated. These cases serve as analogs for AI chat tools.

Your New Reality Check: What Actually Needs Preserving?
Let’s break down what’s now on your preservation radar:
The Obvious Stuff:

Every prompt typed into AI tools (yes, even the embarrassing ones)
All AI-generated outputs used for business purposes
The metadata showing who, what, when, and which AI model

The Not-So-Obvious Stuff:

Failed queries and abandoned outputs (they still count!)
Conversations in AI-powered Slack bots and Teams integrations
That “quick question” someone asked Claude about a competitor

The “Are You Kidding Me?” Stuff:

Deleted conversations (spoiler alert: they’re often not really deleted)
Personal AI accounts used for work purposes
AI-assisted research that never made it into final documents

Of course, knowing what to preserve is only half the battle. The real challenge? Actually implementing AI-aware legal holds when your IT department is still figuring out how to monitor these tools, your employees are using personal accounts for work-related AI, and new AI integrations appear in your tech stack on a weekly basis. 
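For teams wondering what capturing this information might look like in practice, here is a minimal, purely illustrative sketch of an internal logging wrapper that records the who, what, when, and which-model metadata before an AI output is used for business purposes. The call_model() helper, field names, and log location are assumptions for illustration only, not any vendor’s actual API, and nothing here substitutes for a vendor-specific, counsel-approved preservation plan.

```python
# Minimal illustration only: a hypothetical in-house wrapper that records the
# "who, what, when, and which model" metadata discussed above before any AI
# output is used for business purposes. Names are assumptions, not a vendor API.
import json
from datetime import datetime, timezone
from pathlib import Path

# Append-only log that would be backed up and suspended from deletion under a legal hold.
PRESERVATION_LOG = Path("ai_preservation_log.jsonl")

def call_model(prompt: str) -> str:
    """Stand-in for whatever AI tool the organization actually uses."""
    return f"[model output for: {prompt}]"

def preserved_ai_call(user: str, model_id: str, prompt: str) -> str:
    """Run the model and append a timestamped record of the interaction."""
    output = call_model(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,                                          # who
        "model": model_id,                                     # which AI model
        "prompt": prompt,                                      # the input
        "output": output,                                      # the output
    }
    with PRESERVATION_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output

if __name__ == "__main__":
    preserved_ai_call("jdoe", "example-model-v1", "Summarize the indemnification clause.")
```

The point is not the code itself but the discipline it represents: every interaction leaves a timestamped, attributable record that can actually be collected when the hold letter goes out.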
Next week, we’ll dive into the practical playbook for AI preservation — including the compliance frameworks that actually work, the vendor questions you should be asking, and why your current legal hold software might be more helpful than you think (or more useless than you fear).
P.S. – Yes, this blog post was ideated, outlined, and brooded over with the assistance of AI. Yes, we preserved the prompts. Yes, we’re practicing what we preach. No, we’re not perfect at it yet either.

AI Antitrust Landscape 2025: Federal Policy, Algorithm Cases, and Regulatory Scrutiny

AI has continued to come under scrutiny from government enforcers and private litigants. In July 2025, the White House released America’s AI Action Plan. As we noted in our January 5 Trends to Watch: 2025 Antitrust & Competition Law, on the competition side, plaintiffs have alleged that AI may be used in ways that harm competition, including as part of an alleged conspiracy to use AI-supported algorithms related to pricing or other competitive data points. Additionally, control of the data on which AI (at least generative AI) is built is another area that may spur antitrust issues.
This GT Advisory explores the evolving antitrust landscape for AI in 2025, including federal policy developments, algorithm-related litigation, and regulatory scrutiny businesses should be aware of.
America’s AI Action Plan: Regulatory Relief and Innovation Priorities
AI and algorithms continue to be a topic of interest for U.S. and international antitrust enforcers. Antitrust enforcers in Trump’s second administration continue to be interested in “Big Tech,” though with a goal of also promoting certainty and clarity for businesses.
To that end, in July 2025, the White House released America’s AI Action Plan, which outlines President Donald Trump’s perspective on AI and identifies specific steps to ensure the United States leads the race to achieve global dominance in AI. The plan includes three pillars of action—(1) accelerate AI innovation, (2) build American AI infrastructure, and (3) lead in international AI diplomacy and security—and contains 90 specific policy recommendations aimed at removing regulatory barriers to AI infrastructure development.
It also includes a recommendation to review all Federal Trade Commission (FTC) investigations, final orders, consent decrees, and injunctions “commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation … and, where appropriate, seek to modify or set-aside any that unduly burden AI innovation.” Additionally, the plan encourages open-source and open-weight AI (i.e., models whose developers make them freely available for anyone to download and modify), which is relevant to antitrust considerations.
AI Algorithm Pricing Litigation: Class Action and Antitrust Risks
AI-powered algorithms are being watched for their potential to raise antitrust concerns. Using AI-powered algorithms in pricing decisions may create efficiencies, but may also raise concerns about potentially higher prices, including through an alleged conspiracy among competitors.
Several class action lawsuits have been filed across the country in a variety of industries—including hotels, multifamily residential rental units, student housing, mobile homes, and health care services—alleging that defendants have used some type of pricing algorithm to purportedly fix, stabilize, or raise prices for their respective products. These cases have had mixed success, with some being dismissed at the outset but others surviving dismissal and subjecting the defendants to extensive and expensive discovery on the merits, as well as the risk of class certification.
In addition to private litigation, the U.S. antitrust agencies are also active in enforcement against algorithmic coordination. For example, the DOJ Antitrust Division has amended its Guidance on the Evaluation of Corporate Compliance Programs to consider a company’s risk assessment related to AI, including such concerns as:
How does the company’s risk assessment address its use of technology, particularly new technologies such as artificial intelligence (AI) and algorithmic revenue management software, that are used to conduct company business? As new technology tools are deployed by the company, does the company assess the antitrust risk the tools pose? What steps is the company taking to mitigate risk associated with its use of technology? Are compliance personnel involved in the deployment of AI and other technologies to assess the risks they may pose? Does the compliance organization have an understanding of the AI and other technology tools used by the company? How quickly can the company detect and correct decisions made by AI or other new technologies that are not consistent with the company’s values?[1]
Most recently, in August 2025, DOJ Assistant Attorney General Gail Slater stated on social media that she expects the DOJ’s algorithmic pricing probes to increase as the use of such algorithms grows. Slater warned that “[f]irms should perform their own due diligence on shared algorithms inputs and functionality to prevent collusion that can harm consumers.”
AI algorithm pricing has been and likely will continue to be a hot area of antitrust litigation and enforcement, especially as the use of AI continues to spread across industries. Reliance on non-public information, especially if it was obtained from a competitor, may increase the risk of litigation and potential liability.
Open-Source AI Antitrust Concerns: Market Control and Regulatory Responses
Some have touted open-source models as key to democratizing access and promoting competition, but some regulators are skeptical that the models provide the hoped-for solution to antitrust concerns. Tech firms retain control over critical infrastructure like hardware, cloud platforms, and proprietary data, potentially limiting the competitive impact of open-source models.
For example, the strategic use of open-source AI might be used to gain market share, followed by a shift to closed models that regulatory agencies could argue restrict access and effectively bar entry of new competitors. Similarly, the lack of interoperability between open and proprietary AI systems may lock in customers, undermining competition. Additionally, as discussed above, companies might face allegations that shared AI tools unintentionally facilitate collusion among competitors even without express agreements to do so.
Regulators are considering different approaches to responding to these concerns. The European Union, for example, is considering expanding the Digital Markets Act to classify AI businesses as “gatekeepers,” and potentially mandating interoperability between systems. The FTC and DOJ, in contrast, are more focused on AI partnerships and acquisitions, emphasizing the use of existing antitrust laws to address access and control antitrust concerns.
AI Antitrust Compliance: Considerations for Businesses
The shifting legal landscape around AI and antitrust creates a complex environment for businesses. In this dynamic, changing area, companies should consider several approaches. Monitoring updates from antitrust authorities, including the FTC, DOJ, and international regulators can help companies adapt their practices to comply with the latest requirements and guidance. Companies using pricing algorithms may benefit from ensuring human oversight in ultimate pricing decisions and regularly reviewing algorithms to confirm that they are not using non-public data, particularly data obtained from competitors. Businesses may also consider keeping records showing that pricing and strategic decisions were made independently, even when using AI tools. Finally, in the mergers and acquisitions context, companies should consider evaluating antitrust implications of any AI-related partnership or acquisition and seeking antitrust legal guidance early in the process.
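To make the oversight and record-keeping considerations above more concrete, the following is a minimal, purely illustrative sketch (in Python, with hypothetical names) of how a pricing workflow might screen an algorithmic recommendation against an approved-data-source list and require a named human approver. It is not legal advice, does not depict any actual pricing tool, and any real program would be designed with antitrust counsel.

```python
# Illustrative sketch only: one possible way to operationalize the considerations
# above (human oversight of algorithmic prices, exclusion of competitor non-public
# data, and an audit trail). All names are hypothetical assumptions.
from dataclasses import dataclass
from typing import Dict, List

# Data sources the company has approved for pricing inputs; no competitor-shared feeds.
APPROVED_SOURCES = {"internal_sales", "public_market_data"}

@dataclass
class PriceRecommendation:
    product: str
    suggested_price: float
    data_sources: List[str]

def validate_inputs(rec: PriceRecommendation) -> None:
    """Reject recommendations built on data sources outside the approved list."""
    disallowed = set(rec.data_sources) - APPROVED_SOURCES
    if disallowed:
        raise ValueError(f"Recommendation uses non-approved data sources: {disallowed}")

def human_review(rec: PriceRecommendation, approver: str) -> Dict[str, object]:
    """Record that a named person independently reviewed and approved the price."""
    return {
        "product": rec.product,
        "final_price": rec.suggested_price,
        "approved_by": approver,           # a human retains the final decision
        "data_sources": rec.data_sources,  # retained for the audit trail
    }

if __name__ == "__main__":
    rec = PriceRecommendation("widget", 19.99, ["internal_sales", "public_market_data"])
    validate_inputs(rec)
    print(human_review(rec, approver="pricing.manager@example.com"))
```

The audit trail of data sources and approvers is what supports the independent-decision documentation discussed above.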
Footnotes
[1] See U.S. DOJ Antitrust Division, Evaluation of Corporate Compliance Programs in Criminal Antitrust Investigations (November 2024) at 9 (DOJ Compliance Guidance). The Antitrust Division’s guidance is aimed at the criminal context; however, the Division notes that these same guidelines “should also minimize risk of civil antitrust violations.” DOJ Compliance Guidance at 2.

Colorado Delays Comprehensive AI Law With Further Changes Anticipated

Implementation of Colorado’s Artificial Intelligence Act (SB 24-205) has been delayed five months to June 30, 2026, as a result of amendments Gov. Jared Polis signed on Aug. 28, 2025 (SB25B-004). This change from the original Feb. 1, 2026, effective date results from extensive negotiations during the state’s recently concluded special legislative session.
The delay is intended to give Colorado lawmakers more time to consider substantive amendments during the 2026 regular legislative session, which begins in early January 2026. Although the legislature has considered changes since Gov. Polis signed the law in May 2024, there was no agreement on any substantive changes during either the regular 2025 session or the August 2025 special session. It remains to be seen whether the legislature can reach consensus, as stakeholders have strongly debated changes for more than a year.
Even if the legislature agrees to substantive amendments next spring, there will be little time between the session’s scheduled adjournment in May 2026 and the current June 30, 2026, effective date for the attorney general’s office to promulgate rules and for employers to prepare to comply with any updated requirements, unless the legislature also agrees to further changes to the law’s implementation date.
Practical Considerations
The extended June 30, 2026, effective date gives businesses more time to assess and develop their compliance strategy, while leaving open the possibility that some of the law’s stringent compliance requirements may be refined or removed entirely. Covered employers may want to continue monitoring developments during the next legislative session (January to May 2026) for further amendments.
If the law becomes effective, its biggest impact may be on employers that have integrated AI tools into hiring and related employment processes, a practice that has permeated many industries. Employers may want to consider advance planning that addresses how the benefits of AI tools can be balanced with the state’s forthcoming extensive compliance requirements.