Maine Enacts Ban on Reporting Medical Debt to Credit Bureaus
On June 9, 2025, Maine Governor Janet Mills signed into law LD 558, which prohibits the reporting of medical debt to consumer reporting agencies. The law bars medical creditors, debt collectors, and debt buyers from furnishing information about medical debt to credit bureaus, regardless of payment status or consumer repayment activity.
The new statute amends the Maine Fair Credit Reporting Act by replacing the term “medical expenses” with “medical debt” and eliminating carveouts that had previously allowed reporting in limited situations.
Putting It Into Practice: Maine’s statute comes just weeks after the CFPB formally withdrew its proposed rule that would have barred medical debt reporting nationwide (previously discussed here), and follows Vermont’s law that banned medical debt in consumer reporting statewide (previously discussed here). Companies operating in other jurisdictions should expect the trend to continue and plan accordingly.
California AI Policy Report Outlines Proposed Comprehensive Regulatory Framework
On June 17, 2025, the Joint California Policy Working Group on AI Frontier Models released a final version of its report, “The California Report on Frontier AI Policy,” outlining a policymaking framework for frontier artificial intelligence (AI). Commissioned by Governor Gavin Newsom and authored by leading AI researchers and academics, the report advocates a “trust but verify” approach.
Its recommendations emphasize evidence-based policymaking, transparency, adverse event reporting and adaptive regulatory thresholds. Given California’s role as a global AI innovation hub and its history of setting regulatory precedents, these recommendations are highly likely to influence its overall AI governance strategy.
Key Proposed Recommendations
The California report provides recommendations that are likely to inform future legislative or regulatory action (although no legal obligations currently arise from its findings):
Enhanced Transparency Requirements: The report proposes public disclosure of AI training data acquisition methods, safety practices, pre-deployment testing results, and downstream impact reporting. This represents a fundamental shift from current practices where companies maintain proprietary control over development processes. If implemented, organizations could face reduced competitive advantages based on data acquisition methods while experiencing increased compliance costs for documentation and reporting requirements. This concern isn’t just about the method of acquisition, but whether certain methods (like exclusive licensing) create anti-competitive advantages.
Adverse Event Reporting System: The report recommends mandatory reporting of AI-related incidents by developers, voluntary reporting mechanisms for users, and a government-administered system similar to existing frameworks in aviation and healthcare. The report highlights that this system “does not necessarily require AI-specific regulatory authority or tools.”
Third-Party Risk Assessment Framework: The report states that companies “disincentivize safety research by implicitly threatening to ban independent researchers” and implicitly calls for a “safe harbor for independent AI evaluation.” This approach could reduce companies’ ability to prevent external security research while requiring formal vulnerability disclosure programs and potentially exposing system weaknesses through independent testing.
Proportionate Regulatory Thresholds: Moving beyond simple computation-based thresholds, the report proposes a multi-factor approach considering model capabilities (e.g., performance on benchmarks), downstream impact (e.g., number of commercial users), and risk levels, with adaptive thresholds that can be updated as technology evolves.
Regulatory Philosophy and Implementation
The report draws from past technology governance experiences, particularly emphasizing the importance of early policy intervention. The authors analyze cases from internet development, consumer products regulation, and energy policy to support their regulatory approach.
While the report doesn’t specify implementation timelines, California’s regulatory history suggests potential legislative action in the 2025–2026 session through a phased approach: initial transparency and reporting requirements, followed by third-party evaluation frameworks and, ultimately, comprehensive risk-based regulation.
Potential Concerns
The California Report on Frontier AI Policy’s acknowledgment of an “evidence dilemma” (the challenge of governing systems without a large body of scientific evidence) captures inherent limitations in regulating technology still characterized by significant opacity.
For example, the report notes that “[m]any AI companies in the United States have noted the need for transparency for this world-changing technology. Many have published safety frameworks articulating thresholds that, if passed, will trigger concrete, safety-focused actions.” But it also notes that much of the transparency is performative and limited by “systemic opacity in key areas.”
And while the report proposes governance frameworks based on “trust but verify,” it also documents AI systems that have exhibited “strategic deception” and “alignment scheming,” including attempts to deactivate oversight mechanisms. This raises profound questions about the feasibility of verifying the true safety and control of these rapidly evolving systems, even with proposed transparency and third-party evaluation mechanisms.
Looking Ahead
The California Report on Frontier AI Policy represents the most sophisticated attempt at evidence-based AI governance to date. While these recommendations are not yet law, California’s influence on technology regulation suggests these principles are likely to be implemented in some form.
Organizations should monitor legislative developments, consider engaging in public comment and proactively implementing recommended practices, and develop internal capabilities for ongoing AI governance.
The intersection of comprehensive state-level regulation and rapidly evolving AI capabilities requires flexible compliance frameworks that can adapt to changing requirements while maintaining operational effectiveness.
In Determining Subject Matter Eligibility, the Name of the Game Is the Claim
The US Court of Appeals for the Federal Circuit overturned a district court grant of summary judgment of patent eligibility under 35 U.S.C. § 101 in connection with a patent directed to remote check deposit technology, explaining that details in the specification but not recited in the claims could not be relied on to meet the test for abstraction. United Services Automobile Association v. PNC Bank N.A., Case No. 23-1639 (Fed. Cir. May 6, 2025) (Dyk, Clevenger, Hughes, JJ.)
The patent at issue was directed to a system for allowing a customer to deposit a check using the customer’s handheld mobile device and claimed a “system configured to authenticate the customer using data representing a customer fingerprint.”
After the United Services Automobile Association (USAA) sued PNC for infringement of the patent, both parties filed motions for summary judgment seeking an adjudication as to whether the claims were patent eligible under § 101. The district court granted USAA’s motion, finding that the claims were not directed to an abstract idea and therefore were patent eligible. After a five-day trial, the jury found no invalidity of the asserted claims and found that PNC had infringed. PNC appealed.
The Federal Circuit found that the asserted claim was directed to the abstract idea of depositing a check using a handheld mobile device. At Alice step one, the Court found that the invention claimed steps for carrying out the process of a mobile check deposit by “instructing the customer to take a photo of [a] check,” “using [a] wireless network” to transmit a copy of the photo, and having the configured system “check for errors.” The Court determined that this amounted to a routine process implemented by a general-purpose device. The Court further found that the claim recited routine data collection and analysis steps that have been traditionally performed by banks and people depositing checks – namely reviewing checks, recognizing relevant data, checking for errors, and storing the resultant data.
USAA argued that “accomplishing check deposit on a consumer device required the development of extremely non-obvious algorithms.” The Federal Circuit rejected this argument, noting that the Court focuses on the claims, not the specification, to determine eligibility, because “the level of detail in the specification does not transform a claim reciting only an abstract concept into a patent-eligible system or method.” Since the claims did not recite the algorithms and neither the specification nor the claims contained a “clear description of how the claimed system is configured,” but only “a concept of improving the check deposit process,” the Court found that the claimed subject matter was directed only to an abstract idea.
At Alice step two (not addressed by the district court, which concluded that the claims passed muster at Alice step one), the Federal Circuit considered whether the claim elements contained an inventive concept sufficient to transform the abstract idea into a patent-eligible application. The Court found no inventive concept present, as computer-mediated implementation of routine or conventional activity is not enough to provide an inventive concept.
USAA, citing the 2014 Federal Circuit case of DDR Holdings, LLC v. Hotels.com, L.P., argued that the claim read as a whole, considering the ordered combination of elements, contained an inventive concept because it solved the technological problem of accurate detection and extraction of information from digital images of checks using general purpose mobile devices. The Court disagreed, concluding that the claim recited nothing more than routine image capture, optical character recognition, and data processing steps, which were all well known and routine: “here, there is no technological improvement – though the claim recites a system that makes the remote check deposit process easier and more convenient for bank customers, there is no fundamental change to how any of the technology functions, because it is all operating in a conventional way.”
Finding that there were no genuine disputes of material fact preventing resolution of the § 101 analysis on summary judgment, the Federal Circuit reversed.
Vermont Enacts New Telehealth Legislation Impacting Health Insurers
Vermont’s governor recently signed S 30 into law. The legislation, which goes into effect on September 1, 2025, requires that health insurance plans provide coverage for healthcare and dental services delivered through telemedicine to the same extent as if the services were provided through in-person consultations. Health insurance plans must also provide the same reimbursement rate for services billed using equivalent procedure codes and modifiers, subject to the terms of the health insurance plan and provider contract, regardless of whether the service was provided in person or through telemedicine.
For more updates on state legislative and regulatory developments related to telehealth, check out the latest Trending in Telehealth published by the Health & Life Sciences Group.
Driving Towards the Legalisation of Fully Autonomous Vehicles in the UK
As many will know, the Automated Vehicles Act 2024 (the “AV Act”) paved the way for legalising the use of autonomous vehicles on UK roads. It is important to be aware, however, that the AV Act does not, at this stage, authorise those vehicles for use on UK roads (other than under controlled trials). Rather, the AV Act grants the Secretary of State the power to authorise this at a later date once the “safety principles” for such usage have been determined.
On 10 June 2025, the Secretary of State launched a call for evidence and consultation on the secondary legislation which will be required to establish these “safety principles”:
Call for Evidence on Automated Vehicles: Statement of Safety Principles
The AV Act requires the Secretary of State to prepare a Statement of Safety Principles, which is to be used in different ways across the safety framework for automated vehicles, including for:
pre-deployment authorisation checks;
carrying out in-use monitoring and regulatory compliance checks; and
undertaking annual assessments of the overall performance of automated vehicles.
This call for evidence seeks information to support an understanding of:
what safety principles might be used;
the safety standards which might be described; and
how safety performance can be measured.
There are also questions about the development of safety principles and how those could be used in practice.
See the “full list of questions” section of the call for evidence for all questions: Automated vehicles: statement of safety principles – GOV.UK
Consultation on Automated Vehicles: Protecting Marketing Terms
The AV Act also gives the Secretary of State the power to protect certain terms, so that they can only be used to market vehicles which have been authorised under the AV Act as being automated (self-driving). In turn, the AV Act then provides that these protected terms must not be used to market driver assistance systems.
This consultation seeks views on this, including whether certain terms such as “self-driving”, “driverless” and “automated driving” should be protected, whether any symbols should be protected, and whether restrictions should apply only when such terms are used to describe a vehicle as a whole.
See section 4 of the consultation for the full list of questions: Automated vehicles: protecting marketing terms – GOV.UK.
Deadline
The deadline for responding to both the call for evidence and the consultation (though there is no requirement to respond to both) is 23:59 on 1 September 2025.
Oregon Extends Privacy Law to Specifically List Auto Makers
In ongoing tweaks to state privacy laws, Oregon has amended its state privacy law to cover auto manufacturers, specifically those that process or control personal information obtained from a person’s use of a vehicle. As most are aware, the law requires disclosures when collecting personal information, the provision of rights to consumers (including the ability to delete and port personal information), and limits on profiling, among other things. While the Oregon law, like most state “comprehensive” laws, includes applicability thresholds, there are no thresholds for this new applicability to car manufacturers. The law is slated to go into effect in September of this year.
Putting It Into Practice: This amendment demonstrates a growing concern among lawmakers and regulators around data collected in motor vehicles. We anticipate seeing similar developments in the coming months.
Navigating SAM.gov: A Guide for Government Contractors
For businesses aiming to win federal contracts, navigating the System for Award Management (SAM.gov) is a necessary — and often daunting — first step. Whether you’re a seasoned government contractor or new to federal procurement, understanding how to use SAM.gov effectively is crucial for compliance, eligibility, and success in the competitive public sector marketplace.
This guide walks you through the core functions of SAM.gov, common pitfalls, and legal tips to ensure your business stays on the right side of procurement regulations.
What Is SAM.gov?
SAM.gov is the official website of the U.S. government for contracting and award management. It consolidates multiple federal procurement systems, including:
CCR (Central Contractor Registration)
FedBizOpps (now under Contract Opportunities)
EPLS (Excluded Parties List System)
SAM.gov is where entities register to do business with the federal government, search for contract opportunities, report on contract performance, and maintain necessary compliance documents.
Entity Registration
Before bidding on federal contracts, your business must register in SAM.gov. Here’s what you’ll need:
Unique Entity ID (UEI) – Replaced the DUNS number as of April 2022
TIN/EIN – A valid taxpayer identification number or employer identification number
Banking Info – For receiving federal payments via electronic funds transfer
NAICS Codes – Identify your industry categories for contract eligibility
CAGE Code – Issued automatically during the SAM registration process
Legal Tip: Ensure that your entity name and TIN exactly match IRS records. Mismatches are a common cause of registration delays or rejections.
Navigating Contract Opportunities
SAM.gov serves as the central hub for federal contract solicitations. You can search for opportunities by:
Agency
Set-aside type (e.g., small business, 8(a), HUBZone)
NAICS code
Location
Best Practice: Set up a user account and create saved searches or email alerts to receive real-time updates tailored to your business profile.
Representations and Certifications
When registering, you must complete the “Reps & Certs” section, which includes key affirmations under the Federal Acquisition Regulation (FAR) and other rules.
This section covers:
Business size standards
Socio-economic ownership status (e.g., woman-owned, veteran-owned)
Eligibility for certain government contract opportunities
Compliance with laws such as the Buy American Act
Legal Tip: Misrepresenting your business status (intentionally or not) can lead to penalties under the False Claims Act and/or lead to suspension or debarment. Periodically review and update your certifications to ensure ongoing accuracy.
Staying Compliant and Active
A SAM.gov registration must be renewed annually, but it’s best to review it more frequently for accuracy.
Key compliance reminders:
Update contact information and ownership structure changes promptly
Track your expiration date and start renewal at least 30 days prior
Review FAR and DFARS clauses that apply to your business category
Warning: Lapsed or inaccurate registration can disqualify you from contract awards or delay payments.
Common Pitfalls to Avoid
Incomplete Registration – Missing data, especially banking or IRS info, will stall the process.
Expired Login Credentials – SAM.gov uses Login.gov for access. Inactive accounts can lock you out.
Scams and Third-Party Solicitors – Only use official .gov channels. Watch out for unofficial “registration help” services that charge unnecessary fees.
Not Reading Solicitations Carefully – Every contract opportunity on SAM.gov may have different requirements. Don’t assume they’re standardized.
Final Thoughts
SAM.gov is a powerful tool that connects contractors with billions in federal spending opportunities. But navigating it requires diligence, accuracy, and an understanding of legal obligations under federal procurement law. Contractors should consider consulting legal counsel or compliance advisors to mitigate risk and stay competitive.
Workplace Strategies Watercooler 2025: A Ransomware Incident Response Simulation, Part 2 [Podcast]
In part two of the Cybersecurity installment of our Workplace Strategies Watercooler 2025 podcast series, Ben Perry (shareholder, Nashville) and Justin Tarka (partner, London) discuss the steps to take after resolving and containing a ransomware incident. Justin and Ben, who is co-chair of the firm’s Cybersecurity and Privacy Practice Group, highlight several key areas: preparing the response team, including training for relevant employees and regular reviews of cybersecurity measures; developing a comprehensive incident response plan and assembling a dedicated response team; identifying opportunities for long-term infrastructure improvements; and assessing other areas of external risk management, such as data mapping and retention processes, vendor due diligence, and notification obligations.
Vermont Enacts Age-Appropriate Design Code
On June 12, 2025, Vermont Governor Phil Scott signed into law the Vermont Age-Appropriate Design Code Act (S.B. 69) (the “Code”). The Code takes effect on January 1, 2027.
The Code requires “covered businesses” that develop or provide online services, products, or features “reasonably likely to be accessed” by minors under the age of 18 to refrain from using privacy-invasive design features in their online services. The Code requires covered businesses to use age-assurance methods specified in rules to be issued by the Vermont Attorney General to verify the age of users.
“Covered business” is defined as “a sole proprietorship, partnership, limited liability company, corporation, association, other legal entity, or an affiliate thereof” that:
conducts business in the state of Vermont;
generates a majority of its revenue from online services;
employs online products, services or features that are “reasonably likely to be accessed” by a minor under the age of 18;
collects Vermont consumers’ personal data or has such data collected on its behalf by a processor; and
alone or jointly with others determines the purposes and means of the processing of Vermont consumers’ personal data.
The Code indicates that an online service is “reasonably likely to be accessed” by a “covered minor” if it meets one or more of the following criteria:
the online service is “directed to children” as defined under COPPA;
the online service is determined to be routinely accessed by an audience composed of at least two percent of minors ages two through 17, based on competent and reliable evidence of audience composition;
the audience of the online service is determined to be composed of at least two percent minors ages two through 17, based on internal company research; or
the covered business knew or should have known that at least two percent of the audience of the online service includes minors ages two through 17.
“Covered minor” is defined as a Vermont consumer who a covered business “actually knows” is a minor or labels as a minor pursuant to age assurance methods in rules adopted by the Vermont Attorney General.
The Code requires covered businesses to meet a “minimum duty of care” with respect to covered minors, by ensuring that a covered business’s use of minors’ personal data and the design of an online service will not result in: (1) reasonably foreseeable emotional distress to a covered minor; (2) reasonably foreseeable compulsive use of the online service by a covered minor; or (3) identity-based discrimination against a covered minor (i.e., based on race, ethnicity, sex, disability, sexual orientation, gender identity, gender expression, religion, or national origin). The Code further requires covered businesses to ensure that the content viewed by a covered minor does not cause emotional distress, compulsive use, or identity-based discrimination.
To meet this minimum duty of care, the Code requires covered businesses to configure all default privacy settings to the highest level of privacy for covered minors, including by:
not displaying the existence of a covered minor’s account on a social media platform to any “known adult” user unless the covered minor has expressly and unambiguously allowed a specific adult user to view their account or made their account public;
not displaying content created or posted by a covered minor on a social media platform to any known adult user unless the covered minor has expressly and unambiguously allowed a specific known adult user to view their content or chosen to make their content publicly available;
prohibiting known adult users from liking, commenting on, or otherwise providing feedback on a covered minor’s social media content unless the covered minor has expressly and unambiguously allowed a specific known adult user to do so;
prohibiting known adult users from direct messaging a covered minor on a social media platform unless the covered minor has expressly and unambiguously decided to allow direct messaging with a specific known adult user;
not displaying a covered minor’s location to other users, unless the covered minor has expressly and unambiguously shared their location with a specific user;
not displaying users connected to a covered minor on a social media platform unless the covered minor expressly and unambiguously chooses to share the information with a specific user;
disabling search engine indexing of a covered minor’s account profile; and
not sending push notifications to covered minors.
A covered business shall not provide covered minors with a singular setting that would make all of the default privacy settings less protective at once, nor shall it request that covered minors reduce their privacy settings unless they give express consent. “Known adult” is defined as a Vermont consumer who a covered business “actually knows” is an adult or labels as an adult pursuant to age assurance methods in rules adopted by the Vermont Attorney General.
In addition, the Code requires covered businesses to:
provide a prominent, accessible and responsive mechanism to delete a covered minor’s social media account and honor such deletion requests within 15 days;
provide detailed privacy disclosures prominently and clearly on their websites or mobile applications, including specific information about the use of algorithmic recommendation systems;
refrain from collecting, selling, sharing, or retaining any personal data of a covered minor that is not necessary to provide the online service with which the covered minor is actively and knowingly engaged;
use previously collected personal data of a covered minor only for the purpose for which it was collected, unless necessary to comply with the Code;
provide a conspicuous signal to the covered minor if their online activity or location is being monitored by any individual, including a parent or guardian;
refrain from using a covered minor’s personal data to select, recommend, or prioritize content unless the selection is based on:
the minor’s express and unambiguous request for specific content, such as:
content from a specific account, feed, or user;
a specific category of content (e.g., “cat videos” or “breaking news”); or
content with characteristics similar to the media currently being viewed;
user-selected privacy or accessibility settings; or
a search query initiated by the covered minor, which may be used only to select and prioritize media in response to that search;
refrain from sending push notifications to covered minors between 12:00 midnight and 6:00 a.m.;
limit the collection of personal data for age assurance to that which is strictly necessary for the verification process;
immediately delete any personal data collected for age assurance upon determining whether the user is a covered minor, except for the determination of the user’s age range;
refrain from using age assurance data for any other purpose or combining it with other personal data, aside from the age range determination;
avoid disclosing age assurance data to any third party that is not a processor; and
implement a review process that allows users to appeal their age determination.
The Vermont Attorney General has the authority to enforce the Code.
The enactment of the Code mirrors the actions of other states that have passed similar legislation, including California, Maryland and Nebraska, and reflects a broader movement to implement legal structures that guide the use of minors’ online data in an effort to minimize potentially harmful effects of certain online platforms to minor users. The California and Maryland laws have been the subject of lawsuits on First Amendment grounds, with the California law currently fully enjoined.
LIABLE ON MULTIPLE DIMENSIONS?!: Kentucky Court Grants in Part Motion to Dismiss on Vicarious Liability
Hey, TCPAWorld!
A new case in the Eastern District of Kentucky presents some vicarious liability issues that are worth discussing.
In Hensley v. Dimension Service Corporation, No. 5:24-CV-378-KKC, 2025 WL 1679841 (E.D. Ky. June 13, 2025), the Plaintiff claimed that he received over 100 calls to his cell phone from unknown numbers, despite this number being listed on the national DNCR. Supposedly, he would either receive a voicemail featuring “odd background noises” or hear an “electronic blip sound” and be transferred to a salesperson if he answered.
In September 2024, Plaintiff allegedly answered one of these calls and purchased a vehicle service contract (the “Contract”) from the salesperson. The Contract identified multiple parties: (1) Pelican Investment Holdings, LLC (“Pelican”) as the seller, (2) Sing for Service, LLC (“SING”) as the payment plan provider, and (3) National Administrative Service Co., LLC (“NAS”) as the administrator and obligor. The Plaintiff then cancelled the Contract and sued, naming these three parties as defendants, along with Dimension Service Corporation (“Dimension”), likely because it shares an address with NAS. See id. at 6 n.1. Plaintiff brought claims for violations of the TCPA, the Kentucky Consumer Protection Act (“KCPA”), and for invasion of privacy. In response, the Defendants filed a Rule 12(b)(6) motion to dismiss.
TCPA Claims
Direct or Vicarious Liability
Plaintiff alleged that each of the Defendants is liable for violating (1) Section 227(b)(1)(A)(iii) for making calls to his cell phone using a prerecorded message and an ATDS, and (2) Section 227(c)(5) for calling his cell phone number, which was listed on the DNCR.
The Court found that Plaintiff sufficiently pleaded vicarious liability against Pelican but not as to NAS, SING, or Dimension. Specifically, the Court found Plaintiff’s allegation that he was connected to a salesperson and purchased the Contract, by itself, to be sufficient to support “the reasonable inference that Pelican is either directly or vicariously liable for that phone call and other similar phone calls that [Plaintiff] claims he received.” Id. at *2-3 (emphasis added).
As to the other Defendants, however, the Court noted that absent from the complaint were allegations that these entities were directly responsible for the calls or facts demonstrating any semblance of an agency relationship. Though the Contract identified NAS and SING, and the complaint alleged that the “Defendants acted in concert with each other under a common business plan[,]” the Court found this insufficient to give rise to an agency relationship. Indeed, it explained that “simply engaging in business together is insufficient to establish the sort of bedrock agreement present in every agency relationship[.]” Id. at *3 (emphasis added).
And as for Dimension, Plaintiff failed to allege any facts connecting it to the calls at issue; he simply pleaded that NAS and Dimension are related entities, which is likewise insufficient to give rise to an agency relationship.
Accordingly, the Court dismissed the TCPA claims as to NAS, SING, and Dimension.
The Substance of the Calls
Next, Section 227(b) requires the calls to be made using an artificial or prerecorded voice or an ATDS, and Section 227(c) requires the calls to be made for telemarketing purposes. And during this analysis, the Court noted that while pleading a prerecorded message is a “low bar,” “courts have required plaintiffs to provide some details describing the prerecorded message.” Id. at *4.
And here, the Court found that Plaintiff provided these details. Indeed, the Court considered Plaintiff’s allegations regarding the identical messages, odd electronic sounds, and pauses before being connected to a salesperson attempting to sell him a vehicle warranty sufficient to establish his 227(b) claim. And though Plaintiff did not describe the contents of each call, the Court found the same allegations sufficient for a 227(c) claim.
KCPA Claims
Plaintiff based his KCPA claims on the very same conduct as the TCPA claims. Because the TCPA claims as to NAS, SING, and Dimension failed, the Court dismissed the KCPA claims against them at the outset.
As to Pelican, however, the KCPA claims were upheld. The Court rejected Pelican’s argument that a cell phone number is not a residential number under the KCPA and cited the definition of a “telephone solicitation” under that statute, which includes mobile phone numbers.
Invasion of Privacy
Finally, Plaintiff charged each of these Defendants with invasion of privacy, which requires (1) an intentional intrusion, (2) into a matter he has a right to keep private, and (3) that the intrusion would be highly offensive to a reasonable person.
This count was ultimately dismissed as to all Defendants. Because this claim was also based on the same facts, the Court quickly discharged NAS, SING, and Dimension from liability. And as to Pelican, the Court found that Plaintiff “fail[ed] to identify a private matter which Pelican intruded on[,]” instead finding the allegations to be conclusory.
Here are some key takeaways:
The allegation that a plaintiff was connected to a salesperson and purchased a contract that names the defendant as the seller may be sufficient to establish direct and/or vicarious liability on a motion to dismiss; and
The fact that an entity is somehow related to the defendant, by itself, is likely insufficient to establish vicarious liability.
Until next time!
“ALL HELL JUST BROKE LOOSE”: Republican National Committee, NRSC, NRCC, and Congressional Leadership Fund Sued in Class Action Over Political Robotexts Under State Law– And This Could Be Huge

Political robotexts.
Need I say more?
It seems like nothing is more roundly disliked than out-of-the-blue messages from candidates and parties urging you to vote or donate or… whatever.
Sure, this is core political speech. Very much protected by the First Amendment. But most folks just don’t like to receive it.
And many of these texts are also illegal under various enactments– including a number of state laws. So it’s always very fascinating to see the pile of lawsuits pouring in after each political cycle.
Well, it appears sue-the-political-parties season is upon us again, and I am definitely here for it.
For instance, Samantha Johnson and Cari Johnson just sued a bunch of familiar names– the Republican National Committee, NRSC, NRCC, and Congressional Leadership Fund– for alleged Utah state law violations over robotexts that allegedly continued after a stop request was received.
The texts at issue read “From Trump: ALL HELL JUST BROKE LOOSE! I WAS CONVICTED IN A RIGGED TRIAL!” followed by a CTA and a link.
Thought provoking.
The text links allegedly lead back to a Winred website where donations were sought.
Winred is a (the?) major fundraising platform for the right. Per the Complaint: “WinRed is an online fundraising platform supported by a united front of the. . . RNC, NRSC, and NRCC.” However, text messages with Winred links are allegedly not sent by Winred itself. Rather, they are sent by the political entities seeking financial donations.
The complaint contains a bunch of images of purported complaints about Winred texts on social media and paints a pretty bleak picture:
Interesting.
To be sure, this is not just a right wing phenomenon, but the complaint focuses on the activities of Winred-aligned entities.
Importantly, both Plaintiffs contend they received multiple (sometimes as many as 11) messages after replying “stop.”
The complaint seeks to represent the following classes:
The RNC Class: All Utah residents to whom the RNC sent two or more text messages, between June 6, 2022, and the date of class certification, after the resident said stop.
The NRSC Class: All Utah residents to whom the NRSC sent two or more text messages, between June 6, 2022, and the date of class certification, after the resident said stop.
The NRCC Class: All Utah residents to whom the NRCC sent two or more text messages, between June 6, 2022, and the date of class certification, after the resident said stop.
The CLF Class: All Utah residents to whom the CLF sent two or more text messages, between June 6, 2022, and the date of class certification, after the resident said stop.
Each of these classes is alleged to contain more than 10,000 members!
And here’s the most interesting part. The suit isn’t brought under the TCPA, it is brought under a STATE law– UTAH CODE ANN. § 13-25a-107.2
Utah Code Ann. § 13-25a-107.2 prohibits a telephone solicitor from making calls to a person who has previously informed them, either in writing or orally, that they do not want to receive such calls.
Damages are $500.00 per call PLUS attorneys’ fees– so this is even worse than the TCPA!
Lots of risk here.
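Just how much risk? A back-of-the-envelope sketch in Python, using purely hypothetical inputs drawn from the complaint’s allegations (10,000+ members per class, texts continuing after a stop request), shows how fast the statutory numbers compound:

```python
# Rough statutory-exposure math under Utah Code Ann. § 13-25a-107.2
# (hypothetical figures for illustration; not a damages calculation
# from the actual complaint).

PER_TEXT_DAMAGES = 500  # dollars per violating text under the statute

def exposure(class_size: int, texts_per_member: int) -> int:
    """Potential statutory damages, excluding attorneys' fees."""
    return class_size * texts_per_member * PER_TEXT_DAMAGES

# Even at the alleged 10,000-member floor with just two post-"stop"
# texts each, a single class clears eight figures:
print(exposure(10_000, 2))  # 10000000
```

And that is per class– the complaint alleges four of them, plus fee-shifting on top.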
The case is brought by Dr. Evil himself– Tom Alvord– so this is a REAL issue for the Winred crowd. That guy is a great lawyer and pushes cases hard.
Complaint here: Winred Complaint
Defendants do not yet seem to have retained counsel or responded to the complaint.
We will pay very close attention to this one.
Texas Legislature Amends Data Broker Law
On May 30, 2025, the Texas legislature passed S.B. 1343 (the “Bill”), which amends the Texas Data Broker Act (the “Act”) to impose new notice and registration obligations on data brokers. The Bill now awaits signature by Texas Governor Greg Abbott.
The Act, which came into effect on September 1, 2023, currently requires data brokers to provide a clear, readily accessible notice on their website or mobile application that (1) states the entity is a data broker and (2) contains language provided by the Texas Secretary of State. The Bill amends the Act to require data brokers to also disclose in the notice how consumers can exercise their privacy rights under the Texas Data Privacy and Security Act.
The Bill further amends the Act to include additional content requirements for data broker registration statements submitted to the Texas Secretary of State. The Act currently requires data brokers to include the following in their registration statements:
(1) the legal name of the data broker;
(2) a contact person and the primary physical address, email address, telephone number, and website address of the data broker;
(3) a description of the categories of data the data broker processes and transfers;
(4) a statement of whether or not the data broker implements a purchaser credentialing process;
(5) if the data broker has actual knowledge that it possesses personal data of a known child: (A) a statement detailing the data collection practices, databases, sales activities, and opt-out policies applicable to the personal data of a known child; and (B) a statement on how the data broker complies with applicable federal and state law regarding the collection, use, or disclosure of personal data from and about a child on the Internet; and
(6) the number of security breaches the data broker experienced during the preceding year and the total number of consumers affected by each breach.
The Bill amends the Act to require that the registration statement also include a link to a page on the data broker’s website that prominently displays specific instructions on how consumers may exercise their privacy rights under the Texas Data Privacy and Security Act.
If signed by the Governor, the amendments to the Act will take effect September 1, 2025.