WE CAN SPAM BUT YOU CAN’T SPEAK: CTIA Claims Wireless Carriers Should Be Able to Text Consumers Without Consent–While it Tries to Censor Everyone Else

It’s funny.
If you study history deeply enough, you’ll learn that morality is almost completely relative.
Anything you can think of (no matter how vile by today’s standards) has been permitted, encouraged, or justified by some people, somewhere, at some time throughout history, it seems, except for one thing: hypocrisy.
Holding yourself to the same standards you would apply to others is the only actual “golden rule,” it would seem. And it looks like CTIA just broke it.
So the Wireless Association (which calls itself CTIA for some reason) publishes a set of “guidelines” for communications to cell phones that it claims nobody is actually required to follow, yet every carrier and aggregator requires people to follow them.
But in true “do as I say, not as I do” fashion, CTIA is claiming that its own members, the wireless carriers themselves, don’t have to follow any of those rules at all.
Indeed, in a new filing made by CTIA to the FCC as part of the “Delete, Delete, Delete” proceeding, the CTIA urges the FCC to close a proceeding that would require wireless carriers to have consent to contact consumers.
In the CTIA’s mind, requiring wireless carriers to have consent before texting consumers would be “UNNECESSARY, OVERLY COMPLEX, OR EXCEED FCC AUTHORITY.”
Instead, it urges the Commission to “exclud[e] calls and texts from wireless providers to their subscribers” from TCPA coverage. Indeed, applying the TCPA to wireless carriers “would risk harming consumers,” the CTIA claims.
Hmmm……
So, preventing wireless carriers from spamming consumers causes “harm” but blocking urgent, consented, informational messages consumers actually asked for– which the CTIA guidelines and carrier terms of use call for– is totally cool?
Ridiculous.
The wireless carriers are trying to write themselves a blank check to spam people while holding speech by American business hostage to an outright censorship regime.
Just gross, folks.
Hopefully the Commission acts promptly to ban carrier call blocking of lawful traffic as R.E.A.C.H. requests, and hopefully the FCC also APPLIES THE TCPA TO WIRELESS CARRIERS so we can stop a MAJOR source of spam calls and messages.

Alliance for Natural Health Calls for Reform to Self-GRAS

The Alliance for Natural Health (ANH) has published a white paper calling for the “balanced reform” of the GRAS system.
The ANH announcement comes shortly after HHS Secretary Robert F. Kennedy, Jr.’s directive for FDA to explore potential rulemaking to eliminate the pathway for companies to self-affirm food ingredients as GRAS, a move which Secretary Kennedy stated would “provide transparency to consumers.”
While the ANH white paper does not support the complete elimination of self-GRAS, it does propose several key reforms:

Prioritization of removal for specific unsafe ingredients such as potassium bromate, propylparaben, and brominated vegetable oil;
Creation of a comprehensive online database of all GRAS determinations;
Implementation of a four-tier system that calibrates evidence requirements based on an ingredient’s history and safety profile;
Creation of a pathway for ingredients with a documented history of safe use to be officially recognized by FDA as “historically safe;”
Use of warnings, rather than outright bans, for ingredients that are generally safe but may be harmful to specific populations.

State Privacy Regulators Announce Formation of Privacy ‘Supergroup’

The concept of the “supergroup” may have originated with rock and roll, but on April 16, 2025, privacy practitioners in the United States learned that a whole new type of supergroup has been formed. Far from being a reboot of Cream or the Traveling Wilburys, however, this latest supergroup is comprised of eight state privacy regulators from seven states (each of which has enacted a comprehensive state privacy law), who announced they have formed a bipartisan coalition to “safeguard the privacy rights of consumers” by coordinating their enforcement efforts relating to state consumer privacy laws.

Quick Hits

State attorneys general from California, Colorado, Connecticut, Delaware, Indiana, New Jersey, and Oregon, as well as the California Privacy Protection Agency, announced the formation of the “Consortium of Privacy Regulators.”
While the creation of the Consortium does not reflect a closer alignment in the contents of the actual consumer privacy laws themselves, it will likely heighten regulators’ abilities to enforce those elements of consumer privacy law that are common across states.
Businesses may wish to take this announcement as a sign to revisit their consumer privacy policies and practices, lest they find themselves subject to additional scrutiny by this new regulatory “supergroup.”

The Consortium of Privacy Regulators, comprised of state attorneys general from California, Colorado, Connecticut, Delaware, Indiana, New Jersey, and Oregon, as well as the California Privacy Protection Agency, seeks to facilitate discussions of privacy law developments and shared regulatory priorities, and to share expertise and resources to focus on multijurisdictional consumer protection issues. The constituent attorneys general come from states that have been particularly active in the privacy regulation space, and this coalition will ostensibly allow them to pursue more coordinated, large-scale efforts to investigate and hold companies accountable for noncompliance with common legal requirements applicable to personal information. Of particular importance to this new regulatory body is the facilitation of consumer privacy rights, such as the “rights to access, delete, and stop the sale of personal information, and similar obligations on businesses.”
While this announcement is certainly big news, it is not entirely surprising. Over the course of the past several years, there has been an apparent uptick in coordinated regulation in other areas of data privacy law, especially with respect to data breach investigation and enforcement at the state regulatory level. Just as state attorneys general have been following up with companies that have reported data breaches with an increased diligence and depth (and, in some cases, imposing more substantial civil penalties and seeking to enter into consent orders with these companies), companies can likely expect similarly heightened scrutiny with respect to their consumer privacy practices. And, given the Consortium’s announced intent to hold regular meetings and coordinate enforcement based on members’ common interests, businesses can likely expect that this additional scrutiny will begin very quickly.
Next Steps
Given this increased focus on regulatory enforcement, companies that have not already done so may wish to prioritize taking steps to shore up their personal information handling practices. Businesses that collect personal information might consider revisiting their privacy policies to ensure they accurately reflect their personal information collection, disclosure, and other handling practices. They may also want to review their procedures for handling highly visible elements of consumer privacy law, including their processes for responding to data subject rights requests. And, of course, businesses might give some thought to whether this announcement is a timely reminder to refresh their employees’ training with respect to consumer personal information. Finally, given the requirements in many of the constituent states’ privacy laws that consumer personal information be appropriately protected, businesses might consider revisiting their cybersecurity measures, including by updating (or even implementing for the first time) incident response plans and performing tabletop exercises to identify potential gaps and opportunities for increased alignment with applicable legal requirements before the Consortium comes knocking.

New York AG Sues Earned Wage Access Companies for Allegedly Unlawful Payday Lending Practices

On April 14, New York Attorney General Letitia James announced two separate lawsuits against earned wage access providers—one against a company that issues advances directly to consumers, and another targeting a provider that operates through employer partnerships. Both actions allege that the companies engaged in illegal payday lending schemes, charging fees and tips that resulted in annual percentage rates (APRs) far in excess of New York’s civil and criminal usury caps.
The lawsuits assert violations of New York’s civil and criminal usury laws, which cap interest at 16% and 25%, respectively. According to the AG, the companies’ flat fees and “voluntary” tipping features amounted to de facto interest that routinely exceeded those thresholds. Both lawsuits also allege deceptive business practices and false advertising in violation of New York’s General Business Law, as well as abusive and deceptive acts and practices under the federal Consumer Financial Protection Act. In both cases, the AG alleges that the companies trap workers in cycles of dependency through frequent, recurring advances.
The lawsuit against the employer-partnered provider alleges that the company:

Imposed high fees on small-dollar, short-term advances. These fees allegedly resulted in effective APRs that often exceeded 500%, despite claims that the advances are fee-free or interest-free.
Diverted wages through employer-facilitated repayment. The company allegedly required workers to assign wages and routed employer-issued paychecks directly to itself, ensuring collection before workers received their remaining pay.
Marketed the product as an employer-sponsored benefit. By leveraging exclusive partnerships, the company allegedly positioned its product as a no-cost financial wellness tool, downplaying costs and repayment risks.

The lawsuit against the direct-to-consumer provider alleges that the company:

Extracted revenue through manipulative tipping practices. Consumers were allegedly nudged to pay pre-set tips through guilt-driven prompts and fear-based messaging, which the company treated as interest income.
Automated repayment from linked bank accounts. The provider allegedly pulled funds as soon as wages were deposited, often before consumers could access them.
Used per-transaction caps to drive repeat usage. Consumers were allegedly forced to take out multiple advances and pay multiple fees to access their full available balance, magnifying the cost of each lending cycle.

Putting It Into Practice: These lawsuits reinforce a growing trend among states to impose consumer protection requirements—particularly around fee disclosures and repayment practices—on earned wage access products (previously discussed here). State regulators continue to increase their scrutiny of EWA providers’ business models and marketing tactics. In addition, this is perhaps the first case we have seen in which a state attorney general brought an action under the CFPA (see our related discussion here about this topic). Depending on how this case proceeds, we can expect to see more cases brought under the federal statute.

Kansas City Federal Reserve Bank Explores Regulatory Risks in Gaming Ecosystems

On April 9, the Federal Reserve Bank of Kansas City published a research briefing examining how video game platforms are reshaping the digital payments landscape. As in-game purchases and platform-based transactions grow in volume and complexity, these developments are raising new regulatory concerns for both federal and state banking regulators.
The global video game industry generated nearly $190 billion in revenue in 2024, largely fueled by the increase in popularity of free-to-play models, in-game purchases, and subscription offerings. These approaches have fundamentally changed how the video game industry is monetized. Rather than relying on one-time game sales, many platforms are now relying on ongoing microtransactions, charging users small amounts for in-game content, upgrades, or access on a recurring basis. This shift has caught the attention of regulators, evidenced by the CFPB issuing an Issue Spotlight in April 2024, titled “Banking in Video Games and Virtual Worlds”, which analyzed the increased commercial activity within online video games and virtual worlds and the apparent risks to consumers—in this case, to online gamers (previously discussed here).
Overview of the Research Briefing
To support these business models, platforms have expanded the types of payments they accept, layering in credit and debit cards, digital wallets, and prepaid in-game currency. Many platforms also offer installment options at checkout. Most recently, some are exploring instant payments as a way to improve efficiency and reduce costs, especially for small-dollar transactions.
Unlike traditional card payments, instant payments settle in real time and often come with lower processing fees. That pricing difference could give platforms more flexibility in how they price in-game content. Instead of requiring players to buy a $10 bundle of in-game currency to access a $2 item, platforms could offer direct purchases—making prices more transparent and lowering the barrier for occasional or budget-conscious users. Faster payments may also help with subscription billing by reducing failed payments tied to expired cards or insufficient funds.
Adoption of instant payments in the U.S. still lags behind other countries, where some platforms already support local real-time payment systems. As the use of instant payments grows, regulators may also take a closer look at whether existing consumer protection frameworks are keeping up.
Regulatory Concerns
The report notes that the CFPB has identified several risks tied to gaming payments, including lack of transparency around pricing, unauthorized charges, and aggressive use of consumer data. Some platforms personalize offers or pricing based on player behavior, raising concerns about fairness and consent. As the use of virtual currencies and recurring charges becomes more common, regulators are questioning whether existing consumer protections adequately apply to these models.
The report also highlights security as another area of concern. Platforms now use behavioral analytics, tokenization, two-factor authentication, and other tools to prevent fraud and protect payment data. While these measures reduce friction and improve user experience, they also raise questions about how personal data is collected, stored, and used—particularly as the line between gaming and financial services continues to blur.
The report also flags concerns surrounding money laundering. In-game items and currency can often be exchanged for real money, sometimes outside official channels. That has created openings for illicit finance, even though most gaming companies aren’t subject to AML or KYC requirements. As the flow of real funds through these platforms increases, regulators may revisit whether additional oversight is warranted.
Putting It Into Practice: The CFPB and state financial regulators are signaling a growing interest in the gaming industry, particularly where in-game economies begin to resemble consumer financial products. Gaming providers would be wise to proactively assess how their business models could create compliance risk.

CFPB Announces It Will Not Prioritize Oversight of Repeat Offender Registry

On April 11, the CFPB announced that it will not prioritize enforcement or supervision against nonbank financial companies that miss registration deadlines under its Repeat Offender Registry. The Bureau also stated that it is considering a notice of proposed rulemaking to rescind or narrow the scope of the rule, finalized in 2024, that established the registry.
Under the rule (previously discussed here), covered nonbanks subject to covered orders are required to submit certain corporate identity information, administrative information, and information regarding the covered order (e.g., copies of the order, issuing agencies or courts, effective and expiration dates, and laws found to have been violated). In addition, the final rule requires covered nonbanks to file annual reports attesting to their compliance with the registered orders. The rule’s compliance deadlines are as follows:

April 14, 2025 for nonbanks already subject to CFPB supervision; and
July 14, 2025 for all other covered nonbanks.

In its press release, the Bureau stated that the temporary non-enforcement policy applies to these deadlines and that it will instead focus enforcement and supervision efforts on more pressing threats to consumers.
Putting It Into Practice: The CFPB’s decision to deprioritize enforcement and consider rescinding the registry rule reflects a broader shift away from regulatory initiatives finalized under the prior administration (previously discussed here and here). While the move eases near-term compliance pressure, it may invite greater attention from state regulators and consumer advocates concerned about regulatory gaps. Nonbank financial institutions should prepare for a shifting landscape of federal and state supervision going forward.

FISHING FOR INFO?: Not Enough To Reel In A CIPA Claim!

Greetings CIPAWorld!
I’m back with the latest scoop. Imagine browsing hunting gear at Sportsman’s Warehouse online, checking out a fishing rod. You don’t buy anything, don’t create an account, and don’t enter your email. You look around and leave. Weeks later, you get an email from a completely different outdoor retailer suggesting products similar to what you viewed. That’s creepy, right? That’s the essence of Cordero v. Sportsman’s Warehouse, Inc., No. 2:24-CV-575-DAK-CMR, 2025 U.S. Dist. LEXIS 72337 (D. Utah Apr. 15, 2025).
Here, Plaintiff filed a Complaint against Sportsman’s Warehouse after discovering that his browsing activity had been tracked by a third-party service called AddShoppers without his consent. I was digging through this case last night while coincidentally shopping for tickets to an upcoming Blink-182 concert, and caught myself about to click “Accept All” on a cookie banner to see those ticket prices (long story short… I didn’t buy them… yet). The irony wasn’t lost on me. Those seemingly mundane clicks we make without thinking twice? They matter. And sometimes, as this case shows, even not clicking anything can land your data in someone else’s hands. What’s particularly unsettling about the AddShoppers system described in Cordero is that it operates largely invisibly to the average consumer browsing online. Unlike standard cookies that work on a single website, this third-party tracking system follows you across an entire network of seemingly unrelated websites. Have you all watched the new season of Black Mirror yet?
Unlike the recent decision in Lakes v. Ubisoft, Inc., No. 24-cv-06943-TLT, 2025 U.S. Dist. LEXIS 67336 (N.D. Cal. Apr. 1, 2025), in which cookie consent banners saved the day for the video game company, Plaintiff’s lawsuit failed for a different reason altogether: standing. Judge Dale A. Kimball determined that the Plaintiff hadn’t suffered a concrete harm sufficient to give him Article III standing to bring the case. Interesting stuff.
So let’s dig into what’s at issue here. According to the Complaint, AddShoppers operates a “Data Co-Op” where participating companies install AddShoppers’ code on their websites. When an internet user creates an account or purchases with one business in the network, a third-party tracking cookie with a unique identifier is created that AddShoppers associates with that user. Pretty straightforward, right? Once that cookie is in your browser, AddShoppers can track your activity across any website in its network. What? Yes, you read that right. Suppose you’ve provided personal information to one site in the network. In that case, AddShoppers can use that information to target you with ads from completely different companies, even if you never gave those companies your information. For instance, you’re shopping around, adding a pair of boots to your cart on one site and then, days later, getting an email from another company you’ve never interacted with about outdoor gear, all because the two retailers are plugged into the same silent backend tracking system. Spooky stuff.
AddShoppers’ co-founder described this operation as having two data sources: a “blind Co-Op” where brands submit data in exchange for using the collective data pool, and “publisher relationships” where they license data for additional scale. This creates what the Complaint described as a “data lake,” a centralized repository where information from different sources is matched to create detailed profiles of individuals. What’s particularly concerning is that, according to the Complaint, AddShoppers’ network includes companies selling highly personal products like feminine hygiene and men’s health items, potentially revealing private information to anyone sharing a computer with the user.
Plaintiff’s data request from AddShoppers revealed he had been tracked by at least a dozen companies, including Sportsman’s Warehouse, for several years. The timestamps revealed exactly when he visited these websites. But here’s the critical point that ultimately doomed Plaintiff’s case: Plaintiff never provided any personal information to Sportsman’s Warehouse. He simply visited their website.
The Court analyzed this lack of personal information exchange through precedent established by the Supreme Court in TransUnion LLC v. Ramirez, 594 U.S. 413 (2021). The Court emphasized that “[o]nly those plaintiffs who have been concretely harmed by a defendant’s statutory violation may sue that private defendant over that violation in federal court.” Id. The Court ruled that a timestamp showing when someone last visited a website doesn’t constitute personal information that can identify an individual under California law.
The Court distinguished this case from In re Facebook, Inc., Internet Tracking Litigation, 956 F.3d 589 (9th Cir. 2020), where Facebook was accused of capturing detailed browsing information that was used to create personally identifiable profiles describing users’ “likes, dislikes, interests, and habits over a significant amount of time.” Unlike Facebook, Sportsman’s Warehouse never obtained Plaintiff’s personally identifiable information.
As the Court put it, “Sharing the last time someone visited a website is not statutorily protected in California as protected personal information.” Cordero, 2025 U.S. Dist. LEXIS 72337. This reasoning was based on similar rulings in Cook v. GameStop, Inc., 689 F. Supp. 3d 58 (W.D. Pa. 2023), which held that “product preference information is not personal information” and In re BPS Direct, L.L.C., 705 F. Supp. 3d 333 (E.D. Pa. 2023), which stated that “browsing activity is not sufficiently private to establish concrete harm.”
The Court also declined to follow decisions like Lineberry v. Addshopper, Inc., No. 23-cv-01996-VC, 2025 U.S. Dist. LEXIS 29903 (N.D. Cal. Feb. 19, 2025), where the Complaint alleged plaintiff made a purchase and was plausibly identifiable. Here, Plaintiff made no purchase, provided no email, and entered no personal details. Those factual gaps were key here in the Court’s reasoning.
Sportsman’s Warehouse also cited the persuasive authority of Ingrao v. AddShoppers, Inc., No. 24-1022, 2024 WL 4892514 (D. Utah Nov. 25, 2024), which involved nearly identical facts and resulted in dismissal for lack of standing. The Court found Ingrao persuasive and more directly applicable than Facebook, particularly because both Ingrao and Plaintiff’s case lacked allegations that personal data was ever collected, let alone misused.
So what does this mean for you and me? While Plaintiff’s case was dismissed for lack of standing, it provides more insight into how companies track our digital footprints in ways most of us never consider. Those seemingly innocuous browsing sessions on retail websites could be feeding into vast data-sharing networks that follow you across the internet.
For companies, the lesson here might seem to be that you’re in the clear as long as you don’t collect identifiable personal information. But that would be missing the bigger picture. As privacy laws evolve and consumers become more aware of tracking practices, the legal landscape could shift dramatically. In Lakes, the Court held that layered cookie banners could provide legally binding consent. In Cordero, the Court didn’t even get that far, because if there’s no injury, it doesn’t matter what disclosures you did or didn’t give. The procedural defenses may differ, but the outcome is the same: these privacy lawsuits are shut down at the door.
For consumers, cases like this highlight why those cookie preferences matter. When was the last time you read a privacy policy? And if you did, would you expect it to list every company that might get your data? Probably not. But these cases show that courts won’t always assume you didn’t know. Those extra 30 seconds spent clicking “customize settings” instead of “accept all” could mean the difference between your browsing habits being your own business or becoming valuable data points in a marketplace you never consented to join.
Looking at Lakes and Cordero together, a clear trend emerges. If a company collects personal data with explicit consent, it’s protected. Additionally, if it avoids collecting personal data altogether, it’s also protected. For businesses, that’s a powerful takeaway. The current legal framework favors companies that either build robust consent flows or steer clear of collecting personally identifiable information. Even under California’s strong privacy laws, the bar for plaintiffs to survive a motion to dismiss remains high, especially when standing is questioned.
So next time you’re shopping online, whether for camping gear, concert tickets, or anything else, remember that your digital footprints may be tracked in ways you never imagined. Unlike the plaintiffs in Lakes v. Ubisoft, who clicked through consent banners, Cordero never got the chance to consent or decline. Plaintiff just browsed.
All in all, what stood out to me in Judge Kimball’s ruling was how decisively the conversation ended. This wasn’t a case where more facts might have saved the Complaint. The Court didn’t say plead better. It said you were never supposed to be here.
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!

CLEAR, UNMISTAKABLE, COMPELLING: Court Compels Arbitration Based On Inclusion Of AAA Rules

Hey, TCPAWorld!
The District of Utah just issued a defendant-friendly decision staying a case and compelling arbitration. See generally Christiansen v. Desert Rock Cap., Inc., No. 2:24-cv-00808, 2025 WL 1135598 (D. Utah Apr. 17, 2025). This case serves as a straightforward reminder of the importance of including an arbitration provision that clearly delegates questions of arbitrability to the arbitrator and incorporates the American Arbitration Association (AAA) rules.
In Christiansen, Plaintiff Christiansen applied for and was issued a loan from Defendant Desert Rock Capital, Inc. (“Desert Rock”). In the loan documents, Christiansen consented to be contacted by Desert Rock for “potential extensions of credit, marketing and advertisement, and any other business purpose.” Id. at *1. He also agreed to resolve “[a]ny and all controversies, claims, alleged breaches or disputes arising out of or relating in any way” to the loan documents through arbitration and to waive his ability to bring a class action. Id.
In the lawsuit, Christiansen alleged that Desert Rock called and texted him advertisements despite being on the national DNCR and despite his repeated DNC requests. Accordingly, he brought claims under the TCPA’s national DNCR and internal DNC provisions. In response, Desert Rock moved to dismiss the complaint or, alternatively, to stay the case and compel arbitration.
In deciding this motion, the Court first explained that it must enforce arbitration agreements according to their terms. And where there is “clear and unmistakable evidence” that the parties delegated the issue of arbitrability to the arbitrator, then it must be submitted to the arbitrator and is not for the court to decide. The Tenth Circuit has found this standard to be met where an arbitration agreement incorporates the AAA rules.
Although Christiansen disputed the validity of the arbitration agreement and its applicability to the dispute, the Court rejected this argument because the loan documents explicitly referenced the AAA rules. Accordingly, the Court found the “clear and unmistakable” evidence standard to be met with respect to the issue of arbitrability.
And while Desert Rock sought dismissal of the complaint, the Court explained that “[w]hen a federal court finds that a dispute is subject to arbitration, and a party has requested a stay of the court proceedings pending arbitration, the court does not have discretion to dismiss the suit on the basis that all the claims are subject to arbitration.” Id. at *4 n.26 (quoting Smith v. Spizzirri, 601 U.S. 472, 475-76 (2024), and noting the Tenth and Seventh Circuits’ agreement). Per the Supreme Court’s instruction, the Court therefore stayed the case and compelled arbitration.
Until next time.

FCC’s POWER CUT: Fifth Circuit Guts FCC’s Ability to Issue Forfeiture Orders And this is Completely Game Changing

Not long ago we covered the story of an FCC forfeiture penalty issued against Telnyx related to a robocall scam targeting the FCC itself.
The Commission had determined Telnyx seemingly violated vague know-your-customer rules and was set to hit Telnyx with a multi-million-dollar penalty. Telnyx fought back aggressively, but the FCC was still left to determine how much it would fine Telnyx for the behavior.
If that seems weird, it’s because it is.
The FCC was simultaneously acting as victim, witness, prosecutor, judge and jury.
Telnyx’s response noted that the Commission staff who received the calls at issue should recuse themselves since they were directly involved with the underlying claim, which just makes common sense.
But there is a larger issue, one that AT&T just used to great advantage. The FCC is weighing the evidence and then imposing a penalty without a court or a jury’s involvement.
And that, my friends, rather obviously violates the Constitution.
None of us can be harmed in any way without: i) a law in place making the conduct illegal before we engaged in it; and ii) a judge and/or jury determining that we, in fact, violated that law based on admissible evidence.
That’s the bedrock of due process and the bedrock of what makes us “free.”
But AT&T was recently denied that freedom by the FCC when it unilaterally determined AT&T was guilty of misusing consumer data and fined it $57MM without a jury’s involvement.
And while that might not sound too scary–I mean, why was AT&T misusing customer location data?–consider that Mr. Trump has recently ordered the FCC to do his exclusive bidding. In theory allowing the FCC to unilaterally determine and assign penalties to any communications company in America could very quickly escalate into something highly political and unpleasant.
The Fifth Circuit Court of Appeals cut all of that off at the pass, however, with a ruling in favor of AT&T holding that the FCC’s actions violated the Constitution.
In AT&T v. FCC, the Court determined the forfeiture penalty at issue was a remedy akin to damages and not akin to equitable relief. The difference is critical: Americans (and companies) have a right to judge and jury determinations for any relief that is properly considered a damages recovery. Further, although the FCC argued it had the exclusive right to determine matters related to common carriers as a “public right,” the Court disagreed, noting that claims against common carriers are often litigated as private rights in court.
The ruling itself is pretty straightforward: outside of very narrow situations, no penalties can be handed down in this country without a judge and jury. There were no exceptions to that rule present here. So out goes the award.
So where does this leave the FCC’s forfeiture power?
Well, unless there is an appeal to the Supreme Court–there probably will be–I’d say it is dead, at least insofar as the penalties handed down are monetary awards. That is a MASSIVE change for the FCC, which has just lost one of its most potent enforcement tools.
One wonders whether other extra-judicial penalties– such as “shut down orders” targeting intermediate and upstream carriers that permit robocalls to traverse their networks–might also be set aside under this doctrine. Very fascinating to consider how deeply this new ruling cuts.
For now, though, AT&T gets to walk away from $57MM in penalties–it already paid the money, so it will be interesting to see how it gets it back– and Telnyx is sitting pretty.
Will keep an eye on all of this.

Is Insurtech a High-Risk Application of AI?

While there are many AI regulations that may apply to a company operating in the Insurtech space, these laws are not uniform in their obligations. Many of these regulations concentrate on different regulatory constructs, and a company’s focus will drive which obligations apply to it. For example, certain jurisdictions, such as Colorado and the European Union, have enacted AI laws that specifically address “high-risk AI systems,” placing heightened burdens on companies deploying AI models that fit into this categorization.
What is a “High-Risk AI System”?
Although many deployments that are considered a “high-risk AI system” in one jurisdiction may also meet that categorization in another jurisdiction, each regulation technically defines the term quite differently.
Europe’s Artificial Intelligence Act (EU AI Act) takes a graduated, risk-based approach to compliance obligations for in-scope companies. In other words, the higher the risk associated with an AI deployment, the more stringent the requirements on the company’s AI use. Under Article 6 of the EU AI Act, an AI system is considered “high risk” if it meets both conditions of subsection (1) [1] of the provision or if it falls within the list of AI systems considered high risk included as Annex III of the EU AI Act,[2] which includes AI systems dealing with biometric data, used to evaluate the eligibility of natural persons for benefits and services, used to evaluate creditworthiness, or used for risk assessment and pricing in relation to life or health insurance.
The Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, adopts a risk-based approach to AI regulation. The CAIA focuses on the deployment of “high-risk” AI systems that could potentially create “algorithmic discrimination.” Under the CAIA, a “high-risk” AI system is defined as any system that, when deployed, makes—or is a substantial factor in making—a “consequential decision”; namely, a decision that has a material legal or similarly significant effect on, among other things, the provision or cost of insurance.
Notably, even proposed AI bills that have not been enacted have considered insurance-related activity to come within the proposed regulatory scope. For instance, on March 24, 2025, Virginia’s Governor Glenn Youngkin vetoed the state’s proposed High-Risk Artificial Intelligence Developer and Deployer Act (also known as the Virginia AI Bill), which would have applied to developers and deployers of “high-risk” AI systems doing business in Virginia. Compared to the CAIA, the Virginia AI Bill defined “high-risk AI” more narrowly, focusing only on systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions. However, even under that failed bill, an AI system would have been considered “high-risk” if it was intended to autonomously make, or be a substantial factor in making, a “consequential decision,” which is a “decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer of—among other things—insurance.”
Is Insurtech Considered High Risk?
Both the CAIA and the failed Virginia AI Bill explicitly identify that an AI system making a consequential decision regarding insurance is considered “high-risk,” which certainly creates the impression that there is a trend toward regulating AI use in the Insurtech space as high-risk. However, the inclusion of insurance on the “consequential decision” list of these laws does not definitively mean that all Insurtech leveraging AI will necessarily be considered high-risk under these or future laws. For instance, under the CAIA, an AI system is only high-risk if, when deployed, it “makes or is a substantial factor in making” a consequential decision. Under the failed Virginia AI Bill, the AI system had to be “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.”
Thus, the scope of regulated AI use, which varies from one applicable law to another, must be considered together with the business’s proposed application to get a better sense of the appropriate AI governance in a given case. While there are various use cases that leverage AI in insurance and could result in consequential decisions that impact an insured, such as those that improve underwriting, fraud detection, and pricing, there are also internal uses of AI that may not be considered high risk under a given threshold. For example, leveraging AI to assess a strategic approach to marketing insurance, or to make new client onboarding or claims processing more efficient, likely doesn’t trigger the consequential-decision threshold required to be considered high-risk under the CAIA or the failed Virginia AI Bill. Further, even if an AI system is involved in a consequential decision, that alone may not render it high-risk; the CAIA, for instance, requires that the AI system make the consequential decision or be a substantial factor in it.
Although the EU AI Act does not expressly label Insurtech as high-risk, a similar analysis is possible because Annex III of the EU AI Act lists certain AI uses that may be implicated by an AI system deployed in the Insurtech space. For example, an AI system leveraging a model to assess creditworthiness in developing a pricing model in the EU likely triggers the law’s high-risk threshold. Similarly, AI modeling used to assess whether an applicant is eligible for coverage may also trigger the high-risk threshold. Under Article 6(2) of the EU AI Act, even if an AI system fits a categorization promulgated under Annex III, the deployer of the AI system should perform the necessary analysis to assess whether the AI system poses a significant risk of harm to individuals’ health, safety, or fundamental rights, including by materially influencing decision-making. Notably, even if an AI system falls into one of the categories in Annex III, if the deployer determines through documented analysis that the deployment of the AI system does not pose a significant risk of harm, the AI system will not be considered high-risk.
What To Do If You Are Developing or Deploying a “High-Risk AI System”?
Under the CAIA, when dealing with a high-risk AI system, various obligations come into play. These obligations vary for developers[3] and deployers[4] of the AI system. Developers are required to display a disclosure on their website identifying any high-risk AI systems they have developed or intentionally and substantially modified and explaining how they manage known or reasonably foreseeable risks of algorithmic discrimination. Developers must also notify the Colorado AG and all known deployers of the AI system within 90 days of discovering that the AI system has caused or is reasonably likely to cause algorithmic discrimination. Developers must also make significant additional documentation about the high-risk AI system available to deployers.
Under the CAIA, deployers have different obligations when leveraging a high-risk AI system. First, they must notify consumers when the high-risk AI system will be making, or will play a substantial factor in making, a consequential decision about the consumer. This notice includes (i) a description of the high-risk AI system and its purpose, (ii) the nature of the consequential decision, (iii) contact information for the deployer, (iv) instructions on how to access the required website disclosures, and (v) information regarding the consumer’s right to opt out of the processing of the consumer’s personal data for profiling. Additionally, when use of the high-risk AI system results in a decision adverse to the consumer, the deployer must disclose to the consumer (i) the reason for the consequential decision, (ii) the degree to which the AI system was involved in the adverse decision, and (iii) the type of data that was used to determine that decision and where that data was obtained from, giving the consumer the opportunity to correct data used about them as well as to appeal the adverse decision via human review. Deployers must also make additional disclosures regarding information and risks associated with the AI system. Given that the failed Virginia AI Bill had proposed similar obligations, it would be reasonable to consider the CAIA as a roadmap for high-risk AI governance considerations in the United States.
Under Article 8 of the EU AI Act, high-risk AI systems must meet several requirements that tend to be more systemic. These include the implementation, documentation, and maintenance of a risk management system that identifies and analyzes reasonably foreseeable risks the system may pose to health, safety, or fundamental rights, as well as the adoption of appropriate and targeted risk management measures designed to address these identified risks. High-risk AI governance under this law must also include:

Validating and testing data sets used in developing AI models for a high-risk AI system to ensure they are sufficiently representative, free of errors, and complete in view of the system’s intended purpose;
Technical documentation demonstrating that the high-risk AI system complies with the requirements set out in the EU AI Act, drawn up before the system goes to market and regularly maintained thereafter;
The AI system must allow for the automatic recording of events (logs) over its lifetime;
The AI system must be designed and developed in a manner that allows for sufficient transparency, so that deployers are positioned to properly interpret its output, and must be accompanied by instructions describing its intended purpose and the level of accuracy against which it has been tested;
High-risk AI systems must be developed in a manner that allows them to be effectively overseen by natural persons while in use; and
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle.

When deploying high risk AI systems, in-scope companies must carve out the necessary resources to not only assess whether they fall within this categorization, but also to ensure the variety of requirements are adequately considered and implemented prior to deployment of the AI system.
The Insurtech space is growing in parallel with the expanding patchwork of U.S. AI regulations. Prudent growth in the industry requires awareness of the associated legal dynamics, including emerging regulatory concepts nationwide.

[1] Subsection (1) states that an AI system is high-risk if it is “intended to be used as a safety component of a product (or is a product) covered by specific EU harmonization legislation listed in Annex I of the AI Act and the same harmonization legislation mandates that the product that incorporates the AI system as a safety component, or the AI system itself as a stand-alone product, undergo a third-party conformity assessment before being placed on the EU market.”
[2] Annex 3 of the EU AI Act can be found at https://artificialintelligenceact.eu/annex/3/
[3] Under the CAIA, a “Developer” is a person doing business in Colorado that develops or intentionally and substantially modifies an AI system.
[4] Under the CAIA, a “Deployer” is a person doing business in Colorado that deploys a High-Risk AI System.

CFPB Memo Details Less Oversight on Fintechs, Shift to State-Led Enforcement

Go-To Guide:

On April 16, 2025, the Consumer Financial Protection Bureau (CFPB)’s chief legal officer issued a memorandum to CFPB staff that set out the agency’s 2025 supervision and enforcement priorities.  
Per the memorandum, the CFPB is likely to exercise only authority it has expressly been granted via statute, and then only for “actual” and “tangible” consumer harms to “identifiable victims with material and measurable consumer damages.” 
Where permissible, the agency appears poised to defer to states and other federal agencies’ supervisory and enforcement activities. 
The CFPB will shift focus away from fintechs and toward the largest banks and depository institutions.

On April 16, 2025, the CFPB’s Chief Legal Officer, Mark R. Paoletta, issued a memorandum to CFPB staff that sets out the agency’s 2025 supervision and enforcement priorities.
The memorandum, which the CFPB has not publicly released, provides that the CFPB “will focus its enforcement and supervision resources on pressing threats to consumers” and that, in order to focus on “tangible harms to consumers,” the CFPB will “shift resources away from enforcement and supervision that can be done by the States.”
The memorandum also rescinds all prior enforcement and supervision priority documents and explains the CFPB’s focus in 2025 will be on the following:

The CFPB will engage in fewer supervisory exams and focus on “collaborative efforts.” The memorandum states the number of supervisory exams is “ever-increasing” and directs the CFPB’s supervision staff to decrease the overall number of “events” by 50%. Going forward, supervision staff are also directed to focus on “conciliation, correction, and remediation of harms” identified in consumer complaints and “collaborative efforts” with supervised entities to resolve problems that will lead to measurable benefits to consumers. 
The CFPB will focus more on the largest depository institutions, less on fintechs. The memorandum notes that, in 2012, the CFPB focused 70% of its supervision on banks and depository institutions and only 30% on nonbanks. It further notes that the proportion has “completely flipped,” such that 60% of the agency’s focus is directed at nonbanks. Going forward, the memorandum provides that the CFPB must “seek to return to the 2012 proportion” and “focus on the largest banks and depository institutions.” 
The CFPB will focus less on key topics from the Biden administration. In a move away from some of the hot topics under the Biden administration and former Director Chopra’s leadership, the CFPB will “deprioritize” the following:

loans or other initiatives for “justice involved” individuals, which the memorandum clarifies to mean “criminals”
medical debt
peer-to-peer platforms and lending
student loans
remittances
consumer data
digital payments

The CFPB will focus on “actual fraud” and “tangible harms” to consumers. Rather than focus on the CFPB’s “perception that consumers made ‘wrong’ choices,” the CFPB will instead focus on “actual fraud” involving “identifiable victims with material and measurable consumer damages.” Moreover, instead of “imposing penalties on companies in order to simply fill the Bureau’s penalty fund,” the CFPB will focus on returning money directly to consumers by redressing “tangible harms.” In doing so, the CFPB’s areas of priority will be:

mortgages, as the highest priority
the Fair Credit Reporting Act and Regulation V data furnishing violations
the Fair Debt Collection Practices Act and Regulation F violations relating to consumer contracts and debts
fraudulent overcharges, fees, etc.
inadequate controls to protect consumer information resulting in “actual loss” to consumers

The CFPB will focus on service members and veterans. Going forward, the CFPB will prioritize providing redress to service members and their families and veterans. 
The CFPB will “respect Federalism” and defer to the states. The CFPB will, where permissible, defer to the states to exercise regulatory and supervisory authority. It will do so by (a) deprioritizing participation in multi-state exams unless participation is required by statute, (b) deprioritizing supervision where states “have and exercise ample authority” unless such supervision is required by statute, and (c) minimizing enforcement where State regulators or law enforcement are engaged or have investigated. 
The CFPB will “respect other federal agencies’ regulatory ambit.” The CFPB will, where permissible, defer to other federal regulators. It will do so by (a) eliminating “duplicative supervision” and “supervision outside of the Bureau’s authority” (e.g., supervision of mergers and acquisitions), (b) coordinating exam timing with “other/primary” federal regulators, and (c) minimizing “duplicative enforcement” where another federal agency is engaged or has investigated. 
The CFPB will not rely on “novel” legal theories. The memorandum provides that the CFPB will focus “on areas that are clearly within its statutory authority” and will not look to “novel” legal theories, including about its authority, to pursue supervision. 
The CFPB will not engage in or facilitate “unconstitutional racial classification or discrimination.” With respect to its enforcement of fair lending law, the CFPB will pursue only matters with “proven actual intentional racial discrimination and actual identified victims,” for which “maximum penalties” will be sought. Accordingly, the CFPB will not engage in redlining or bias assessment supervisions or enforcement “based solely on statistical evidence and/or stray remarks that may be susceptible to adverse inferences.” 
The CFPB will not attempt to “create price controls.” The memorandum provides that the CFPB’s “primary enforcement tools are its disclosure statutes” and that it will not engage in attempts “to create price controls.”

Key Takeaways
The memorandum represents what is likely to be a drastic reduction in CFPB supervision and enforcement activity and encouragement for some state agencies to increase their oversight.
Instead of an agency that utilizes an expansive view of its authority to redress what it perceives as consumer harms, the memorandum suggests that the CFPB under the Trump administration will instead only look to exercise powers that it is explicitly granted via statute and, even then, only to address “actual” and “tangible” consumer harms. And, where permissible, the CFPB appears poised to defer to other federal agencies and the state regulators.
The reduced focus on fintechs, P2P platforms, consumer data, and digital payments will likely be well received by nonbanks, but all in the industry should be vigilant for state regulators to step into the space vacated by the CFPB.

Privacy Tip #440 – Text Scam Proceeds Surpass $470M in 2024

I have been getting a lot of texts that are clearly scams, and those around me have confirmed an increase in spammy texts.
According to an FTC Consumer Protection Data Spotlight, individuals lost over $470 million to text scams in 2024. The top text scams of 2024, which accounted for half of the $470 million consumers lost to fake texts, included:

Fake package delivery problems;
Phony job opportunities;
Fake fraud alerts;
Bogus notices about unpaid tolls; and
“Wrong number” texts that aren’t.

According to the FTC, actionable ways to help stop text scams include:

Forwarding messages to 7726 (SPAM). This helps your wireless provider spot and block similar messages.
Reporting it on either the Apple iMessages app for iPhone users or Google Messages app for Android users.
Reporting it to the FTC at ReportFraud.ftc.gov.

How can you avoid text scams?
Never click on links or respond to unexpected texts. If you think it might be legit, contact the company using a phone number or website you know is legitimate. Don’t use the information in the text message. Filter unwanted texts before they reach you.
Remember that texts are just like emails and can be used for smishing just as emails are used for phishing. Treat them the same—with a healthy dose of caution and vigilance to avoid being victimized.