Utah Pioneers App Store Age Limits

Utah’s governor recently signed the first law placing age restrictions on app downloads. The law, the App Store Accountability Act (SB 142), was signed yesterday, Wednesday, March 26, 2025. We anticipate that the law may be challenged, much like NetChoice’s challenge to the Utah Social Media Regulation Act and other similar state laws.
Once in effect, the law will apply to both app stores and app developers. There are various effective dates (May 7, 2025; May 6, 2026; and December 31, 2026), as outlined below. Among its requirements are the following:

Age Verification: Under the new law, beginning May 6, 2026, app stores will need to verify the age of any user located in the state using “commercially reasonable” measures. Prior to that time, the Division of Consumer Protection will need to create rules that outline how age can be verified. Also starting May 2026, app developers will need to verify age categories “through the app store’s data sharing methods.” Age categories are children (users under age 13), younger teenagers (users between the ages of 13 and 15), older teenagers (users aged 16 or 17), and adults (users aged 18 and up).
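The statute’s four age brackets amount to a simple classification rule. Here is a minimal sketch of that bracketing (illustrative only; the function name and labels are ours, not the statute’s, and real age-verification systems involve far more than this mapping):

```python
def age_category(age: int) -> str:
    """Map a verified age to the four categories described in Utah SB 142:
    child (<13), younger teenager (13-15), older teenager (16-17), adult (18+)."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 13:
        return "child"
    if age <= 15:
        return "younger teenager"
    if age <= 17:
        return "older teenager"
    return "adult"
```

Note the boundary cases: a 13-year-old falls into the "younger teenager" bracket, and an 18-year-old is an adult.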
Parental Consent/Notification: Beginning May 6, 2026, app stores will need parental consent before a minor can download or purchase an app or make in-app purchases. Consent is to be obtained through a parental account linked to the child’s account. At the same time, app developers will need to verify that app stores have parental consent for minors’ accounts. Developers must also notify app stores of any significant changes to their apps; when that happens, app stores will need to notify users and parents of the changes and obtain parents’ renewed consent. App stores will also need to notify developers any time parents revoke their consent.
Contract Enforcement: Under the new law, beginning May 6, 2026, app stores will not be able to enforce contracts against minors unless they already have consent from the minors’ parents. This applies to app developers as well, unless they verify that the app store has consent from the minor’s parents.
Safe Harbor: The new law contains safe harbor provisions for app developers. Developers won’t be responsible for violating the law if they rely in good faith on information provided by the app store, including age information and confirmation that parents provided consent for minors’ accounts. For the safe harbor to apply, developers also need to follow the other rules the law sets out for them (described above).

Putting it into Practice: While we anticipate that this law will be challenged, it signals that states are continuing their focus on laws relating to children in the digital space. This is the first law that is focused on app stores, but we expect to see more in the future.
 
James O’Reilly contributed to this post.

California Cryobank Hit with Lawsuit over Sperm Donor Databank Breach

California Cryobank, LLC, the largest sperm bank in the country, faces a lawsuit in the U.S. District Court for the Central District of California over an April 2024 data breach. Cryobank provides frozen donor sperm and specialized reproductive health care services, including egg and embryo storage.
Cryobank notified the affected individuals this month that it detected suspicious activity on its network and determined that an unauthorized party gained access to its IT environment and may have accessed files containing personal information.
While sperm is commonly donated anonymously, donor information is associated with a donor-assigned ID number. Offspring can use that ID number at age 18 if they want to learn more about their biological father. Nevertheless, the security incident affected information including patient names, Social Security numbers, driver’s license numbers, financial account numbers, and health insurance information. The complaint alleges that Cryobank failed to sufficiently protect and secure its patients’ personal and health information. The plaintiff is seeking class certification to include others affected by the data breach.
The complaint states that the individual notifications did not include “the identity of the cybercriminals who perpetrated this Data Breach, the details of the root cause of the Data Breach, the vulnerabilities exploited, and the remedial measures undertaken to ensure such a breach does not occur again.”
The lawsuit asserts claims of negligence, breach of implied contract, and unjust enrichment, as well as violations of the California Unfair Competition Law and Confidentiality of Medical Information Act.

Joint Bulletin Warns Health Sector of Potential Coordinated Multi-City Attack

On March 20, 2025, the American Hospital Association (AHA) and the Health-ISAC issued an alert to the health care sector warning of a social media post that posed a potential threat “related to the active planning of a coordinated, multi-city terrorist attack on hospitals in the coming weeks.” The post targets “mid-tier cities with low-security facilities.”
The alert recommends “that teams review security and emergency management plans and heighten staff awareness of the threat,” including physical security protocols and practices, such as “having a publicly visible security presence.”
The alert, updated on March 26, 2025, indicates that the FBI has not identified a “specific credible threat targeted against hospitals in any U.S. city.” Nonetheless, the threat is concerning, and the recommendations of the AHA and Health-ISAC are worth noting.

Pennsylvania Teacher’s Union Faces Class Action over Data Breach

The Pennsylvania State Education Association (PSEA) faces a class action resulting from a July 2024 data breach. The proposed class consists of current and former members of the union as well as PSEA employees and their family members. The lawsuit alleges that the union was negligent and breached its fiduciary duty when it suffered a data breach that affected Social Security numbers and medical information. The complaint further alleges that the PSEA failed to implement and maintain appropriate safeguards to protect and secure the plaintiffs’ data.
The union sent notification letters in February 2025 informing members that the data acquired by the unauthorized actor contained some personal information within the network files. The letter also stated, “We took steps, to the best of our ability and knowledge, to ensure that the data taken by the unauthorized actor was deleted [. . .] We want to make the impacted individuals aware of the incident and provide them with steps they can take to further protect their information.” The union also informed affected individuals that they did not have any indication that the information was used fraudulently.
The complaint alleges “actual damages” suffered by the plaintiff related to monitoring financial accounts and an increased risk of fraud and identity theft. Further, the complaint states that “the breach of security was reasonably foreseeable given the known high frequency of cyberattacks and data breaches involving health information.”
In addition to a claim of negligence, the class alleges that the breach violates the Federal Trade Commission Act and the Health Insurance Portability and Accountability Act. The class is demanding 10 years of credit monitoring services, punitive, actual, compensatory, and statutory damages, as well as attorneys’ fees.

Personal Information Released in JFK Files

I am not sure what the rush was to make the JFK assassination files available, but the perceived urgency caused the Social Security numbers of individuals involved in the investigation to be released to the public. The Washington Post found 3,500 Social Security numbers in the documents; many are estimated to be duplicates, and over 400 individuals were affected.
The Social Security numbers contained in the over 60,000 pages of documents can be accessed online or in person. The Washington Post reported the unauthorized disclosure, and the National Archives then screened the documents “so that the Social Security Administration could identify living individuals and issue them new numbers.”
Unfortunately, the documents were not screened for personal information before release, a basic tenet of data protection. It is another signal that the new administration does not prioritize data security.

Phishing Attacks – Anyone Can Get Owned

HaveIBeenPwned is a website that allows users to check whether their data has been involved in data breaches. The website’s creator, Troy Hunt, was the subject of a phishing attack earlier this week. The attack was unrelated to the HaveIBeenPwned website and compromised Hunt’s personal Mailchimp account.
According to Hunt, he received an email purporting to be from Mailchimp regarding a flag on his account. When he clicked the “Review Account” button, he was taken to a fake Mailchimp domain. Hunt notes in a blog post that he manually entered his credentials and that they did not auto-populate from his password management application as they usually would.
Hunt received and entered a one-time password and was then taken to a page that hung. Now suspicious, he reportedly logged into the legitimate Mailchimp site and changed his password, but the phishing attack was likely automated and the damage was already done. Within minutes, Hunt received notification emails from Mailchimp regarding login activity and list exports from an unknown IP address. Hunt noted that the exported list included approximately 16,000 records, including current and former blog subscribers.
Hunt shared a screenshot of the phishing email on his blog.

The common conception is that a typical phishing email is poorly worded, involves an unusual payment request, and is blatantly implausible. This incident, however, demonstrates that phishing attacks are becoming increasingly sophisticated and can happen to anyone.
Takeaways:

Sense of urgency can be subtle – As bad actors become more sophisticated, not all phishing emails will create an unbelievable sense of urgency, such as asking users to update their payment or billing information to unlock an account. In Hunt’s case, he acknowledged that the notification created “just the right amount of urgency without being over the top.” Any email from an organization or person creating a sense of urgency warrants pause and contemplation before clicking or performing any action.
Circumvention of password manager could be a sign – Password managers are designed to autofill credentials on known websites. Hunt realized that his credentials did not populate into the fake Mailchimp site, which, in hindsight, was a potential sign of unusual activity. If a site that typically remembers your credentials requests them, this might be (though it is not always) a sign of a spoofed domain.
One-time passwords are not foolproof – Although multi-factor authentication provides stronger security than usernames and passwords alone, one-time passwords cannot protect against this kind of automated phishing attack: once the user enters the one-time password on the spoofed site, the bad actor can immediately relay it to the legitimate site and gain access to the account.
Passkeys are more phishing-resistant – A passkey is a password replacement: a cryptographic credential tied to a user’s account that is used to authenticate. Users typically unlock a passkey with a device biometric or a PIN. Because the credential is bound to the legitimate site’s domain, it cannot be replayed on a spoofed site, and it cannot be stolen as easily as a password.
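The password-manager takeaway above can be made concrete: a manager autofills credentials only when the page’s hostname matches the domain stored with the credential, so a look-alike domain gets nothing. A minimal sketch (illustrative only; real password managers use more careful matching rules, and the domains below are examples, not an exact account of the incident):

```python
from urllib.parse import urlsplit

# Hypothetical saved credential: a password manager would autofill this
# credential only on the stored domain or one of its subdomains.
SAVED_CREDENTIAL_DOMAIN = "mailchimp.com"

def should_autofill(page_url: str, saved_domain: str = SAVED_CREDENTIAL_DOMAIN) -> bool:
    """Return True only if the page's hostname is the saved domain or a subdomain of it."""
    host = urlsplit(page_url).hostname or ""
    return host == saved_domain or host.endswith("." + saved_domain)

# The legitimate login page matches; a spoofed look-alike does not.
should_autofill("https://login.mailchimp.com/")     # True
should_autofill("https://mailchimp-sso.com/login")  # False
```

When the manager stays silent on a page that usually autofills, that silence is itself the warning sign Hunt described.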

No single tip or trick can help prevent phishing attacks, but remaining vigilant and enacting certain security measures can minimize the chances of becoming subject to such social engineering schemes.

THE WHITE COAT DIDN’T BETRAY YOU—THE PIXEL DID: Judge Keeps Florida Wiretap Case Against Hospital Alive

Greetings CIPAWorld!
Your search history reveals more about you than you might realize. If you’ve ever noticed suspiciously specific medical ads appearing after researching health concerns online, you’re not just being paranoid; you’re witnessing sophisticated tracking technologies at work.
A federal court in Florida handed down a decision that should make us pause before typing that symptom into a healthcare website’s search bar. The case involves a patient who alleged that her medical searches on Orlando Health’s website led to targeted Facebook ads for her specific medical conditions. See W.W. v. Orlando Health, Inc., No. 6:24-cv-1068-JSS-RMN, 2025 U.S. Dist. LEXIS 40038 (M.D. Fla. Mar. 6, 2025).
Judge Julie S. Sneed’s ruling in W.W. v. Orlando Health, Inc. denied most of the healthcare provider’s attempts to dismiss the lawsuit, potentially opening the door for closer scrutiny of how medical websites track and share our sensitive health information. As someone who has researched medical information online in the past (who doesn’t these days?), I wondered exactly what happens when I click that “search” button on my insurance carrier’s website.
The Plaintiff alleged she used Orlando Health’s website to research conditions, including ileostomy, heart problems, and fatty liver disease. She later noticed Facebook advertisements popping up for products related to these exact conditions—ileostomy bags, heart failure treatments, and services from Orlando Health neurologists. Coincidence? Plaintiff didn’t think so, and Judge Sneed found her claims plausible enough to proceed.
However, the medical context elevates this case beyond another privacy suit. The Court noted that Orlando Health operates over 100 medical facilities. It encourages patients to use its website to communicate medical symptoms, conditions, and treatments via the search bar and related webpages, including access to appointment booking and the MyChart patient portal. As such, this wasn’t a casual browsing session but an online extension of the doctor-patient relationship.
What makes this case particularly concerning is the nature of the tracking technology itself. Plaintiff alleges that Orlando Health employed tracking tools that operate largely invisibly to users. Judge Sneed acknowledged this reality, noting these technologies are hidden from users’ view and difficult to avoid, even for the particularly tech-savvy user. This creates a troubling power imbalance—patients have no meaningful way to opt out of tracking that they don’t even know is happening.
Even more fascinating is how the court analyzed the claims of the Florida Security of Communications Act (“FSCA”). I think it’s important I highlight the FSCA… after all, I am a Floridian. The FSCA prohibits the intentional interception of electronic communications, and Orlando Health argued that what was being tracked was merely metadata, not the actual content of communications. But Judge Sneed distinguished this case from previous decisions involving commercial websites.
The key difference? Medical searches reveal something fundamentally private about us. For instance, if I decide to search “cardiologist for heart palpitations,” I’m not just clicking links—I’m communicating sensitive information about my health condition. The Court recognized this distinction, noting that information about a user’s medical conditions and healthcare searches constitutes ‘contents’ protected under these statutes.
To break this down further, the FSCA defines “contents” as “any information concerning the substance, purport, or meaning of that communication.” Fla. Stat. § 934.02(7). The Court emphasized that URLs and search queries on a medical website reflect the message Plaintiff sought to convey to Defendant through its website, thus satisfying the statutory standard. Judge Sneed’s approach relied on Black’s Law Dictionary to define “substance,” “purport,” and “meaning,” grounding her interpretation in long-standing legal usage.
As a result, Judge Sneed determined that W.W. successfully alleged all three required elements for an FSCA claim: (1) that Orlando Health intentionally intercepted her electronic communications, (2) that these interceptions captured protected “contents” under the statute, and (3) that she had not consented to this interception. The Court emphasized that Plaintiff has adequately alleged that the electronic communications she claims were intercepted were ‘contents’ as defined by the FSCA.
Orlando Health relied heavily on a Florida case, Jacome v. Spirit Airlines, Inc., No. 2021-000947-CA-01, 2021 WL 3087860, at *1 (Fla. Cir. Ct. June 17, 2021), which involved “session replay” technology tracking users’ movements on a commercial airline website. But Judge Sneed pointed out three crucial differences: first, Jacome involved different tracking technology in a non-healthcare context; second, the very case Orlando Health relied on actually supported W.W.’s position by acknowledging that medical records deserve protection; and third, other courts facing similar healthcare tracking cases have reached conclusions favorable to patients. The Court held that Plaintiff’s claims are predicated on the tracking tools’ interception of her communications… not on the simple fact that her movements on Defendant’s website were tracked.
Moreover, the Court analyzed multiple cases where similar tracking tools on healthcare websites were found potentially liable under wiretap laws. In A.D. v. Aspen Dental Mgmt., Inc., No. 24 C 1404, 2024 WL 4119153, at *5-7 (N.D. Ill. Sept. 9, 2024), the Northern District of Illinois denied a motion to dismiss, finding that URLs containing search terms about medical conditions constituted protected content. Similarly, in R.C. v. Walgreen Co., 733 F. Supp. 3d 876, 885, 903 (C.D. Cal. 2024), the Court found that when tracking technologies shared information about “sensitive healthcare products” with Meta and Google, resulting in targeted ads, this information “reveal[ed] a substantive message about [the p]laintiffs’ health concerns.”
As such, the ruling on the FSCA claim is particularly significant because, as Judge Sneed noted, “the FSCA was modeled after the Wiretap Act, [and] Florida courts construe the FSCA’s provisions in accord with the meaning given to analogous provisions of the Wiretap Act.” W.W., 2025 U.S. Dist. LEXIS 40038, at *7. This means the Court’s interpretation of what constitutes “contents” under the FSCA directly influenced its analysis of the federal Wiretap Act claim.
What I found particularly striking was the Court’s reference to the Ninth Circuit’s decision in In re Zynga Priv. Litig., 750 F.3d 1098 (9th Cir. 2014). While that case found that basic website header information wasn’t protected content, it explicitly stated that “a user’s request to a search engine for specific information could constitute a communication such that divulging a URL containing that search term to a third party could amount to disclosure of the contents of a communication.” This distinction has become crucial in healthcare privacy cases, with courts like the Northern District of California in Doe v. Meta Platforms, Inc., 690 F. Supp. 3d 1064, 1076 (N.D. Cal. 2023), recognizing that “a URL disclosing a ‘search term or similar communication made by the user’ ‘could constitute a communication’ under the [Wiretap Act].”
Next, the Court also looked at similar cases in other jurisdictions. In In re Grp. Health Plan Litig., 709 F. Supp. 3d 707, 712, 718, 720 (D. Minn. 2023), a Minnesota Court determined that technology that “surreptitiously track[ed] users’ interactions on the [defendant’s w]ebsites and transmit those interactions to [Meta]” was actionable under the Wiretap Act. Similarly, in Doe v. Microsoft Corp., No. C23-0718-JCC, 2023 WL 8780879, at *9 (W.D. Wash. Dec. 19, 2023), a Washington Court found similar allegations sufficient under California’s Invasion of Privacy Act (“CIPA”).
The Court’s analysis demonstrated a sophisticated understanding of how modern tracking tools actually function. Judge Sneed described how the Facebook Pixel works, explaining that it causes the user’s web browser to instantaneously duplicate the contents of the communication with the website and send the duplicate from the user’s browser directly to Facebook’s server. In a sense, it’s like having a third person secretly photocopy your private medical forms as you fill them out—except it happens digitally, all without your knowledge. That’s a scary thought.
One crucial legal issue the Court had to address was whether Orlando Health could be liable under the Wiretap Act as a party to the communications. Normally, a party to communications can’t “intercept” them under the law. But Judge Sneed found that the “crime-tort exception” might apply, which creates liability when a party intercepts communications “for the purpose of committing any criminal or tortious act.” 18 U.S.C. § 2511(2)(d). This exception has created a split among federal courts, with some like B.K. v. Eisenhower Med. Ctr., 721 F. Supp. 3d 1056, 1065 (C.D. Cal. 2024) rejecting its application, while others like Cooper v. Mount Sinai Health Sys., Inc., 742 F. Supp. 3d 369, 380 (S.D.N.Y. 2024) have held that “A defendant’s criminal or tortious purpose of knowingly disclosing individually identifiable health information to another person in violation of HIPAA may satisfy the crime-tort exception.”
Let’s just think about this for a moment. When you visit your healthcare provider’s website and search for information about a medical condition, you’re effectively having a private conversation about your health. This is a conversation you reasonably expect to stay between you and your provider. Plaintiff alleges that Orlando Health allowed Facebook and Google to listen to this conversation without her knowledge or consent and then use what they heard to sell her things. That’s not just invasive—it’s monetizing vulnerability. The Complaint even describes Meta Pixel and Google’s APIs duplicating real-time communications and sending them to third-party servers without user awareness.
I remember searching for allergy specialists on my insurance provider’s website, only to suddenly see my social media feeds filled with ads for allergy medications. It felt like someone had been reading over my shoulder—because in a digital sense, they had been. This is a troubling loophole in our digital privacy framework. While HIPAA strictly regulates how healthcare providers handle patient information in traditional contexts, the rules often become murky in digital environments. The law hasn’t caught up to the technology, and it’s essential that case law helps close that gap.
The Court recognized other claims as well, including breach of confidence. Judge Sneed emphasized the profoundly personal nature of health information, quoting Norman-Bloodsaw v. Lawrence Berkeley Lab., 135 F.3d 1260, 1269 (9th Cir. 1998): “One can think of few subject areas more personal and more likely to implicate privacy interests than that of one’s health.” Additionally, the Court also allowed unjust enrichment and breach of implied contract claims to proceed, acknowledging that private health information has economic value that healthcare providers shouldn’t be able to exploit without consent. Judge Sneed agreed that Defendant obtained enhanced advertising services and more cost-efficient marketing from the data disclosures, which plausibly conferred a benefit on Orlando Health without Plaintiff’s consent.
In an interesting development for data privacy attorneys, the Court expressly recognized the economic value of personal health information. As Judge Sneed noted, courts should not “ignore what common sense compels it to acknowledge—the value that personal identifying information has in our increasingly digital economy…. Consumers too recognize the value of their personal information and offer it in exchange for goods and services.” W.W., 2025 U.S. Dist. LEXIS 40038, at *32-33 (quoting In re Marriott Int’l, Inc., 440 F. Supp. 3d 447, 462 (D. Md. 2020)).
Interestingly, the Court did dismiss one claim—invasion of privacy by intrusion upon seclusion—finding that Florida law requires an intrusion into a private “place” rather than merely a private activity. As Pet Supermarket, Inc. v. Eldridge, 360 So. 3d 1201, 1207 (Fla. Dist. Ct. App. 2023) specified, “Florida law explicitly requires an intrusion into a private place and not merely into a private activity.” This reveals a gap in privacy law that has not yet adjusted to the digital age, where violations occur in virtual rather than physical spaces.
The irony here is palpable. Healthcare providers are bound by HIPAA and other regulations that severely restrict how they can share our health information in traditional contexts. Yet some providers may allow tech companies to access this information through their websites with far less oversight.
Judge Sneed’s decision aligns with similar rulings in cases like D.S. v. Tallahassee Mem’l HealthCare, No. 4:23cv540-MW/MAF, 2024 WL 2318621, at *1 (N.D. Fla. May 22, 2024), and Cyr v. Orlando Health, Inc., No. 8:23-cv-588-WFJ-CPT (M.D. Fla. July 5, 2023). In Tallahassee Memorial, the Court denied dismissal of identical claims where a healthcare provider allegedly disclosed patient information to Meta and Google through website tracking. Similarly, in Cyr—another case against Orlando Health itself—the Court found the plaintiff’s claims plausible and worthy of proceeding past the pleading stage. This suggests that Courts are increasingly receptive to these digital privacy concerns in the healthcare context.
All in all, healthcare marketers may need to rethink their digital strategies, and patients might finally gain transparency into how their online health searches are being monetized. The next time you search for symptoms online or book a medical appointment through a website, remember that a seemingly private digital conversation might have more participants than you realize.

From Seizures to Strategy: The U.S. Government’s Move Toward a National Crypto Reserve

Following President Trump’s March 6 Executive Order establishing a Strategic Bitcoin Reserve, released alongside a White House Briefing, the U.S. government has taken its most formal step yet toward integrating digital assets into national economic and security policy. The order outlines a broader strategy to manage and expand the federal government’s holdings of Bitcoin and other designated cryptocurrencies through the creation of a Strategic Bitcoin Reserve and U.S. Digital Asset Stockpile.
While many details remain forthcoming, existing government practices around crypto asset custody, combined with reporting on the administration’s plans, offer a glimpse into how the reserve may operate in practice.
Bitcoin: The Foundation of the Reserve
The executive order calls for the formation of a Strategic Bitcoin Reserve, leveraging the U.S. government’s existing crypto holdings—estimated to exceed 200,000 BTC based on seizures of crypto in connection with illicit activities. These assets are already under federal control and provide a ready base for the reserve.
The Department of Justice (DOJ) has historically overseen management of some of the U.S. government’s crypto assets under its Digital Asset Forfeiture Program. The U.S. government has also contracted with third-party institutional crypto custodians to provide secure custody, wallet management, and liquidation services for seized crypto assets. The U.S. Marshals Service, a unit of the DOJ, has also periodically offered crypto for sale, just as it does with artwork, vehicles and other assets forfeited to the government in various criminal, civil and administrative cases.
However, the White House Briefing points out shortcomings in the U.S. government’s current crypto asset management protocols, including that assets are scattered across multiple Federal agencies, leading to a non-cohesive approach where options to maximize value and security of crypto holdings have been left unexplored. Additional measures could include multi-signature wallet storage, layered access controls, segregated storage (as opposed to pooling crypto assets in one omnibus wallet), strategic portfolio management, and specialized regulatory oversight via the Presidential Working Group on Digital Asset Markets.
Beyond Bitcoin: The Digital Asset Stockpile
In addition to Bitcoin, the executive order also calls for the creation of a U.S. Digital Asset Stockpile, which will include four cryptocurrencies, reportedly selected for their market relevance, technical resilience, and utility in decentralized finance (DeFi) and cross-border settlement use cases. The rationale, as outlined in a White House briefing, is to ensure the United States maintains influence and optionality in emerging blockchain ecosystems while encouraging domestic innovation.
To date, no details have surfaced regarding a formal acquisition program for these assets or how the crypto asset portfolio will be managed.
Putting It Into Practice: The launch of the Strategic Bitcoin Reserve and Digital Asset Stockpile marks a watershed moment in U.S. crypto policy. This policy signals a clear shift toward legitimizing digital assets as sovereign financial instruments and could prompt other nations to consider similar reserves (for our previous discussions on recent developments in the ongoing shift in U.S. crypto policy, see here, here, here, and here). This development also suggests the U.S. intends to play an active role in shaping global crypto governance—not only through regulation, but also through participation and ownership.

 

ANCIENT TEXTS: Plaintiff Brings Class Action Against Ancient Cosmetics 3 Years, 364 Days After Text Was Sent

When we tell you the statute of limitations for a TCPA violation is four years, we really mean it.
Back on March 25, 2021 a company called Ancient Cosmetics allegedly sent a marketing text message to a lady named Patrice Gonzalez.
At that time, Tom Brady had just won a Super Bowl over the Chiefs, that big ship Ever Given was still stuck in the Suez Canal, and the Czar was still working in big law.
Yeah, that was a looooong time ago.
But just this week Ms. Gonzalez filed a TCPA class action lawsuit against Ancient Cosmetics over the ancient text messages (what are the odds of that, BTW?), and it’s a great reminder to folks.

What you do today in TCPAWorld has consequences for a loooong time to come.
That means you need to be keeping records of consent–especially if you are buying leads–for that entire time.
And yes people WILL sue you 3 years, 364 days after you allegedly violate the TCPA.
Gross, right?
Let those who have ears to hear, hear.

Blockchain+ Bi-Weekly; Highlights of the Last Two Weeks in Web3 Law: March 27, 2025

The past two weeks brought some notable progress for the industry, though it still often feels like “regulation by lack of enforcement” rather than a truly proactive approach. The SEC clarified that most proof-of-work mining activities do not amount to securities transactions—a welcome statement for miners but limited in scope. Meanwhile, Ripple announced a potential settlement that would end the SEC’s appeal, continuing a trend of non-fraud crypto cases winding down without generating long-term clarity. On Capitol Hill, the Senate’s markup of its own stablecoin act signals a significant step forward yet also highlights a lack of consensus necessary for any final bill. Finally, in a notable display of bipartisan alignment, both chambers of Congress overwhelmingly passed legislation overturning the IRS’s crypto broker reporting rules, demonstrating the possibility of constructive actions in areas where consensus can be reached.
These developments and a few other brief notes are discussed below.
SEC Clarifies That Most Proof-of-Work Mining Activities Are Not Securities Transactions: March 20, 2025
Background: The SEC’s Division of Corporation Finance released a statement clarifying its view that most proof-of-work (“PoW”) mining activities do not qualify as securities transactions under federal securities laws. The statement applies specifically to “Protocol Mining” activities involving “Covered Crypto Assets”, which are defined as crypto assets tied to the functioning of a public, permissionless PoW network. According to the release, whether through self-mining or pooled mining, miners perform the essential “work” themselves. Under the Howey test, one crucial element for a transaction to be deemed a security is that profits must flow primarily from the “managerial or entrepreneurial efforts of others.” Because PoW miners generate rewards by contributing their own computational power, the SEC concluded that these returns are not derived from someone else’s management. Thus, PoW mining generally fails this aspect of the Howey test, placing it outside the scope of federal securities laws.
Analysis: It is important to note that releases like these do not create binding law; each set of facts differs and may yield different legal results, which could place certain PoW mining activities outside this safe-harbor-like guidance. Still, the statement signals that, under typical PoW mining arrangements, participants who merely contribute computational power to validate transactions and receive rewards likely do not cross into securities territory, including through pooling arrangements. This may allow more risk-averse entities to contribute compute to mining or provide services to mining pools, which only serves to strengthen network resilience and efficiency.
Ripple CEO Announces Pending Settlement With SEC: March 19, 2025
Background: Ripple has announced that the SEC will drop its appeal of the portion of the Ripple ruling that went against the agency. This will bring an end to at least part of the case originally brought in 2020 during Jay Clayton’s term as SEC Chairman. The dismissal will still need to be approved at the next meeting of the commissioners, and it is unclear what it will entail. Representatives of Ripple have stated that they are evaluating what to do with their own cross-appeal relating to institutional investor sales. Still, there would not be an announcement like this if a deal were not in place, so now it is just a waiting game to see the details.
Analysis: Ripple was one of the few digital asset issuers from the ICO boom that had the resources to fully litigate against the SEC, and it has been doing so for half a decade. And litigate they did, with over 25 filings related to the “Hinman Speech” documents alone. Combined with the dismissal of the Coinbase matter and its pending appeal, there is still no binding precedent from higher courts on the applicability of the Howey test to digital assets.
Stablecoin Senate Markup Developments: March 13, 2025
Background: The Senate Banking Committee held a markup of the GENIUS Act, the Senate’s version of a stablecoin bill. Even before the markup and vote, some changes were made through bipartisan efforts to reach agreement on how stablecoins should be registered and monitored in the U.S. The bill passed through committee on an 18-6 vote, with five Democrats (Warner-VA, Kim-NJ, Gallego-AZ, Rochester-DE and Alsobrooks-MD) voting in favor, meaning the four most junior Democrats on the committee (along with Warner) crossed party lines to vote for the GENIUS Act.
Analysis: Senator Warren predictably proposed amendments that would have killed the viability of the bill (to the delight of traditional banks), but all of those proposals failed. Expect closed-door work on the bill to address the concerns of Democrats who want changes that would help it receive as much bipartisan support as possible. The House is also working on its own bill, holding a hearing on stablecoins and CBDCs this week, and the Senate Banking Committee also passed a debanking bill along party lines.
House Votes to Overturn IRS Crypto Broker Reporting Rules: March 11, 2025
Background: The House voted overwhelmingly in favor of repealing the IRS broker rule, adopted in the final months of President Biden’s term, which would have made all self-custodial wallet providers, DeFi protocols and arguably even internet service providers reporting entities for any digital asset transaction. The vote was 292-132 in the House, following a 70-28 vote in the Senate. The measure will return to the Senate for a final vote before being signed by President Trump, who has stated he intends to sign it as soon as it hits his desk.
Analysis: The IRS broker rule, as finalized, was overly broad and aggressive, potentially capturing industry participants like self-hosted wallet providers, automated market makers, validators and possibly even ISPs. That overreach may prove self-defeating, because some classes of entities in the digital asset space could logically have been included as reporting entities under a more narrowly drawn rule. If the repeal becomes law as expected, any such rule will now need to come from Congress.
Briefly Noted:
SEC Likely to Abandon Reg ATS Rule Changes for Crypto: Acting SEC Chair Mark Uyeda gave a speech saying he has directed staff to explore abandoning a proposed rule change that would have expanded the definition of an “exchange” in a way that might have swept in certain DeFi protocols and service providers.
Geofenced Airdrop Costs to Americans: Dragonfly released its State of Airdrops report for 2025, which estimates that Americans missed out on as much as $2.6 billion in potential revenue (and the U.S. missed out on taxing that revenue) due to policies that resulted in Americans being disqualified from those airdrops.
Leadership Changes at Crypto Policy Leaders: Amanda Tuminelli is taking over as CEO of industry advocacy group DeFi Education Fund. Meanwhile, Cody Carbone deserves congratulations on his recent promotion to CEO of the Digital Chamber. Those organizations are in great hands under their leadership.
Come in and Register: Now that crypto firms can actually have a dialog with the SEC without fear that opening the dialog will lead to investigations and hostile actions, a record number are filing for various approvals at the agency. Crazy how that works.
CFTC Withdraws Swap Exchange Letter: The CFTC withdrew its prior staff advisory on swap execution facility registration requirements, which arguably required DeFi participants to register with the agency and which three DeFi platforms were charged with violating in 2023. This may signal an intent to ease the prosecution of decentralized platforms for failing to register as swap execution facilities.
OFAC Removes Tornado Cash Designations: In another huge industry development, OFAC has finally removed the Tornado Cash protocol addresses from its sanctions list, a major win for software developers and privacy advocates everywhere.
SEC Hosts First Crypto Roundtable: The SEC’s first crypto roundtable is available to view. There were not many major takeaways, but it is good to see these conversations occurring in public forums. The roundtable comes ahead of expected SEC Chair Paul Atkins’ confirmation hearing before the Senate.
Stablecoin Legislation Update: Ro Khanna (D-CA) said he believes stablecoin and market structure legislation gets done this year at the Digital Assets Summit on March 18, 2025, stating there are 70 to 80 Democrats in the House who view this as an important issue to maintain American dollar dominance and influence. Bo Hines also stated stablecoin legislation will get done in the next few months.
SEC Permits Some Rule 506(c) Self-Certification: Rule 506(c), which allows for sales of securities to accredited investors while using general advertising and solicitation, historically has required independent verification of accredited investor status, such as by obtaining broker letters or tax returns. In a new no-action letter, the SEC clarified that issuers can rely on self-certification of accredited investor status as long as the minimum purchase price is high enough and certain other qualifications are met.
Conclusion:
Although not legally binding, the SEC’s acknowledgment that most proof-of-work mining activities are not securities transactions remains a welcome development for the industry. Meanwhile, the potential conclusion of the SEC’s appeal against Ripple carries both positive and negative implications. On one hand, it suggests that the SEC may follow through on ending non-fraud crypto litigation; on the other, it underscores the ongoing uncertainty in crypto rulemaking absent further regulatory clarity. As the Senate and House each work through their own crypto bills and rules, legislative activity around digital assets is likely to remain robust in the near future.

China Releases New Rules Regarding the Use of Facial Recognition Technology

On March 21, 2025, the Cyberspace Administration of China and the Ministry of Public Security jointly released the Security Management Measures for the Application of Facial Recognition Technology (the “Measures”), which will become effective on June 1, 2025. Below is a summary of the scope and certain of the key requirements of the Measures.
Scope of Application of the Measures
The Measures apply to activities using facial recognition technology to process facial information to identify an individual in China. However, the Measures do not apply to activities using facial recognition technology for research or algorithm training purposes in China.
Facial information refers to biometric information of facial features recorded electronically or by other means, relating to an identified or identifiable natural person, excluding information that has been anonymized.
Facial recognition technology refers to individual biometric recognition technology that uses facial information to identify an individual’s identity.
Specific Processing Requirements for Facial Recognition Technology
The Measures include specific processing requirements which must be complied with when activities are in scope of the Measures. These include:

Storage: Facial information must be stored within the facial recognition device and must not be transmitted externally via the Internet, unless the data handler obtains separate consent from the data subject or is otherwise permitted by applicable laws and regulations.
Privacy Impact Assessment (“PIA”): The data handler should conduct a PIA before processing the data.
Public Places: Facial recognition devices may be installed in public places only where the data handler establishes that they are necessary to maintain public security. The data handler must reasonably delimit the facial information collection area and display prominent warning signs.
Restriction: The data handler must not use facial recognition as the sole verification method if another technology can accomplish the same purpose or meet equivalent business requirements.
Filing Requirement: A data handler that processes the facial information of more than 100,000 individuals through facial recognition technology must file with the competent cyberspace authority at the provincial level or higher within 30 business days of reaching that threshold. The filing documents must include, among other things, basic information about the data handler, the purpose and method of processing facial information, the security protection measures taken, and a copy of the PIA. If there is any substantial change to the filed information, the filing must be amended within 30 business days of the change. If use of facial recognition technology is terminated, the data handler must cancel the filing within 30 business days of termination, and the facial information involved must be processed in accordance with the law.

Mexico’s New Personal Data Protection Law: Considerations for Businesses

On March 20, 2025, Mexico’s new Federal Law on the Protection of Personal Data held by Private Parties (FLPPDPP) was published in the Official Gazette of the Federation. Effective March 21, the new law replaces the FLPPDPP published in July 2010.
Among the key changes the decree and the new FLPPDPP introduce is the dissolution of the National Institute of Transparency, Access to Information, and Protection of Personal Data (INAI). Before the decree’s publication, INAI served as an autonomous regulatory and oversight authority for matters related to transparency, information access, and personal data protection. As of March 21, 2025, these responsibilities transfer to the Ministry of Anticorruption and Good Governance (Ministry), a governmental body reporting directly to the executive branch. The Ministry now supervises, oversees, and regulates personal data protection matters.
Related to personal data protection, companies may wish to consider the following points when preparing to comply with the new FLPPDPP:

The definition of “personal data” is amended to remove the previous limitation to natural persons, expanding the scope to any identifiable individual, that is, anyone whose identity can be determined directly or indirectly through any information.
The law now requires that the data subject give consent “freely, specifically, and in an informed manner.”   
Publicly accessible sources are now limited to those the law explicitly authorizes for consultation, provided no restrictions apply, and their consultation is subject only to payment of the applicable fee.
The scope of personal data processing expands to encompass “any operation or set of operations performed through manual or automated procedures applied to personal data, including collection, use, registration, organization, preservation, processing, communication, dissemination, storage, possession, access, handling, disclosure, transfer, or disposal of personal data.”   
As a general rule, the data subject’s tacit consent is deemed sufficient for data processing, unless the law expressly requires obtaining prior explicit consent.   
Regarding the privacy notice, the new FLPPDPP requires data controllers to specify which processing purposes require the data subject’s consent. Additionally, the express obligation to disclose the data transfers the controller carries out is eliminated.
Resolutions the Ministry issues may be challenged through amparo proceedings before specialized judges and courts.

Takeaways

1. Although this amendment does not introduce substantial changes with respect to the obligations of those responsible for processing personal data, companies should review their privacy notices and, if necessary, adjust them to the provisions of the FLPPDPP, including, where appropriate, replacing references to the INAI.
2. If any data protection proceedings were initiated before the INAI while the previous law was in effect, the provisions of the prior law will continue to govern such proceedings, with the exception that the Ministry will now handle them.
3. The executive branch will have 90 days to issue the necessary amendments to the new FLPPDPP regulations. Companies should monitor for the amendments’ publication to identify changes that may impact their compliance obligations under the new law.
