President Signs EO to Restore Gold Standard for Science, Calls for Reevaluation of Biden Administration’s Scientific Integrity Policies
On May 27, 2025, President Trump signed an Executive Order (EO) on “Restoring Gold Standard Science.” 90 Fed. Reg. 22601. The EO states that the Trump Administration “is committed to restoring a gold standard for science to ensure that federally funded research is transparent, rigorous, and impactful, and that Federal decisions are informed by the most credible, reliable, and impartial scientific evidence available.” The EO restores the scientific integrity policies of the first Trump Administration and “ensures that agencies practice data transparency, acknowledge relevant scientific uncertainties, are transparent about the assumptions and likelihood of scenarios used, approach scientific findings objectively, and communicate scientific data accurately.”
Restoring Gold Standard Science
The EO directs the Director of the White House Office of Science and Technology Policy (OSTP), in consultation with the heads of relevant agencies, to issue guidance to agencies within 30 days on implementing “Gold Standard Science” in the conduct and management of their respective scientific activities. The EO defines Gold Standard Science as science conducted in a manner that is reproducible; transparent; communicative of error and uncertainty; collaborative and interdisciplinary; skeptical of its findings and assumptions; structured for falsifiability of hypotheses; subject to unbiased peer review; accepting of negative results as positive outcomes; and without conflicts of interest. Once OSTP publishes the guidance, the EO directs each agency head to promptly update applicable agency policies governing the production and use of scientific information, including scientific integrity policies, to implement the OSTP Director’s guidance. Within 60 days of the publication of OSTP’s guidance, agency heads must report to the OSTP Director on the actions taken to implement Gold Standard Science at their agency.
Improving the Use, Interpretation, and Communication of Scientific Data
Within 30 days after the date of the EO, agency heads and employees must adhere to the following rules governing the use, interpretation, and communication of scientific data, unless otherwise provided by law:
Employees shall not engage in scientific misconduct nor knowingly rely on information resulting from scientific misconduct;
Except as prohibited by law, and consistent with relevant policies that protect national security or sensitive personal or confidential business information (CBI), agency heads shall, in a timely manner and to the extent practicable and within the agency’s authority:
Make publicly available the following information within the agency’s possession:
The data, analyses, and conclusions associated with scientific and technological information produced or used by the agency that the agency reasonably assesses will have a clear and substantial effect on important public policies or important private sector decisions (influential scientific information), including data cited in peer-reviewed literature; and
The models and analyses (including the source code for such models) the agency used to generate such influential scientific information. The EO states that employees may not invoke exemption 5 to the Freedom of Information Act (FOIA) to prevent disclosure of such models unless authorized in writing to do so by the agency head following prior notice to the OSTP Director;
Risk models used to guide agency enforcement actions or select enforcement targets are not information that must be disclosed under this subsection;
When using scientific information in agency decision-making, employees must transparently acknowledge and document uncertainties, including how uncertainty propagates throughout any models used in the analysis;
Where employees produce or use scientific information to inform policy or legal determinations, they must use science that comports with the legal standards applicable to those determinations, including when agencies evaluate the realistic or reasonably foreseeable effects of an action;
Employees must be transparent about the likelihood of the assumptions and scenarios used. The EO states that “[h]ighly unlikely and overly precautionary assumptions and scenarios should only be relied upon in agency decision-making where required by law or otherwise pertinent to the agency’s action”;
When scientific or technological information is used to inform agency evaluations and subsequent decision-making, employees shall apply a “weight of scientific evidence” approach;
Employees’ communication of scientific information must be consistent with the results of the relevant analysis and evaluation and, to the extent that uncertainty is present, the degree of uncertainty should be communicated. The EO notes that “[c]ommunications involving a scientific model or information derived from a scientific model should include reference to any material assumptions that inform the model’s outputs”; and
Once the guidance on Gold Standard Science is established and promulgated, it shall, among other things, form the basis for employees’ evaluation of all scientific and technological information called for in the EO except where otherwise required by law.
Interim Scientific Integrity Policies
Until the issuance of updated agency scientific integrity policies, the EO states that scientific integrity in each agency must be governed by the scientific integrity policies that existed within the executive branch on January 19, 2021. The EO directs agency heads to take all necessary actions to reevaluate and, where necessary, revise or rescind scientific integrity policies or procedures, or amendments to such policies or procedures, issued between January 20, 2021, and January 20, 2025. Under the EO, each agency head must promptly revoke any organizational or operational changes, designations, or documents that were issued or enacted pursuant to the Presidential Memorandum of January 27, 2021 (Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking), which was revoked by EO 14154, and must conduct applicable agency operations, and revert applicable agency organization, to the same form as would have existed in the absence of such changes, designations, or documents.
In updating applicable scientific integrity policies, the EO directs agencies to ensure they:
Encourage the open exchange of ideas;
Provide for consideration of different or dissenting viewpoints; and
Protect employees from efforts to prevent or deter consideration of alternative scientific opinions.
Agencies must review agency actions taken between January 20, 2021, and January 20, 2025, including regulations, guidance documents, policies, and scientific evaluations, and take all appropriate steps, consistent with law, to ensure alignment with the policies and requirements of the EO.
Scope and Applicability
The policies and rules set forth in the EO apply to all employees involved in the generation, use, interpretation, or communication of scientific information, regardless of job classification, and to all agency decision-making. Agency heads and employees must, to the extent practicable and consistent with applicable law, require agency contractors to adhere to these policies and rules as though they were agency employees. The EO’s policies and rules govern the use of science that informs agency decisions, but the EO notes that “they are not applicable to non-scientific aspects of agency decision-making.”
Enforcement and Oversight
The EO requires each agency head to establish internal processes to evaluate alleged violations of the requirements of the EO and other applicable agency policies governing the generation, use, interpretation, and communication of scientific information. Such processes will be the responsibility of, and administered under the direction of, a senior appointee designated by the agency head and shall provide for taking appropriate measures to correct scientific information in response to violations, consistent with the requirements and procedures of the Information Quality Act (IQA, Section 515 of Public Law 106-554). According to the EO, the designated senior appointee may also forward potential violations to the relevant human resources officials for discipline to the extent the potential violation also violates applicable agency policies and procedures. The designated senior appointee may consult appropriate officials with scientific expertise when establishing such processes.
Commentary
There is no serious disagreement with the idea of conducting and relying upon quality science. Quality science is non-partisan. The challenge is not with the goal; it is with defining “best available science,” which, like so many qualitative terms, is in the eye of the beholder. Too often, individuals rely on preferred science. It is human nature to be more open to data that confirm your perspective and less receptive to data that refute your view. Scientists must remain open to different views and different interpretations of data. Doing otherwise fundamentally undermines science.
The objection to “secret science” must also be carefully explored. It can be used to diminish the value of quality studies even though there are legitimate reasons for information in those studies to be maintained as confidential. Most would agree that individual identities in an epidemiological study, for example, are legitimately confidential based on individual privacy concerns. In a different case, the name of the sponsor that funded a study conducted according to Good Laboratory Practice (GLP) standards is not needed to evaluate the quality of the study. GLP protocols were established to minimize a study sponsor’s influence on the outcome or interpretation of a study. GLP protocols are arguably more protective of the best available science than peer review. One can reasonably assume that the sponsor of a GLP study has a financial interest in that study, so knowing the specific identity of the sponsor neither adds nor detracts from another’s interpretation of the study. If data are only valid if they align with your views, you are not relying on the best available science. We hope that the EO will be heeded by agencies and departments, and that decisions will be based on the best available science.
We, like many others, struggle to reconcile the Administration’s stated commitment to science with its dramatic reduction in the executive branch’s scientific expertise. These are two realities that are difficult to rationalize. Can the goal be achieved when the means are undermined? Federal science agencies have been a bastion of outstanding science and scientists. Even if there are some examples of science generated by federal efforts (in-house or through contracts and grants) not perfectly meeting the standard of “best available science,” the solution can only be realized through better science.
FTC Permanently Bans Debt Collector for UDAP and FDCPA Violations
On April 30, the FTC filed a stipulated order for permanent injunctive relief and a monetary judgment against a Georgia-based debt collection company and its owner, which the court granted on May 9, to resolve allegations that the company used false claims, threats, and harassment to collect more than $7.6 million in bogus debts.
The FTC’s complaint alleged violations of Section 5(a) of the FTC Act, the Fair Debt Collection Practices Act (FDCPA) and Regulation F, the Gramm-Leach-Bliley Act, and the FTC’s Impersonation Rule. Under the order, the defendants are permanently banned from participating in debt collection or brokering activities. The judgment imposes $9.6 million in monetary relief, which was partially suspended based on the defendants’ inability to pay.
The FTC alleged the company engaged in several unlawful practices, including:
Making false claims. The company allegedly fabricated or misrepresented debts to extract payments from consumers.
Threatening consumers with arrest or lawsuits. Consumers were told they would face arrest, wage garnishment, or civil litigation unless they paid immediately.
Harassing consumers and family members. The company made repeated, unsolicited calls and contacted relatives to pressure consumers into paying.
Obtaining financial information through false pretenses. The company misrepresented its purpose to gain access to consumers’ bank accounts and personal data.
Pretending to be affiliated with other businesses. The company used fictitious names and falsely claimed to represent, or be associated with, legitimate lenders or mediation firms.
Putting It Into Practice: The enforcement action highlights the FTC’s ongoing focus on UDAP violations, particularly those involving threats, impersonation, or deception (previously discussed here). Debt collectors and affiliated vendors should ensure their practices comply not only with the FDCPA and Regulation F, but also with broader federal UDAP standards and the FTC’s Impersonation Rule.
Is Your Website a Legal Target? Why Chatbots, Cookies + AdTech Are Drawing Lawsuits Under an Old California Law
It’s 2025, and somehow, we’re still dealing with lawsuits over a law born in the era of pen registers and rotary phones. That law, the California Invasion of Privacy Act (CIPA), is a decades-old statute that has suddenly found new life in the digital age, and it could put your company in legal crosshairs based on its website and tracking technology.
Over the past year, we’ve seen a sharp uptick in demand letters and litigation targeting businesses over alleged privacy violations tied to digital website tools like:
Chatbots and live chat features
Website analytics tools
Ad campaign tracking (Meta Pixel)
Social media plugins and integrations
In many of these cases, plaintiffs allege that businesses are “eavesdropping” on users, all under the theory that using these technologies without their consent violates CIPA.
Enacted in 1967, CIPA outlawed wiretapping and pen registers, tools used to monitor telephone calls and communication metadata.
Fast forward to today: plaintiffs are arguing that third-party tracking cookies, IP address collection, session replays, and chatbots serve as modern-day equivalents of those old-school surveillance devices. And, surprisingly, some courts are letting these arguments move forward.
What can you do to avoid these types of claims? First, ask yourself some basic questions:
Do you operate a website or mobile app?
If yes, you’re already in the conversation. These are the primary platforms where privacy issues pop up.
Do you use a chatbot or live chat feature?
If you’ve installed any customer support chat tool, even through a third-party vendor, you could be logging and transmitting data that CIPA litigants say violates user privacy.
Are you using web analytics, ad tracking, or social media plugins?
These tools often track user behavior via cookies, beacons, or IP logs, which are now being challenged as CIPA violations.
Does your website have a privacy policy?
If so, is it up-to-date and accurate? A vague or outdated policy can hurt you more than it helps.
Do you have a cookie notice and consent mechanism?
Simply saying “we use cookies” isn’t enough anymore. Laws increasingly require clear disclosures and opt-in mechanisms, especially in California and Europe.
Does your chatbot have a disclaimer?
Users should know what data is collected via chat and how it’s used. Having no disclaimer could be a big risk.
What actions can you take?
Update your privacy policy: make sure it reflects all current data practices, including chat features, tracking tools, and any third-party sharing, and that it is compliant with applicable consumer privacy rights laws.
Give notice and get consent: for tools like analytics and targeted advertising, disclosure is key. In some jurisdictions, prior consent is required before deploying any tracking technology; one common gating pattern is sketched just after this list.
Review your chat tools: add a disclaimer or notification to users when they engage with chat features, explaining how their data is handled.
Rethink your tech stack: not all third-party vendors are created equal. Vet your service providers, understand their data practices, and ensure contracts include privacy and indemnification clauses.
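To make the notice-and-consent point concrete, here is a minimal sketch of the gating pattern referenced above: no tracking script loads until the user opts in. This is an illustration under stated assumptions, not a recommendation of any consent-management product; the function names and storage key are illustrative.

```typescript
// Minimal consent gate: the tracking script loads only after an explicit
// opt-in. All names here (the "tracking-consent" key, loadMetaPixel) are
// illustrative assumptions, not a standard.

function hasStoredConsent(): boolean {
  // A real implementation would also record a timestamp and the
  // privacy policy version the user agreed to.
  return localStorage.getItem("tracking-consent") === "granted";
}

function recordConsent(granted: boolean): void {
  localStorage.setItem("tracking-consent", granted ? "granted" : "denied");
  if (granted) loadMetaPixel();
}

function loadMetaPixel(): void {
  // Inject the third-party script only after consent exists; until then,
  // no cookies are set and no beacons fire.
  const script = document.createElement("script");
  script.async = true;
  script.src = "https://connect.facebook.net/en_US/fbevents.js";
  document.head.appendChild(script);
}

// On page load: do nothing unless the user previously opted in. Otherwise,
// show a consent banner whose buttons call recordConsent(true/false).
if (hasStoredConsent()) {
  loadMetaPixel();
}
```

The design point is the default: tracking stays off until an affirmative choice is recorded, which is the opt-in posture California and European regimes increasingly expect.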
These CIPA (or trap and trace) lawsuits are not fringe cases anymore. They’re part of a broader wave of privacy litigation targeting the ad tech ecosystem. The claims may sound like a stretch, but courts are entertaining them. Businesses that don’t stay ahead of these developments may find themselves paying to settle lawsuits they didn’t even see coming.
If your business touches user data online, you can’t afford to ignore these issues. A proactive approach to privacy and transparency is no longer optional.
The BR Privacy & Security Download: June 2025
On May 5, 2025, the newest Commissioner of the Federal Trade Commission (FTC), Mark R. Meador, spoke at the Second Annual Antitrust Conference at George Washington University.
His prepared remarks offer insight into his approach to antitrust enforcement, addressing what he sees as common antitrust enforcement myths.
In dispelling the first myth—“antitrust is regulation”—the Commissioner is very clear: “Antitrust is law enforcement, period. Full stop.”
He similarly and succinctly rejects four other myths related to antitrust enforcement:
“Vertical integration is always procompetitive.” Commissioner Meador makes the contrary case that vertical integration is not always procompetitive, particularly in non-physical markets such as technology.
“Innovation can justify exclusion.” The Commissioner instead asserts the need to identify conduct that forecloses alternatives.
“We need national champions to compete with China.” The Commissioner suggests, to the contrary, that competition is better suited by free enterprise.
“Structural remedies are an extreme measure.” He counters that structural remedies can be a way to restore free markets.
Commissioner Meador concludes his comments with what might be seen as a policy warning, making clear that the current FTC’s interest in antitrust enforcement is not limited to technology platforms or “Big Tech,” but extends to every industry, including “groceries, healthcare, and energy.”
Additional Authors: Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin.
What is Agentic AI? A Primer for Legal and Privacy Teams
As companies begin to move beyond large language model (LLM)-powered assistants into fully autonomous agents—AI systems that can plan, take actions, and adapt without human-in-the-loop—legal and privacy teams must be aware of the use cases and the risks that come with them.
What is Agentic AI?
Agentic AI refers to AI systems—often built using LLMs but not limited to them—that can take independent, goal-directed actions across digital environments. These systems can plan tasks, make decisions, adapt based on results, and interact with software tools or systems with little or no human intervention.
Agentic AI often blends LLMs with other components like memory, retrieval, application programming interfaces (APIs), and reasoning modules to operate semi-autonomously. It goes beyond chat interfaces and can initiate real actions—inside business applications, internal databases, or even external platforms.
For example:
An agent that processes inbound email, classifies the request, files a ticket, and schedules a response—all autonomously.
A healthcare agent that transcribes provider dictations, updates the electronic health record, and drafts follow-up communications.
A research agent that searches internal knowledge bases, summarizes results, and proposes next steps in a regulatory analysis.
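For legal and privacy readers who want a mental model of what “autonomous” means here, the sketch below distills the plan-act-observe loop these examples share. It is a deliberately simplified illustration; the LLM interface and tool names are hypothetical stand-ins, not any vendor’s actual API.

```typescript
// A bare-bones agent loop: the model proposes an action, the runtime
// executes it, and the observation feeds the next step. The LLM interface
// and tool names are hypothetical stand-ins for illustration.

type Action =
  | { tool: "searchKnowledgeBase"; query: string }
  | { tool: "fileTicket"; summary: string }
  | { tool: "done"; result: string };

interface LLM {
  propose(goal: string, history: string[]): Promise<Action>;
}

async function runAgent(llm: LLM, goal: string, maxSteps = 10): Promise<string> {
  const history: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const action = await llm.propose(goal, history);        // plan
    if (action.tool === "done") return action.result;       // stop condition
    const observation = await execute(action);              // act
    history.push(JSON.stringify({ action, observation }));  // observe, adapt
  }
  throw new Error("Agent exceeded its step budget without finishing");
}

async function execute(action: Action): Promise<string> {
  // In a real deployment each branch would call a live system; stubbed here.
  return `executed ${action.tool}`;
}
```

The legally significant feature is the loop itself: no human reviews each iteration, so the questions below about permissions, logging, and oversight all attach to that loop.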
These systems aren’t just helping users write emails or summarize docs. In some cases, they’re initiating workflows, modifying records, making decisions, and interacting directly with enterprise systems, third-party APIs, and internal data environments. Here are a handful of issues that legal and privacy teams should be tracking now.
System Terms of Use Are (Still) Built for Humans
Most third-party platforms—whether cloud apps, SaaS platforms, enterprise tools, or APIs—were not designed for autonomous agents. Terms of service often restrict access to human users or prohibit automated tools altogether. When an agent accesses systems, modifies records, makes queries, or connects data across systems, that may breach current contractual limits.
Takeaway: Review system terms and licensing agreements for automation restrictions. If you plan to deploy agents, negotiate permission or amend access terms in writing.
Liability Flows Through You
If an agent triggers errors—like deleting records, misusing credentials, overloading systems, or pulling data in ways that breach policy—your company is still responsible. There’s rarely contractual coverage or indemnity language that contemplates AI acting on your behalf across platforms.
Takeaway: Treat agentic systems like high-privilege users. You need clear boundaries around what they’re allowed to do and internal accountability for oversight.
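One way to enforce those boundaries is to put every tool call behind an explicit per-agent allowlist, as in this hypothetical sketch (the tool names and policy shape are assumptions for illustration):

```typescript
// Per-agent allowlist: a tool call outside the granted set is refused
// before it reaches any live system. Tool names are illustrative.

type Tool = "readRecord" | "updateRecord" | "deleteRecord" | "sendEmail";

interface AgentPolicy {
  agentId: string;
  allowedTools: ReadonlySet<Tool>;
}

function authorize(policy: AgentPolicy, tool: Tool): void {
  if (!policy.allowedTools.has(tool)) {
    throw new Error(`Agent ${policy.agentId} is not permitted to call ${tool}`);
  }
}

// Example: a support-triage agent may read records and send email,
// but may never update or delete.
const supportAgent: AgentPolicy = {
  agentId: "support-triage-01",
  allowedTools: new Set<Tool>(["readRecord", "sendEmail"]),
};

authorize(supportAgent, "sendEmail");       // allowed
// authorize(supportAgent, "deleteRecord"); // would throw
```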
Privacy Impacts Are Underexplored
Agentic AI creates new data privacy exposure. These systems may access sensitive data, make inferences, or combine data sources in ways your existing data processing agreements and compliance processes don’t cover. In addition, they often operate without strong logs, making audit or breach response difficult.
Takeaway: Treat autonomous agents as processors. Run data protection impact assessments. Map data access and flow. Limit scope and tie all actions to traceable logs.
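As one illustration of tying actions to traceable logs, the sketch below wraps every tool invocation so an append-only audit record is written whether the call succeeds or fails. The record fields are assumptions, not a standard schema.

```typescript
// Append-only audit trail for agent actions: every invocation is recorded
// with inputs, outcome, and timestamp so a later review (or breach
// response) can reconstruct what the agent did. Fields are illustrative.

interface AuditRecord {
  agentId: string;
  tool: string;
  args: unknown;
  outcome: "ok" | "error";
  detail: string;
  at: string; // ISO timestamp
}

const auditLog: AuditRecord[] = []; // in practice: a durable, append-only store

async function audited<T>(
  agentId: string,
  tool: string,
  args: unknown,
  call: () => Promise<T>
): Promise<T> {
  try {
    const result = await call();
    auditLog.push({ agentId, tool, args, outcome: "ok",
      detail: String(result), at: new Date().toISOString() });
    return result;
  } catch (err) {
    auditLog.push({ agentId, tool, args, outcome: "error",
      detail: String(err), at: new Date().toISOString() });
    throw err; // surface the failure only after recording it
  }
}
```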
Regulators Will Expect You to Stay in Control of Agents’ Decision-Making and Data Processing
If an AI agent makes decisions that affect consumers, processes personal data, or impacts fairness or transparency, it can implicate a variety of laws and regulations: the FTC Act and state unfair and deceptive acts and practices laws; privacy laws and regulations, including the GDPR and the CCPA’s forthcoming automated decision-making technology regulations; and AI-specific laws and regulations such as the Colorado AI Act and the EU AI Act. Your company is responsible for ensuring your agents’ compliance with these and other laws and regulations. That said, the U.S. appears headed toward a hands-off, pro-business regulatory approach to AI. This is certainly true at the federal level, and perhaps at the state level as well if the One Big Beautiful Bill Act (H.R. 1), which purports to ban states from regulating AI for 10 years, is passed by Congress.
Exemplary enforcement hooks:
AI agents making consequential decisions on your behalf without providing proper notice or privacy rights
Material deception about whether humans or AI are making decisions
Use of sensitive data in ways that violate prior notice or consent
Lack of accountability for outcomes from automated systems
Takeaway: If an agent operates on consumer-facing data or makes consequential decisions, treat it like any other high-risk algorithm. Monitor, test, disclose.
Audit and Explainability Gaps Are Real
You may not be able to explain why an agent did what it did—because the system is goal-directed, not rule-bound. Many enterprise systems don’t separate human and agent activity, and internal logs may be incomplete.
Takeaway: Layer audit and observability more broadly than just the system the agent touches. Require rollbacks, alerts, and human override paths.
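A human override path can be as simple as pausing for explicit sign-off before anything irreversible executes, as in this hypothetical sketch (which actions count as irreversible, and how approval is delivered, are assumptions that will vary by deployment):

```typescript
// Irreversible actions pause for human sign-off; everything else proceeds.
// The irreversible set and the approval channel are illustrative.

const IRREVERSIBLE = new Set(["deleteRecord", "sendPayment", "updateRecord"]);

interface Approver {
  confirm(description: string): Promise<boolean>; // e.g., a review-queue UI
}

async function maybeExecute(
  tool: string,
  run: () => Promise<void>,
  approver: Approver
): Promise<void> {
  if (IRREVERSIBLE.has(tool)) {
    const ok = await approver.confirm(`Agent requests: ${tool}`);
    if (!ok) {
      console.warn(`Human reviewer declined ${tool}; action skipped`);
      return; // nothing executed, nothing to roll back
    }
  }
  await run();
}
```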
No One Owns This Yet
Agentic AI is crossing boundaries—legal, privacy, InfoSec, and engineering. But without a designated policy owner, these tools could be deployed ad hoc without legal input.
Takeaway: Create a simple policy for agent deployment approvals, access controls, and post-deployment reviews. Assign a directly responsible individual.
The Bottom Line
Agentic AI isn’t theoretical. It’s being shipped into business operations quietly—through pilot projects, dev team prototypes, and platform-native tooling. Legal and privacy teams should engage early, set guardrails, and help the business use these systems responsibly.
When Satire Meets Statute: The Onion’s VPPA Class Action
Video Privacy Protection Act (VPPA) class action lawsuits have been on the rise, and the owner of The Onion, a popular satire site, finds itself the subject of a recent one. On May 16, 2025, a plaintiff initiated litigation against Global Tetrahedron, LLC, the owner of The Onion, alleging that the defendant installed the Meta Pixel on its website, which hosts videos for streaming, without user knowledge.
The plaintiff alleges that, unbeknownst to consumers, the Meta Pixel tracks users’ video consumption habits “to build profiles on consumers and deliver targeted advertisements to them.” According to the complaint, the Meta Pixel is configured to collect HTTP headers, which contain IP addresses, information about the user’s web browser, page location, and document referrer (the URL of the previous document or page that loaded the current one). Since the Meta Pixel is reportedly attached to a user’s browser, “if the user accesses Facebook.com through their Safari browser, then moves to theonion.com after leaving Facebook, the Meta Pixel will continue to track that user’s activity on that browser.” The complaint also alleges that the Meta Pixel collects a Meta-specific value called the c_user cookie, which is a unique user ID for users logged into Facebook. By combining these data points, the complaint asserts, the Onion transmits personally identifiable information to Meta.
In a novel approach, the complaint uses screenshots of the plaintiff’s ChatGPT conversation to demonstrate how ChatGPT can help an ordinary user decipher what information is allegedly being disclosed to Meta through the Onion website. According to the screenshots, when the plaintiff asked ChatGPT how to check if a website was disclosing their browsing activity to Meta, the plaintiff was directed to use developer tools to inspect the page’s network traffic. Each internet browser has an integrated developer tool, which allows developers to analyze network traffic, measure performance, and make temporary changes to a page. Any website user can open the developer tool, as ChatGPT directed the plaintiff to do.
Following ChatGPT’s instructions, the plaintiff reportedly opened the developer tool page for the Onion website. Then, the plaintiff uploaded a screenshot of the Onion’s developer tool onto ChatGPT. ChatGPT analyzed the request in the screenshot and broke down the parameters contained within, including Pixel ID, Page Views, URL, and Facebook cookie ID. Many VPPA complaints in recent months have described the technical processes behind tracking technologies, but by using ChatGPT in this complaint, the plaintiff underscores how such large language model tools can help an average website user decipher seemingly complex technical concepts and better understand the data flows from tracking technologies.
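For readers who want to replicate the complaint’s exercise without ChatGPT, the sketch below performs the same decomposition on a simplified, hypothetical pixel request URL of the kind visible in a browser’s network tab. The sample URL and parameter readings are illustrative assumptions, though the fields mirror the parameters the complaint describes.

```typescript
// Decompose a captured tracking-request URL into its query parameters,
// the same step the complaint performed via developer tools. The sample
// URL and parameter readings are simplified assumptions for illustration.

const captured =
  "https://www.facebook.com/tr?id=123456789&ev=PageView" +
  "&dl=https%3A%2F%2Ftheonion.com%2Fsome-article&rl=https%3A%2F%2Ffacebook.com";

const url = new URL(captured);
const params = Object.fromEntries(url.searchParams.entries());

console.log({
  pixelId: params["id"], // which account the event is attributed to
  event: params["ev"],   // e.g., PageView
  pageUrl: params["dl"], // the page the user was viewing
  referrer: params["rl"] // the previous page (document referrer)
});
```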
The case reflects a broader trend in VPPA litigation, in which plaintiffs are challenging the use of third-party tracking technologies on sites that offer any form of video content. As VPPA litigation evolves, this case could peel back another layer of risk for publishers across industries providing video streaming content.
Google Releases June Security Bulletin for Android Devices to Fix Vulnerabilities
Google recently issued its June Android Security Bulletin that is designed to patch 34 vulnerabilities, all of which Google designates as high-severity defects. The most serious flaw the patch is designed to fix in the Android system would allow threat actors “to achieve local escalation of privilege with no additional privileges required.” The bulletin contains two security patch levels so that “Android partners have the flexibility to fix a subset of vulnerabilities that are similar across all Android devices more quickly.”
The bulletin provides common questions and answers for Android users, including how to determine if your device is updated, why it has two security patch levels, how to read its entries, and how to update Google Play.
Google states, “Android partners are encouraged to fix all issues in this bulletin and use the latest security patch level.”
If you own an Android device, confirm that you have patched these vulnerabilities as soon as possible.
TikTok’s Motion to Dismiss Denied by NY State Court
New York Attorney General Letitia James and 13 other Attorneys General filed suit in October 2024 against TikTok “for misleading the public about the safety of its platform and harming young people’s mental health.” TikTok moved to dismiss the case and, on May 28, 2025, New York Supreme Court Judge Anar Rathod Patel denied the motion.
The denial of TikTok’s motion to dismiss allows the New York case to move forward and may serve as persuasive authority in TikTok’s other efforts to dismiss the cases filed by over a dozen Attorneys General nationwide.
Adidas and UChicago Sued Over Data Breaches Caused by Third-Party Vendors
What do a global sportswear giant and a prestigious medical center have in common? Apparently, a shared struggle: defending data breach lawsuits over breaches of sensitive personal information caused by third-party vendors.
This week, Adidas America and the University of Chicago Medical Center found themselves on the receiving end of data breach lawsuits. The plaintiffs say both organizations failed to keep their personal info safe, and now want the courts to step in. According to the complaints, Adidas customer Karim Khowaja and UChicago patients Alta Young and Judy Rintala are calling out the companies for what they claim were lax data protection practices that led to their sensitive personal information falling into the wrong hands. Their key argument? The organizations should have known—and done—better.
Khowaja’s lawsuit alleges that Adidas provided a notification of the data breach that left customers with more questions than answers. Khowaja claims that Adidas did not identify the third-party vendor involved, what data was accessed, or when the breach occurred. Further, Khowaja claims this is not Adidas’ first data security blunder—he points back to a 2018 breach as proof the company should have been more vigilant.
“The more accurate pieces of data an identity thief obtains about a person, the easier it is… to take on the victim’s identity,” Khowaja warns in his complaint.
The same allegations are being directed at the University of Chicago Medical Center. According to Young and Rintala, the hospital didn’t discover the breach until ten months after suspicious activity was first detected—by its financial services vendor, National Recovery Services LLC (NRS). Young’s lawsuit claims the breach affected 38,000 patients, and Rintala’s goes further, alleging that the hospital didn’t encrypt or redact any of the compromised data—leaving names, birth dates, and other sensitive information widely available to cybercriminals. “That ‘utter failure’ will present risks to patients for their respective lifetimes,” Rintala claims.
All three plaintiffs are looking to represent classes of similarly affected individuals and are asking for damages and injunctive relief. Each plaintiff also emphasizes the “real-world” costs of these breaches: time, money, and the emotional stress of trying to prevent identity theft or fraud.
These lawsuits highlight a growing trend: courts being asked to hold companies accountable for third-party vendor breaches. It raises an important question: How far does the responsibility go when it comes to data security? It may be as simple as: if you use a third-party vendor who has access to or maintains sensitive personal information, there is a known risk. Here, a “known risk” refers to a security vulnerability or threat that a reasonable organization should have been aware of—either through industry standards, past incidents, or internal warnings—and failed to adequately address.
In the UChicago case, Young argues that the medical center knew about the risks of working with external vendors like NRS, especially since the kind of breach that occurred is a common method of attack in healthcare data security:
Healthcare is a top target for hackers due to the volume of sensitive personal and financial data. This isn’t new—HIPAA guidance and cybersecurity advisories have warned about it for years.
NRS discovered “suspicious activity” ten months prior to informing UChicago.
The plaintiffs say this delay, paired with the lack of encryption or redaction, shows UChicago failed to properly vet or monitor its vendor—even though outsourcing doesn’t relieve it of responsibility under HIPAA and other regulations.
In Khowaja’s complaint, he makes a similar argument: Adidas previously experienced a breach. So, when it happened again—this time via a third-party customer service provider—he says the company can’t plead ignorance:
Adidas “knew or should have known” that outsourcing customer service introduced a risk of exposure.
Despite that, they allegedly didn’t put in the necessary safeguards to protect customer data or notify affected users with enough information to respond.
Again, the argument isn’t just about the breach itself—it’s about Adidas’ failure to anticipate a risk they’d already seen firsthand.
If the courts agree that failure to safeguard against a “known risk” is enough to trigger liability, we could see more plaintiffs lining up in similar cases across industries for incidents caused by third-party vendors.
Blockchain+ Bi-Weekly; Highlights of the Last Two Weeks in Web3 Law: June 5, 2025
The most important development of the last two weeks is likely the release of a revised bipartisan digital asset market structure bill in Congress, which now gives real momentum to the possibility of comprehensive legislation. At the same time, the SEC is continuing to reposition its posture, pulling back from aggressive litigation, acknowledging areas outside its jurisdiction such as staking, and signaling a more measured approach as we await the first report from its new Crypto Task Force. Meanwhile, the courts continue to shape the legal boundaries of decentralized finance, as seen in the closely watched ruling overturning fraud charges in the Mango Markets case.
These developments and a few other brief notes are discussed below.
Bipartisan Market Structure (“CLARITY Act”) Bill Text Released: May 29, 2025
Background: After releasing draft language of an unnamed market structure bill a few weeks ago, a revised and now titled version, the CLARITY Act, dropped last week. Sponsored by House Financial Services Committee Chair French Hill, the bill has five Republican and three Democratic co-sponsors, all members of either the House Financial Services or House Agriculture Committees. It is expected to be fast-tracked for markup in the Financial Services Committee, as early as June 10th, so this could move quickly through committees. Broader House timing remains unclear, however, as Congressional attention is divided among numerous competing priorities beyond digital asset regulation.
Analysis: The sponsors appear to have seriously considered industry feedback, and several technology-specific issues flagged in the prior version were meaningfully addressed. For example, the definition of “Decentralized Finance Trading Protocol,” which many criticized as overly broad, has been revised and now more closely tracks the drafters’ likely intent. There was a hearing earlier this week in the House Financial Services Committee (which we will cover in the next Bi-Weekly update), which was designed to discuss digital asset regulation more broadly but focused heavily on this bill as well.
SEC Releases Guidance That Certain Proof of Stake Staking Activities Do Not Implicate Securities Laws: May 29, 2025
Background: The SEC Division of Corporation Finance put out a “Statement on Certain Protocol Staking Activities” clarifying its view that certain proof-of-stake blockchain protocol “staking” activities are not securities transactions within the scope of the federal securities laws. This follows related guidance on Proof-of-Work mining, which was issued in March. “Accordingly, it is the Division’s view that participants in Protocol Staking Activities do not need to register with the Commission transactions under the Securities Act or fall within one of the Securities Act’s exemptions from registration in connection with these Protocol Staking Activities.”
Analysis: This likely clears the way for staking in ETH ETFs or other ETFs linked to proof-of-stake blockchain assets, which may be approved in the near future (although there are still tax and other securities law issues that could make this complicated). It is unclear how this might affect the prior Kraken consent order, as many of the staking services offered by Kraken now appear to be “Ancillary Services” under this guidance. It is great to see all this guidance coming out, but until the guidance is formalized into rulemaking or there is action from Congress in this area, the industry is left with few, if any, assurances that these viewpoints will continue under different leadership.
SEC Moves to Dismiss Binance Case with Prejudice: May 29, 2025
Background: The SEC has asked the Court to dismiss the agency’s case against the various Binance entities and its founder, Changpeng Zhao (“CZ”), with prejudice, which would bring an end to the cases brought under the prior administration against the biggest U.S. digital asset exchanges, which we have been covering on the BiBlog. This follows previously dismissing cases against Coinbase and Kraken and closing investigations into OpenSea, Circle, and others shortly after the change in administration and resignation of prior SEC Chair Gary Gensler.
Analysis: As we noted in our 2024 year-end digital asset rundown, the cases against various exchanges were bet-the-company litigation for all the exchanges sued. If it was ruled that sales on the platforms of exceedingly common tokens like SOL were securities transactions, that would have made it difficult for most individuals to transact in digital assets in the United States, particularly those lacking experience interacting with decentralized finance. With these lawsuits behind the exchanges, all eyes turn to formal guidance and rulemaking from the SEC/CFTC and whether there will be comprehensive digital asset legislation out of Congress, which is currently being considered by both chambers.
Conviction Overturned in Mango Markets Exploit: May 23, 2025
Background: District Court Judge Arun Subramanian has overturned the fraud convictions against Mango Markets exploiter Avraham (“Avi”) Eisenberg, ruling that venue was improper since there was no evidence that the routing engine for Avi’s trades was in New York. The more interesting ruling, though, was the finding that there was insufficient evidence of falsity to support a wire-fraud charge (see ruling starting at pg. 26). The Court ruled that because the user terms and conditions didn’t make intent to repay a condition of borrowing, and because Avi didn’t make any false representations about the value of his assets (he just exploited an oracle into making those false representations for him), the government could not support a fraud conviction, ruling “[o]n a platform with no rules, instructions, or prohibitions about borrowing, the government needed more to show that Eisenberg made an implicit misrepresentation by allowing the algorithm to measure the actual value of his collateral.”
Analysis: This case raises broader questions about what level of human interaction is needed for “wire fraud” where the alleged fraud is primarily being perpetrated against an algorithm and not a person. There remains the issue that Avi sued Numeris, Ltd. before the Mango Markets trading activities, claiming it was fraud for others to artificially increase the price of tokens to borrow against knowingly inflated values, similar to what Avi did in his exploit. It seems disingenuous to claim “code is law” for his own actions when he previously asked the government to save his funds after a protocol he was using suffered a similar exploit. Avi is still going to jail on other charges to which he pled guilty. It will be interesting to see how the case law develops on the extent to which “code is law” holds up in the use of permissionless protocols.
Briefly Noted:
401K and Bitcoin Reserve Updates: The Department of Labor has retracted guidance discouraging retirement managers from considering cryptocurrency as an investment option in 401(k) plans. This came as White House Crypto Czar David Sacks was at a major Bitcoin conference in Las Vegas, where he talked about how the announced Bitcoin strategic reserve is progressing.
Reputational Risk Ban Passes House Committee: The House Financial Services Committee advanced on a 33-19 bipartisan vote a bill that would prohibit federal banking agencies from considering “reputational risk” when supervising, examining, or regulating depository institutions.
SEC Crypto Task Force Updates: The SEC is set to release its first Crypto Task Force Report in the upcoming months; meanwhile Commissioner Peirce delivered a great speech about the importance of the SEC setting clear rules of the road for the space (including noting where the SEC doesn’t have jurisdiction).
Emmer and Torres Reintroduce Right to Code Law: Tom Emmer (R-MN) and Ritchie Torres (D-NY) have reintroduced legislation that would protect developers and providers of non-custodial blockchain software from being classified as money transmitters. This would be huge in convincing developers to stay in the United States when developing blockchain-enabled technologies.
CFTC U.S. Persons Guidance: The CFTC put out some helpful guidance on what they consider to be U.S. persons subject to CFTC jurisdiction in an internet age. This guidance provides that where the company’s high-level officers primarily direct, control, and coordinate the company’s activities is most important for determining whether the company is considered a domestic entity for CFTC jurisdictional purposes.
SafeMoon CEO Found Guilty of Fraud: Braden Karony, the former CEO of SafeMoon, was convicted on three counts of fraud after being found to have diverted millions of tokens, which he said were “locked,” and sold those tokens for personal gain.
Investment Company Act Status of ETFs Questioned: The SEC Division of Investment Management, in a letter to a crypto ETF operator, stated that, in light of recent developments, it is unsure whether the ETFs are investment companies that can register under the Investment Company Act of 1940. Generally, a company wouldn’t be an investment company if, among other things, less than 40% of its assets constituted investment securities. Registration statements, application requirements, and ongoing reporting requirements are different for investment companies and other issuers, and certain crypto ETFs (including Bitcoin ETFs) already register as non-investment companies. This suggests the SEC may be exploring rule changes more tailored to this type of entity.
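As a hypothetical illustration of how the 40% threshold operates: an issuer with $100 million in total assets (excluding government securities and cash) would generally fall outside this prong of the definition if it held less than $40 million in investment securities; for a crypto ETF, the threshold question is whether the underlying crypto assets count as investment securities at all.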
Conclusion:
These developments mark a potential turning point in the digital asset regulatory landscape. With Congress moving forward on bipartisan legislation like the CLARITY Act and federal agencies such as the SEC and CFTC issuing meaningful (if still preliminary) guidance, the pieces of a more coherent framework are starting to take shape. However, the regulatory environment remains fragmented and uncertain, especially absent formal rulemaking or statutory clarity. As agencies shift direction and courts weigh in on key enforcement matters, market participants should remain vigilant, engage with regulators, and prepare for a fast-evolving legal landscape where the line between code and law continues to be tested.
DJI Says “Bring It On” to U.S. Drone Security Scrutiny
In a surprising move, China-based DJI, the world’s largest drone manufacturer, is not flinching at the prospect of tighter U.S. restrictions on Chinese drone companies. In fact, they’re embracing it.
The Trump administration is currently finalizing executive orders that could seriously shake up the commercial drone landscape in the U.S. These potential measures would require companies like DJI and its competitor Autel to undergo national security reviews before selling new drone models in the U.S.
You might think DJI would be sounding the alarm—but instead, they’re rolling out the welcome mat. “DJI welcomes and embraces any opportunities to demonstrate our privacy controls and security features,” explained a company spokesperson.
The company has been submitting its systems for independent security audits since 2017. Evaluations from heavyweights like Booz Allen Hamilton, FTI Consulting, and even U.S. government bodies like the Department of the Interior and Idaho National Laboratory have come to a consistent conclusion: DJI’s drones are secure, and there’s no evidence of data being transmitted to unauthorized entities—including the Chinese government.
The legal spotlight is now on Section 1709 of the FY2025 National Defense Authorization Act. This provision requires a designated national security agency to determine—within a year—whether DJI’s equipment presents an “unacceptable risk” to U.S. national security.
If that assessment isn’t completed within the deadline, DJI could end up on the FCC’s Covered List by default, effectively barring them from launching new products in the U.S.
So, yes, the stakes are high. But DJI seems ready to bet on its track record.
In response to longstanding concerns over data privacy and national security, DJI has introduced several robust features aimed at giving control back to users:
Local Data Mode: Operates like an air-gapped device—no internet, no data leakage.
Default Data Settings: No automatic syncing of photos, flight logs, or videos.
Third-party software compatibility: Users can fly DJI drones and analyze data using U.S.-based software, without touching DJI’s ecosystem.
DJI no longer allows U.S. users to sync flight records to its servers.
“Unlike our competitors, we do not force people to use our software,” a DJI spokesperson pointed out.
While the upcoming executive orders are designed to boost domestic drone production and address national security risks, DJI is using the moment to double down on its commitment to transparency. Their message is clear: judge us by the tech, not the passport.
Whether that’s enough to maintain access to the U.S. market will depend on how these reviews play out—and how political winds blow in the coming months.
But one thing’s for sure: DJI isn’t backing down. It’s gearing up for inspection—and maybe even looking forward to it.
Stay tuned as we track legal developments on this issue and how it could reshape the drone industry in the U.S.
Privacy Tip #446 – Department of Motor Vehicles Warns Drivers About Smishing Text Surge
Smishing schemes involving Departments of Motor Vehicles nationwide have increased. Scammers are sending SMS text messages falsely claiming to be from the DMV that “are designed to deceive recipients into clicking malicious links and submitting personal and/or financial information under false threats of license suspension, fines and credit score or legal penalties.”
The Rhode Island Division of Motor Vehicles (RIDMV) issued an alert to the public indicating that one of the smishing messages sent to drivers was a “final notice” from the DMV stating that if the driver does not pay an outstanding traffic violation, enforcement penalties, including license suspension, will begin imminently. The RIDMV warned drivers that the text message cites “fictitious legal code and link to fraudulent websites.”
The RIDMV also warned drivers that the messages are not from the DMV and that it does “NOT send payment demands or threats via text message, and we strongly urge the public to avoid clicking on any suspicious links or engaging with these messages. Clicking any links may expose individuals to identity theft, malware, or financial fraud.”
The RIDMV provides these tips to avoid smishing scams:
Do NOT click on any links or reply to suspicious text messages.
Do NOT provide personal or financial information.
Be aware that DMV-related information is sent via mail, not text messages.
Report fraudulent messages to the FBI’s Internet Crime Complaint Center (www.ic3.gov) or forward them to 7726 (SPAM) to notify your mobile provider.
Report the message to the FTC.
These tips apply to drivers in every state. No state DMV is sending payment-demand text messages to drivers, so if you get one, it is a scam. Do not be lured into clicking on links in text messages for fear of license suspension or other actions by the DMV.