CFTC Withdraws Pair of Advisories on Heightened Review Approach to Digital Asset Derivatives

On March 28, the staff of the Commodity Futures Trading Commission (CFTC) issued two press releases announcing the withdrawal of two previous advisories that reflected the agency’s heightened review approach to digital asset derivatives. 
These announcements appear to mark the end of the CFTC’s heightened review of digital asset products. The CFTC rules certainly still apply, but this seems to be a deliberate move by the CFTC to start treating digital asset derivatives like other CFTC-regulated products. It also gives a glimpse of how the CFTC would regulate digital asset spot transactions if Congress gives it the authority to do so.
The first advisory the CFTC withdrew was Staff Advisory No. 18-14, Advisory with Respect to Virtual Currency Derivative Product Listings, which was issued on May 21, 2018. The withdrawal is effective immediately. That advisory set out certain enhancements that CFTC-regulated entities were asked to follow when listing digital asset derivatives, including enhanced market surveillance, closer coordination with the CFTC, reporting obligations, risk management and outreach to members and market participants. The advisory was withdrawn in its entirety, with the CFTC staff citing its increased experience with digital asset derivatives and the growth and maturity of the digital asset market.
The second advisory the CFTC staff withdrew was Staff Advisory No. 23-07, Review of Risks Associated with Expansion of DCO Clearing of Digital Assets, issued on May 30, 2023. It stated that CFTC staff would focus on the heightened risks that digital asset derivatives pose to system safeguards, physical settlement procedures and conflicts of interest. 

United States: House Committee on Financial Services Urges the SEC to Withdraw Final and Proposed Rules

On 31 March 2025, the House Committee on Financial Services (Committee), in a letter to Acting Chairman of the US Securities and Exchange Commission (SEC), Mark Uyeda, identified a series of proposed and adopted rules that the SEC should withdraw or rescind. The letter notes the Committee’s view that the SEC, under the prior Chair, had lost sight of its mission. The identified proposals and rules represent significant rulemaking efforts on the part of the SEC, many of which were controversial and subject to significant industry opposition. The specific proposals identified are the following:

Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure;
Short Position and Short Activity Reporting by Institutional Investment Managers;
Reporting of Securities Loans;
Pay Versus Performance;
Investment Company Names;
Form N-PORT and Form N-CEN Reporting; Guidance on Open-End Fund Liquidity Risk Management Programs; 
Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker Dealers and Investment Advisers;
Open-End Fund Liquidity Risk Management Programs and Swing Pricing;
Regulation Best Execution;
Order Competition;
Position Reporting of Large Security-Based Swap Positions;
Regulation Systems Compliance and Integrity;
Outsourcing by Investment Advisers; and
Enhanced Disclosures by Certain Investment Advisers and Investment Companies about Environmental, Social, and Governance Investment Practices.

While the Committee does not have the authority to compel the SEC to take action on any of these final or proposed rules, the letter is a strong indication of support for an overall deregulatory environment and could provide a blueprint for SEC regulatory policy once Paul Atkins is confirmed.

EU: New European Consumer Protection Guidelines for Virtual Currencies in Video Games

On March 21, 2025, ahead of a consultation and call for evidence on the EU’s Digital Fairness Act, the Consumer Protection Cooperation (CPC) Network[1] highlighted the pressing need for improved consumer protection in the European Union, particularly regarding virtual currencies in video games. This move comes in response to growing concerns about the impact of gaming practices on consumers, including vulnerable groups such as children. The CPC Network has defined a series of key principles and recommendations aimed at ensuring a fairer and more transparent gaming environment. These recommendations are not binding and are without prejudice to applicable European consumer protection laws,[2] but they will likely guide and inform enforcement by consumer protection agencies at the national level across the EU.
What Are the Key Recommendations for In-Game Virtual Currency?
The CPC Network’s recommendations are designed to enhance transparency, prevent unfair practices, and protect consumers’ financial well-being. These principles are not exhaustive but cover several crucial areas:

Clear and Transparent Price Indication: The price of in-game content or services must be shown in both in-game currency and real-world money, ensuring players can make informed decisions about their purchases. (Articles 6(1)(d) and 7 of the UCPD, and Article 6(1)(e) of the CRD)
Avoiding Practices That Obscure Pricing: Game developers should not engage in tactics that obscure the true cost of digital content. This includes practices like mixing different in-game currencies or requiring multiple exchanges to make purchases. The goal is to avoid confusing or misleading players. (Articles 6(1)(d) and 7 of the UCPD, and Article 6(1)(e) of the CRD)
No Forced Purchases: Developers should not design games that force consumers to spend more money on in-game currencies than necessary. Players should be able to choose the exact amount of currency they wish to purchase. (Articles 5, 8 and 9 of the UCPD)
Clear Pre-Contractual Information: Prior to purchasing virtual currencies, consumers must be given clear, easy-to-understand information about what they are buying. This is particularly important for ensuring informed choices. (Article 6 of the CRD)
Respecting the Right of Withdrawal: Players must be informed about their right to withdraw from a purchase within 14 days, particularly for unused in-game currency. This is crucial for ensuring consumers’ ability to cancel transactions if they change their mind. (Articles 9 to 16 of the CRD)
Fair and Transparent Contractual Terms: The terms and conditions for purchasing in-game virtual currencies should be written clearly, using plain language to ensure consumers fully understand their rights and obligations. (Article 3(1) and (3) of the UCTD)
Respect for Consumer Vulnerabilities: Game developers must consider the vulnerabilities of players, particularly minors, and ensure that game design does not exploit these weaknesses. This includes providing parental controls to prevent unauthorized purchases and ensuring that any communication with minors is carefully scrutinized. (Articles 5-8 and Point 28 of Annex I of the UCPD)

These principles reflect European regulators’ growing concern about the exploitation of consumers, particularly vulnerable groups such as children, in the gaming world. The European Consumer Organisation (BEUC) has strongly supported these measures, which aim to provide a safer, more transparent gaming experience for players.
Enforcement Actions and Legal Proceedings
On the same day, coordinated by the European Commission, the CPC Network initiated legal proceedings against the developer of an online game. This action, driven by a complaint from the Swedish Consumers’ Association, addresses concerns about the company’s marketing practices, particularly those targeting children. Allegations include misleading advertisements urging children to purchase in-game currency, aggressive sales tactics such as time-limited offers, and a failure to provide clear pricing information.
A Safer Gaming Future
This enforcement action, along with the introduction of new principles, is part of the European Commission’s stated objective to ensure better consumer protection within the gaming industry. The Commission aims to emphasize the importance of transparency, fairness, and the protection of minors within gaming platforms.
What Should Video Game Companies and Gambling Operators Do Next?
In light of these new developments, video game companies and gambling operators, especially those offering virtual currencies, are well advised to review their practices to ensure ongoing compliance with existing EU consumer protection laws.
Failure to align with the above principles does not automatically mean that consumer laws are infringed, but, as the recent enforcement action shows, it could result in investigations and enforcement actions under the CPC Regulation or national laws. If gaming content is available across multiple EU countries, a coordinated investigation may be triggered, with the possibility of fines of up to 4% of a company’s annual turnover.
To further support the industry, the European Commission is organising a workshop to allow gaming companies to present their strategies for aligning with the new consumer protection standards. This will provide a valuable opportunity for companies to share their plans and address any concerns related to these proposed changes. If you would like to know more, please get in touch.
FOOTNOTES
[1] The CPC Network is formed by national authorities responsible for enforcing EU consumer protection legislation under the coordination of the European Commission.
[2] Reference is made to Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 on unfair commercial practices (UCPD); Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights (CRD); and Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts (UCTD).

Confirmation Hearing for SEC Chair Nominee Atkins — Takeaways for Fund Managers

The Senate Banking Committee convened on Thursday to consider the nomination of Paul Atkins, President Trump’s nominee for Chair of the Securities and Exchange Commission, along with the nominees for the Comptroller of the Currency, the Assistant Secretary of the Treasury and the Department of Transportation.
Atkins, a former SEC Commissioner, shared his views on the current regulatory landscape, contending that today’s environment stifles capital formation and indicating a pivot from the SEC’s recent emphasis on aggressive enforcement. Overall, nothing occurred at the hearing that would change the expectation that Atkins will be confirmed. Currently, the SEC has only three members, meaning the Democratic Commissioner could, in theory, wield effective veto power over actions requiring a vote of the SEC because she can deny a quorum for any action she strongly opposes. If Atkins is confirmed, the Republican majority would no longer need the Democratic Commissioner for a quorum and would be able to proceed with formal rulemaking steps.
Key takeaways for fund managers from Atkins’ testimony are below.
Position on Private Funds
Surprisingly, Atkins faced relatively few questions about private funds. Nonetheless, in responding to questions, he noted that investors in private funds are typically sophisticated and have sufficient resources to hire advisers. In response to a question from a Democratic member of the Committee, he conceded that retail investors in registered funds benefit from additional investor protections, such as diversification rules. Atkins confirmed that the SEC would continue to enforce penalties against firms that mislead investors, but he drew a distinction between accredited investors—who he said have the sophistication and means to fend for themselves—and registered fund investors, possibly indicating a less restrictive or more principles-based regulatory and enforcement framework for the private fund industry.
Focus on Disclosure Practices
Atkins expressed concerns about the inefficient disclosures that investors face, stating, “investors are flooded with disclosures that do the opposite of helping them understand the true risks of an investment.” At the same time, he stated that investors should be protected from incorrect or materially misleading private fund disclosures. While his testimony suggests that the SEC would continue scrutinizing firms’ marketing practices, this could signal a willingness to pare back rules that require voluminous disclosure that most investors do not read.
Digital Assets and Cryptocurrency
In his opening statement, Atkins signaled that digital assets and cryptocurrency will be a prominent focus if he is confirmed. He highlighted his experience developing best practices for the digital asset industry since 2017, pointing to what he views as ambiguous or outdated regulations that have led to market uncertainty and inhibited innovation. Atkins stated that a “firm regulatory foundation” for digital assets would be a top priority, emphasizing a “rational, coherent, and principled approach.” Consistent with the work that has already started under the Crypto Task Force, his comments suggest a more measured and predictable environment for market participants, which could foster greater institutional involvement and spur technological developments in the digital asset space. Consistent with his overarching views on regulation expressed throughout the hearing, Atkins stressed the importance of clear rules that encourage capital formation, which he believes are critical as the SEC considers its role in overseeing rapidly evolving cryptocurrency markets.
Creating Efficiencies within the SEC
In response to questions regarding how he might work with the Department of Government Efficiency, Atkins indicated general support for seeking greater efficiency in the SEC’s operations: “If there are people who can help with creating efficiencies in the agency or otherwise, I would definitely work with them.” As has been reported elsewhere, more than 12% of the SEC’s staff has already taken a voluntary buyout; any further cuts resulting from DOGE’s involvement could lead the SEC to prioritize certain types of investment adviser firms for focus from the Division of Examinations. While the SEC’s future staffing levels are not yet known, its future resource allocation is likely to be influenced by any priority given to protecting less sophisticated and less well-resourced investors. 

China Regulator Proposes Amendments to Cybersecurity Law

On March 28, 2025, the Cyberspace Administration of China issued draft amendments to China’s Cybersecurity Law (“Draft Amendment”) for public comment until April 27, 2025. The Draft Amendment aims to harmonize relevant provisions of the Personal Information Protection Law (“PIPL”), Data Security Law (“DSL”) and Law of Administrative Penalties, all of which were issued after the Cybersecurity Law came into effect in 2017.
The Draft Amendment amends the liability provisions of the Cybersecurity Law as follows:

Legal liability for network operation security: (1) classifies massive data leakage incidents, loss of partial functions of critical information infrastructure (“CII”) and other serious consequences that jeopardize network security as violations of the Cybersecurity Law and increases the range of fines set forth in the DSL for such violations; (2) imposes liability for the sale or provision of critical network equipment and specialized cybersecurity products that do not meet the Cybersecurity Law’s requirements for security certification and security testing; and (3) clarifies penalties for CII operators that use network products or services that have not undergone, or have not passed, security review.
Legal liability for security of network information: (1) increases the penalty range for failure to report to the competent authorities, or failure to securely dispose of, information that is prohibited by applicable law to be published or transmitted; and (2) clarifies penalties for violations of the Cybersecurity Law that have particularly serious impacts and consequences.
Legal liability for security of personal information and important data: Amends the Cybersecurity Law to incorporate the PIPL’s and DSL’s penalty structure for violations of the law involving the security of personal information and other important data.
Mitigation of penalties: Adds provisions to mitigate, alleviate or withhold penalties for violations of the Cybersecurity Law where: (1) the network operator eliminates or mitigates the harmful consequences of the violation; (2) the violation is minor, timely corrected and does not result in harmful consequences; or (3) it is a first-time violation that is timely corrected and results in only minor harmful consequences. The Draft Amendment also clarifies that the competent authorities are responsible for formulating the corresponding benchmarks for administrative penalties.

NEW HAMPSHIRE DEEPFAKE SCANDAL TCPA LAWSUIT: Court Refuses To Dismiss Claims Against Platforms That Allegedly Aided In Sending The AI/Deepfake Calls Impersonating President Biden

Hi TCPAWorld! Remember last year when that political consultant from Texas hired the New Orleans magician to sound like Joe Biden and used AI technology to make calls to New Hampshire voters in an attempt to convince them not to vote?
Well, that saga continues!
So for some background here, Steve Kramer, a political consultant, used AI technology to create a deepfake recording of President Joe Biden’s voice. Days before the New Hampshire primary, nearly 10,000 voters received a call in which the AI voice falsely suggested that voting in the primary would harm Democratic efforts in the general election. To further the deception, Kramer spoofed the caller ID to display the phone number of Kathleen Sullivan, a well-known Democratic leader. Voice Broadcasting Corporation and Life Corporation enabled the call campaign, providing the technology and infrastructure necessary to deliver the calls.
Steve Kramer, Voice Broadcasting Corporation, and Life Corporation were sued on March 14, 2024, in the US District Court for the District of New Hampshire for violations of the TCPA (as well as violations of the Voting Rights Act of 1965 and New Hampshire statutes regulating political advertising) by the League of Women Voters of the United States, the League of Women Voters of New Hampshire, and three individuals who received those calls. League of Women Voters of New Hampshire et al v. Steve Kramer et al, No. 24-CV-73-SM-TSM.
Voice Broadcasting Corporation and Life Corporation filed a motion to dismiss, arguing 1) they did not “initiate” the at-issue calls and 2) the calls did not violate the TCPA because they were “’political campaign-related calls,’ which are permitted when made to landlines, even without the recipient’s prior consent.”
The court denied their motion on 3/26/25 finding that the plaintiffs adequately alleged a plausible claim for relief under the TCPA. League of Women Voters of New Hampshire et al v. Steve Kramer et al, No. 24-CV-73-SM-TSM, 2025 WL 919897 (D.N.H. Mar. 26, 2025).
The TCPA makes it unlawful “to initiate any telephone call to any residential telephone line using an artificial or prerecorded voice to deliver a message without the prior express consent of the called party.” While the TCPA does not specifically define what it means to “initiate” a call, the FCC has established clear guidance. According to In the Matter of the Joint Petition filed by Dish Network, Federal Communications Commission Declaratory Ruling, 2013 WL 1934349 at para. 26 (May 9, 2013), a party “initiates” a call if it takes the steps necessary to physically place it or is so involved in the process that it should be deemed responsible.
In this case, the court assumed, without deciding, that neither Voice Broadcasting nor Life Corporation physically placed the calls. But that didn’t absolve them. The court turned to the totality of the circumstances to determine whether the companies were sufficiently involved to bear liability.
The allegations were that Voice Broadcasting didn’t merely act as a passive service provider. Instead, it actively collaborated with Kramer to refine the message and even suggested adding a false opt-out mechanism that directed recipients to call Kathleen Sullivan’s personal phone number. Life Corporation, in turn, allegedly facilitated the delivery of thousands of the calls using its telecommunications infrastructure. The court found that these facts were more than enough to justify holding the companies accountable under the TCPA.
Quoting the FCC’s guidance, the court explained that companies providing calling platforms cannot simply “blame their customers” for illegal conduct. Liability attaches to those who “knowingly allow” their systems to be used for unlawful purposes. Voice Broadcasting and Life Corporation had the means to prevent the deepfake calls— but they didn’t. As the court explained, “Even if one were to assume that neither Voice Broadcasting nor Life Corp. actually ‘initiated’ the Deepfake Robocalls, they might still be liable for TCPA violations, depending upon their knowledge of, and involvement in, the scheme to make those illegal calls.”
As for the defendants’ second argument, that the calls were political and therefore exempt from the TCPA’s consent requirements, the court acknowledged that political campaign calls to landlines using regulated technology, such as the AI-voice technology used in the alleged calls, are generally permissible, even without prior express consent. However, this exemption is not a free pass. The calls must comply with other key provisions of the TCPA, including the requirement to provide a functional opt-out mechanism.
Here, instead of providing a legitimate way for recipients to opt-out, the alleged calls instructed the recipients to call Kathleen Sullivan’s personal phone number. This sham opt-out mechanism not only failed to meet TCPA standards but also contributed to the deception. The court had no trouble rejecting the claim that this constituted compliance: “Little more need be said other than to note that such an opt-out mechanism plainly fails to comply with the governing regulations and is not, as defendants suggest, ‘adequate.’”
And in case you are all wondering about Mr. Kramer himself, a default was entered against Kramer on 8/29/24.
The entire story behind these calls has been something to watch. This is definitely a case to keep an eye on!

Federal Regulators Continue Crypto Rationalization

Following President Trump’s Executive Order on Digital Assets, which instructed agencies to streamline and rationalize regulation of the digital asset space in a way that is technology-neutral, federal agencies have been responding. Below we summarize recent activities by the Office of the Comptroller of the Currency (OCC), Federal Deposit Insurance Corporation (FDIC) and Commodity Futures Trading Commission (CFTC).
On March 7, 2025, the OCC, which supervises national banks, issued Interpretive Letter 1183 regarding Certain Crypto-Asset Activities for national banks. IL 1183 withdraws several previous interpretive letters that limited national banks’ ability to engage in various crypto-asset activities. Instead, the OCC sought to “ensure that bank activities will be treated consistently, regardless of the underlying technology.”
On March 28, the FDIC took similar action and issued Financial Institution Letter (FIL) 7-2025 to establish a process for banks engaging in crypto-related activities. FIL-7-2025 replaces prior guidance, FIL-16-2022, and affirms that FDIC-supervised institutions may engage in permissible activities, including activities involving new and emerging technologies such as crypto-assets and digital assets, provided that they adequately manage the associated risks. In contrast to FIL-16-2022, which established a prior notification requirement specific to crypto-related activities, FIL-7-2025 clarifies that FDIC-supervised institutions may engage in certain permissible crypto-related activities without receiving prior FDIC approval.
Also on March 28, the CFTC staff announced the withdrawal of its prior staff advisory entitled Review of Risks Associated With Expansion of DCO Clearing of Digital Assets. In withdrawing the prior guidance, the CFTC staff noted that its regulatory treatment of digital asset derivatives does not vary from its treatment of other products. Instead, the staff conducts its supervision of clearing activities and oversight of compliance with the Commodity Exchange Act and Commission regulations regardless of the specific commodity underlying relevant contracts.

Legal AI Unfiltered: Legal Tech Execs Speak on Privacy and Security

With increasing generative AI adoption across the legal profession, prioritizing robust security and privacy measures is critical. Before using any generative AI tool, lawyers must fully understand the underlying technology, beginning with thorough due diligence of legal tech vendors.
In July 2024, the American Bar Association issued Formal Opinion 512, which provides some guidance on the proper review and use of generative AI in legal practice. The opinion underscores several ABA Model Rules of Professional Conduct that are implicated by lawyers’ use of generative AI tools, including the duties to provide competent representation, keep client information confidential, communicate generative AI use to clients, properly supervise subordinates in their use of generative AI, and charge only reasonable fees. 
Even before deploying generative AI tools, however, lawyers must understand a vendor’s practices. This includes verifying vendor credentials and fully reviewing policies related to data storage and confidentiality.
According to Formal Opinion 512, “all lawyers should read and understand the Terms of Use, privacy policy, and related contractual terms and policies of any GAI tool they use to learn who has access to the information that the lawyer inputs into the tool or consult with a colleague or external expert who has read and analyzed those terms and policies.” Lawyers may also need to consult IT and cybersecurity professionals to understand terminology and assess any potential risks.
In practice, this means carefully reviewing vendor contract terms related to a vendor’s limitation of liability, understanding if a vendor’s tool “trains” on your client’s data, assessing data retention policies (before, during, and after using the tool), and identifying appropriate notification requirements in the event of a data breach.
To further explore these ethical guidelines in practice, we spoke with legal technology executives about the security and privacy measures they implement, as well as best practices for lawyers when evaluating and vetting legal tech vendors.
What security measures do you take to protect client data?

Troy Doucet, Founder @ AI.Law

Enterprise-expected security measures, including SOC 2, HIPAA, and robust encryption of data at rest and in transit. We also follow ABA guidance on AI, including confidentiality, not training our models on our users’ data, and making it clear that we do not own the data users input.

Jordan Domash, Founder & CEO @ Responsiv

The foundation must be traditional security and privacy controls that have always been important in enterprise software. On top of that, we’ve built a de-identification process to strip out PII and corporate identifiable content before processing by an LLM. We also have a commitment to not have access to or train on client questions and content.

Michael Grupp, CEO & Co-founder @ BRYTER

We have an entire team focused on security and compliance, so the answer is, of course, all of them: SOC 2 Type II, ISO 27001, GDPR, CCPA, EU AI Act, etc. And BRYTER does not use client data to develop, train or fine-tune the AI models we use.

Gil Banyas, Co-Founder & COO @ Chamelio

Chamelio safeguards client data through industry-standard encryption, SOC 2 Type II certified security controls, and strict access management with multi-factor authentication. We maintain zero data retention arrangements with third-party LLMs and employ continuous security monitoring with ML-based anomaly detection. Our comprehensive security framework ensures data remains protected throughout its entire lifecycle.

Khalil Zlaoui, Founder & CEO @ CaseBlink

Client data is encrypted in transit and at rest, and is not used to train AI models. We enforce a strict zero data retention policy – no user data is stored after processing. A SOC 2 audit is nearing completion to certify that our security and data handling practices meet industry standards, and customers can request permanent deletion of their data at any time.

Dorna Moini, CEO & Founder @ Gavel

Gavel was built for legal documents, so our security standards exceed those typical of software platforms. We use end-to-end encryption, private AI environments, and enterprise-grade access controls—backed by SOC II databases and third-party audits. Client data is never used for training, and our retention policies give firms full control, ensuring compliance and peace of mind.

Ted Theodoropoulos, CEO @ Infodash

Infodash is built on Microsoft 365 and Azure and deployed directly into each customer’s own tenant, which means we host no client data whatsoever. This unique architecture ensures that law firms always maintain full control over their data. Microsoft’s enterprise-grade security includes encryption at rest and in transit, identity management via Azure Active Directory, and compliance with certifications like ISO/IEC 27001 and SOC 2.

Jenna Earnshaw, Co-Founder & COO @ Wisedocs

Wisedocs uses services that implement strict access controls, including role-based access control (RBAC), multi-factor authentication (MFA), and regular security audits to prevent unauthorized access to your data. Our organization employs configurable data retention policies as agreed upon with our clients. Wisedocs has achieved SOC 2 Type 2 attestation and has established an information security and privacy program in accordance with SOC 2, HIPAA, PIPEDA and PHIPA, along with annual risk assessments and continual vulnerability scans.

Daniel Lewis, CEO @ LegalOn

Security and privacy are top priorities for us. We are SOC 2 Type II, GDPR, and CCPA compliant, follow industry-standard encryption protocols, and use state-of-the-art infrastructure and practices to ensure customer data is secure and private.

Gila Hayat, CTO & Co-Founder @ Darrow

Darrow works mostly in the open web realm, utilizing as much publicly available data as possible and surfacing potential matters from the open web. Our clients’ confidentiality and privacy is a must; therefore, we adhere to security standards and regulations and collect as little data as possible to maintain trust. We take client confidentiality and privacy very seriously.

Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora

We exclusively use reputable, secure providers and AI models that never store or log data, with no human review or monitoring permitted. All vendors are contractually bound to ensure data is never retained or used for training in any form. This, in combination with ISMS certifications and adherence to industry standards, ensures robust data security and privacy.

Gary Sangha, CEO @ LexCheck Inc.

We are SOC 2 compliant and follow rigorous cybersecurity standards to ensure client data is protected. Our AI tools do not retain any personally identifiable information (PII), and all data processing is handled securely within Microsoft Word, leveraging Azure’s built-in data protection. This ensures client data remains encrypted, confidential, and under the highest level of enterprise-grade security.

Tom Martin, CEO & Founder @ Lawdroid

As a lawyer myself, I understand the fiduciary responsibility we have to handle our client data responsibly. At LawDroid, we use bank-grade data encryption, do not train on your data, and provide you with fine-grained control over how long your data is retained. We also just implemented browser-side masking of personally identifiable information to prevent it from ever being seen.
Lawyers are very concerned about data privacy. What would you tell a lawyer who doesn’t use legal-specific AI tools due to privacy concerns? 

Troy Doucet, Founder @ AI.Law

You have control over what you input into AI, so do not input data that you do not feel comfortable inputting. AI products vary in their functionality too, meaning different levels of concern. For example, asking AI about the difference between issue and claim preclusion is a low-risk event, versus mentioning where Jonny buried mom and dad in the woods.

Jordan Domash, Founder & CEO @ Responsiv

You’re right to be skeptical and critically consider a vendor before giving them confidential or privileged information! The risk is vendor-specific – not with the category. The right vendor designs the platform with robust data privacy measures in mind.

Michael Grupp, CEO & Co-founder @ BRYTER

We have been working with the biggest law firms and corporates for years, and we know that trust is earned, not given. This means that first, we try to be over-compliant – so this means agreements with providers to protect attorney-client privilege. Second, we make compliance transparent. Third, we provide references to those who are already advanced in the journey.

Gil Banyas, Co-Founder & COO @ Chamelio

Adopting new technology inevitably involves some privacy trade-offs compared to staying offline, but this calculated risk enables lawyers to leverage significant competitive advantages that AI offers to legal practice. Finding the right risk-reward balance means embracing innovation responsibly by selecting vendors who prioritize security, maintain zero data retention policies, and understand legal confidentiality requirements. Success comes from implementing AI tools strategically with appropriate safeguards rather than avoiding valuable technology that competitors are already using to enhance client service.

Khalil Zlaoui, Founder & CEO @ CaseBlink

Not all AI tools treat data the same, and legal-specific platforms like ours are built with strict safeguards and guardrails. Data is never used to train models, and everything is encrypted, access-controlled, and siloed. Only clients can access their own data. They retain full ownership and control at all times, with the ability to keep information private even across internal teams.

Dorna Moini, CEO & Founder @ Gavel

With consumer AI tools, your data may be stored, analyzed, or even used to train models—often without clear safeguards. Professional-grade and legal-specific tools like Gavel are built with attorney-client confidentiality at the core: no data sharing, no training on your client data inputs, and full control over retention. Avoiding AI entirely isn’t safer—it’s just riskier with the wrong tools (and that’s not specific to AI!).

Ted Theodoropoulos, CEO @ Infodash

Legal-specific platforms like Infodash are purpose-built with confidentiality at the core, unlike general-purpose consumer AI tools. These solutions are built with the privacy requirements of legal teams in mind. With new competitors like KPMG entering the market, delaying AI adoption poses a real competitive risk for firms.

Jenna Earnshaw, Co-Founder & COO @ Wisedocs

Legal-specific AI tools are designed to be both secure and transparent, helping legal professionals understand and trust how AI processes their data while maintaining strict privacy controls. With human-in-the-loop (HITL) oversight, AI becomes a tool for efficiency rather than a risk, ensuring that outputs are accurate and reliable. By adopting AI solutions that follow strict security protocols such as SOC 2 Type 2, HIPAA, PIPEDA, and PHIPA compliance standards, legal teams can confidently leverage technology while maintaining control over their data through role-based access control (RBAC), multi-factor authentication (MFA), and configurable data retention policies.

Daniel Lewis, CEO @ LegalOn

Ask questions about how your data may be used — will it touch generative AI (where, without the right protections, your content could display to others), or non-generative AI? If it’s being processed by LLMs like OpenAI, understand whether your data is being used to train those models and if it’s being used in non-generative AI use cases, understand how. The use of your data might make the product you use better, so consider the risk/benefit trade-offs.

Gila Hayat, CTO & Co-Founder @ Darrow

Pro-tip for privacy preservation and worry-free experimentation with various AI tools: have a non-sensitive or redacted document or use case ready for which you already know what the answers should be, and benchmark the various tools against it. That gives you a fair comparison with no stress over leaking real work documents.

Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora

Make sure to use a trusted vendor where no model training or fine-tuning is happening on client input.

Gary Sangha, CEO @ LexCheck Inc.

Lawyers should first understand what information they are actually sharing when using legal-specific AI tools; often it is not personally identifiable information or sensitive client data. In many cases, you are not disclosing anything subject to confidentiality, especially when working with redlined drafts or standard contract language. That said, if you are sharing sensitive information, it is important to review your firm’s protocols, but depending on what you are sharing, it may not be a concern.

Tom Martin, CEO & Founder @ Lawdroid

Lawyers should be concerned about data privacy. But, steering away from legal-specific AI tools due to privacy concerns would be a mistake. If anything, legal AI vendors take greater security precautions than consumer-facing tools, given our exacting customer base: lawyers.
For security and privacy purposes, what should lawyers and law firms know about a legal AI vendor before using their product? 

Troy Doucet, Founder @ AI.Law

Knowing what they do to protect data, how they use your data, certifications they have, and encryption efforts are smart. However, knowing what your privacy and security needs are before using the product is probably the best first step.

Jordan Domash, Founder & CEO @ Responsiv

I’d start with a traditional security and privacy review process like you’d run for any enterprise software platform. On top of that, I’d ask: Do they train on your data? Do they have access to your data? What is your data retention policy?

Michael Grupp, CEO & Co-founder @ BRYTER

Even the early-adopters and fast-paced firms ask their vendors three questions: Where is the client data stored? Do you use the firm’s data, or client data, to train or fine-tune your models? How is legal privilege protected? 

Gil Banyas, Co-Founder & COO @ Chamelio

Before adopting legal AI tools, lawyers should verify the vendor has strong data encryption, clear retention policies, and SOC 2 compliance or similar third-party security certifications. They should understand how client data flows through the system, whether information is stored or used for model training, and if data sharing with third parties occurs. Additionally, they should confirm the vendor maintains appropriate legal expertise to understand attorney-client privilege implications and offers clear documentation of privacy controls that align with relevant bar association guidance.

Dorna Moini, CEO & Founder @ Gavel

I did a post on what to ask your vendors here: https://www.instagram.com/p/C9h5jVYK5Zc/. Lawyers need clear answers on what happens to their data and how it’s being used. When choosing a vendor, it’s also important to understand output accuracy and the AI product roadmap as it relates to legal work – you are engaging in a marriage to a software company you know will continue to improve for your purposes.

Ted Theodoropoulos, CEO @ Infodash

Firms should ask where and how data is stored, whether it’s isolated by client, and if it’s used for training. Look for vendors that run on secure environments like Microsoft Azure and support customer-managed encryption keys. Transparency around data flows and integration with existing infrastructure is essential.

Jenna Earnshaw, Co-Founder & COO @ Wisedocs

Lawyers and law firms should ensure that any legal AI vendor follows strict security protocols, such as SOC 2 Type 2, HIPAA, PIPEDA, and PHIPA compliance, along with role-based access control (RBAC), multi-factor authentication (MFA), and regular security audits to protect sensitive legal data. They should ensure the AI vendor is not using third party models or sharing data with AI model providers and the deployment of their AI is secure and limited. Additionally, firms should assess whether the AI system includes human-in-the-loop (HITL) oversight to mitigate hallucinations and organizational risks, ensuring accuracy and reliability in legal workflows.

Gila Hayat, CTO & Co-Founder @ Darrow

When choosing a legal AI vendor, it’s important to make sure it follows top-tier security standards and has a solid track record when it comes to protecting data. Don’t forget the contract: make sure it includes strong confidentiality terms so your clients’ data stays protected and compliant. Trust the humans and know the team: the legal tech scene is tight and personal, so hop on a call with one of the team members to make sure you’re doing business with a trustworthy partner.

Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora

You should understand whether a vendor’s AI models are trained on user data; this is a critical distinction. Vendors that fine-tune or improve their models using client input may pose significant privacy risks, especially if sensitive information is involved. It’s important to evaluate whether specially trained or fine-tuned models offer enough added value to justify the potential trade-off in privacy.

Gary Sangha, CEO @ LexCheck Inc.

Lawyers and law firms should understand what information they are sharing through the AI tool, as it is often personally identifiable information or subject to confidentiality. They should confirm whether the vendor is compliant with frameworks like SOC 2, which ensures rigorous controls for data protection, and ensure that data is encrypted and securely processed. Reviewing how the tool handles data protection helps ensure it aligns with the firm’s security and privacy policies.

Tom Martin, CEO & Founder @ Lawdroid

Lawyers need to ask questions: 1) Do you employ encryption? 2) Do you train on data I submit to you? 3) Do you take precautions to mask PII? 4) Can I control how long the data is retained?
By carefully evaluating security credentials, vendor practices, and model usage policies, lawyers can responsibly and confidently employ generative AI tools to improve their delivery of legal services. As these technologies evolve, so will best practices for security and implementation, making it important for lawyers to keep following industry updates and emerging guidance.
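
Several of the responses above mention de-identification or masking of personally identifiable information before content reaches an LLM. The snippet below is a minimal, illustrative Python sketch of that general idea, not any vendor's actual pipeline: it replaces a few easily patterned PII types with placeholder tokens before a prompt would be sent to an external model. The patterns, the redact_pii helper, and the sample prompt are all assumptions for the example; production systems typically rely on far more robust detection, such as named-entity recognition and human review.

```python
import re

# Illustrative-only patterns for a few easily recognized PII formats; a real
# de-identification pipeline would use far more robust detection (for example,
# named-entity recognition) and would be validated against the firm's own data.
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with placeholder tokens before the text is sent
    to an external LLM."""
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical example prompt containing PII.
prompt = "Client Jane Roe (jane.roe@example.com, 603-555-0142) asked about her retainer balance."
print(redact_pii(prompt))
# -> Client Jane Roe ([EMAIL], [PHONE]) asked about her retainer balance.
```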

New Guidelines Establishing the Requirements and Procedures That Must Be Observed to Obtain Permission to Advertise Prepackaged Food and Non-Alcoholic Beverages

Following our newsletter dated March 31, 2020, “The new Mexican Official Standard for the labelling of pre-packaged food and non-alcoholic beverages,” and other newsletters regarding product labelling, and five years after the publication of that Mexican Official Standard, the Guidelines regarding advertising of prepackaged food and non-alcoholic beverages were published in the Official Gazette on March 11, 2025, and entered into force on March 12, 2025.
These Guidelines appear to restrict the advertising of these types of products, imposing on advertisers, advertising agencies and media the obligation to obtain a permit/approval to advertise the products on open television, restricted television, in movie theaters, on the internet and on other digital platforms.
A product is subject to approval by the Federal Commission for the Protection against Sanitary Risks (COFEPRIS) when its label includes one or more warning seals under the front-of-package labeling system.
The main restrictions, among others, prohibit the following:

Using animated characters, pets or interactive games directed at children to promote the consumption of the products.
Comparing the products with natural ones.
Comparing them with similar products regarding their composition or nutritional content.
Suggesting that physical or intellectual abilities result from their consumption.
Promoting excessive consumption of the product.
Suggesting that the products may modify body proportions.

The requirements for obtaining the permit/approval to advertise the products are to complete a form, pay government fees and attach the “operation notice” (authorization) for the product.
Once the application is submitted, COFEPRIS has a term of 20 working days to approve the advertisement and/or 10 days to issue a request for additional information. The applicant then has a term of 5 days to reply; otherwise, the application will be dismissed.
Although we consider all these requirements to be an unnecessary burden on the industry, these Guidelines provide definitions of terms such as “pets”, “celebrities”, “children’s characters”, “digital downloads”, “cartoons” and “indirect advertising” that were missing from the Mexican Official Standard for the labelling of pre-packaged food and non-alcoholic beverages.

Tick-Tock, Don’t Get Caught: Navigating TCPA’s Quiet Hours

In recent months, businesses across various industries have been hit with a wave of lawsuits targeting alleged violations of the Telephone Consumer Protection Act’s (“TCPA”) call time rules. Plaintiffs are increasingly claiming that text messages, often sent just minutes outside the allowable hours, violate the Federal Communication Commission’s (“FCC”) rules and entitle them to substantial compensation. These lawsuits are creating challenges for businesses that rely on telemarketing and short message service (“SMS”) programs, even when they have received prior consent from their customers.
Understanding the TCPA’s Statutory and Regulatory Framework
The TCPA, enacted in 1991, was designed to protect consumers from unwanted telemarketing calls. Over time, its reach has expanded to cover text messages, making businesses that engage in text message marketing campaigns subject to compliance. One key area of regulation is the TCPA’s call time rules, found in the Do-Not-Call (“DNC”) regulations issued by the FCC. These rules prohibit telephone solicitations to residential subscribers before 8:00 AM or after 9:00 PM local time at the called party’s location.
Under the TCPA, a “telephone solicitation” is defined as a call or message made for the purpose of encouraging the purchase or rental of, or investment in, property, goods, or services. Importantly, the statute and regulations carve out several exceptions, including for calls or messages made to individuals who have given prior express consent to be contacted.
The penalties for violating the TCPA can be severe. Violations can result in statutory damages ranging from $500 to $1,500 per call or message, depending on whether the violation was willful. These potential damages create significant exposure for businesses that rely on telemarketing or SMS outreach, particularly when multiple calls or messages are at issue; a campaign of just 10,000 non-compliant messages, for example, could in theory generate exposure of $5 million to $15 million.
Recent Wave of Lawsuits and Why the Claims Are Unmeritorious
Despite the FCC’s long-standing guidance and the clear statutory language regarding consent, plaintiffs have increasingly filed lawsuits alleging that text messages sent outside the 8:00 AM – 9:00 PM window violate the TCPA’s call time restrictions. Many of these lawsuits focus on minor deviations from the permissible time window, such as texts sent just minutes before 8:00 AM or shortly after 9:00 PM.
What makes these lawsuits particularly problematic is that in many cases, the plaintiffs had previously opted into the SMS programs and expressly consented to receive marketing messages. Under the plain language of the TCPA and FCC regulations, such consent removes the text message from the definition of a “telephone solicitation” and, by extension, exempts it from the call time restrictions. This means that businesses with valid consent should not be subject to these lawsuits.
However, plaintiffs are exploiting the uncertainty created by the lack of clear FCC guidance on whether the call time rules apply to text messages where consent has been provided. They argue that, regardless of consent, any text message sent outside the permissible hours violates the TCPA, leaving businesses vulnerable to litigation and potential class action exposure.
The FCC Petition for Declaratory Ruling
In response to this growing litigation trend, an industry group recently filed a petition with the FCC, seeking a declaratory ruling that the TCPA’s call time restrictions do not apply to text messages sent to individuals who have given prior express consent. The petition highlights the plain language of the statute and regulations, arguing that consent should exempt businesses from the call time rules and shield them from the growing number of predatory lawsuits.
The petition also requests clarification or waiver of the rule requiring knowledge of the recipient’s location for compliance, arguing that current standards are unworkable and lead to abusive litigation practices. The petitioners emphasize that the TCPA’s unique combination of strict liability, statutory damages, and a private right of action makes it ripe for lawsuit abuse, with opportunistic litigators targeting legitimate businesses.
While this petition represents a positive step towards clarifying the law, the FCC’s rulemaking process can be lengthy. In the meantime, businesses must continue to operate in a landscape where uncertainty about the applicability of the call time rules remains. It could be months, if not longer, before the FCC issues a ruling, and during this time, we expect plaintiffs’ attorneys to continue targeting businesses with TCPA lawsuits.
Recommendations for Reducing Risk
Until the FCC provides clear guidance on the issue, businesses should take proactive steps to mitigate the risk of being targeted by TCPA quiet hour lawsuits. Here are several recommendations to help ensure compliance and reduce exposure:

Observe Call Time Windows: Despite the legal uncertainties surrounding the applicability of the call time rules to text messages, businesses should err on the side of caution and adhere to the 8:00 AM – 9:00 PM window for sending marketing messages. This simple step can help reduce the likelihood of being sued.
Review and Update Consent Mechanisms: Businesses should review their SMS consent processes to ensure that they are obtaining clear and unambiguous consent from consumers. This includes updating terms and conditions to include disclosures about the potential timing of messages and ensuring that consumers understand the nature of the messages they will receive.
Implement Robust Compliance Procedures: Businesses should implement internal procedures to monitor the timing of their telemarketing and SMS campaigns. Consider using software that can automate the scheduling of messages; a minimal illustrative check appears after this list.
Document Consent Thoroughly: If a lawsuit arises, being able to produce clear documentation that demonstrates a consumer’s consent to receive text messages will be critical in defending against the claim. Businesses should maintain detailed records of when and how consent was obtained.
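
To illustrate the first recommendation, below is a minimal Python sketch, offered only as an example rather than a production implementation, of a pre-send check that releases a marketing text only when the current time at the recipient's location falls within the 8:00 AM to 9:00 PM window. The timezone value and the surrounding send/queue logic are assumptions for the example; as the FCC petition notes, reliably determining the called party's local time is itself one of the harder compliance problems.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Allowed send window under the TCPA call time rules: no telephone
# solicitations before 8:00 AM or after 9:00 PM local time at the
# called party's location.
ALLOWED_START = time(8, 0)
ALLOWED_END = time(21, 0)

def within_allowed_window(recipient_timezone: str) -> bool:
    """Return True if the recipient's local clock time currently falls
    inside the permissible 8:00 AM - 9:00 PM window."""
    local_time = datetime.now(ZoneInfo(recipient_timezone)).time()
    return ALLOWED_START <= local_time <= ALLOWED_END

# Hypothetical usage: gate each outbound message on this check before
# handing it to the SMS gateway; otherwise hold it until the window opens.
if within_allowed_window("America/New_York"):
    print("OK to send now")
else:
    print("Hold message until 8:00 AM recipient-local time")
```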

Conclusion
The recent surge in TCPA lawsuits alleging violations of the call time restrictions highlights the need for businesses to stay informed and proactive in their compliance efforts. While we believe that many of these lawsuits are unmeritorious, businesses should still remain cautious. By observing the 8:00 AM – 9:00 PM call time window, reviewing consent mechanisms, and implementing strong compliance procedures, businesses can reduce their risk of being targeted by predatory lawsuits.
We will continue to monitor litigation in the courts and the FCC’s response to the pending petition, and provide updates as new developments arise. In the meantime, please reach out if you have any questions or need assistance in reviewing your telemarketing and SMS programs to ensure compliance with the TCPA.

MAKING SMART TCPA MOVES: Rocket Mortgage Follows Up Its Redfin Purchase With STUNNING $9.4BB Take Over of Mr. Cooper

So multiple outlets are reporting that Rocket is set to absorb the nation’s largest mortgage servicer, Mr. Cooper.
With Rocket having just recently acquired Redfin it looks like the company is poised to be an absolute behemoth in the mortgage industry.
Just like with Redfin, however, the TCPA is likely driving this initiative.
Yes, mortgage servicing can be profitable in its own right but it is MASSIVELY valuable to an originator to have a large servicing pool.
Why?
Who is more likely to NEED a mortgage or refinance than folks who already have a mortgage product? And with trigger leads now widely available (probably illegal under the FCRA, but don’t tell the CRAs that), having a massive servicing book means you can LEGALLY call folks who just submitted an application elsewhere and convince them to stay.
This is because the DNC rules will soon allow Rocket to call all of the MILLIONS of Mr. Cooper customers it just acquired WITHOUT CONSENT.
Pretty slick, eh?
So with Redfin providing consent on the front end and with access to a massive pool of mortgage customers now bolted on to the backend Rocket can make ready use of the phones to bring customers into its ecosystem–and keep them there.
Pretty clever. And it was all brought to you by the TCPA.
People think of the statute as a profit killer. But leveraged correctly it can actually drive profits by building a moat around your customers and a barrier-to-entry for others in your vertical.
Smart money uses the law as a competitive advantage. Nicely done Rocket.

Virginia Governor Recommends Amendments to Strengthen Children’s Social Media Bill

On March 24, 2025, Virginia Governor Glenn Youngkin asked the Virginia state legislature to strengthen the protections provided in a bill (S.B. 854) passed by the legislature earlier this month that imposes significant restrictions on minors’ social media use.
The bill would amend the Virginia Consumer Data Protection Act (“VCDPA”) to require social media platform operators to (1) use commercially reasonable methods (such as a neutral age screen) to determine whether a user is a minor under the age of 16; and (2) limit a minor’s use of the social media platform to one hour per day, unless a parent consents to increase the limit. The bill would prohibit social media platform operators from altering the quality or price of any social media service due to the law’s time use restrictions.
The Governor declined to sign the bill and recommended that the legislature make the following amendments to enhance the protections in the bill: (1) raise the covered user age from 16 to 18; and (2) require social media platform operators to, in addition to the time use limitations, also disable (a) infinite scroll features (other than music or video the user has prompted to play) and (b) auto-playing videos (i.e., where videos automatically begin playing when a user navigates to or scrolls through a social media platform), absent verifiable parental consent.