FTC Alleges Fintech Cleo AI Deceived Consumers
On March 27, 2025, the Federal Trade Commission (FTC) filed a lawsuit and proposed settlement order resolving claims against Cleo AI, a fintech that operates a personal finance mobile banking application through which it offers consumers instant or same-day cash advances. The FTC alleges that Cleo deceived consumers about how much money they could get and how fast that money could be available, and that Cleo made it difficult for consumers to cancel its subscription service.
Based on those allegations, the FTC claims Cleo (1) violated Section 5 of the Federal Trade Commission Act (FTC Act) by misrepresenting that consumers would receive—or would be likely to receive—a specific cash advance amount “today” or “instantly” and (2) violated the Restore Online Shoppers’ Confidence Act (ROSCA) by failing to clearly and conspicuously disclose all material transaction terms before obtaining consumers’ billing information and by failing to provide simple mechanisms to stop recurring charges.
“Cleo misled consumers with promises of fast money, but consumers found they received much less than the advertised hundreds of dollars promised, had to pay more for same day delivery, and then had difficulty canceling,” said Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection.
The FTC cites consumer complaints in support of its action against Cleo, including one stating: “There’s no other way for me to say it. I need my money right now to pay my rent. I have no other option I can’t wait 3 days. I can’t wait 1 day I need it now. I would never have used Cleo if I would have thought I would ever be in this situation.”
The FTC’s Allegations
In its complaint, filed in the U.S. District Court for the Southern District of New York, the FTC alleges that Cleo violated Section 5 of the FTC Act by:
“Up To” Claims. Advertising that its customers would receive “up to $250 in cash advances,” when the consumer is informed of the cash advance amount they can actually receive only after subscribing to a plan and after Cleo sets the payment date for the subscription. For “almost all consumers, that amount is much lower than the amount promised in Cleo’s ads.”
Undisclosed Fees. Advertising that its customers would obtain cash advances “today” or “instantly,” when Cleo actually charges an “express fee”—sometimes disclosed in a footnote—of $3.99 to get the cash same-day, and, even then, the cash may not arrive until the next day.
In addition, the FTC’s complaint alleges that Cleo violated Section 4 of ROSCA by:
Inadequate Disclosures. Failing to clearly and conspicuously disclose all material terms before obtaining customers’ billing information.
Inadequate Cancellation Mechanisms. Failing to permit consumers with an outstanding cash advance to cancel their subscriptions through the app.
Proposed Consent Agreement
The FTC’s proposed consent order would be in effect for 10 years and require that Cleo pay $17 million to provide refunds to consumers harmed by the company’s practices. The consent order would restrict Cleo from misleading consumers about material terms of its advances and require that it obtain consumers’ express, informed consent before imposing charges. More specifically, the consent order:
Prohibits Cleo from misrepresenting the amount of funds available to a consumer, when funds will be available, any applicable fees (including the nature, purpose, amount, or use of a fee), consumers’ ability to cancel charges, or the terms of any negative option feature.
Requires Cleo to clearly and conspicuously disclose, prior to obtaining the consumer’s billing information, all material terms, including any charges after a trial period ends, when a consumer must act to prevent charges, the amount the consumer will be charged unless steps are taken to prevent the charge, and information for consumers to find the simple cancellation mechanism.
Requires Cleo to provide a simple mechanism for a consumer to cancel the negative option feature, avoid being charged, and immediately stop recurring charges. The cancellation method must be through the same medium the consumer used to consent to the negative option feature.
The Commission voted 2-0 to issue the Cleo complaint and accept the proposed consent agreement.
Takeaways
The FTC has increased enforcement activities for negative options, such as last year’s enforcement action against Dave, Inc., another cash advance fintech company, which we wrote about previously. This attention on negative options, and consumers’ ability to easily cancel negative options, may provide insight into the FTC’s regulatory agenda, given that the remainder of its Click-to-Cancel Rule takes effect on May 14, 2025.
The FTC recently filed a brief in defense of its Click-to-Cancel Rule, vigorously defending the FTC’s rulemaking against trade association challenges consolidated in the Eighth Circuit. The FTC’s brief puts an end to speculation that the Commission may rethink or roll back the rule given the recent administration change and shifts in FTC leadership.
Businesses should be preparing to adopt changes to implement the Click-to-Cancel Rule, to the extent not already in process. The FTC’s complaint against Cleo should also serve as a reminder that businesses that employ “up to” claims, complex fee structures, or negative option offers should be careful to monitor their conduct in light of developments within the FTC and the other federal and state agencies that police advertising and marketing practices.
CFTC Withdraws Pair of Advisories on Heightened Review Approach to Digital Asset Derivatives
On March 28, the staff of the Commodity Futures Trading Commission (CFTC) issued two press releases announcing the withdrawal of two previous advisories that reflected the agency’s heightened review approach to digital asset derivatives.
These announcements appear to mark the end of the CFTC’s heightened review of digital asset products. The CFTC rules certainly still apply, but this seems to be a deliberate move by the CFTC to start treating digital asset derivatives like other CFTC-regulated products. It also gives a glimpse of how the CFTC would regulate digital asset spot transactions if Congress gives it the authority to do so.
The first advisory the CFTC withdrew was Staff Advisory No. 18-14, Advisory with Respect to Virtual Currency Derivative Product Listings, issued on May 21, 2018. The withdrawal is effective immediately. That advisory set out certain enhancements that CFTC-regulated entities were asked to follow when listing digital asset derivatives, including enhanced market surveillance, closer coordination with the CFTC, reporting obligations, risk management, and outreach to members and market participants. The advisory was withdrawn in its entirety, with CFTC staff citing its increased experience with digital asset derivatives and the increased growth and maturity of the digital asset market.
The second advisory the CFTC staff withdrew was Staff Advisory No. 23-07, Review of Risks Associated with Expansion of DCO Clearing of Digital Assets, issued on May 30, 2023. It stated that CFTC staff would focus on the heightened risks digital asset derivatives pose to system safeguards, physical settlement procedures, and conflicts of interest.
EU: New European Consumer Protection Guidelines for Virtual Currencies in Video Games
On March 21, 2025, ahead of a consultation and call for evidence on the EU’s Digital Fairness Act, the Consumer Protection Cooperation (CPC) Network[1] highlighted the pressing need for improved consumer protection in the European Union, particularly regarding virtual currencies in video games. This move comes in response to growing concerns about the impact of gaming practices on consumers, including vulnerable groups such as children. The CPC Network has defined a series of key principles and recommendations aimed at ensuring a fairer and more transparent gaming environment. These recommendations are not binding and are without prejudice to applicable European consumer protection laws[2], but they will likely guide and inform enforcement by consumer protection agencies at the national level across the EU.
What Are the Key Recommendations for In-Game Virtual Currency?
The CPC Network’s recommendations are designed to enhance transparency, prevent unfair practices, and protect consumers’ financial well-being. These principles are not exhaustive but cover several crucial areas:
Clear and Transparent Price Indication: The price of in-game content or services must be shown in both in-game currency and real-world money, ensuring players can make informed decisions about their purchases. (Articles 6(1)(d) and 7 of the UCPD; Article 6(1)(e) of the CRD)
Avoiding Practices That Obscure Pricing: Game developers should not engage in tactics that obscure the true cost of digital content. This includes practices like mixing different in-game currencies or requiring multiple exchanges to make purchases. The goal is to avoid confusing or misleading players. (Articles 6(1)(d) and 7 of the UCPD; Article 6(1)(e) of the CRD)
No Forced Purchases: Developers should not design games that force consumers to spend more money on in-game currencies than necessary. Players should be able to choose the exact amount of currency they wish to purchase. (Articles 5, 8, and 9 of the UCPD)
Clear Pre-Contractual Information: Prior to purchasing virtual currencies, consumers must be given clear, easy-to-understand information about what they are buying. This is particularly important for ensuring informed choices. (Article 6 of the CRD)
Respecting the Right of Withdrawal: Players must be informed about their right to withdraw from a purchase within 14 days, particularly for unused in-game currency. This is crucial for ensuring consumers’ ability to cancel transactions if they change their mind. (Articles 9 to 16 of the CRD)
Fair and Transparent Contractual Terms: The terms and conditions for purchasing in-game virtual currencies should be written clearly, using plain language to ensure consumers fully understand their rights and obligations. (Article 3(1) and (3) of the UCTD)
Respect for Consumer Vulnerabilities: Game developers must consider the vulnerabilities of players, particularly minors, and ensure that game design does not exploit these weaknesses. This includes providing parental controls to prevent unauthorized purchases and ensuring that any communication with minors is carefully scrutinized. (Articles 5-8 and Point 28 of Annex I of the UCPD)
These principles reflect European regulators’ growing concern about the exploitation of consumers, particularly vulnerable groups such as children, in the gaming world. The European Consumer Organisation (BEUC) has strongly supported these measures, which aim to provide a safer, more transparent gaming experience for players.
Enforcement Actions and Legal Proceedings
On the same day, coordinated by the European Commission, the CPC Network initiated legal proceedings against the developer of an online game. This action, driven by a complaint from the Swedish Consumers’ Association, addresses concerns about the company’s marketing practices, particularly those targeting children. Allegations include misleading advertisements urging children to purchase in-game currency, aggressive sales tactics such as time-limited offers, and a failure to provide clear pricing information.
A Safer Gaming Future
This enforcement action, along with the introduction of new principles, is part of the European Commission’s stated objective to ensure better consumer protection within the gaming industry. The Commission aims to emphasize the importance of transparency, fairness, and the protection of minors within gaming platforms.
What Should Video Game Companies and Gambling Operators Do Next?
In light of these new developments, video game companies and gambling operators, especially those offering virtual currencies, are well advised to review their practices to ensure ongoing compliance with existing EU consumer protection laws.
Failure to align with the above principles does not automatically mean that consumer laws are infringed, but, as the recent enforcement action shows, it could result in investigations and enforcement actions under the CPC Regulation or national laws. If gaming content is available across multiple EU countries, a coordinated investigation may be triggered, with the possibility of fines of up to 4% of a company’s annual turnover.
To further support the industry, the European Commission is organising a workshop to allow gaming companies to present their strategies for aligning with the new consumer protection standards. This will provide a valuable opportunity for companies to share their plans and address any concerns related to these proposed changes. If you would like to know more, please get in touch.
FOOTNOTES
[1] The CPC Network is formed by national authorities responsible for enforcing EU consumer protection legislation under the coordination of the European Commission.
[2] Reference is made to Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 on unfair commercial practices (UCPD); Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights (CRD); and Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts (UCTD).
China Regulator Proposes Amendments to Cybersecurity Law
On March 28, 2025, the Cyberspace Administration of China issued draft amendments to China’s Cybersecurity Law (“Draft Amendment”) for public comment until April 27, 2025. The Draft Amendment aims to harmonize the Cybersecurity Law with relevant provisions of the Personal Information Protection Law (“PIPL”), the Data Security Law (“DSL”) and the Law of Administrative Penalties, all of which were issued in 2021, after the Cybersecurity Law came into effect in 2017.
The Draft Amendment amends the liability provisions of the Cybersecurity Law as follows:
Legal liability for network operation security: (1) classifies massive data leakage incidents, loss of partial functions of critical information infrastructure (“CII”), and other serious consequences that jeopardize network security as violations of the Cybersecurity Law and increases the range of fines for such violations in line with the DSL; (2) imposes liability for the sale or provision of critical network equipment and specialized cybersecurity products that do not meet the Cybersecurity Law’s requirements for security certification and security testing; and (3) clarifies penalties for CII operators that use network products or services that have either not undergone or not passed security review.
Legal liability for security of network information: (1) increases the penalty range for failure to report to the competent authorities, or failure to securely dispose of, information that is prohibited by applicable law to be published or transmitted; and (2) clarifies penalties for violations of the Cybersecurity Law that have particularly serious impacts and consequences.
Legal liability for security of personal information and important data: Amends the Cybersecurity Law to incorporate the PIPL’s and DSL’s penalty structure for violations of the law involving the security of personal information and other important data.
Mitigation of penalties: Adds provisions to mitigate, alleviate or withhold penalties for violations of the Cybersecurity Law where: (1) the network operator eliminates or mitigates the harmful consequences of the violation; (2) the violation is minor, timely corrected and does not result in harmful consequences; or (3) it is a first time violation that is timely corrected and results in minor harmful consequences. The Draft Amendment also clarifies that the competent authorities are responsible for formulating the corresponding benchmarks for administrative penalties.
NEW HAMPSHIRE DEEPFAKE SCANDAL TCPA LAWSUIT: Court Refuses To Dismiss Claims Against Platforms That Allegedly Aided In Sending The AI/Deepfake Calls Impersonating President Biden
Hi TCPAWorld! Remember last year when that political consultant from Texas hired the New Orleans magician to sound like Joe Biden and make AI-generated calls to New Hampshire voters in an attempt to convince them not to vote?
Well, that saga continues!
So for some background here, Steve Kramer, a political consultant, used AI technology to create a deepfake recording of President Joe Biden’s voice. Days before the New Hampshire primary, nearly 10,000 voters received a call in which the AI voice falsely suggested that voting in the primary would harm Democratic efforts in the general election. To further the deception, Kramer spoofed the caller ID to display the phone number of Kathleen Sullivan, a well-known Democratic leader. Voice Broadcasting Corporation and Life Corporation enabled the call campaign, providing the technology and infrastructure necessary to deliver the calls.
On March 14, 2024, Steve Kramer, Voice Broadcasting Corporation, and Life Corporation were sued in the US District Court for the District of New Hampshire for violations of the TCPA (as well as violations of the Voting Rights Act of 1965 and New Hampshire statutes regulating political advertising) by the League of Women Voters of the United States, the League of Women Voters of New Hampshire, and three individuals who received those calls. League of Women Voters of New Hampshire et al v. Steve Kramer et al, No. 24-CV-73-SM-TSM.
Voice Broadcasting Corporation and Life Corporation filed a motion to dismiss, arguing 1) they did not “initiate” the at-issue calls and 2) these calls did not violate the TCPA because they were “‘political campaign-related calls,’ which are permitted when made to landlines, even without the recipient’s prior consent.”
The court denied their motion on 3/26/25 finding that the plaintiffs adequately alleged a plausible claim for relief under the TCPA. League of Women Voters of New Hampshire et al v. Steve Kramer et al, No. 24-CV-73-SM-TSM, 2025 WL 919897 (D.N.H. Mar. 26, 2025).
The TCPA makes it unlawful “to initiate any telephone call to any residential telephone line using an artificial or prerecorded voice to deliver a message without the prior express consent of the called party.” While the TCPA does not specifically define what it means to “initiate” a call, the FCC has established clear guidance. According to In the Matter of the Joint Petition filed by Dish Network, Federal Communications Commission Declaratory Ruling, 2013 WL 1934349 at para. 26 (May 9, 2013), a party “initiates” a call if it takes the steps necessary to physically place it or is so involved in the process that it should be deemed responsible.
In this case, the court assumed, without deciding, that neither Voice Broadcasting nor Life Corporation physically placed the calls. But that didn’t absolve them. The court turned to the totality of the circumstances to determine whether the companies were sufficiently involved to bear liability.
The allegations were that Voice Broadcasting didn’t merely act as a passive service provider. Instead, it actively collaborated with Kramer to refine the message and even suggested adding a false opt-out mechanism that directed recipients to call Kathleen Sullivan’s personal phone number. Life Corporation, in turn, allegedly facilitated the delivery of thousands of the calls using its telecommunications infrastructure. The court found that these facts were more than enough to justify holding the companies accountable under the TCPA.
Quoting the FCC’s guidance, the court explained that companies providing calling platforms cannot simply “blame their customers” for illegal conduct. Liability attaches to those who “knowingly allow” their systems to be used for unlawful purposes. Voice Broadcasting and Life Corporation had the means to prevent the deepfake calls— but they didn’t. As the court explained, “Even if one were to assume that neither Voice Broadcasting nor Life Corp. actually ‘initiated’ the Deepfake Robocalls, they might still be liable for TCPA violations, depending upon their knowledge of, and involvement in, the scheme to make those illegal calls.”
As for the defendants’ second argument, that the calls were political and therefore exempt from the TCPA’s consent requirements, the court acknowledged that political campaign calls to landlines using regulated technology, such as the AI voice technology used in the alleged calls, are generally permissible even without prior express consent. However, this exemption is not a free pass. The calls must comply with other key provisions of the TCPA, including the requirement to provide a functional opt-out mechanism.
Here, instead of providing a legitimate way for recipients to opt-out, the alleged calls instructed the recipients to call Kathleen Sullivan’s personal phone number. This sham opt-out mechanism not only failed to meet TCPA standards but also contributed to the deception. The court had no trouble rejecting the claim that this constituted compliance: “Little more need be said other than to note that such an opt-out mechanism plainly fails to comply with the governing regulations and is not, as defendants suggest, ‘adequate.’”
And in case you are all wondering about Mr. Kramer himself, a default was entered against Kramer on 8/29/24.
The entire story behind these calls has been something to watch. This is definitely a case to keep an eye on!
Georgia Regulates Third Party Litigation Financing in Senate Bill 69
On February 27, 2025, by a vote of 52 to 0, the Georgia Senate passed Senate Bill 69, titled “Georgia Courts Access and Consumer Protection Act.”
If signed into law, the bill would regulate third-party litigation financing (“TPLF”) practices in Georgia where an individual or entity provides financing to a party to a lawsuit in exchange for a right to receive payment contingent on the lawsuit’s outcome. This bill represents another effort by states to restrain the influence of third-party litigation financiers and increase transparency in litigations.
Senate Bill 69 sets forth several key requirements. First, a person or entity engaging in litigation funding in Georgia must register as a litigation financier with the Department of Banking and Finance and provide specified information, including any affiliation with foreign persons or principals. Such filings are public records subject to disclosure.
Second, the bill restricts the influence of a litigation financier in actions or proceedings where the financier provided funding. For example, a litigation financier cannot direct or make decisions regarding legal representation, expert witnesses, litigation strategy, or settlement, which are reserved only for the parties and their counsel. A litigation financier also cannot pay commissions or referral fees in exchange for a referral of a consumer to the financier, or otherwise accept payment for providing goods or services to a consumer.
Third, the bill renders discoverable the existence, terms, and conditions of a litigation financing agreement in the underlying lawsuit. Although mere disclosure of information about a litigation financing agreement does not make such information automatically admissible as evidence at trial, it opens the door to that possibility.
Fourth, the bill delineates specific requirements for the form of a litigation financing agreement and mandates certain disclosures about the consumer’s rights and the financier’s obligations. A financier’s violation of the bill’s provisions voids and renders unenforceable the litigation financing agreement. Willful violations of the bill’s provisions may even lead to a felony conviction, imprisonment, and a fine of up to $10,000.
Fifth, the bill holds a litigation financier “jointly and severally liable for any award or order imposing or assessing costs or monetary sanctions against a consumer arising from or relating to” an action or proceeding funded by the financier.
According to a Senate press release, Senate President Pro Tempore John F. Kennedy, who sponsored the legislation, lauded the Senate’s passage of the bill as enhancing transparency and protection for consumers. He commented that “[Georgia’s] civil justice system should not be treated as a lottery where litigation financiers can bet on the outcome of a case to get a piece of a plaintiff’s award” and that “SB 69 establishes critical safeguards for an industry that continues to expand each year.” He further stressed the need to “level the playing field and ensure that [Georgia’s] legal system serves the people—not powerful financial interests.” Since passing the Senate, the bill has also proceeded through the House First and Second Readers.
Georgia’s proposed legislation is largely in line with recent proposed or enacted TPLF legislation in other states. In October 2024, the New Jersey Senate Commerce Committee advanced Senate Bill 1475, which similarly requires registration by a consumer legal funding company, restricts the actions and influence of a consumer legal funding company, and mandates certain disclosures in a consumer legal funding contract, among other things. Indiana and Louisiana also enacted TPLF legislation, codified respectively at Ind. Code §§ 24-12-11-1 to -5 (2024) and La. Stat. Ann. §§ 9:3580.1 to .7 and 9:3580.11 to .13 (2024). West Virginia expanded its TPLF laws by enacting legislation codified at W. Va. Code §§ 46A-6N-1, -4, -6, -7, and -9 (2024). Unlike these other laws, however, Georgia’s proposed legislation explicitly provides for the possibility of felony consequences for willful violations of its provisions.
TPLF has also drawn attention at the federal level. In October 2024, the federal judiciary’s Advisory Committee on Civil Rules reportedly proposed creating a subcommittee to examine TPLF. H.R. 9922, the Litigation Transparency Act of 2024, introduced in the United States House of Representatives that same month, would require disclosure of TPLF in civil actions.
But while some argue that TPLF regulation would bring greater transparency and reduce frivolous litigation, others protest that such regulation would harm litigants with fewer resources. Either way, litigants would be well served to monitor important developments regarding TPLF at both the state and federal levels.
ESG Update: Corporate Directors May Be Obligated to Assess Political Risk
Right now, much about the world is uncertain. Risks posed by political changes dominate the headlines and also weigh heavily on many decisions made by corporations, their advisors, and their stakeholders.
Businesses, of course, want to succeed even in chaotic environments. Success requires appropriate planning, and planning can help lead to predictability. Good corporate governance — making sure directors have appropriate information to timely assess compliance with legal obligations and fulfill duties they owe to the business, its employees, and stakeholders — can help mitigate downside impacts to businesses.
Delaware law obligates corporate directors to, among other things, take steps sufficient to assess corporate legal compliance. What has come to be known as “Caremark liability” attaches when directors fail to adequately oversee the company’s operations and compliance with the law. Below, we explain what Caremark liability is and how it applies in a politically uncertain environment, and we outline six steps companies can take to appropriately manage risk.
Caremark Liability Defined
Caremark liability takes its name from the 1996 decision In re Caremark International Inc. Derivative Litigation, which established that directors of a Delaware corporation have a duty to ensure that appropriate information and reporting systems are in place within the corporation.
Caremark stems from an action where shareholders of Caremark International alleged that they were injured when Caremark employees violated various federal and state laws applicable to health care providers, resulting in a federal mail fraud charge against the company. In a subsequent plea agreement, Caremark agreed to reimburse various parties approximately $250 million. Caremark shareholders filed a derivative action against the company’s directors alleging that the directors breached their duty of care to shareholders by failing to actively monitor corporate performance.
Key points of Caremark liability under Delaware law include:
Duty of Oversight: Directors must make a good faith effort to oversee the company’s operations and ensure compliance with applicable laws and regulations.
Establishing Systems: Directors are expected to implement and monitor systems that provide timely and accurate information about the corporation’s compliance with legal obligations.
Breach of Duty: To establish a breach of Caremark duties, plaintiffs must show that directors either utterly failed to implement any reporting or information system or controls, or, having implemented such a system, consciously failed to monitor or oversee its operations.
High Threshold for Liability: Proving a breach of Caremark duties requires evidence of bad faith or a conscious disregard by directors of their duties.
Good Faith Effort: Directors are generally protected if they can demonstrate that they made a good faith effort to fulfill their oversight responsibilities, even if the systems in place were not perfect.
Caremark liability emphasizes the importance of proactive and diligent oversight by directors to prevent corporate misconduct and to demonstrate that directors are acting in good faith. Cases following Caremark emphasize that liability only attaches when directors disregard their obligations to companies, not when their business decisions result in “unexceptional financial struggles.”
Caremark claims remain difficult to plead, but they are viable and may therefore lead to significant defense costs.
Is Caremark “ESG litigation”?
Yes. Since the November 2024 election, environmental, social, and governance (ESG) activities have been a constant topic of discussion, with headlines dominated by debate over whether corporations should walk back prior commitments. But Caremark claims are distinct from the claims frequently lumped together as “ESG litigation,” which typically involve either “greenwashing”-style product marketing claims (for examples, see here and here) or claims that investment managers, by factoring in ESG investment criteria, deprived investors of appropriate returns (two recent decisions are here and here). Caremark focuses on the “G” in ESG; it speaks directly to corporate governance and directors’ duties to monitor and oversee in good faith a corporation’s compliance with laws.
While the nomenclature of corporate governance may be shifting away from “ESG,” corporate officers remain obligated to oversee corporate operations and ensure compliance with the law. Caremark claims can be used to assess their efforts.
Corporate Governance and Political Risk
Political uncertainty in the United States is affecting regulated entities ranging from Fortune 100 corporations to law firms and from mom-and-pop importers to universities. Recent US Supreme Court decisions, including Trump v. United States and Loper Bright Enterprises v. Raimondo, have fundamentally reshaped relations both between the branches of government and between the government and the regulated community.
Over time, members of the regulated community have increasingly faced pressure not just to comply with the law but also to take positions on political issues outside their immediate economic environment. While corporations may have systems in place to monitor risk incident to product liability or supply chain issues, they may not be monitoring risks related to the whipsawing of political positions on issues such as diversity, equity, and inclusion (DEI), the challenges posed by a dramatically slimmed (and thus less responsive) bureaucracy, or rescissions of expected government funding.
These political issues can generate corporate risk. Good corporate governance practices can help cabin new corporate risks, thereby minimizing the potential for financial impacts on the corporation. Practices which could be evaluated include:
Ensure appropriate data-gathering and compilation. Political policies do not arise in a vacuum. Internal and external policy advisors, trade associations, and business contacts can help track potential political risks.
Review and assess policy positions and evaluate whether they continue to be appropriate on a regular basis. At the federal level, we have seen DEI-related activities move from being universally lauded to potential reasons for imposition of federal civil or criminal liability. Executive Order 14173, issued on January 21, directed the US Attorney General to develop an enforcement plan to target private sector DEI programs believed to be unlawful. Actions like designating corporate personnel tasked with understanding points of emphasis in government enforcement and mapping them across a corporate footprint may be appropriate.
Evaluate what corporate efforts are appropriate to use in marketing efforts in the current political environment. Recent years have seen sustainability reports become key tools to influence stakeholders ranging from consumers to employees. Businesses which previously leaned into social issues or community involvement in the ESG-era may want to deemphasize aspirational goals and/or provide additional data on their factual conclusions, practices, and achievements.
Review and assess places where rollbacks in federal, state, or local government spending could impact the viability of business operations. Investments reliant on federal grants or subsidies need to be reviewed.
Review corporate compliance programs in light of federal priorities. The US Department of Justice has listed initial federal compliance priorities including terrorism financing, money laundering, and international restraints on trade. As above, taking a systematic approach to understanding and evaluating points where corporate activities could be impacted by enforcement priorities may be appropriate.
Finally, the regulated community should conduct a thorough census of regulations or statutory laws that have the potential to negatively impact corporate operations. They should assess whether any impediments can be addressed through a forward-looking government relations strategy, especially given current efforts to streamline regulations and government operations, particularly related to environmental and energy issues. (For more, see here and here.)
When directors fail to consider and weigh political factors and shifts in governmental initiatives and program enforcement such as those listed above, stakeholders may ask why the board made no effort to make sure it was informed about an issue so intrinsically critical to the company’s business operation.
Legal AI Unfiltered: Legal Tech Execs Speak on Privacy and Security
With increasing generative AI adoption across the legal profession, prioritizing robust security and privacy measures is critical. Before using any generative AI tool, lawyers must fully understand the underlying technology, beginning with thorough due diligence of legal tech vendors.
In July 2024, the American Bar Association issued Formal Opinion 512, which provides some guidance on the proper review and use of generative AI in legal practice. The opinion underscores some of the ABA Model Rules of Professional Conduct that are implicated by lawyers’ use of generative AI tools, including the duties to deliver competent representation, keep client information confidential, communicate generative AI use to clients, properly supervise subordinates in their use of generative AI, and charge only reasonable fees.
Even before deploying generative AI tools, however, lawyers must understand a vendor’s practices. This includes verifying vendor credentials and fully reviewing policies related to data storage and confidentiality.
According to Formal Opinion 512, “all lawyers should read and understand the Terms of Use, privacy policy, and related contractual terms and policies of any GAI tool they use to learn who has access to the information that the lawyer inputs into the tool or consult with a colleague or external expert who has read and analyzed those terms and policies.” Lawyers may also need to consult IT and cybersecurity professionals to understand terminology and assess any potential risks.
In practice, this means carefully reviewing vendor contract terms related to a vendor’s limitation of liability, understanding if a vendor’s tool “trains” on your client’s data, assessing data retention policies (before, during, and after using the tool), and identifying appropriate notification requirements in the event of a data breach.
To further explore these ethical guidelines in practice, we spoke with legal technology executives about the security and privacy measures they implement, as well as best practices for lawyers when evaluating and vetting legal tech vendors.
What security measures do you take to protect client data?
Troy Doucet, Founder @ AI.Law
Enterprise-expected security measures, including SOC 2, HIPAA, and robust encryption of data at rest and in transit. We also follow ABA guidance on AI, including confidentiality, not training our models on our users’ data, and making it clear that we do not own the data users input.
Jordan Domash, Founder & CEO @ Responsiv
The foundation must be traditional security and privacy controls that have always been important in enterprise software. On top of that, we’ve built a de-identification process to strip out PII and corporate identifiable content before processing by an LLM. We also have a commitment to not have access to or train on client questions and content.
Michael Grupp, CEO & Co-founder @ BRYTER
We have an entire team focused on security and compliance, so the answer is, of course, all of them: SOC 2 Type II, ISO 27001, GDPR, CCPA, EU AI Act, etc. And BRYTER does not use client data to develop, train, or fine-tune the AI models we use.
Gil Banyas, Co-Founder & COO @ Chamelio
Chamelio safeguards client data through industry-standard encryption, SOC 2 Type II certified security controls, and strict access management with multi-factor authentication. We maintain zero data retention arrangements with third-party LLMs and employ continuous security monitoring with ML-based anomaly detection. Our comprehensive security framework ensures data remains protected throughout its entire lifecycle.
Khalil Zlaoui, Founder & CEO @ CaseBlink
Client data is encrypted in transit and at rest, and is not used to train AI models. We enforce a strict zero data retention policy – no user data is stored after processing. A SOC 2 audit is nearing completion to certify that our security and data handling practices meet industry standards, and customers can request permanent deletion of their data at any time.
Dorna Moini, CEO & Founder @ Gavel
Gavel was built for legal documents, so our security standards exceed those typical of software platforms. We use end-to-end encryption, private AI environments, and enterprise-grade access controls—backed by SOC II databases and third-party audits. Client data is never used for training, and our retention policies give firms full control, ensuring compliance and peace of mind.
Ted Theodoropoulos, CEO @ Infodash
Infodash is built on Microsoft 365 and Azure and deployed directly into each customer’s own tenant, which means we host no client data whatsoever. This unique architecture ensures that law firms always maintain full control over their data. Microsoft’s enterprise-grade security includes encryption at rest and in transit, identity management via Azure Active Directory, and compliance with certifications like ISO/IEC 27001 and SOC 2.
Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Wisedocs uses services that implement strict access controls, including role-based access control (RBAC), multi-factor authentication (MFA), and regular security audits to prevent unauthorized access to your data. Our organization employs configurable data retention policies as agreed upon with our clients. Wisedocs has achieved SOC 2 Type 2 attestation and has established an information security and privacy program in accordance with SOC 2, HIPAA, PIPEDA, and PHIPA, with annual risk assessments and continual vulnerability scans.
Daniel Lewis, CEO @ LegalOn
Security and privacy are top priorities for us. We are SOC 2 Type II, GDPR, and CCPA compliant, follow industry-standard encryption protocols, and use state-of-the-art infrastructure and practices to ensure customer data is secure and private.
Gila Hayat, CTO & Co-Founder @ Darrow
Darrow works mostly in the open web realm, utilizing as much publicly available data as possible and surfacing potential matters from the open web. Our clients’ confidentiality and privacy are a must, so we adhere to security standards and regulations and collect as little data as possible to maintain trust. We take client confidentiality and privacy very seriously.
Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
We exclusively use reputable, secure providers and AI models that never store or log data, with no human review or monitoring permitted. All vendors are contractually bound to ensure data is never retained or used for training in any form. This, in combination with ISMS certifications and adherence to industry standards, ensures robust data security and privacy.
Gary Sangha, CEO @ LexCheck Inc.
We are SOC 2 compliant and follow rigorous cybersecurity standards to ensure client data is protected. Our AI tools do not retain any personally identifiable information (PII), and all data processing is handled securely within Microsoft Word, leveraging Azure’s built-in data protection. This ensures client data remains encrypted, confidential, and under the highest level of enterprise-grade security.
Tom Martin, CEO & Founder @ Lawdroid
As a lawyer myself, I understand the fiduciary responsibility we have to handle our client data responsibly. At LawDroid, we use bank-grade data encryption, do not train on your data, and provide you with fine-grained control over how long your data is retained. We also just implemented browser-side masking of personally identifiable information to prevent it from ever being seen.
Lawyers are very concerned about data privacy. What would you tell a lawyer who doesn’t use legal-specific AI tools due to privacy concerns?
Troy Doucet, Founder @ AI.Law
You have control over what you input into AI, so do not input data that you do not feel comfortable inputting. AI products vary in their functionality too, meaning different levels of concern. For example, asking AI about the difference between issue and claim preclusion is a low-risk event, versus mentioning where Jonny buried mom and dad in the woods.
Jordan Domash, Founder & CEO @ Responsiv
You’re right to be skeptical and critically consider a vendor before giving them confidential or privileged information! The risk is vendor-specific – not with the category. The right vendor designs the platform with robust data privacy measures in mind.
Michael Grupp, CEO & Co-founder @ BRYTER
We have been working with the biggest law firms and corporates for years, and we know that trust is earned, not given. This means that first, we try to be over-compliant – so this means agreements with providers to protect attorney-client privilege. Second, we make compliance transparent. Third, we provide references to those who are already advanced in the journey.
Gil Banyas, Co-Founder & COO @ Chamelio
Adopting new technology inevitably involves some privacy trade-offs compared to staying offline, but this calculated risk enables lawyers to leverage significant competitive advantages that AI offers to legal practice. Finding the right risk-reward balance means embracing innovation responsibly by selecting vendors who prioritize security, maintain zero data retention policies, and understand legal confidentiality requirements. Success comes from implementing AI tools strategically with appropriate safeguards rather than avoiding valuable technology that competitors are already using to enhance client service.
Khalil Zlaoui, Founder & CEO @ CaseBlink
Not all AI tools treat data the same, and legal-specific platforms like ours are built with strict safeguards and guardrails. Data is never used to train models, and everything is encrypted, access-controlled, and siloed. Only clients can access their own data. They retain full ownership and control at all times, with the ability to keep information private even across internal teams.
Dorna Moini, CEO & Founder @ Gavel
With consumer AI tools, your data may be stored, analyzed, or even used to train models—often without clear safeguards. Professional-grade and legal-specific tools like Gavel are built with attorney-client confidentiality at the core: no data sharing, no training on your client data inputs, and full control over retention. Avoiding AI entirely isn’t safer—it’s just riskier with the wrong tools (and that’s not specific to AI!).
Ted Theodoropoulos, CEO @ Infodash
Legal-specific platforms like Infodash are purpose-built with confidentiality at the core, unlike general-purpose consumer AI tools. These solutions are built with the privacy requirements of legal teams in mind. With new competitors like KPMG entering the market, delaying AI adoption poses a real competitive risk for firms.
Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Legal-specific AI tools are designed to be both secure and transparent, helping legal professionals understand and trust how AI processes their data while maintaining strict privacy controls. With human-in-the-loop (HITL) oversight, AI becomes a tool for efficiency rather than a risk, ensuring that outputs are accurate and reliable. By adopting AI solutions that follow strict security protocols such as SOC 2 Type 2, HIPAA, PIPEDA, and PHIPA compliance standards, legal teams can confidently leverage technology while maintaining control over their data through role-based access control (RBAC), multi-factor authentication (MFA), and configurable data retention policies.
Daniel Lewis, CEO @ LegalOn
Ask questions about how your data may be used — will it touch generative AI (where, without the right protections, your content could display to others), or non-generative AI? If it’s being processed by LLMs like OpenAI, understand whether your data is being used to train those models and if it’s being used in non-generative AI use cases, understand how. The use of your data might make the product you use better, so consider the risk/benefit trade-offs.
Gila Hayat, CTO & Co-Founder @ Darrow
Pro tip for privacy preservation and worry-free experimentation with various AI tools: have a non-sensitive or redacted document or use case ready for which you already know the answers to expect, and benchmark the various tools against it. You get a fair comparison with no stress over leaking work documents.
Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
Make sure to use a trusted vendor where no model training or fine-tuning is happening on client input.
Gary Sangha, CEO @ LexCheck Inc.
Lawyers should first understand what information they are actually sharing when using legal-specific AI tools; often it is not personally identifiable information or sensitive client data. In many cases, you are not disclosing anything subject to confidentiality, especially when working with redlined drafts or standard contract language. That said, if you are sharing sensitive information, it is important to review your firm’s protocols, but depending on what you are sharing, it may not be a concern.
Tom Martin, CEO & Founder @ Lawdroid
Lawyers should be concerned about data privacy. But, steering away from legal-specific AI tools due to privacy concerns would be a mistake. If anything, legal AI vendors take greater security precautions than consumer-facing tools, given our exacting customer base: lawyers.
For security and privacy purposes, what should lawyers and law firms know about a legal AI vendor before using their product?
Troy Doucet, Founder @ AI.Law
Knowing what they do to protect data, how they use your data, certifications they have, and encryption efforts are smart. However, knowing what your privacy and security needs are before using the product is probably the best first step.
Jordan Domash, Founder & CEO @ Responsiv
I’d start with a traditional security and privacy review process like you’d run for any enterprise software platform. On top of that, I’d ask: Do they train on your data? Do they have access to your data? What is your data retention policy?
Michael Grupp, CEO & Co-founder @ BRYTER
Even the early-adopters and fast-paced firms ask their vendors three questions: Where is the client data stored? Do you use the firm’s data, or client data, to train or fine-tune your models? How is legal privilege protected?
Gil Banyas, Co-Founder & COO @ Chamelio
Before adopting legal AI tools, lawyers should verify the vendor has strong data encryption, clear retention policies, and SOC 2 compliance or similar third-party security certifications. They should understand how client data flows through the system, whether information is stored or used for model training, and if data sharing with third parties occurs. Additionally, they should confirm the vendor maintains appropriate legal expertise to understand attorney-client privilege implications and offers clear documentation of privacy controls that align with relevant bar association guidance.
Dorna Moini, CEO & Founder @ Gavel
I did a post on what to ask your vendors here: https://www.instagram.com/p/C9h5jVYK5Zc/. Lawyers need clear answers on what happens to their data and how it’s being used. When choosing a vendor, it’s also important to understand output accuracy and the AI product roadmap as it relates to legal work – you are engaging in a marriage to a software company you know will continue to improve for your purposes.
Ted Theodoropoulos, CEO @ Infodash
Firms should ask where and how data is stored, whether it’s isolated by client, and if it’s used for training. Look for vendors that run on secure environments like Microsoft Azure and support customer-managed encryption keys. Transparency around data flows and integration with existing infrastructure is essential.
Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Lawyers and law firms should ensure that any legal AI vendor follows strict security protocols, such as SOC 2 Type 2, HIPAA, PIPEDA, and PHIPA compliance, along with role-based access control (RBAC), multi-factor authentication (MFA), and regular security audits to protect sensitive legal data. They should ensure the AI vendor is not using third party models or sharing data with AI model providers and the deployment of their AI is secure and limited. Additionally, firms should assess whether the AI system includes human-in-the-loop (HITL) oversight to mitigate hallucinations and organizational risks, ensuring accuracy and reliability in legal workflows.
Gila Hayat, CTO & Co-Founder @ Darrow
When choosing a legal AI vendor, it’s important to make sure it follows top-tier security standards and has a solid track record when it comes to protecting data. Don’t forget the contract: make sure it includes strong confidentiality terms so your clients’ data stays protected and compliant. And trust the human by knowing the team: the legal tech scene is tight and personal, so hop on a call with one of the team members to make sure you’re doing business with a trustworthy partner.
Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
You should understand whether a vendor’s AI models are trained on user data, this is a critical distinction. Vendors that fine-tune or improve their models using client input may pose significant privacy risks, especially if sensitive information is involved. It’s important to evaluate whether specially trained or fine-tuned models offer enough added value to justify the potential trade-off in privacy.
Gary Sangha, CEO @ LexCheck Inc.
Lawyers and law firms should understand what information they are sharing through the AI tool, as it is often personally identifiable information or subject to confidentiality. They should confirm whether the vendor is compliant with frameworks like SOC 2, which ensures rigorous controls for data protection, and ensure that data is encrypted and securely processed. Reviewing how the tool handles data protection helps ensure it aligns with the firm’s security and privacy policies.
Tom Martin, CEO & Founder @ Lawdroid
Lawyers need to ask questions: 1) Do you employ encryption? 2) Do you train on data I submit to you? 3) Do you take precautions to mask PII? 4) Can I control how long the data is retained?
By carefully evaluating security credentials, vendor practices, and model usage policies, lawyers can responsibly and confidently employ generative AI tools to improve their delivery of legal services. As these technologies evolve, best practices for security and implementation will also evolve, making it important for lawyers to continue following industry updates and new best practices.
New Guidelines Establishing the Requirements and Procedures That Must Be Observed to Obtain Permission to Advertise Prepackaged Food and Non-Alcoholic Beverages
Following our newsletter of March 31, 2020, “The new Mexican Official Standard for the labelling of pre-packaged food and non-alcoholic beverages,” and our other newsletters regarding product labelling, and five years after the publication of that Mexican Official Standard, the Guidelines regarding advertising of prepackaged food and non-alcoholic beverages were published in the Official Gazette on March 11, 2025, and entered into force on March 12, 2025.
These Guidelines now appear to restrict the advertising of these types of products, imposing on advertisers, advertising agencies, and media the obligation to obtain a permit/approval to advertise the products on open television, restricted television, in movie theaters, on the internet, and on other digital platforms.
A product is subject to approval by the Federal Commission for the Protection against Sanitary Risks (COFEPRIS) when its label includes one or more warning seals under the front-of-package labeling system.
The main restrictions, among others, forbid the following:
To use animated characters, pets, or interactive games directed at children to promote consumption of the products.
To compare the products with natural ones.
To compare with similar products regarding their composition or nutritional contents.
To suggest that consumption of the product confers physical or intellectual abilities.
To promote excessive consumption of the product.
To suggest that the products may modify body proportions.
The requirements for obtaining the permit/approval to advertise the products are to complete an application form, pay the government fees, and attach the “operation notice” (authorization) for the product.
Once the application is submitted, COFEPRIS has a term of 20 working days to approve the advertisement and/or 10 days to issue a requirement for additional information. The applicant then has a term of five days to reply; otherwise, the application will be dismissed.
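For teams tracking these response windows, a minimal working-day calculator is sketched below. It is illustrative only: it assumes the terms run on Monday-to-Friday working days and ignores Mexican public holidays, which would also need to be excluded; the function name and dates are ours.

```python
from datetime import date, timedelta

def add_working_days(start: date, n: int) -> date:
    """Advance n working days (Mon-Fri) from start, skipping weekends.
    Public holidays are omitted for brevity."""
    current = start
    while n > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return current

# For a hypothetical application filed on Tuesday, April 1, 2025:
filing = date(2025, 4, 1)
print(add_working_days(filing, 20))  # 2025-04-29: end of the approval term
print(add_working_days(filing, 10))  # 2025-04-15: deadline for a requirement
```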
Although we consider all these requirements an unnecessary burden on the industry, the Guidelines provide definitions of terms such as “pets,” “celebrities,” “children’s characters,” “digital downloads,” “cartoons,” and “indirect advertising” that were missing from the Mexican Official Standard for the labelling of pre-packaged food and non-alcoholic beverages.
Tick-Tock, Don’t Get Caught: Navigating TCPA’s Quiet Hours
In recent months, businesses across various industries have been hit with a wave of lawsuits targeting alleged violations of the Telephone Consumer Protection Act’s (“TCPA”) call time rules. Plaintiffs are increasingly claiming that text messages, often sent just minutes outside the allowable hours, violate the Federal Communications Commission’s (“FCC”) rules and entitle them to substantial compensation. These lawsuits are creating challenges for businesses that rely on telemarketing and short message service (“SMS”) programs, even when they have received prior consent from their customers.
Understanding the TCPA’s Statutory and Regulatory Framework
The TCPA, enacted in 1991, was designed to protect consumers from unwanted telemarketing calls. Over time, its reach has expanded to cover text messages, making businesses that engage in text message marketing campaigns subject to compliance. One key area of regulation is the TCPA’s call time rules, found in the Do-Not-Call (“DNC”) regulations issued by the FCC. These rules prohibit telephone solicitations to residential subscribers before 8:00 AM or after 9:00 PM local time at the called party’s location.
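To make the mechanics concrete, here is a minimal illustrative sketch (not legal advice) of the send-time gate these rules describe, assuming the sender reliably knows the recipient’s IANA timezone; the helper name and constants are ours:

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

PERMITTED_START = time(8, 0)   # no solicitations before 8:00 AM local time
PERMITTED_END = time(21, 0)    # or after 9:00 PM local time

def within_permitted_window(send_time_utc: datetime, recipient_tz: str) -> bool:
    """Return True if a message sent at send_time_utc lands inside the
    8:00 AM to 9:00 PM window in the recipient's local time."""
    local = send_time_utc.astimezone(ZoneInfo(recipient_tz))
    return PERMITTED_START <= local.time() < PERMITTED_END

# A text queued at 11:55 UTC reaches a New York recipient at 7:55 AM EDT,
# five minutes early: exactly the kind of near-miss driving these lawsuits.
queued = datetime(2025, 4, 1, 11, 55, tzinfo=ZoneInfo("UTC"))
print(within_permitted_window(queued, "America/New_York"))  # False
```

The hard part in practice is the input, not the comparison: as the petition discussed below notes, a recipient’s phone number no longer reliably indicates where the phone actually is.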
Under the TCPA, a “telephone solicitation” is defined as a call or message made for the purpose of encouraging the purchase or rental of, or investment in, property, goods, or services. Importantly, the statute and regulations carve out several exceptions, including for calls or messages made to individuals who have given prior express consent to be contacted.
The penalties for violating the TCPA can be severe. Violations can result in statutory damages ranging from $500 to $1,500 per call or message, depending on whether the violation was willful. These potential damages create significant exposure for businesses that rely on telemarketing or SMS outreach, particularly when multiple calls or messages are at issue: a campaign of just 10,000 non-compliant messages, for example, would carry theoretical exposure of $5 million, or $15 million if the violations were found willful.
Recent Wave of Lawsuits and Why the Claims Are Unmeritorious
Despite the FCC’s long-standing guidance and the clear statutory language regarding consent, plaintiffs have increasingly filed lawsuits alleging that text messages sent outside the 8:00 AM – 9:00 PM window violate the TCPA’s call time restrictions. Many of these lawsuits focus on minor deviations from the permissible time window, such as texts sent just minutes before 8:00 AM or shortly after 9:00 PM.
What makes these lawsuits particularly problematic is that in many cases, the plaintiffs had previously opted into the SMS programs and expressly consented to receive marketing messages. Under the plain language of the TCPA and FCC regulations, such consent removes the text message from the definition of a “telephone solicitation” and, by extension, exempts it from the call time restrictions. This means that businesses with valid consent should not be subject to these lawsuits.
However, plaintiffs are exploiting the uncertainty created by the lack of clear FCC guidance on whether the call time rules apply to text messages where consent has been provided. They argue that, regardless of consent, any text message sent outside the permissible hours violates the TCPA, leaving businesses vulnerable to litigation and potential class action exposure.
The FCC Petition for Declaratory Ruling
In response to this growing litigation trend, an industry group recently filed a petition with the FCC, seeking a declaratory ruling that the TCPA’s call time restrictions do not apply to text messages sent to individuals who have given prior express consent. The petition highlights the plain language of the statute and regulations, arguing that consent should exempt businesses from the call time rules and shield them from the growing number of predatory lawsuits.
The petition also requests clarification or waiver of the rule requiring knowledge of the recipient’s location for compliance, arguing that the current standard is unworkable and leads to abusive litigation practices. The petitioners emphasize that the TCPA’s unique combination of strict liability, statutory damages, and a private right of action makes it ripe for lawsuit abuse, with opportunistic litigators targeting legitimate businesses.
While this petition represents a positive step towards clarifying the law, the FCC’s decision-making process can be lengthy. In the meantime, businesses must continue to operate in a landscape where uncertainty about the applicability of the call time rules remains. It could be months, if not longer, before the FCC issues a ruling, and during this time, we expect plaintiffs’ attorneys to continue targeting businesses with TCPA lawsuits.
Recommendations for Reducing Risk
Until the FCC provides clear guidance on the issue, businesses should take proactive steps to mitigate the risk of being targeted by TCPA quiet hour lawsuits. Here are several recommendations to help ensure compliance and reduce exposure:
Observe Call Time Windows: Despite the legal uncertainties surrounding the applicability of the call time rules to text messages, businesses should err on the side of caution and adhere to the 8:00 AM – 9:00 PM window for sending marketing messages. This simple step can help reduce the likelihood of being sued.
Review and Update Consent Mechanisms: Businesses should review their SMS consent processes to ensure that they are obtaining clear and unambiguous consent from consumers. This includes updating terms and conditions to include disclosures about the potential timing of messages and ensuring that consumers understand the nature of the messages they will receive.
Implement Robust Compliance Procedures: Businesses should implement internal procedures to monitor the timing of their telemarketing and SMS campaigns, and should consider using software that automates message scheduling so that nothing is sent outside the permissible window; a minimal sketch of such a guard appears after this list.
Document Consent Thoroughly: If a lawsuit arises, being able to produce clear documentation that demonstrates a consumer’s consent to receive text messages will be critical in defending against the claim. Businesses should maintain detailed records of when and how consent was obtained.
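For illustration, the following is a minimal sketch, in Python, of the kind of scheduling guard described in the recommendations above. It assumes the sender already stores each recipient’s IANA time zone (for example, derived from an address on file); the function and variable names are hypothetical and are not drawn from any particular messaging platform.

# Minimal quiet-hours guard (illustrative sketch only).
from datetime import datetime, time
from typing import Optional
from zoneinfo import ZoneInfo

WINDOW_OPEN = time(8, 0)    # 8:00 AM local time at the recipient's location
WINDOW_CLOSE = time(21, 0)  # 9:00 PM local time at the recipient's location

def is_within_call_window(recipient_tz: str, now_utc: Optional[datetime] = None) -> bool:
    """Return True if the recipient's local time is inside the 8 AM-9 PM window."""
    now_utc = now_utc or datetime.now(tz=ZoneInfo("UTC"))
    local_time = now_utc.astimezone(ZoneInfo(recipient_tz)).time()
    return WINDOW_OPEN <= local_time < WINDOW_CLOSE

# Hypothetical usage: hold any message queued outside the window and
# re-queue it for the next time the window opens locally.
if not is_within_call_window("America/Los_Angeles"):
    hold_message = True  # re-queue rather than send immediately

A guard like this does not resolve the legal question of whether the quiet hours apply to consented messages; it simply operationalizes the conservative approach recommended above.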
Conclusion
The recent surge in TCPA lawsuits alleging violations of the call time restrictions highlights the need for businesses to stay informed and proactive in their compliance efforts. While we believe that many of these lawsuits are unmeritorious, businesses should still remain cautious. By observing the 8:00 AM – 9:00 PM call time window, reviewing consent mechanisms, and implementing strong compliance procedures, businesses can reduce their risk of being targeted by predatory lawsuits.
We will continue to monitor litigation in the courts and the FCC’s response to the pending petition, and provide updates as new developments arise. In the meantime, please reach out if you have any questions or need assistance in reviewing your telemarketing and SMS programs to ensure compliance with the TCPA.
MAKING SMART TCPA MOVES: Rocket Mortgage Follows Up Its Redfin Purchase With STUNNING $9.4BB Takeover of Mr. Cooper
So multiple outlets are reporting that Rocket is set to absorb the nation’s largest mortgage servicer, Mr. Cooper.
With Rocket having just recently acquired Redfin, it looks like the company is poised to be an absolute behemoth in the mortgage industry.
Just like with Redfin, however, the TCPA is likely driving this initiative.
Yes, mortgage servicing can be profitable in its own right, but it is MASSIVELY valuable to an originator to have a large servicing pool.
Why?
Who is more likely to NEED a mortgage or refinance than folks who already have a mortgage product? And with trigger leads now widely available (probably illegal under FCRA, but don’t tell the CRAs that), having a massive servicing book means you can LEGALLY call folks who just submitted an application elsewhere and convince them to stay.
This is because the DNC rules’ established business relationship exemption will soon allow Rocket to call all of the MILLIONS of Mr. Cooper customers it just acquired WITHOUT CONSENT.
Pretty slick, eh?
So with Redfin providing consent on the front end, and with access to a massive pool of mortgage customers now bolted onto the back end, Rocket can make ready use of the phones to bring customers into its ecosystem–and keep them there.
Pretty clever. And it was all brought to you by the TCPA.
People think of the statute as a profit killer. But leveraged correctly, it can actually drive profits by building a moat around your customers and a barrier to entry for others in your vertical.
Smart money uses the law as a competitive advantage. Nicely done, Rocket.
Virginia Governor Recommends Amendments to Strengthen Children’s Social Media Bill
On March 24, 2025, Virginia Governor Glenn Youngkin asked the Virginia state legislature to strengthen the protections provided in a bill (S.B. 854) passed by the legislature earlier this month that imposes significant restrictions on minors’ social media use.
The bill would amend the Virginia Consumer Data Protection Act (“VCDPA”) to require social media platform operators to (1) use commercially reasonable methods (such as a neutral age screen) to determine whether a user is a minor under the age of 16; and (2) limit a minor’s use of the social media platform to one hour per day, unless a parent consents to increase the limit. The bill would prohibit social media platform operators from altering the quality or price of any social media service due to the law’s time use restrictions.
The Governor declined to sign the bill and recommended that the legislature make the following amendments to enhance the protections in the bill: (1) raise the covered user age from 16 to 18; and (2) require social media platform operators to, in addition to the time use limitations, also disable (a) infinite scroll features (other than music or video the user has prompted to play) and (b) auto-playing videos (i.e., where videos automatically begin playing when a user navigates to or scrolls through a social media platform), absent verifiable parental consent.
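For illustration only, the following is a minimal Python sketch of how a platform might apply the bill’s default time-use limit, assuming an age value collected through a neutral age screen at sign-up. The names and structure are hypothetical; neither the bill nor the Governor’s amendments prescribe a particular implementation.

from dataclasses import dataclass
from typing import Optional

DEFAULT_DAILY_LIMIT_MINUTES = 60  # the bill's default one-hour daily cap

@dataclass
class UserProfile:
    age: int  # hypothetically collected via a neutral age screen at sign-up
    parental_override_minutes: Optional[int] = None  # set only with parental consent

def daily_limit_minutes(user: UserProfile) -> Optional[int]:
    """Return the user's daily usage cap in minutes, or None if no cap applies."""
    if user.age >= 16:
        return None  # the bill's default limit covers only minors under 16
    if user.parental_override_minutes is not None:
        return user.parental_override_minutes  # parent consented to a different limit
    return DEFAULT_DAILY_LIMIT_MINUTES

Under the Governor’s recommended amendments, the age threshold in a check like this would move from 16 to 18, and analogous flags would disable infinite scroll and auto-playing videos absent verifiable parental consent.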