Regulatory Scrutiny on Potential MNPI in the Credit Markets

Over the past year, regulatory scrutiny of the credit markets has intensified, with the SEC investigating the potential use of material nonpublic information (“MNPI”) relating to credit instruments. The SEC brought a number of enforcement actions against investment advisers for failing to maintain and enforce written MNPI policies governing trading in distressed debt and collateralized loan obligations, even in the absence of insider trading claims. We anticipate that these investigations of trading in private credit instruments and related MNPI policies will continue, as SEC enforcement staff has increased its focus on these markets.
Although insider trading investigations typically involve equity securities, in 2024 the Commission scrutinized ad hoc creditor committee participants and took action against distressed debt managers relating to MNPI. Many fund managers investing in distressed corporate bonds collaborate with financial advisors to form ad hoc creditors’ committees, aiming to explore beneficial debt restructuring opportunities prior to bankruptcy. Managers often decline to receive MNPI so as to avoid prolonged trading restrictions on company bonds. For example, a manager may wish to remain unrestricted until formally entering a non‑disclosure agreement (“NDA”) with the company and will notify external financial advisors and other committee members that it should only receive material prepared on the basis of public information. In other cases, managers will rely on information barriers, organizing their businesses into “public” and “private” sides. The SEC Staff has identified these situations as involving heightened MNPI risk, emphasizing the need for clear written procedures to handle MNPI and mitigate the risk of leakage or inadvertent receipt. While industry participants may struggle to draw specific compliance guidelines from these cases, the key takeaway is that the SEC expects heightened procedures for creditor committee participation and, more generally, for consultants or advisers who may have access to MNPI.
The SEC also focused on MNPI in connection with trading in securities issued by collateralized loan obligation vehicles (“CLOs”). Last year the SEC settled a case against a New York‑based private fund and CLO manager, focusing on the steps the firm took to ensure its analysts and advisers were not misusing MNPI. The fund manager traded tranches of debt and equity securities issued by CLOs it directly managed as well as those managed by third parties. The SEC alleged that, as a participant in an ad hoc lender group, the fund manager had become aware of negative developments concerning a particular borrower, and privately sold CLO equity tranches while in possession of this confidential information. The CLO manager allegedly failed to consider the materiality of the negative information to the sold tranches before trading. While the SEC did not specifically allege insider trading, in part because the firm obtained internal compliance approval before the sale, the matter led to a settlement focusing on the fund manager’s failure to establish and enforce appropriate policies on the use and misuse of MNPI. As emphasized in other distressed debt and similar MNPI cases, MNPI policies and practices should be tailored to the nature of a firm’s business. The failure to address information flow in these situations may lead to SEC scrutiny of the trading itself and of the adviser’s policies under Section 204A of the Advisers Act, which requires investment advisers to establish, maintain and enforce written policies to prevent the misuse of MNPI, as well as under Section 206(4) and Rule 206(4)-7 (the Compliance Rule). The SEC has been investigating trading in the credit markets and has shown a willingness to bring these cases even in the absence of any alleged insider trading, although the Commission recently voted to dismiss the one litigated matter.
Interestingly, both Republican SEC Commissioners, despite philosophical objections to enforcement settlements under the Compliance Rule, voted to approve the Section 204A charges in the creditors’ committee matter, and one voted to approve such a claim in the CLO matter. Even with the change in administration, the SEC staff will continue to scrutinize these issues and look at similar risks in the credit markets.
Read more of our Top Ten Regulatory and Litigation Risks for Private Funds in 2025.
Robert Pommer, Seetha Ramachandran, Nathan Schuur, Robert Sutton, Jonathan M. Weiss, William D. Dalsen, Adam L. Deming, Adam Farbiarz, and Hena M. Vora contributed to this article

“Payment Handler”: A Nonce Term Without Instructions

The US Court of Appeals for the Federal Circuit affirmed a district court’s ruling that a software term was a “nonce” term that invoked 35 U.S.C. § 112, sixth paragraph (i.e., a means-plus-function claim element). The Court further found that the patent specification did not recite sufficient corresponding structure, rendering the claim element indefinite. Fintiv, Inc. v. PayPal Holdings, Inc., Case No. 23-2312 (Fed. Cir. Apr. 30, 2025) (Prost, Taranto, Stark, JJ.).
Fintiv sued PayPal for infringing four patents related to cloud-based transaction systems, also known as “mobile wallet platforms,” “mobile financial services platforms,” or “electronic payment systems.” During claim construction, the district court ruled that the terms “payment handler” and “payment handler service” were indefinite. The court concluded that both terms were means-plus-function limitations governed by § 112, sixth paragraph. Although the claims did not use the word “means,” the district court found that PayPal had demonstrated that the terms were drafted in a format consistent with traditional means-plus-function language, effectively substituting “payment handler” for the word “means.” The court also found that the patent specifications failed to disclose corresponding structure capable of performing the claimed functions. As a result, the court held the claims invalid for indefiniteness and entered final judgment. Fintiv appealed.
Fintiv argued that the district court erred in concluding that the payment handler terms invoked § 112(f) and that the specifications failed to disclose the structure for the claimed functions. The Federal Circuit disagreed.
The Federal Circuit analyzed the “payment handler” terms, which did not explicitly use the word “means.” Under § 112(f), there is a rebuttable presumption that a claim term does not invoke means-plus-function treatment unless the challenger can show that the term is a nonce term that lacks “sufficiently definite structure” or recites only a function without providing enough structure to perform that function. Fintiv contended that the payment handler terms, both individually and collectively, identified the required structure. However, the Court found that PayPal had successfully rebutted the presumption because the payment handler terms recited functions without reciting sufficient structure to perform those functions. The Court agreed with the district court that the term “handler” did not convey sufficient structure to a person of ordinary skill in the art.
Having determined that the payment handler terms invoked § 112(f), the Federal Circuit sought to identify the corresponding structure described in the specifications for performing the payment handler function but found none. The Court concluded that “without an algorithm to achieve these functionalities – and, more generally, given the specifications’ failure to disclose adequate corresponding structure – we hold the payment-handler terms indefinite.”

BIGLAW LOSES ANOTHER TCPA CERTIFICATION: Court Certifies Rare TCPA Revocation Class Against Money Source And It’s Getting Pretty Clear What’s Happening Here

Serious question.
When is the last time a #biglaw firm defeated a certification effort in a TCPA case?
But when was the last time any firm in the AmLaw 200 did it? I can’t think of one in the last year, but maybe I am forgetting something.
Regardless, it certainly wasn’t yesterday as another #biglaw firm just lost a critical certification motion in the incredibly rare setting for a text string revocation class action.
That should literally never happen.
To certify such a case the court would have to determine that notes from the defendant’s system are so similar that they can be adjudicated as a revocation in a single bucket. That’s definitionally impossible unless agents are trained to use specific coding– but that was not the allegation.
Nonetheless, in Hill v. Money Source, 2025 WL 1331702 (D. Ariz. May 7, 2025), the Court certified the TCPA class action because Plaintiff claimed– and defendant apparently did not deny– that it lacked a policy of honoring oral revocation.
Pause.
I give away a lot of free tips, tricks, and advice around here. Let me give everyone– including the struggling biglaw litigators out there– a huge tip.
Defeating class certification starts with explaining why the Defendant has a policy of TCPA compliance. That’s why violations are exceptions. And exceptions are unusual mistakes that can only be found on individualized evidence. Everything else is window dressing.
But apparently Money Source didn’t even address the Plaintiff’s central argument– that it lacked a policy of honoring revocation.
Now, perhaps it couldn’t because perhaps it is true that Money Source did not comply with the law as a matter of policy.
Pause again.
If that is true then wise counsel would settle the case. You don’t go to certification when your client has screwed up and doesn’t even have a viable TCPA compliance effort in place.
So either biglaw screwed up by failing to raise a policy central to the certification effort, or it screwed up in not counseling its client to resolve the case.
Just my opinion, of course.
But my opinion gets even stronger when you consider the actual class definition does not even contain the text string component. Here is the class definition that was certified:
All persons throughout the United States or its territories (1) to whom Defendant placed, or caused to be placed, a call, (2) directed to a number assigned to a cellular telephone service, (3) in connection with which Defendant used an artificial or prerecorded voice, (4) after the called party requested that Defendant stop placing telephone calls using an artificial or prerecorded voice to their cellular telephone, as recorded in Defendant’s business records, (5) from four years prior to the filing of the Complaint through the date of class certification.
Wow, that is wild.
The class definition does not refer to any specific words or statements that were made, just a vague assertion that the called party requested Defendant “stop placing telephone calls using an artificial or prerecorded voice.” To my eye, then, that requires a verbatim quote of this phrase in the notes, and I can guarantee those words aren’t there. Otherwise this class is meaningless and contains no members, or is too uncertain to be based on objective criteria.
Now I have not looked at the briefing to see if these arguments were made or not but it doesn’t seem like it. Notably the Court could not figure out how many people were in the class and actually chided Defendant for that:
“[p]resumably Defendant could simply tell the Court how many members would fall within a class constructed along the lines identified in its Response brief, as Defendant possesses all the requisite information to conclusively resolve this issue. Defendant has not done so.” 
It is the Plaintiff’s burden to introduce evidence of numerosity, not the Defendant’s. And given the class definition, no one would know for sure who is in the class– it is just too vague.
In my view this class should never have been certified, and it is up to the Defendant to make that clear to the Court. Regardless, at the end of the day Money Source controls its litigation strategy, not its lawyers, so some fault lands on its shoulders as well, but it is just wild to me that this case ended up the way it did.
On the other hand, the Court did thank the parties for doing “excellent briefing” and providing “a great volume of on-point authority,” which is also kind of funny.
I mean what else is #biglaw known for other than spending a ton of money doing a bunch of useless briefing? Hahaha.
I cannot believe TCPA freeform text cases are getting certified again. Thought those days were gone after Abbas pulled it off one time back in 2012. Seriously, it has been 13 years since one of these cases was certified (although Molina might be on the path to a similar result right now). And at least when Abbas did it he embedded the text string data into the class definition so that it made sense to certify it. Sigh.
Anyhoo pretty clear lessons here:

Make sure you have a policy of honoring revocation received via any reasonable means; 
You CANNOT refuse to honor oral revocation unless you have a CONTRACT that affords express written consent as a MATERIAL TERM of that agreement (and even then there is a split of authority on the issue);
The FCC’s new revocation rules expand further the manner in which revocation may be given by a consumer so make sure you have these things accounted for;
Hire #biglaw to defend you at your peril (except Skadden and Squire– they’re really good).

As to number 4, it really is important that you ask attorneys whether they have defeated certification in TCPA class action cases. There are remarkably few lawyers who have. And if they merely tell you “we handle these cases all the time” or “we’ve handled a ton of these,” they might just be an outfit that litigates and then settles.
Do your homework folks, because your choice of TCPA class litigation counsel is legitimately one of the most important decisions your company will ever make.

AI Circuit Breakers in Legal Contracts: A Safeguard for Business

As artificial intelligence becomes increasingly integrated into business operations, IT contracts covering the provision of AI systems are evolving to include critical safeguards. One emerging concept is the AI circuit breaker, a contractual mechanism that provides for an intervention, or override, where an AI system exhibits undesirable or harmful behavior. 
When contracting for AI, businesses should look to include these safeguards proactively in their contracts to mitigate the risk of AI-driven processes causing unintended harm.
What Is an AI Circuit Breaker?
Borrowing from engineering, an AI circuit breaker triggers a pause or override when an AI system acts unpredictably, exceeds acceptable risk levels, or falls below a minimum performance threshold. This ensures that businesses remain in control of automated processes, mitigating against unintended consequences.
AI circuit breakers take multiple forms including:

Automated Intervention: an automated circuit breaker within the AI system, which does not require human intervention, that detects issues and provides for an override or intervention in certain predefined circumstances.
Human Intervention: a contractual right for a party (whether the provider, customer or both) to intervene and take certain actions, such as interrupting or stopping the AI system, in certain predefined circumstances.
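
By way of illustration only, the automated variety can be sketched in code as a wrapper that halts calls to an AI system once flagged outputs exceed an agreed threshold. The class name, threshold, and window size below are hypothetical and not drawn from any particular contract or product:

```python
class AICircuitBreaker:
    """Illustrative automated circuit breaker around an AI system call.

    Trips (halts further calls) once the rate of flagged outputs over a
    sliding window exceeds a threshold, analogous to a contractual
    trigger condition. All names and thresholds are hypothetical.
    """

    def __init__(self, max_error_rate: float = 0.2, window: int = 20):
        self.max_error_rate = max_error_rate
        self.window = window
        self.results: list[bool] = []  # True = acceptable output, False = flagged
        self.tripped = False

    def record(self, acceptable: bool) -> None:
        # Keep only the most recent `window` observations.
        self.results = (self.results + [acceptable])[-self.window:]
        if len(self.results) == self.window:
            error_rate = self.results.count(False) / self.window
            if error_rate > self.max_error_rate:
                self.tripped = True  # automated intervention: suspend the system

    def call(self, model_fn, *args, **kwargs):
        if self.tripped:
            # Restarting would correspond to the contractual remediation
            # steps (audit, retraining, agreed testing) before any restart.
            raise RuntimeError("circuit breaker tripped: AI system suspended")
        return model_fn(*args, **kwargs)
```

In practice the monitoring signal, threshold, and restart conditions would all be negotiated terms; the code simply shows why "trigger conditions" and "restart" (discussed below) need to be defined precisely.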

Why Are AI Circuit Breakers Necessary?
Circuit breakers can benefit both providers and customers as they seek to mitigate the risks associated with the deployment of AI systems, including:

to ensure regulatory compliance;
to mitigate the risk of inaccuracies and AI “hallucinations”;
to detect and address inequality and bias;
to identify and mitigate the risk of security breaches; and
to identify potentially infringing output.

Particular benefits of circuit breakers include retaining control and human oversight over AI systems and providing contractual certainty; traditional contractual rights to suspend and terminate services are unlikely to offer sufficient clarity regarding the rights and obligations of each party if an AI system begins to exhibit undesirable or harmful behaviour.
Drafting and Negotiating AI Circuit Breakers
Key considerations when drafting and negotiating circuit breakers include:

Trigger Conditions: defining specific scenarios where the circuit breaker activates such as if the AI system produces an unacceptable error rate, displays clear signs of bias, infringes third party rights or breaches applicable laws.
Consequences of Trigger Activation: to what extent will a party have the ability to interrupt, suspend and/or potentially terminate depending on the nature of the event giving rise to the trigger?
Remediation: if the AI system is interrupted, the parties will need to address the actions to be undertaken (and costs of doing so), including responsibility for determining the cause of the failure, whether the AI system should be rolled back and how the issues will be resolved, including through redevelopment and retraining.
Payment: impact on any related payment commitments, including payment commitments whilst the AI system is subject to any suspension/it is rolled back.
Restart: conditions for lifting any suspension, including completion of audits and testing.
Liability Allocation: responsibility for AI-related errors and liability when the circuit breaker is triggered. 

Summary
The very nature of AI is that it continually ‘learns’ and evolves, often in an opaque manner, meaning that providers and deployers of AI systems may not fully understand the power and capability of the AI technology at the outset of any deployment.
AI circuit breakers can provide an important safety net in respect of AI system deployments. As AI continues to shape the business and legal landscape, incorporating these safeguards can help providers and deployers of AI systems mitigate AI-driven risks through implementing appropriate guardrails, maintaining oversight and accountability and clearly defining responsibilities and rights in the event of undesirable or harmful behaviour.

District Court Upholds Browsewrap Agreements in Pennsylvania Wiretap Class Action

Online retailer Harriet Carter Gifts recently obtained summary judgment from the district court in a class action under Pennsylvania wiretap law. At the heart of this case is the interpretation and application of the Pennsylvania Wiretapping and Electronic Surveillance Control Act of 1978 (WESCA), a statute designed to regulate the interception of electronic communications. The court’s primary task was to determine whether the actions of Harriet Carter Gifts and NaviStone constituted an unlawful interception under this law.
In 2021, the district court sided with the defendants, granting summary judgment because NaviStone was a direct party to the communications, and thus, no interception occurred under WESCA. However, the Third Circuit Court of Appeals overturned this decision. The appellate court clarified that there is no broad direct-party exception to civil liability under WESCA. Consequently, the case was remanded to determine “whether there is a genuine issue of material fact about where the interception occurred.”
On remand, the district court examined whether Popa could be deemed to have consented to the interception of her data by NaviStone through the privacy policy posted on Harriet Carter’s website. The court focused on whether the privacy policy was sufficiently conspicuous to provide constructive notice to Popa.
The enforceability of browsewrap agreements, which are terms and conditions posted on a website without requiring explicit user consent, was another critical aspect of the case. The court found that Harriet Carter’s privacy policy was reasonably conspicuous and aligned with industry standards. The court noted that the privacy policy was linked in the footer of every page on the Harriet Carter website, labeled “Privacy Statement,” and was in white font against a blue background. This placement was consistent with common industry practices in 2018 when the violation was alleged, which typically involved placing privacy policies in the footer of websites.
This led the court to conclude that Popa had constructive notice of the terms, reinforcing the notion of implicit consent. Notably, the court found implicit consent without any evidence that Popa had actual knowledge of the terms of the privacy statement. Rather, the court found a reasonably prudent person would be on notice of the privacy statement’s terms. 
Based on these findings, the court granted summary judgment in favor of the defendants. The court determined that Popa’s WESCA claim failed because she had implicitly consented to the interception by NaviStone, as outlined in Harriet Carter’s privacy statement. 
The case of Popa v. Harriet Carter Gifts, Inc. and NaviStone, Inc. emphasizes the necessity for clear and accessible privacy policies in the digital era. It also brings attention to the complex legal issues related to user consent and the interception of electronic communications.

From Blocks to Rights: Privacy and Blockchain in the Eyes of the EU Data Protection Authorities

On April 14, 2025, the European Data Protection Board (EDPB) released guidelines detailing how to process personal data using blockchain technologies in compliance with the General Data Protection Regulation (GDPR) (Guidelines 02/2025 on processing of personal data through blockchain technologies). These guidelines highlight certain privacy challenges and provide practical recommendations.
Challenges Under the GDPR
Blockchain’s immutability conflicts with rights to data rectification and deletion (Articles 16 and 17 GDPR). Its decentralized nature makes it difficult to comply with GDPR principles like data minimization, storage limitation (Article 5) and data protection by design (Article 25). International data transfers are also complicated, prompting the EDPB to recommend using standard contractual clauses for node participation to ensure Chapter V compliance.
Key Recommendations for Organizations
In order to minimize risks and ensure GDPR compliant data processing when using blockchain, the EDPB establishes certain rules for organizations to follow.
Roles and Responsibilities
Roles must be clearly defined based on the nature of the service, its governance and the relationships involved. The EDPB makes special mention of nodes in public permissionless blockchains, which may be considered data controllers. A legal entity (e.g., a consortium) is encouraged where nodes jointly determine processing purposes.
Technical and Organizational Measures
Organizations should assess:
Whether personal data will be stored
If so, why the blockchain is needed
The type of blockchain to be used (public only if necessary)
The adequate technical safeguards to be implemented
Public blockchains should be avoided unless essential. Personal data should only be identifiable if necessary and justified via a Data Protection Impact Assessment (DPIA). The techniques the EDPB suggests for limiting the identifiability of personal data include:
Encryption – Protects data, but remains personal under GDPR.
Hashing – Offers security, but risks remain if keys are compromised.
Cryptographic commitments – Securely obscure data when original inputs are deleted.
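
As a minimal sketch of the hashing and commitment points: a plain hash of low-entropy personal data (an email address, a phone number) often remains linkable by brute force, so a keyed hash with a secret held off-chain is closer to what the guidelines contemplate. Deleting the key later approximates erasure of the on-chain digest. The function and variable names below are illustrative only, using the Python standard library:

```python
import hashlib
import hmac
import secrets

def commit_personal_data(value: str, key: bytes) -> str:
    """Keyed hash (HMAC-SHA256) of a personal-data value.

    Storing only this digest on-chain limits identifiability: without the
    off-chain key, the digest cannot feasibly be linked back to the input,
    even for guessable inputs like phone numbers. Destroying the key later
    approximates erasure of the on-chain record.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# The key is generated and held off-chain, where it can be deleted on request.
key = secrets.token_bytes(32)
digest = commit_personal_data("alice@example.com", key)  # 64 hex characters
```

Whether such a digest counts as anonymized or merely pseudonymized will depend on the facts and should be assessed in the DPIA; the sketch only illustrates the mechanism.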
GDPR Principles and Data Subject Rights
Deletion and objection – Due to blockchain’s permanence, erasure may require deleting parts of the chain or anonymizing data. Off-chain storage of personal data is preferred.
Data retention – If data isn’t needed for the blockchain’s full life, it shouldn’t be stored on-chain unless anonymized.
Security – Suggested safeguards include emergency protocols, breach notifications and protections against 51% attacks and rogue participants.
Rectification – If rectification requires deletion, standard erasure methods apply. Otherwise, new transactions must correct prior data without altering old entries.
Automated decisions – Controllers must meet Article 22 GDPR requirements even if a smart contract has executed.
Next Steps
Public consultation is open until June 9, 2025. The final version is expected to remain largely consistent with the draft, offering essential guidance for GDPR-compliant blockchain use.
This article was co-authored by Damian Perez-Taboada

The European Commission’s Guidance on Prohibited AI Practices: Unraveling the AI Act

The European Commission published its long-awaited Guidelines on Prohibited AI Practices (CGPAIP) on February 4, 2025, two days after the AI Act’s articles on prohibited practices became applicable.
The good news is that in clarifying these prohibited practices (and those excluded from its material scope), the CGPAIP also addresses other, more general aspects of the AI Act, providing much-needed legal certainty to all authorities, providers and deployers of AI systems/models in navigating the regulation.
It refines the scope of general concepts (such as “placing on the market”, “putting into service”, “provider” or “deployer”) and exclusions from the scope of the AI Act, provides a definition of others not expressly included in the AI Act (such as “use”, “national security”, “purposely manipulative techniques” or “deceptive techniques”), and takes a position on the allocation of responsibilities of providers and deployers using a proportionate approach (establishing that these responsibilities should be assumed by whoever is best positioned in the value chain).
It also comments on the interplay of the AI Act with other EU laws, explaining that while the AI Act applies as lex specialis to other primary or secondary EU laws with respect to the regulation of AI systems, such as the General Data Protection Regulation (GDPR) or EU consumer protection and safety legislation, it is still possible that practices permitted under the AI Act are prohibited under those other laws. In other words, it confirms that the AI Act and these other EU laws complement each other.
However, this complementarity is likely to pose the greatest challenges to both providers and deployers of the systems. For example, while the European Data Protection Board (EDPB) has already clarified in its Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (adopted in December 2024) that the “intended” purposes of AI models at the deployment stage must be taken into account when assessing whether the processing of personal data for the training of those AI models can be based on the legitimate interest of the providers and/or future deployers, the European Commission clarifies in Section 2.5.3 of the CGPAIP that the AI Act does not apply to research, testing (except in the real world) or development activities related to AI systems or AI models before they are placed on the market or put into service (i.e., during the training stage). Similarly, the CGPAIP provides some examples of exclusions from prohibited practices (i.e., permitted practices) that are unlikely to find a lawful basis in the legitimate interests of providers and/or future users of the AI system.
The prohibited practices:

Subliminal, purposefully manipulative or deceptive techniques (Article 5(1)(a) and Article 5(1)(b) AI Act)
This prohibited practice refers to subliminal, purposefully manipulative or deceptive techniques that are significantly harmful and materially influence the behavior of natural persons or group(s) of persons, or exploit vulnerabilities due to age, disability or a specific socio-economic situation.
The European Commission provides examples of subliminal techniques (visual and auditory subliminal messages, subvisual and subaudible cueing, embedded images, misdirection and temporal manipulation), and explains that the rapid development of related technologies, such as brain-computer interfaces or virtual reality, increases the risk of sophisticated subliminal manipulation.
When referring to purposefully manipulative techniques (those that exploit cognitive biases, psychological vulnerabilities or other factors that make individuals or groups of individuals susceptible to influence), it clarifies that for the practice to be prohibited, either the provider or the deployer of the AI system must intend to cause significant (physical, psychological or financial/economic) harm. While this is consistent with the cumulative nature of the elements contained in Article 5(1)(a) of the AI Act for the practice to be prohibited, it could be read as an indication that manipulation of an individual (beyond consciousness) where there is no intent to cause harm (for example, for the benefit of the end user or to be able to offer a better service) is permitted. The CGPAIP refers here to the concept of “lawful persuasion”, which operates within the bounds of transparency and respect for individual autonomy.
With respect to deceptive techniques, it explains that the obligation of the provider to label “deep fakes” and certain AI-generated text publications on matters of public interest, or the obligation of the provider to design the AI system in a way that allows individuals to understand that they are interacting with an AI system (Article 50(4) AI Act) are in addition to this prohibited practice, which has a much more limited scope.
In connection with the interplay of this prohibition with other regulations, in particular the DSA, the European Commission recognizes that dark patterns are an example of manipulative or deceptive techniques when they are likely to cause significant harm.
It also provides that there should be a plausible/reasonably likely causal link between the potential material distortion of the behavior (significant reduction in the ability to make informed and autonomous decisions) and the subliminal, purposefully manipulative or deceptive technique deployed by the AI system. 
Social scoring (Article 5(1)(c) AI Act)
The CGPAIP defines social scoring as the evaluation or classification of individuals based on their social behavior, or personal or personality characteristics, over a certain period of time. It clarifies that a simple classification of people on that basis would trigger this prohibition, and that the concept of evaluation includes “profiling” (in particular to analyze and/or make predictions on interests or behaviors) that leads to detrimental or unfavorable treatment in unrelated social contexts, and/or unjustified or disproportionate treatment.
Concerning the requirement that it leads to detrimental or unfavorable treatment, it is established that such harm may be caused by the system in combination with other human assessments, but that at the same time, the AI system must play a relevant role in the assessment. It also provides that the practice is prohibited even if the detrimental or unfavorable treatment is produced by an organization different from the one that uses the score.
The European Commission states, however, that AI systems can lawfully generate social scores if they are used for a specific purpose within the original context of the data collection and provided that any negative consequences from the score are justified and proportionate to the severity of the social behavior. 
Individual Risk Assessment and Prediction of Criminal Offences (Article 5(1)(d) AI Act)
When interpreting this prohibited practice, the European Commission outlines that crime prediction and risk assessment practices as such are not outlawed, but only when the prediction that a natural person will commit a crime is made solely on the basis of profiling of that individual, or on an assessment of their personality traits and characteristics. To avoid circumvention of the prohibition and ensure its effectiveness, any other elements taken into account in the risk assessment must be real, substantial and meaningful in order to justify the conclusion that the prohibition does not apply (therefore excluding AI systems that support a human assessment based on objective and verifiable facts directly linked to a criminal activity, in particular where there is human intervention).
Untargeted Scraping of Facial Images (Article 5(1)(e) AI Act)
The European Commission clarifies that this prohibition targets the creation or enhancement of facial recognition databases (temporary, centralized or decentralized databases that allow a human face from a digital image or video frame to be matched against a database of faces) using images obtained from the Internet or CCTV footage. It does not apply to every scraping AI tool that could be used to create or enhance a facial recognition database, but only to untargeted scraping tools.
The prohibition does not apply to the untargeted scraping of biometric data other than facial images, or to databases that are not used for the recognition of persons (for example, databases used to generate images of fictitious persons). The European Commission also clarifies that databases created prior to the entry into force of the AI Act, and not further expanded by AI-enabled untargeted scraping, must comply with applicable EU data protection rules.
Emotion Recognition (Article 5(1)(f) AI Act)
This prohibition concerns AI systems that aim to infer the emotions (interpreted in a broad sense) of natural persons based on their biometric data, in the context of the workplace or educational and training institutions, except for medical or safety reasons. Emotion recognition systems that do not fall under this prohibition are considered high-risk systems, and deployers will have to inform the natural persons exposed to them of the operation of the system, as required by Article 50(3) of the AI Act.
The European Commission refers here to certain clarifications contained in the AI Act regarding the scope of the concept of emotion or intention, which does not include, for example, physical states such as pain or fatigue, nor readily apparent expressions, gestures or movements unless they are used to identify or infer emotions or intentions. Therefore, a number of AI systems used for safety reasons would already not fall under this prohibition.
Similarly, the notions of workplace, educational and training establishments must be interpreted broadly. There is also room for member states to introduce regulations that are more favorable to workers with regard to the use of AI systems by employers.
The European Commission also clarifies that authorized therapeutic uses include the use of CE-marked medical devices, and that the notion of safety is limited to the protection of life and health, not other interests such as property.
Biometric Categorization for Certain “Sensitive” Characteristics (Article 5(1)(g) AI Act)
This prohibition covers biometric categorization systems (except where purely ancillary to another commercial service and strictly necessary for objective technical reasons) that individually categorize natural persons on the basis of their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
The European Commission clarifies that this prohibition, however, does not cover the labelling or filtering of lawfully acquired biometric datasets (such as images), including for law enforcement purposes (for instance, to guarantee that data equally represents all demographic groups). 
Real-time Remote Biometric Identification (RBI) Systems for Law Enforcement Purposes (Article 5(1)(h) AI Act)
The European Commission devotes a substantial part of the CGPAIP to this prohibited practice, which covers the use of real-time RBI systems in publicly accessible areas for law enforcement purposes. Exceptions, based on the public interest, are to be determined by the member states through national legislation.

The CGPAIP concludes with a final section on safeguards and conditions for the application of the exemptions to the prohibited practices, including the conduct of Fundamental Rights Impact Assessments (FRIAs). FRIAs are assessments aimed at identifying the impact that certain high-risk AI systems, including RBI systems, may have on fundamental rights. The CGPAIP clarifies that they do not replace the Data Protection Impact Assessments (DPIAs) that data controllers (i.e., those responsible for processing personal data) must conduct; FRIAs have a broader scope, covering not only the fundamental right to data protection but all other fundamental rights of individuals, and they complement, inter alia, the required DPIA, the registration of the system and the need for prior authorization.

EDPB Adopts Opinion on UK Adequacy Decision Extension

On March 5, 2025, the European Data Protection Board (the “EDPB”) published Opinion 06/2025 on the extension of the European Commission Implementing Decisions under the GDPR and the Law Enforcement Directive on the adequate protection of personal data in the United Kingdom (the “Opinion”).
The Opinion was requested by the European Commission as the two current adequacy decisions are due to expire on June 27, 2025. For more background on the adequacy decisions, see our previous blog post here.
The extension of the adequacy decisions for a further six months gives the UK government time to enact the draft Data (Use and Access) Bill (the “DUA Bill”) which proposes several amendments to UK data protection law. The DUA Bill is currently in the House of Commons and likely to be finalised in the coming months. For further information on the changes to UK data legislation proposed by the DUA Bill, see here.
The EDPB clarified that the Opinion does not assess the UK data protection framework. Both the European Commission and the EDPB have stated their intention to re-assess the UK legal framework once the proposed legislative changes are finalized, namely, the amendments that will be introduced when the DUA Bill is enacted.

FINRA Facts and Trends: May 2025

FINRA’s Modernization Pitch: New Initiatives Aimed at Updating FINRA Rules and Easing Regulatory Burdens
On April 21, 2025, FINRA unveiled FINRA Forward, a broad review of its rules and regulatory framework that is intended to modernize existing rules regarding member firms and associated persons.
As part of its initiatives, FINRA is inviting significant engagement from industry members, seeking comments and feedback from member firms, investors, trade associations and other interested parties in an effort to update and adapt FINRA’s rules and regulatory standards to better suit the modern environment and the latest technologies used by member firms. 
In a blog post, FINRA’s President and CEO, Robert Cook, wrote that FINRA “must continuously improve its regulatory policies and programs to make them more effective and efficient.” 
FINRA has identified three goals of its FINRA Forward initiative: (1) modernizing FINRA’s rules; (2) empowering member firm compliance; and (3) combating cybersecurity and fraud risks. FINRA’s focus on modernizing rules will seek to eliminate unnecessary burdens on member firms, and to modernize requirements and facilitate innovation. With respect to compliance, FINRA aims to better protect investors and safeguard markets by enhancing the ways in which FINRA supports its member firms’ compliance efforts. Finally, FINRA is expanding its cybersecurity and fraud prevention activities.
The FINRA Forward initiatives were first previewed in Regulatory Notice 25-04, published on March 12, 2025, which identified two areas of initial focus for FINRA’s modernization effort: capital formation and the modern workplace. At the same time, FINRA requested comments on other areas it should consider as part of its review. On the heels of Regulatory Notice 25-04, FINRA published a series of more detailed Notices: Regulatory Notice 25-05, regarding associated persons’ outside activities; Regulatory Notice 25-06, which addresses capital formation; and Regulatory Notice 25-07, on the modern workplace.
We focus below on FINRA’s request for comments, in Regulatory Notice 25-07, on modernizing FINRA rules, guidance and processes for the organization and operation of member workplaces. This initiative promises a “broad rule modernization review” and “significant changes” to FINRA’s rules and regulatory framework “to support the evolution of the member workplace.” This modernization effort has the potential to significantly change regulatory compliance programs for virtually every FINRA member firm.
While noting that commenters “should not be limited by the topics and questions FINRA identifies,” FINRA has highlighted the following rules, guidance and processes on which it welcomes comments from industry members.
Branch Offices and Hybrid Work
FINRA is contemplating additional updates to FINRA Rule 3110 (the Supervisory Rule) in its continued effort to account for technological advances and hybrid working arrangements.
Just last year, FINRA amended Rule 3110 in response to changes in work arrangements, allowing members to designate eligible private residences as “residential supervisory locations” (RSLs) and to be treated as non-branch locations. FINRA also launched a voluntary, three-year pilot program that permits eligible members the flexibility to satisfy their inspection obligations under Rule 3110 without requiring an on-site visit to the office or location.
Now, in response to further feedback from members, FINRA is questioning whether the branch office and office of supervisory jurisdiction (OSJs) designations remain relevant in an age of digital monitoring, cloud-based systems and virtual meetings. Additionally, FINRA is asking if there are ways the Central Registration Depository (CRD) system and Form BR can be revised to “better align FINRA, other SRO and state requirements for broker-dealers with the uniform branch office definition and registration and designation of offices and locations.”
Registration Process and Information
Associated persons and member firms submit information to SROs and state and federal regulators through Uniform Registration Forms that communicate this information to FINRA’s CRD system. Notice 25-07 is now seeking comments on proposed changes to the process and systems currently in place to address modern technologies and workplaces, as well as the substance or presentation of the information provided to the public. These contemplated changes could impact such forms as Form BD and Forms U4 and U5.
Qualifications and Continuing Education
All securities professionals seeking registration must demonstrate the necessary qualifications by passing a standardized professional examination. After they are registered, these professionals are required to maintain their qualifications by completing a Continuing Education (CE) program that is designed to ensure that registered persons stay current as industry standards and rules evolve.
From time to time, FINRA has adapted its qualification and CE requirements to account for improvements in technology, learning theory and assessment methodologies. For example, in 2022 FINRA responded to the increased frequency of job changes and member restructurings by implementing the Maintaining Qualifications Program (MQP) for individuals who have decided to temporarily step out of roles that require active registration. Registered persons have taken advantage of the MQP since its launch, with approximately 38,000 individuals participating or having participated in the program.
Now, FINRA is seeking input on further improving the CE program and exam framework to better serve the industry. Among other things, FINRA is asking whether registered persons should be allowed to take certain qualification exams without needing firm sponsorship. More generally, FINRA is also asking whether new technologies can be leveraged to identify appropriate candidates for positions that require registration and whether it should consider any changes to the CE and MQP to ensure that it is meeting the needs of its members.
Delivery of Information to Customers
As digital communication becomes the standard in customer engagement, FINRA is examining whether its existing rules on document delivery and account transfers still make sense in an increasingly paperless world.
The current rules, based on guidance issued decades ago, require firms to obtain informed customer consent before delivering documents electronically — an approach that can be cumbersome given how digitally savvy most investors now are. At the same time, rules around account transfers — especially those using “negative consent” (where the transfer proceeds unless the customer objects) — remain narrow, even though business needs often demand flexibility. For instance, when firms exit a business line or shift clearing arrangements, obtaining affirmative consent from every affected customer may be logistically impossible, creating risk and delay.
FINRA is now asking whether more flexible, principles-based standards should replace rigid prescriptive ones, particularly regarding negative consent scenarios. FINRA is also exploring whether its rules should be better aligned with those in the investment advisory world, where digital delivery and flexible account transfer protocols are more common. These questions are particularly relevant as firms use mobile apps, dynamic disclosures and automated notifications to streamline service. At the same time, FINRA must ensure that these innovations don’t compromise privacy, security or investor understanding. By opening this discussion, FINRA aims to support a future-ready regulatory environment where efficiency and investor protection go hand in hand.
Recordkeeping and Digital Communications
With the explosion of digital communication channels, broker-dealers face growing challenges in complying with recordkeeping requirements under both SEC Rule 17a-4 and FINRA rules. From emails and instant messages to Zoom meetings, AI chatbots, and interactive websites, the range of communications that may be deemed “business-related” has expanded dramatically.
“Off-channel” communications — messages sent through unauthorized apps or platforms — remain a major concern. These communications pose compliance and enforcement risks, and can lead to gaps in supervisory oversight. Similarly, as firms adopt generative AI tools for customer engagement or internal operations, questions arise about whether those communications need to be retained and how best to do so. For example, should chatbot responses or AI-generated meeting summaries be archived the same way as emails? What about dynamic content on websites that changes based on user behavior? The goal is to ensure that recordkeeping rules remain effective and clear in a multiplatform, mobile-first environment.
Rather than creating new obstacles to innovation, FINRA wants to promote best practices and enable compliance through thoughtful modernization. Member feedback will be vital in identifying pain points, sharing successful approaches following the recent off-channel communication enforcement actions by the SEC, and recommending specific rule or guidance updates that strike the right balance between oversight and adaptability.
Compensation Arrangements
FINRA draws attention to two particular types of compensation arrangements where existing rules and regulatory frameworks may be ripe for change.
First, compensation arrangements related to personal services entities (PSEs) have recently come under scrutiny. PSEs are legal entities, such as limited liability companies, that are often formed by registered representatives as a vehicle to receive compensation for the representatives’ services, while also achieving tax benefits and other benefits. The problem with this compensation arrangement is that, under existing guidance, receipt of transaction-based compensation traditionally has been a strong indicator of broker-dealer activity. As a result, member firms often have concerns about paying transaction-based compensation directly to PSEs, because of uncertainty as to whether doing so could require the PSE to register as a broker-dealer and/or violate the member firm’s duty to maintain supervisory control over the securities-related compensation paid directly to registered representatives.
Second, there has been a recent rise in programs that pay continuing commissions to retired registered representatives or their beneficiaries. Under existing rules (in particular, Rule 2040(b)), however, such compensation arrangements are valid only if the registered representative contracts for the continuing commissions before an unexpected life event renders the representative unable to work.
FINRA is seeking comments on potential regulatory or rulemaking changes that could facilitate these types of compensation arrangements, or ease the regulatory burden associated with them, while still continuing to preserve effective broker-dealer supervision.
Fraud Protection
FINRA is considering changes to existing rules designed to help prevent fraud. 
In particular, Rule 2165 currently allows firms to place temporary holds on account activity — in the accounts of a “specified adult” — when the firm reasonably believes that financial exploitation of that adult has occurred or is occurring. But these temporary holds are subject to time limits, which can sometimes constrain the firm’s ability to protect customer assets, such as when the firm is unable to convince the customer that financial exploitation is occurring, or when an investigation has not yet concluded. FINRA is therefore exploring whether to expand the time limits, or to extend the application of Rule 2165 beyond “specified adults.”
FINRA is also seeking comments on ways to potentially modify or enhance Rule 4512, which requires firms to make efforts to identify a “trusted contact person” for all non-institutional accounts — i.e., a person whom the firm can contact when a customer is unavailable or becomes incapacitated.
Leveraging FINRA Systems to Support Member Compliance
Member firms can currently employ FINRA’s systems in a variety of ways to support their compliance efforts, including relying on the CRD system, FINRA’s verification process, the Financial Professional Gateway, and the newly launched Financial Learning Experience to satisfy certain of their regulatory requirements.
As part of the FINRA Forward initiatives, FINRA is seeking feedback on other ways that FINRA can use its systems to reduce costs and burdens on member firms.
FINRA is requesting that all comments be submitted by June 13, 2025, and comments will be posted publicly on FINRA’s website as they are received.

RACKETEERING?: High Volume TCPA Plaintiff’s Attorney Survives RICO Challenge on Appeal– But Are Other Lawyers Bringing “Sham” Litigation?

Yesterday the Fourth Circuit Court of Appeals ended one of the most interesting stories in the history of TCPAWorld.
Back in 2017-2018 TCPA cases weren’t all filed as class actions like they are now. Many were brought as individual suits and many were brought in the context of debt collection. And there was probably no more popular target for these suits than student-debt collector Navient.
Now Navient was allegedly blasting people with calls using an autodialer long after they asked for the calls to stop. This was before the Supreme Court’s decision in Facebook v. Duguid, and ATDS claims were very popular– especially in the Ninth Circuit Court of Appeals footprint following the disastrous Marks opinion.
While there were multiple firms pursuing Navient for alleged TCPA violations, the man responsible for a huge number of these filings was a guy named Jeff Lohman. 
Lohman was an unlikely TCPAWorld villain. The guy sprang up seemingly out of nowhere one day with a huge volume of cases. But his focus on Navient drew the company’s ire– especially as the cases kept resulting in fairly quick settlements.
What really pissed Navient off, however, is that once Lohman stepped in to represent a borrower, the borrower would stop paying Navient. And since the borrower was represented by Lohman, Navient really didn’t have any recourse– it couldn’t call the borrower for payment or communicate with the borrower in any way to encourage the borrower to come current.
Navient soon recognized that if Lohman’s practice continued to grow, and people learned they could get out of paying back their student loans just by suing over Navient’s frequent phone calls, it would have a serious problem. And because borrowers were seemingly using vague, scripted language to opt out of future calls (language Navient’s agents were apparently not trained to identify and honor), seemingly meritorious TCPA cases were very easy to set up.
Rather than just train its agents to be on the hunt for this sort of tactic and to honor DNC requests, Navient went on the warpath.
It sued Lohman in a federal RICO case arguing that he was engaging in racketeering and a conspiracy with his own clients by encouraging them not to pay Navient in order to set up TCPA lawsuits.
Now this was a very serious issue for Lohman. If he had lost it could have cost him millions. Regardless, the suit basically chased him out of the practice of law and he ended up opening a pool hall as a result. (Seriously, he talks all about it in his Deserve to Win podcast interview.)
But he still had to defend himself to avoid a seven figure judgment.
And he did. Sort of…
The case went all the way to a jury trial where Navient actually won a judgment of over a million dollars against Lohman! But in a huge turn of events, in post-verdict proceedings the lower Court ended up throwing out the jury verdict and handing Lohman the win.
Indeed the lower Court found Lohman had done nothing wrong and it was Navient’s conduct that caused its own loss. As the lower court found:
The problem with Navient’s argument is that it was Navient’s conduct violating the TCPA that caused its damages.
Eesh.
But Navient wasn’t done.
Hoping to resurrect its jury trial victory Navient appealed the case to the Fourth Circuit Court of Appeals arguing that Lohman’s TCPA suits were a “sham” and were not protected by First Amendment protections.
The appellate court disagreed but did open up a very interesting argument for those facing high-volume TCPA litigation today.
In Navient Solutions, LLC v. Jeff Lohman, 2025 WL 1299003 (4th Cir. 2025) the appellate court handed down a published opinion holding those who bring TCPA litigation can be sued for it– but only if their case is a “sham.”
The standard here is interesting:
When adopting the California Motor standard, this Court explicitly rejected the argument that a series of actions should be analyzed under the two-step test we use to assess a single action. See Waugh Chapel, 728 F.3d at 363–64. Under this strict two-step analysis, a court first considers the suit’s objective merits and “[o]nly if challenged litigation is objectively meritless may a court examine the litigant’s subjective motivation.” Pro. Real Estate Investors, Inc. v. Columbia Pictures Indus., 508 U.S. 49, 60–61 (1993); see also IGEN Int’l, 335 F.3d at 312 (“even litigation that is deceitful, underhanded, or morally wrong will not defeat immunity unless it satisfies the objective baselessness requirement”).
As Waugh Chapel instructs, to properly assess the TCPA actions, we ask whether defendants “indiscriminately filed (or directed) a series of legal proceedings without regard for the merits and for the purpose of violating [the] law.” 728 F.3d at 364. This question prompts a “holistic evaluation” of “the subjective motive of the litigant and the objective merits of the suits” as well as “other signs of bad-faith litigation.” Id. 
Hmmmm.
While the appellate court found Lohman’s cases were NOT a sham– even Navient’s own briefing conceded the ATDS issue was an open issue at the time and Lohman’s cases may have merit, which was the entire reason Navient couldn’t shake them– cases being brought today by other TCPA litigators may very well meet this definition.
Take the rash of out-of-time limitation SMS suits–hundreds!–being pursued by Hindi’s office right now. 
Per the R.E.A.C.H. reply filed the other day, most of these cases are settling rapidly. They seemingly have no staying power and are being brought just to extort… er, extract… a quick settlement. And there does not seem to be any objective merit to most of these cases– the CFR timing limits plainly apply only to telephone solicitations, which do not include calls made with consent.
While some of Hindi’s cases might have been brought because consent was lacking, it feels like his office has “indiscriminately filed (or directed) a series of legal proceedings without regard for the merits and for the purpose of violating [the] law.”
Then again, his office has argued that consent to receive calls must be “specific” and not “general,” so perhaps this slender reed of advocacy will protect him from a “sham” litigation determination, but I am not so sure. I haven’t seen anything suggesting this argument holds water, and if the FCC comes out and categorically rejects it, I could imagine counter-litigation might be on the horizon.
Hindi is not alone, of course. As TCPA class litigation filing volume hits the stratosphere a number of attorneys– both old school filers and new filers alike– seem to be taking a “high volume” model.
Perhaps these cases are being well investigated and are all meritorious. They’d better be. With the new Navient ruling in hand I know many TCPA defendants will finally feel empowered to start fighting back.
For Jeff Lohman, however, the victory here must feel pretty sweet. His conduct was rather blasé by today’s standards– heck, even during his own heyday he wasn’t close to the highest volume filers like Sergei Lemberg. So walking away with the W probably feels great.
Then again, it’s a “dose of your own medicine” sort of situation. Lohman has felt the very real sting of the litigation costs he had imposed upon others– both in terms of lost money and time. It’s a sting that too many in TCPAWorld have felt for years now…
In the end the Navient/Lohman lesson is one for all to heed. Litigation isn’t a game and the federal courts aren’t a playground. Let all who enter there understand the stakes and have a legitimate desire to see justice done.
And for those who are just looking to make a quick buck by filing high-volume litigation with no real merit, watch out. Your time may be running out.
Chat soon.

California Privacy Protection Agency Fines Retailer $345,000 for Alleged CCPA Privacy Rights Violations

On May 6, 2025, the California Privacy Protection Agency (“CPPA”) announced that it had issued an Order requiring clothing retailer Todd Snyder, Inc. (the “Company”) to change its business practices and pay a $345,178 fine to resolve alleged violations of the California Consumer Privacy Act (“CCPA”).
The CPPA alleged that the Company had violated the CCPA by:

failing to oversee and properly configure the technical infrastructure of its privacy rights portal, resulting in a failure to process consumer requests to opt out of the sale or sharing of their personal information for 40 days; 
requiring consumers to submit more information than necessary to process their privacy rights requests, including requiring consumers to submit a photograph of themselves holding an identity document to submit a request; and
requiring consumers to verify their identity before they could opt out of the sale or sharing of their personal information.

The CPPA alleged that the Company’s opt-out tool was improperly configured and that the Company “would have known that consumers could not exercise their CCPA rights if the company had been monitoring its website.” The Company instead “deferred to third-party privacy management tools without knowing their limitations or validating their operation.” In announcing the Order, Michael Macho, head of the CPPA’s Enforcement Division, echoed the sentiment that companies should not solely rely on third-party privacy compliance tools, stating that “businesses should scrutinize their privacy management solutions to ensure they comply with the law and work as intended, because the buck stops with the businesses that use them,” and that “using a consent management platform doesn’t get you off the hook for compliance.”
In addition to paying the $345,178 fine, the Order requires the Company to:

Develop, implement and maintain opt-out of sale/sharing policies, procedures, methods and technical measures that:

do not require consumers to verify such requests or provide more information than is necessary to process the requests;
comply with the CCPA and its implementing regulations, including requirements relating to opt-out preference signals;
identify disclosures of personal information that constitute a “sale” or “sharing” of personal information under the CCPA to ensure the Company appropriately processes opt-out requests;
monitor the effectiveness and functionality of the Company’s methods for submitting opt-out requests; and
apply opt-out preference signals.

Not require consumers to provide more information than is necessary to process verifiable consumer privacy requests (e.g., access, deletion, correction);
Develop, implement and maintain procedures to ensure that all personnel handling personal information are informed of the Company’s requirements under the CCPA relevant to their job functions; and
Maintain a contract management and tracking process to ensure that all contractual terms required by the CCPA are in place with external recipients of personal information.

The full Order is available here.

Cyberspace Administration of China Cracks Down on Improper Use of Minors’ Images

Since the beginning of 2025, the Cyberspace Administration of China (the “Authority”) has continued to strengthen the protection of minors on the Internet, clean up illegal and undesirable information that uses the images of minors, and remove non-compliant accounts.
The Authority has requested that platform operators increase their efforts to identify and combat signs of violations and rigorously examine content involving minors posted on their platforms. The Authority has taken measures including banning accounts and canceling accounts’ profit-making privileges, and has shut down more than 11,000 accounts for legal violations.