Fourth Circuit Rejects Rehearing in ACH Fraud Suit Alleging Violations of KYC Rules and NACHA Operating Rules

On April 22, the Fourth Circuit declined to reconsider a panel ruling that found a credit union could not be held liable for a scam in which fraudsters diverted over $560,000 from a metal fabricator through unauthorized ACH transfers. The denial leaves intact a March 2025 decision overturning the district court’s earlier ruling in favor of the plaintiff.
The dispute stems from a 2018 incident in which the company received a spoofed email claiming to be from a supplier and directing the company to reroute payments to a new bank account. Relying on the instructions, the company made four ACH transfers to an account at the credit union, identifying the supplier as the beneficiary. The funds, however, were deposited into an account belonging to an unrelated individual who had been unwittingly drawn into the fraud.
In its original complaint, the plaintiff alleged that the credit union failed to comply with Know Your Customer (KYC) regulations and anti-money laundering (AML) procedures by not verifying the identity or eligibility of the account holder. The complaint also asserted that the credit union violated the NACHA Operating Rules by accepting commercial ACH transfers—coded for business transactions—into a personal account. These claims were framed as failures to implement basic security protocols and to recognize clear mismatches in the payment data.
The panel held that the credit union lacked actual knowledge that the account was being used for fraudulent purposes and therefore could not be held liable under applicable law. In a concurring opinion, however, one judge noted that the record may contain evidence suggesting the credit union obtained actual knowledge of the misdescription before the final two transfers.
Putting It Into Practice: Even though the credit union ultimately avoided liability, the action is a good example of the lengths to which plaintiffs’ attorneys will go to hold banks liable for spoofing-related fraud. Unfortunately, Regulation E provides no avenue of relief for consumers who are tricked into knowingly transferring money to another account. And the CFPB’s lawsuit against major banks over similar conduct, which alleged claims under the CFPA, was dropped earlier this year.

CFPB Shifts Supervision and Enforcement Priorities; Staff Reduction Stayed by Court

On April 16, the CFPB released an internal memo outlining major shifts in its supervision and enforcement priorities, signaling a retreat from several areas of regulatory activity. The next day, the Bureau issued formal reduction-in-force (RIF) notices to numerous employees, notifying them of termination effective June 16.
The supervision memo directs a significant reallocation of the Bureau’s focus and resources. Examinations are to be reduced by 50%, with an emphasis on collaborative resolutions, consumer remediation, and avoiding duplicative oversight. The CFPB will shift attention back to depository institutions, moving away from nonbanks that have increasingly been subject to Bureau exams in recent years. Enforcement will prioritize matters involving tangible consumer harm, particularly in the areas of mortgage servicing, data furnishing under the FCRA, and debt collection under the FDCPA. The memo explicitly deprioritizes supervision of student lending, digital payments, remittances, and peer-to-peer platforms, and restricts the Bureau’s use of statistical evidence to support fair lending cases, limiting such actions to those involving intentional discrimination and identifiable victims.
The RIF notices cite structural realignment and policy shifts as the basis for the cuts and inform employees that the decision does not reflect performance or conduct. Following the issuance of the RIF notices, plaintiffs in ongoing litigation against the CFPB filed an emergency motion, arguing that the RIF appeared to violate an existing preliminary injunction. After an emergency hearing on April 18, Judge Amy Berman Jackson of the U.S. District Court for the District of Columbia ordered the CFPB to suspend the reduction-in-force and maintain employees’ access to the agency’s systems while legal proceedings continue, noting concerns that allowing the layoffs to move forward could permanently damage the Bureau’s ability to meet its legal obligations. The court set a follow-up hearing for April 28.
Putting It Into Practice: The current administration’s push to downsize the CFPB continues. While paused for the moment, a reduction to a Bureau of only 200 employees would have a dramatic impact on the enforcement of the country’s federal financial services laws.

CFPB Drops Suit Against Credit Card Company Alleging TILA Violations and Deceptive Marketing Practices

On April 23, the CFPB voluntarily dismissed with prejudice its lawsuit, filed in September 2024, against a Pennsylvania-based credit card company that had been accused of unlawfully marketing a high-cost, limited-use membership program to subprime consumers.
The complaint alleged that the company violated the Consumer Financial Protection Act (CFPA), the Truth in Lending Act (TILA), and TILA’s implementing Regulation Z. The Bureau asserted the following violations:

Misleading marketing of a “general-purpose” credit card. The company allegedly represented that it offered a standard credit card when the product could only be used at the company’s own online store.
Excessive fees in violation of TILA and Regulation Z. The card carried annual charges amounting to roughly 60% of the card’s credit limit, exceeding the 25% cap permitted during the first year of account opening.
Limited consumer use and value. Despite charging substantial fees, the program offered minimal utility—only 6% of customers used the card and just 1–3% used any ancillary benefits.
Deceptive cancellation and refund process. The company claimed cancellations could be completed in under a minute but instead subjected consumers to extended calls and repeated sales pitches before granting partial refunds.
Unreasonable barriers to exit. The CFPB alleged the company exploited consumers’ inability to easily exit the program or secure refunds, thereby taking unreasonable advantage of financially vulnerable individuals.

Putting It Into Practice: The dismissal is the latest in a series of reversals by the CFPB under its current leadership (previously discussed here and here). While the agency appears to be retreating from certain nonbank UDAAP cases, the statutory obligations under the CFPA and TILA remain unchanged. Companies marketing credit products to subprime consumers should closely review how their offerings are presented, how fees are structured, and how cancellation processes are administered.

Ohio AG Sues Mortgage Lender for Illegal Broker Steering Scheme

On April 17, Ohio Attorney General Dave Yost announced that the state has filed a lawsuit against a wholesale mortgage lender, alleging that the company engaged in a statewide scheme to mislead borrowers and inflate mortgage costs through deceptive broker steering practices. The AG’s office is seeking a jury trial on all claims.
The complaint alleges that the lender violated the Ohio Consumer Sales Practices Act, the Ohio Residential Mortgage Lending Act, and the Ohio Corrupt Practices Act. The lawsuit accuses the lender of conspiring with brokers to funnel borrowers into high-cost loans under the guise of independent shopping, despite internal agreements that allegedly prohibited brokers from presenting more affordable alternatives.
Specifically, the allegations include:

Restricting broker competition. The lender contractually prohibited referrals to two major competitors, even when cheaper options were available.
Misrepresenting broker independence in marketing. Brokers used lender-supplied marketing materials that described them as “independent” despite contractual restrictions limiting their ability to shop around.
Rewarding loyalty with exclusive perks. Brokers who funneled loans to the lender received increased exposure on borrower search engines, faster underwriting times, and access to promotional products, all tied to volume metrics.
Charging borrowers significantly higher costs. The AG asserts that borrowers working with high-funneling brokers paid hundreds more per loan than those using brokers who independently shopped the market.

Putting It Into Practice: Ohio’s lawsuit continues a trend of increased state-level enforcement targeting financial services practices, particularly in the mortgage space (previously discussed here). Lenders and brokers operating in this market should ensure their compliance procedures align with best practices, especially when it comes to referrals.

North Dakota Expands Data Security Requirements and Issues New Licensing Requirements for Brokers

On April 11, North Dakota enacted HB 1127, overhauling its regulatory framework for financial institutions and nonbank financial service providers. The law amends multiple chapters of the North Dakota Century Code and creates a new data security mandate for financial corporations—a category that includes non-depository entities regulated by the Department of Financial Institutions (DFI). It also expands the licensing requirement for brokers to include “alternative financing products,” potentially impacting a broad array of fintech providers.
The law introduces sweeping data protection obligations for nonbank financial corporations through new requirements created in Chapter 13-01.2. Specifically, covered entities must:

Implement an information security program. This includes administrative, technical, and physical safeguards, based on a written risk assessment.
Designate a qualified individual. Each financial corporation must designate a qualified individual responsible for overseeing the security program, who must report annually to its board or a senior officer.
Conduct regular testing. Annual penetration tests and biannual vulnerability assessments are mandatory unless continuous monitoring is in place.
Secure consumer data. Encryption of data in transit and at rest is required unless a compensating control is approved. Multifactor authentication is also mandatory.
Notify regulators of breaches. A data breach involving 500 or more consumers must be reported to the Commissioner within 45 days.

The bill also amends North Dakota’s broker licensing laws to authorize the DFI to classify certain alternative financing arrangements as “loans.”
Putting It Into Practice: Of the many amendments here, North Dakota’s expansion of licensing requirements for brokers of alternative financing products may have the biggest impact on institutions, especially fintechs.

FTC Publishes Final COPPA Rule Amendments

On April 22, 2025, the Federal Trade Commission published in the Federal Register final amendments to the Children’s Online Privacy Protection Act Rule (the “Rule”). The Rule will go into effect 60 days from publication, on or about June 21, 2025, with a compliance deadline of April 22, 2026. The Rule retains many of the proposed amendments first announced in January 2025 as a result of a Notice of Proposed Rulemaking issued by the FTC in 2024 (the “2024 NPRM”), with certain differences.
Key updates to the Rule include:

Updated definitions: The Rule adds or updates several defined terms, including:

Contact information: The Rule adds to the definition of “online contact information”: mobile phone numbers, “provided the operator uses it only to send a text message.” Under COPPA, operators can use a child or parent’s contact information to provide notice and obtain parental consent without first obtaining consent to the collection of the contact information. According to the FTC, the amendment was intended to give operators another way to initiate the process of seeking parental consent quickly and effectively.
Personal information: The Rule updates the definition of “personal information” to include:

Biometric identifier: The Rule adds to the definition of “personal information”: “a biometric identifier that can be used for the automated or semi-automated recognition of an individual, such as fingerprints; handprints; retina patterns; iris patterns; genetic data, including a DNA sequence; voiceprints; gait patterns; facial templates; or faceprints[.]” Notably, the Rule does not include “data derived from voice data, gait data, or facial data,” which is language that was proposed in the 2024 NPRM.
Government-issued identifier: The Rule adds to the definition of “personal information”: “[a] government-issued identifier, such as a Social Security, [S]tate identification card, birth certificate, or passport number[.]”

Mixed audience website or online service: The FTC first developed this category in the 2013 COPPA Rule amendments, as a subset of “child-directed” websites and online services, but did not define the term. The Rule defines the term as “a website or online service that is directed to children under the criteria set forth in paragraph (1) of the definition of website or online service directed to children, but that does not target children as its primary audience, and does not collect personal information from any visitor prior to collecting age information or using another means that is reasonably calculated, in light of available technology, to determine whether the visitor is a child.” The updated definition further requires that “[a]ny collection of age information, or other means of determining whether a visitor is a child, must be done in a neutral manner that does not default to a set age or encourage visitors to falsify age information.”
Website or online service directed to children: The Rule expands the factors the FTC will consider with respect to whether a website or service is “directed to children,” to include marketing or promotional materials or plans, representations to consumers or third parties, reviews by users or third parties and the ages of users on similar websites or services.

Enhanced direct notice content requirements: The Rule expands the content required in an operator’s direct notice to parents for the purpose of obtaining parental consent where required under COPPA.

Use of personal information: The direct notice must disclose how the operator intends to use the child’s personal information (in addition to the existing requirements to include the categories of the child’s personal information to be collected and the potential opportunities for the disclosure of the child’s personal information).
Third-party disclosures: Where the operator discloses children’s personal information to third parties, the direct notice must specify: (1) the identities or specific categories of the third parties (including the public, if such data is made publicly available), (2) the purposes for such disclosure, and (3) that the parent can consent to the collection and use of the child’s personal information without consenting to the disclosure of such personal information to third parties, except to the extent such disclosure is integral to the website or online service.

Enhanced privacy notice content requirements: The Rule also expands the content required in an operator’s privacy notice displayed on the operator’s website.

Internal operations: The privacy notice must disclose: (1) the specific internal operations for which the operator has collected a persistent identifier and (2) how the operator ensures that such identifier is not used or disclosed to contact a specific individual or for any other purpose not permitted under COPPA’s “support for the internal operations” consent exception.
Audio files: If applicable, a description of how the operator collects audio files containing a child’s voice solely to respond to the child’s specific request and not for any other purpose, and a statement that the operator immediately deletes such audio files thereafter.

Verifiable parental consent methods: The Rule adds three approved methods for verifying a parent’s identity for purposes of obtaining parental consent:

Knowledge-based authentication, provided that (1) the authentication process uses dynamic, multiple-choice questions with an adequate number of possible answers and (2) the questions are difficult enough that a child under 13 could not reasonably be expected to answer them correctly.
Government-issued identification, provided that the photo ID is verified to be authentic against an image of the parent’s face using facial recognition technology (and provided that the ID and images are promptly deleted after the match is confirmed).
Text message to the parent coupled with additional steps to confirm the parent’s identity (e.g., a confirmation text to the parent following receipt of consent). (Note that this option is available only under certain enumerated circumstances.)

Limited exception to parental consent for the collection of audio files containing a child’s voice: The Rule allows operators to collect audio files containing a child’s voice (and no other personal information) solely to respond to a child’s request without providing direct notice or obtaining parental consent. This exception applies only if the operator does not use the information for any other purpose, does not disclose it, and deletes the data immediately after responding to the request. This amendment codifies a 2017 FTC enforcement policy statement regarding the collection and use of children’s voice recordings.
Limits on data retention and publication of data retention policy: The Rule prevents operators from retaining children’s personal information indefinitely. The Rule specifies that operators may not retain children’s personal information for longer than necessary to fulfill the specific documented purposes for which the data was collected, after which the data must be deleted. Operators also must establish, implement and maintain a written data retention policy that specifies (1) the purposes for which children’s personal information is collected, (2) the specific business need for retaining such data, and (3) a timeline for deleting the data. The data retention policy must be published in the operator’s privacy notice required under COPPA.
Written information security program: The Rule requires operators to establish, implement and maintain a written information security program that contains safeguards appropriate to the sensitivity of the children’s personal information collected and the operator’s size, complexity, and nature and scope of activities. Specifically, operators must, in connection with the written information security program, (1) designate personnel to coordinate the program, (2) at least annually, identify and assess internal and external risks to the security of children’s personal information, (3) implement safeguards to address such identified risks, (4) regularly test and monitor the effectiveness of such safeguards, and (5) at least annually, evaluate and modify the information security program accordingly.
Vendor and third-party due diligence requirements: Before disclosing children’s personal information to other operators, service providers or third parties, the Rule requires operators to “take reasonable steps” to ensure that such entities are “capable of maintaining the confidentiality, security, and integrity” of such data. Operators also must obtain written assurances that such entities will use “reasonable measures to maintain the confidentiality, security, and integrity” of the information.
Increased Safe Harbor transparency: By October 22, 2025, and annually thereafter, FTC-approved COPPA Safe Harbor programs are required to identify in their annual reports to the Commission each operator subject to the self-regulatory program (“subject operator”) and all approved websites or online services, as well as any subject operator that left the program during the time period covered by the annual report. The Safe Harbor programs also must outline their business models in greater detail and provide copies of each consumer complaint related to a member’s violation of the program’s guidelines. The report also must describe each disciplinary action taken against a subject operator and a description of the process for determining whether a subject operator is subject to discipline. In addition, by July 21, 2025, Safe Harbor programs must publicly post (and update every six months thereafter) a list of all current subject operators and, for each such operator, list each certified website or online service. Further, by April 22, 2028, and every three years thereafter, Safe Harbor programs must submit to the FTC a report detailing the program’s technological capabilities and mechanisms for assessing subject operators’ fitness for membership in the program.

New Federal Agency Policies and Protocols for Artificial Intelligence Utilization and Procurement Can Provide Useful Guidance for Private Entities

On April 3, 2025, the Office of Management and Budget (“OMB”) issued two Memoranda (Memos) regarding the use and procurement of artificial intelligence (AI) by executive federal agencies.
The Memos—M-25-21 on “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” and M-25-22 on “Driving Efficient Acquisition of Artificial Intelligence in Government”—build on President Trump’s Executive Order 14179 of January 23, 2025, “Removing Barriers to American Leadership in Artificial Intelligence.”
The stated goal of the Memos is to promote a “forward-leaning, pro-innovation, and pro-competition mindset rather than pursuing the risk-averse approach of the previous administration.” They aim to lift “unnecessary bureaucratic restrictions” while rendering agencies “more agile, cost-effective, and efficient.” Further, they will, presumably, “deliver improvements to the lives of the American public while enhancing America’s global dominance in AI innovation.” The Memos rescind and replace the corresponding M-24-10 and M-24-18 memos on use and procurement from the Biden era.
Although these Memos relate exclusively to the activities of U.S. federal agencies with regard to AI, they contain information and guidance with respect to the acquisition and utilization of AI systems that is transferable to entities other than agencies and their AI contractors and subcontractors with respect to developing and deploying AI assets. In this connection, the Memos underscore the importance of responsible AI governance and management and, interestingly, in large measure mirror protocols and prohibitions found in current state AI legislation that governs use in AI by private companies.
Outlined below are the salient points of each Memo that will be operationalized by the relevant federal agencies throughout the year.
Memorandum M-25-21 (The “AI Use Memo”)
The new AI Use Memo is designed to encourage agency innovation with respect to AI while removing risk-averse barriers to innovation that the present administration views as burdensome. The policies thus frame AI less as a regulatory risk and more as an engine of national competitiveness, efficiency, and strategic dominance. Nonetheless, a number of important points from the former Biden-era AI directives have been retained and further developed. The AI Use Memo retains the concept of an Agency Chief AI Officer, yet in the words of the White House, these roles “are redefined to serve as change agents and AI advocates, rather than overseeing layers of bureaucracy.” It continues a focus on privacy, civil rights, and civil liberties, although, as STATNews points out, the Memos omit some references to bias. Other key points include a strong focus on American AI and a track for AI that the administration views as “high-impact.”
Scope
The AI Use Memo applies to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies”—excluding, for example, regulatory actions prescribing law and policy; regulatory or law-enforcement actions; and testing and research. It does not apply to the national security community, components of a national security system, or national security actions.
Covered Agencies
The AI Use Memo applies to all agencies defined in 44 U.S.C. 3502(1), meaning executive and military departments, government corporations, government-controlled corporations, or other establishments in the executive branch, with some exceptions.
Innovation
The AI Use Memo focuses on three key areas of 1) Innovation, 2) Governance, and 3) Public Trust, and contains detailed guidance on:

AI Strategy: Within 180 days, agencies must develop an AI Strategy “for identifying and removing barriers to their responsible use of AI and for achieving enterprise-wide improvements in the maturity of their applications.” The strategy should include:

Current and planned AI use cases;
An assessment of the agency’s current state of AI maturity and a plan to achieve the agency’s AI maturity goals;

Sharing of agency data and AI assets (to save taxpayer dollars);
Leveraging the use of AI products and services;
Ensuring Responsible Federal Procurement: In Executive Order 14275 of April 15, 2025, President Trump announced plans to reform the Federal Acquisition Regulation (FAR), which establishes uniform procedures for acquisitions across executive departments and agencies. E.O. 14275 directs the Administrator of the Office of Federal Procurement Policy, in coordination with the FAR Council, agency heads, and others, to amend the FAR. This will affect how federal government contractors engage with agencies on AI and on general procurement undertakings and obligations. With regard to effective federal procurement, the AI Use Memo instructs agencies to:

Treat relevant data and improvements as critical assets for AI maturity;
Evaluate performance of procured AI;
Promote competition in federal procurement of AI.

Building an AI-ready federal workforce (training, resources, talent, accountability).

Governance
The AI Use Memo strives to improve AI governance with various roles and responsibilities, including:

Chief AI Officers: Appoint in each agency within 60 days, with specified duties;
Agency AI Governance Board: Convene in each agency within 90 days;
Chief AI Officer Council: Convene within 90 days;
Agency Strategy (described above): Develop within 180 days;
Compliance Plans: Develop within 180 days, and every two years thereafter until 2036;
Internal Agency Policies: Update within 270 days;
Generative AI Policy: Develop within 270 days;
AI Use Case Inventories: Update annually.

Public Trust: High-Impact AI Categories and Minimum Risk Management Practices
A large portion of the AI Use Memo is devoted to fostering risk management policies that ensure the minimum number of requirements necessary to enable the trustworthy and responsible use of AI and to ensure these are “understandable and implementable.”
Agencies are required to implement minimum risk-management practices to manage risks from high-impact AI use cases by:

Determining “High-Impact” Agency Use of AI: The AI Use Memo sets out on pp. 21-22 a list of categories for which AI is presumed to be high-impact. Under the definitions section, a use is high-impact “when its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety.” This includes AI that has a significant effect on:

Civil rights, civil liberties or privacy;
Access to education, housing, insurance, credit, employment, and other programs;
Access to critical infrastructure or public safety; or
Strategic assets or resources.

Implementing Minimum Risk Management Practices for High-Impact AI: Agencies must document implementation within 365 days, unless an exemption or waiver applies. The guidelines track closely with the National Institute of Standards and Technology (NIST) risk management framework (RMF), as well as with some state AI laws, although the AI Use Memo omits specific references to the RMF as particular guidance.
With respect to high-impact AI, agencies must:

Conduct pre-deployment testing;
Complete an AI impact assessment before deployment, documenting:

Intended purpose and expected benefit;
Quality and appropriateness of relevant data and model capability;
Potential impacts of using AI;
Reassessment scheduling and procedures;
Related costs analysis; and
Results of independent review.

Conduct ongoing monitoring for performance and potential adverse impacts;
Ensure adequate human training and assessment;
Provide additional human oversight, intervention, and accountability;
Offer consistent remedies or appeals; and
Consult and incorporate feedback from end users and the public.

Memorandum M-25-22 (The “AI Procurement Memo”)
Memorandum M-25-22, entitled “Driving Efficient Acquisition of Artificial Intelligence in Government” (the “AI Procurement Memo”) applies to AI systems or services acquired by or on behalf of covered agencies and is meant to be considered with related federal policies. It shares the same applicability as the AI Use Memo, adding that it does not apply to AI used incidentally by a contractor during the performance of a contract.
Covered AI
The AI Procurement Memo applies to “data systems, software, applications, tools, or utilities” that are “established primarily for the purpose of researching, developing, or implementing [AI] technology” or “where an AI capability ‘is integrated into another system or agency business process, operational activity, or technology system.’” It excludes “any common commercial product within which [AI] is embedded, such as a word processor or map navigation system.”
Requirements
Under the AI Procurement Memo, agencies are required to:

Update agency policies;
Maximize use of American AI;
Privacy: Establish policies and processes to ensure compliance with privacy requirements in law and policy;
IP Rights and Use of Government Data: Establish processes for use of government data and IP rights in procurements for AI systems or services, with standardization across contracts where possible. Address:

Scope: Scoping licensing and IP rights based on the intended use of AI, to avoid vendor lock-in (discussed below);
Timeline: Ensuring that “components necessary to operate and monitor the AI system or service remain available for the acquiring agency to access and use for as long as it may be necessary”;
Data Handling: Providing clear guidance on handling, access, and use of agency data or information to ensure that the information is only “collected and retained by a vendor when reasonably necessary to serve the intended purposes of the contract”;
Use of Government Data: Ensure that contracts permanently prohibit the use of non-public input data and output results to further train publicly or commercially available AI algorithms absent explicit agency consent.
Documentation, Transparency, Accessibility: Obtain documentation from vendors that “facilitates transparency and explainability, and that ensures an adequate means of tracking performance and effectiveness for procured AI.”

Determine Necessary Disclosures of AI Use in the Fulfillment of Government Contracts: Agencies should be cognizant of risks posed by unsolicited use of AI systems by vendors.

AI Acquisition Practices Throughout the Acquisition Lifecycle
Agencies should identify requirements involved in the procurement, including convening a cross-functional team and determining whether high-impact AI is involved; conduct market research and planning; and engage in solicitation development, which includes AI use transparency requirements regarding high-impact use cases, provisions in the solicitation to reduce vendor lock-in, and appropriate terms relating to IP rights and lawful use of government data.
Selection and Award
When evaluating proposals, agencies must test proposed solutions to understand the capabilities and limitations of any offered AI system or service; assess proposals for potential new AI risks; and review proposals for any challenges. Contract terms must address a number of items, including IP rights and government data, privacy, vendor lock-in protection, and compliance with the risk management practices described in M-25-21, above.
Vendor Lock-In; Contract Administration and Closeout
Many provisions in the memo, including those in the “closeout” section, guard against dependency on a specific vendor. For example, if a decision is made not to extend a contract for an AI system or service, agencies “should work with the vendor to implement any contractual terms related to ongoing rights and access to any data or derived products resulting from services performed under the contract.”
M-25-22 notes that OMB will publish playbooks focused on the procurement of certain types of AI, including generative AI and AI-based biometrics. Additionally, this Memo directs the General Services Administration (“GSA”) to release AI procurement guides for the federal acquisition workforce that will address “acquisition authorities, approaches, and vehicles,” and to establish an online repository for agencies to share AI acquisition information and best practices, including language for standard AI contract clauses and negotiated costs.
Conclusion
These Memos clearly recognize the importance of an AI governance framework that will operate to ensure AI competitiveness while balancing the risks of AI systems that are engaged to affect agency efficiencies and drive government effectiveness—a familiar balance for private companies that use or consider using AI. As the mandates within the Memos are operationalized over the coming months, EBG will keep our readers posted with up-to-date information.
Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.

TCPA CLASS ACTIONS MORE THAN DOUBLE!!: The Pace of TCPA Filings in 2025 Continues to Skyrocket

So I was talking to a very well known TCPA Plaintiff’s attorney yesterday who told me his filing pace of TCPA class cases was “higher than ever.”
Spoke to another Plaintiff’s lawyer last week who said he had hired two attorneys recently and was looking to hire two more.
There is no question the Plaintiff’s bar is scaling their TCPA operations and there is no end in sight to it. But the results are astounding.
2024 was the peak year for TCPA class actions– with more class actions filed last year than any other year in the TCPA’s history.
2025, however, is set to blow 2024 away.
In the first three months of 2024 there were 239 TCPA class actions filed.
In 2025?
507.
That’s more than double the filings from 2024 so far YTD. 
That means TCPA class litigation is up over 112% year over year, and April’s numbers look to be catastrophically high again.
Indeed, nearly 80% of all TCPA cases are now being filed as class actions. This is compared to 2-5% of other consumer cases that are filed as class cases. (Thanks to WebRecon for the data sets, btw.)
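For readers who want to check the math, the year-over-year figure follows directly from the filing counts cited above:

```python
# Q1 TCPA class action filing counts cited in the post.
q1_2024 = 239
q1_2025 = 507

# Year-over-year percentage increase.
yoy_increase = (q1_2025 - q1_2024) / q1_2024 * 100
print(round(yoy_increase, 1))  # → 112.1, i.e. "up over 112%"
```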
There is no question, therefore, that the TCPA is the single biggest litigation threat to American businesses out there right now and it continues to be the biggest cash cow in history for the Plaintiff’s bar.
And with other law firms flat-out giving false advice with respect to the FCC’s new revocation rules–my goodness– it looks like TCPA class actions will continue to spike.
PROTECT YOURSELF FOLKS.

Cybersecurity: Salt Typhoon’s Persistence is a Cruel Lesson for Smaller Providers

In December 2024, the White House’s Deputy National Security Adviser for Cyber and Emerging Technology confirmed that foreign actors, sponsored by the People’s Republic of China, infiltrated at least nine U.S. communications companies. The attacks, allegedly conducted by China’s state-sponsored Salt Typhoon hacking group, compromised sensitive systems, and exposed vulnerabilities in critical telecommunications infrastructure.
All communications service providers across the U.S. are at risk from this threat, especially those located near a U.S. military facility. To combat this threat, it is important for communications service providers to adopt and implement cybersecurity best practices in alignment with the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework 2.0 and/or the Cybersecurity and Infrastructure Security Agency’s (CISA) Cross-Sector Cybersecurity Performance Goals.
In response to the Salt Typhoon threat, in January of this year, the FCC adopted a Declaratory Ruling and a Notice of Proposed Rulemaking to affirm and increase the cybersecurity obligations of communications service providers. The Declaratory Ruling clarifies that Section 105 of the Communications Assistance for Law Enforcement Act (CALEA) creates a legal obligation for telecommunications carriers to secure their networks against unlawful access and interception. Telecommunications carriers’ duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks. Such carriers must work to prevent any unauthorized interception or access into their network (and maintain records thereof). This requires basic cybersecurity hygiene practices such as:

Implementing role-based access controls;
Changing default passwords;
Requiring minimum password strength; and
Adopting multifactor authentication.
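As an illustration only (neither CALEA nor the FCC ruling prescribes any particular implementation, and the policy values below are hypothetical), a minimum password-strength check covering two of the listed practices might be sketched as:

```python
import re

# Illustrative policy values; a carrier's actual thresholds would come
# from its own security program, not from CALEA or the FCC ruling.
MIN_LENGTH = 12
DEFAULT_PASSWORDS = {"admin", "password", "changeme", "12345678"}

def meets_minimum_strength(password: str) -> bool:
    """Reject well-known default passwords and enforce length plus character mix."""
    if password.lower() in DEFAULT_PASSWORDS:
        return False
    if len(password) < MIN_LENGTH:
        return False
    # Require at least one lowercase letter, uppercase letter, digit, and symbol.
    required_patterns = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(p, password) for p in required_patterns)

print(meets_minimum_strength("admin"))             # False: a default password
print(meets_minimum_strength("Tr0ub4dor&3xtra!"))  # True: long, mixed characters
```

Multifactor authentication and role-based access controls operate at the system level and are not captured by a check like this; the sketch covers only the default-password and password-strength items from the list above.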

Conduct falling short of this statutory obligation may include failing to patch known vulnerabilities or failing to employ best practices that are known to be necessary in response to identified exploits.
The Notice of Proposed Rulemaking, if adopted, would require providers to adopt and implement cybersecurity and supply chain risk management plans as well as certify compliance with these plans annually to the FCC. The proposed rule would apply to a wide array of providers including facilities-based providers, broadcast stations, television stations, cable systems, AM & FM commercial radio operators, TRS providers, satellite communications providers, and all international section 214 authorization holders. Participants of the FCC’s Enhanced A-CAM Program and NTIA’s BEAD Program are already subject to this requirement.
Ultimately, more FCC regulation is coming. At the same time, cyber incidents are increasing. Communications service providers should consider creating both a cybersecurity and supply chain risk management plan as well as a cybersecurity incident response plan. Such plans should reflect industry best practices outlined in federal guidance documents as described above.
In addition, carriers should review their cybersecurity liability insurance policies to ensure they have sufficient coverage. It’s also critical to review and update vendor and partner contracts for security and supply chain risk management clauses to include provisions for incident response, liability, and retention of information.
Finally, communications service providers should also consider engaging legal counsel to assist their efforts in ensuring that they are adequately protected.
Womble Bond Dickinson has developed a cybersecurity retainer that captures the requirements and proactive procedures necessary to meet the regulations, protect your networks, and deal with the fallout of a cybersecurity breach, including insurance recovery and class action litigation arising from a data breach.

California Bill May Curb the Flood of “Abusive Lawsuits” Targeting “Standard Online Business Activities”

On February 24, 2025, Democratic State Senator Anna M. Caballero introduced Senate Bill 690 (S.B. 690), which aims to curb “abusive lawsuits” under the California Invasion of Privacy Act (“CIPA”) based on the use of cookies and other online technologies. The Bill is now scheduled to be heard by the Senate Public Safety Committee on April 29, 2025.
Over the past few years, the plaintiffs’ bar has leveraged CIPA to hold businesses ransom based on their use of everyday online technologies (e.g., cookies, pixels, beacons, chat bots, session replay and other similar technology) on their websites. The plaintiffs’ bar has claimed such technologies: (1) facilitate “wiretapping” under Section 631 of CIPA; and/or (2) constitute illegal “pen registers” or “trap and trace devices” under Section 638.50 of CIPA. Nearly every business with a public-facing website has been or may soon be targeted with threats of significant liability stemming from the availability of statutory damages under CIPA. Even those businesses that comply with the comprehensive California Consumer Privacy Act of 2018 (“CCPA”), which governs the collection and use of consumer personal information, are not immune from such threats. Faced with the threat of such aggregated statutory damages under CIPA, many businesses opt to pay out settlements to mitigate potentially enterprise-threatening risk. And those rational decisions unfortunately have spawned a cottage industry responsible for an endless stream of filed and threatened CIPA litigation that seemingly has served only to enrich the plaintiffs’ bar.
S.B. 690 might spell doom for these perceived abuses and the negative consequences they have had on online commerce. Caballero states that the bill aims to “[s]top[] the abusive lawsuits against California businesses and nonprofits under CIPA for standard online business activities that are already regulated by” the CCPA.
If enacted, S.B. 690 would exempt online technologies used for a “commercial business purpose” from wiretapping and pen register/trap-and-trace liability. Notably, the definition of “commercial business purpose” broadly encompasses the use of “personal information” in a manner already permitted by the CCPA. The exclusion of such practices from CIPA’s ambit should curb the “abusive lawsuits” cited by Caballero when she unveiled S.B. 690 and provide certainty to businesses engaged in online commerce.

Insight Into DOGE’s Access to HHS’ Systems

Becker’s Hospital Review reports that the Department of Government Efficiency (DOGE) “has access to sensitive information in 19 HHS databases and systems,” according to a court filing obtained by Wired. HHS provided the information during the discovery process in the lawsuit filed by the American Federation of Labor and Congress of Industrial Organizations against the federal government, requesting restriction of DOGE’s access to federal systems.
According to Becker’s, DOGE had not previously disclosed nine of the 19 systems, which “contain various protected health information, ranging from email and mailing addresses to Social Security numbers and medical notes.”
Some of the systems included federal employees’ data and access to Medicare recipients’ personal information. For instance, one system listed is the Integrated Data Repository Cloud system, which “stores and integrates Medicare claims data with beneficiary and provider data sources.” Other listed systems include the NIH Workforce Analytics Workbench, which “tracks current and historical data on the NIH workforce, including headcounts and retirement information,” the Office of Human Resources Enterprise Human Capital Management Investment system, which “manages personnel actions and employee benefits at HHS,” and the Business Intelligence Information System, which “stores cloud-based HHS human resources and payroll data for analysis and reporting.”

Connecticut Office of the Attorney General Issues Annual Report on CTDPA Enforcement

On April 17, 2025, the Connecticut Office of the Attorney General (“OAG”) issued a report highlighting key enforcement initiatives, complaint trends and legislative recommendations aimed at strengthening the Connecticut Data Privacy Act (“CTDPA”). Highlights from the report are summarized below.
Breach Notice Review
In 2024, the OAG received 1,900 breach notifications. Each report was reviewed for compliance with state law. The OAG issued numerous warning letters to covered businesses that failed to provide timely notice, emphasizing that the 60-day statutory clock starts at the detection of suspicious activity—not when the full scope is confirmed. In serious cases, the OAG pursued Assurances of Voluntary Compliance requiring businesses to improve incident response practices and pay penalties.
Consumer Complaints
The OAG continues to receive significant complaint volumes regarding CTDPA compliance. Issues include unfulfilled data rights requests, misleading privacy notices, vague breach notifications, and misuse of public records for online profiles.
Enforcement Actions
The report highlighted enforcement actions on several violations, including the following:

Privacy Notices: The OAG conducted “sweeps” of insufficient or inadequate privacy notices and issued over two dozen cure notices. Common issues included missing CTDPA language, unclear opt-out mechanisms, and misleading limitations on consumer rights. Most businesses took corrective steps following notice.
Facial Recognition Technology: The OAG sent a cure notice to a regional supermarket due to its use of facial recognition technology (for purposes of preventing and/or detecting shoplifting). The OAG noted that businesses using facial recognition must comply with CTDPA’s protections for biometric data. The OAG clarified that crime prevention purposes do not exempt compliance.
Marketing and Advertising Practices: The OAG investigated a complaint involving a national cremation services company that mailed a targeted advertisement to a Connecticut resident shortly after the resident received medical treatment. While the data used—name, age and zip code—was not classified as sensitive, the OAG expressed concern over the context and issued a cure notice. As a result, the company updated its privacy notice to disclose its use of third-party data and specify the categories of data collected. The case underscores that for the OAG, even non-sensitive data, when used in sensitive contexts, can lead to privacy harms and warrants heightened oversight.
Dark Patterns and Opt-Out Mechanisms: The OAG has significantly expanded its enforcement efforts to address manipulative design choices—commonly known as “dark patterns”—that interfere with consumer privacy rights. In a 2024 enforcement sweep, the OAG issued cure notices to businesses employing cookie banners that made it easier to consent to data tracking than to opt out.
Minors’ Online Services: The report notes that as of October 1, 2024, the CTDPA imposes new obligations on businesses that offer an “online service, product or feature” to minors under 18 years of age. Generally, these provisions require that businesses use reasonable care to avoid causing a heightened risk of harm to minors. Further, these provisions prohibit: (1) the processing of a minor’s personal data without consent for purposes of targeted advertising, profiling, or sale; (2) using a system design feature to significantly increase, sustain, or extend a minor’s time online; and (3) collecting a minor’s precise geolocation data without consent. 
Consumer Health Data: The report notes that controllers must obtain opt-in consent for processing consumer health data and ensure proper contractual safeguards when sharing such data with processors. Two telehealth companies received letters related to potential unauthorized sharing with technology platforms.
Universal Opt-Out Preference Signals: The report also notes that as of January 1, 2025, businesses must recognize browser-based opt-out signals such as GPC. The OAG has emphasized that this requirement is key to easing consumer privacy management. The OAG also notes that going forward, it will be focused on examining whether businesses are complying with the universal opt-out preference signal provisions and that the OAG expects to engage in efforts to ensure this consumer right is upheld.
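Under the Global Privacy Control specification, browsers with the signal enabled send a `Sec-GPC: 1` request header. A minimal server-side sketch of detecting that signal (the header name comes from the GPC spec; the `request_headers` dict is a hypothetical stand-in for whatever a real web framework provides) might look like:

```python
def gpc_opt_out_requested(request_headers: dict) -> bool:
    """Return True when the browser sent the Global Privacy Control signal.

    Per the GPC specification, a Sec-GPC header value of "1" expresses the
    user's opt-out preference; any other value, or the header's absence,
    does not.
    """
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v for k, v in request_headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"

print(gpc_opt_out_requested({"Sec-GPC": "1"}))  # True: opt out of sale/sharing
print(gpc_opt_out_requested({}))                # False: no signal sent
```

How a business then propagates that opt-out through its ad-tech and analytics stack is the compliance question the OAG says it will be examining; the check above only detects the signal.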

CTDPA Legislative Recommendations
The OAG reiterated eight proposed legislative changes to improve the CTDPA:

Scale Back Exemptions: Limit current entity-level exemptions for GLBA and HIPAA, narrow the FCRA data-level exemption and remove the entity-level exemption for non-profit organizations.
Lower Thresholds: Remove thresholds for businesses processing sensitive or minors’ data and scale back all other thresholds for businesses processing other types of data.
Strengthen Data Minimization: Require data processed to be strictly necessary for stated purposes.
Expand Definition of “Sensitive Data”: Add a comprehensive list of “sensitive data” elements found in other state privacy laws, such as government ID numbers, union membership and neural data.
Clarify Protections for Minors: Prohibit targeted advertising and the sale of minors’ data for consumers that businesses “knew or should have known” are minors.
Narrow Definition of “Publicly Available” Data: Refine and limit the scope of “publicly available” data.
Right to Know Specific Third Parties: Require businesses to name the specific entities receiving consumer data.
Enhance Opt-Out Preference Signal and Deletion Rights: Require all web browsers and mobile operating systems to include a setting that allows users to affirmatively send opt-out preference signals, and create a centralized deletion mechanism.