DOJ’s Data Security Program Final Rules Effective – Implications for Telecom Providers

On January 8, 2025, the U.S. Department of Justice (DOJ) issued its final rule implementing Executive Order 14117, which aims to prevent countries of concern (including China, Cuba, Iran, North Korea, Russia, and Venezuela) from accessing Americans’ bulk sensitive personal data and government-related data (the Data Security Program or DSP). The regulations took effect on April 8, 2025, with additional compliance requirements for U.S. persons taking effect by October 6, 2025.
While the DSP includes an exemption for “telecommunications services” (as specifically defined in the rule), telecommunications providers must closely review their services involving data transactions with countries of concern or covered persons associated with those countries to ensure the particular service provided or transaction falls within the exemption. Non-compliance with the DSP can result in significant civil and criminal penalties, underscoring the importance for telecommunications providers to thoroughly understand and adhere to these rules, where applicable.
Scope and Applicability
The DSP sets forth prohibitions and restrictions on certain data transactions that pose national security risks. The rules are national security regulations addressing identified risks to U.S. national security, rather than privacy regulations designed to protect individual privacy or other individual interests.
The DSP applies to U.S. persons and entities engaging in transactions that provide access to Covered Data to Countries of Concern or Covered Persons associated with those countries in specified ways. Countries of Concern currently include China (including Hong Kong and Macau), Cuba, Iran, North Korea, Russia, and Venezuela, but this list is subject to future change. The DSP defines Covered Persons as entities or individuals associated with a Country of Concern, based on the following criteria:

An entity that is 50% or more owned by a Country of Concern
An entity that is organized or chartered under the laws of a Country of Concern
An entity that has its primary place of business in a Country of Concern
An entity that is 50% or more owned by a Covered Person
A foreign person, as an individual, who is an employee or contractor of a Country of Concern
A foreign person, as an individual, who is primarily a resident in the territorial jurisdiction of a Country of Concern
Any entity or individual that the Attorney General designates as a Covered Person pursuant to the broad discretion set forth in the DSP

Covered Data comprises two primary categories of data: U.S. sensitive personal data and U.S. government-related data. At a high level, the new rules prohibit, restrict, or exempt certain data transactions involving Covered Data that could give countries of concern or Covered Persons access to such data, and are triggered by bulk data transfers, which can include individual transfers that over time exceed specified volume thresholds. The rules include specified recordkeeping and reporting requirements, as well as a process for obtaining approval of otherwise prohibited transfers. They also include enforcement mechanisms with the potential for civil and criminal penalties for non-compliance.
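Because the bulk thresholds can be triggered cumulatively, a compliance program generally needs to aggregate individual transfers to a given counterparty over a rolling period rather than evaluate each transfer in isolation. The sketch below illustrates that aggregation logic only; the threshold figure, 12-month window, and class structure are illustrative assumptions, not values or requirements taken from the rule:

```python
from collections import deque
from datetime import date, timedelta

# Placeholder values for illustration; the DSP's actual thresholds
# vary by data category and are not reproduced here.
BULK_THRESHOLD = 100_000
WINDOW = timedelta(days=365)

class TransferLog:
    """Tracks transfer volumes to one counterparty over a rolling 12-month window."""

    def __init__(self):
        self._events = deque()  # (date, record_count) pairs, oldest first

    def record(self, on: date, records: int) -> None:
        self._events.append((on, records))

    def exceeds_threshold(self, as_of: date) -> bool:
        # Drop events older than the rolling window, then sum what remains.
        while self._events and self._events[0][0] < as_of - WINDOW:
            self._events.popleft()
        return sum(n for _, n in self._events) >= BULK_THRESHOLD

log = TransferLog()
log.record(date(2025, 1, 15), 60_000)
log.record(date(2025, 6, 1), 50_000)
print(log.exceeds_threshold(date(2025, 7, 1)))   # True: 110,000 records in window
print(log.exceeds_threshold(date(2026, 2, 1)))   # False: January transfer has aged out
```

The point of the sketch is that two individually sub-threshold transfers can together cross the line, which is why per-transaction review alone may not be sufficient.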
On April 11, 2025, DOJ issued a compliance guide, along with a list of Frequently Asked Questions (FAQs), to assist entities with understanding and implementing the DSP. DOJ also announced a 90-day limited enforcement period from April 8 to July 8, 2025, focusing on facilitating compliance rather than enforcement, provided that entities are making good faith efforts as outlined in the 90-day policy. By July 8, 2025, entities must be fully compliant with the DSP, as DOJ will begin enforcing the provisions more rigorously. By October 6, 2025, compliance with all aspects of the DSP, including due diligence, audit requirements, and specific reporting obligations, will be mandatory.
For a more detailed discussion of the persons and transactions covered under the DSP and its applicability, including definitions, see our recent alert on Navigating the New DOJ Data Security Program Compliance.
Telecommunications Services Exemption
Of note for telecommunications providers, the DSP, at Rule Section 202.509, includes a “telecommunications services” exemption. This exemption states that the DSP rules “…do not apply to data transactions, other than those involving data brokerage, to the extent that they are ordinarily incident to and part of the provision of telecommunications services,” as that term is defined under the rule. Specifically, under Rule Section 202.252, “telecommunications services” means:
the provision of voice and data communications services regardless of format or mode of delivery, including communications services delivered over cable, Internet Protocol, wireless, fiber, or other transmission mechanisms, as well as arrangements for network interconnection, transport, messaging, routing, or international voice, text, and data roaming.1

Of note, this exemption specifically applies to activities directly related to the technical and operational aspects of delivering telecommunications services and does not extend to ancillary services like marketing or data analytics. The Department also declined to expand the exemption to include data transactions related to IP addresses or cybersecurity services.
Importantly, DOJ made clear that the definition of telecommunications services for purposes of the DSP is unique to the DSP and is without reference to the definition in Section 153(53) of the Communications Act. The import of this is that the definition applies without regard to whether the service is a common carrier offering.
Examples of Exempt and Non-Exempt Transactions
Rule Section 202.509 provides examples of bulk data transfers incident to telecommunications services that fall within the exemption, and an example of a bulk data transfer by a telecommunications service provider that falls outside the exemption:
(1) Example 1. A U.S. telecommunications service provider collects covered personal identifiers from its U.S. subscribers. Some of those subscribers travel to a country of concern and use their mobile phone service under an international roaming agreement. The local telecommunications service provider in the country of concern shares these covered personal identifiers with the U.S. service provider for the purposes of either helping provision service to the U.S. subscriber or receiving payment for the U.S. subscriber’s use of the country of concern service provider’s network under that international roaming agreement. The U.S. service provider provides the country of concern service provider with network or device information for the purpose of provisioning services and obtaining payment for its subscribers’ use of the local telecommunications service provider’s network. Over the course of 12 months, the volume of network or device information shared by the U.S. service provider with the country of concern service provider for the purpose of provisioning services exceeds the applicable bulk threshold. These transfers of bulk U.S. sensitive personal data are ordinarily incident to and part of the provision of telecommunications services and are thus exempt transactions.
This example illustrates where the data sharing is integral to the core function of providing telecommunications services and facilitating international roaming, aligning with the exemption criteria.
(2) Example 2. A U.S. telecommunications service provider collects precise geolocation data on its U.S. subscribers. The U.S. telecommunications service provider sells this precise geolocation data in bulk to a covered person for the purpose of targeted advertising. This sale is not ordinarily incident to and part of the provision of telecommunications services and remains a prohibited transaction.
Here, the sale of geolocation data for advertising purposes is not directly related to the telecommunications service itself, placing it outside the scope of the exemption.
(3) Example 7. A U.S. company owns or operates a submarine telecommunications cable with one landing point in a foreign country that is not a country of concern and one landing point in a country of concern. The U.S. company leases capacity on the cable to U.S. customers that transmit bulk U.S. sensitive personal data to the landing point in the country of concern, including transmissions as part of prohibited transactions. The U.S. company’s ownership or operation of the cable does not constitute knowingly directing a prohibited transaction, and its ownership or operation of the cable would not be prohibited (although the U.S. customers’ covered data transactions would be prohibited). See 28 CFR § 202.305.
This example illustrates that while the infrastructure operation itself is not a prohibited transaction, the data transfers by customers using the international submarine cable are prohibited if they involve countries of concern. This would likely be a direct issue for the underlying customer rather than the telecommunications service provider, though providers may still wish to ensure that their customer agreements include provisions insulating them from any potential exposure arising from such customer non-compliance.

The examples above focus on whether a particular bulk data transfer is “ordinarily incident to and part of the provision of” an exempt telecommunications service. So, for example, arrangements outside the actual provision of the service, such as the sale or sharing of customer data for marketing purposes or with application providers, would appear to be outside the scope of the exemption.
As one example, a number of major mobile carriers had location-based service programs, the subject of a series of FCC enforcement actions, that facilitated, through third-party “location aggregators,” the sharing of user location data with application providers to enable location-based services.2 Example No. 2, above, would suggest that this type of service is not “ordinarily incident to and part of the provision of” a carrier’s mobile data services (the telecommunications service under the DSP definition) and hence falls outside the exemption.
Challenges and Considerations
The harder question, however, and one that will undoubtedly be vexing for providers absent further clarification from DOJ, is the actual scope of the “telecommunications services” definition in the rule. This is particularly true for integrated offerings that clearly include telecommunications services but also include components involving bulk transfers that, standing alone, might fall outside the scope of the telecommunications services definition. Of significance, in adopting this definition, DOJ stated that the definition is limited to the listed telecommunications services and does not reach services like cloud computing.
The recently issued FAQs reinforce this point, stating the definition is “limited to communications services and does not include all internet-based services like cloud computing.” See Question 77. This raises the question of how to treat an offering by a telecommunications services provider that includes both cloud computing and associated transport services. The provision of integrated applications offered by telecommunications services providers in conjunction with their telecommunications service offerings would raise similar questions, particularly, as noted above, in connection with Example No. 2.
Providers should note that any data transaction not essential to the core function of telecommunications—such as partnerships involving user data for non-service-related purposes—may fall outside the exemption. Providers must differentiate between core telecommunications functions and ancillary services, ensuring that services like data analytics or marketing, which are not ordinarily incident to the core telecommunications services, are carefully evaluated for compliance.
Implications of Limitation to Telecommunications Service Exemption
While DOJ’s final rule provides three relatively straightforward examples, issues arise with integrated service offerings, such as telecommunications services that include a cloud computing or data center component. While the telecommunications service aspect appears to be exempt, the data storage or cloud computing aspect would not be, at least if offered on a standalone basis. The same may be true for integrated application offerings in connection with application providers, most obviously, under Example No. 2, in connection with sharing location data. This necessitates a thorough review of service offerings, particularly those bundled with non-telecommunications services like cloud computing, data center services, and applications, to determine compliance with DSP regulations. Accordingly, telecommunications providers must closely examine the integrated services they provide, along with their data sharing arrangements with third parties, to determine whether a transaction may trigger prohibited or restricted data transactions involving countries of concern or Covered Persons.

1 In adopting this definition, DOJ noted that commenters suggested that the definition of telecommunications services be expanded to include voice and data communications over the internet. DOJ agreed and, instead of limiting the scope of “telecommunications services” to the definition in the Communications Act, 47 U.S.C. 153(53) (which would have applied only to common carriers), adopted its own definition of the term to cover present-day communications for purposes of the exemption. Under the Communications Act, telecommunications service means the offering of telecommunications for a fee directly to the public, or to such classes of users as to be effectively available directly to the public, regardless of the facilities used.
2 See FCC Fines AT&T, Sprint, T-Mobile and Verizon Nearly $200 Million for Illegally Sharing Access to Customers’ Location Data (FCC News Release, Apr. 29, 2024); see also AT&T, File No. EB-TCD-18-00027704, Forfeiture Order at ¶¶ 8-10 (FCC 24-40, Apr. 29, 2024), vacated, AT&T v. FCC, No. 24-60223, Slip Op. at 5-6 (5th Cir. Apr. 17, 2025).

North Dakota Expands Data Security Requirements and Issues New Licensing Requirements for Brokers

On April 11, 2025, North Dakota enacted HB 1127, overhauling its regulatory framework for financial institutions and nonbank financial service providers. The law amends multiple chapters of the North Dakota Century Code and creates a new data security mandate for financial corporations—a category that includes non-depository entities regulated by the Department of Financial Institutions (DFI). It also expands the licensing requirement for brokers to include “alternative financing products,” potentially impacting a broad array of fintech providers.
The law introduces sweeping data protection obligations for nonbank financial corporations through new requirements created in Chapter 13-01.2. Specifically, covered entities must:

Implement an information security program. This includes administrative, technical, and physical safeguards, based on a written risk assessment.
Designate a qualified individual. Each financial corporation must designate a qualified individual responsible for overseeing the security program and report annually to its board or a senior officer.
Conduct regular testing. Annual penetration tests and biannual vulnerability assessments are mandatory unless continuous monitoring is in place.
Secure consumer data. Encryption of data in transit and at rest is required unless a compensating control is approved. Multifactor authentication is also mandatory.
Notify regulators of breaches. A data breach involving 500 or more consumers must be reported to the Commissioner within 45 days.
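The breach-notification obligation above reduces to two parameters: a 500-consumer trigger and a 45-day reporting clock. A minimal sketch of that determination follows; the function name and structure are hypothetical, not drawn from the statute, and real incident-response workflows involve many more inputs (e.g., when the clock starts, what counts as a reportable breach):

```python
from datetime import date, timedelta
from typing import Optional

REPORTING_THRESHOLD = 500       # breaches affecting 500 or more consumers
REPORTING_WINDOW_DAYS = 45      # must be reported to the Commissioner within 45 days

def breach_report_deadline(discovered: date, consumers_affected: int) -> Optional[date]:
    """Return the reporting deadline, or None if the threshold is not met."""
    if consumers_affected < REPORTING_THRESHOLD:
        return None
    return discovered + timedelta(days=REPORTING_WINDOW_DAYS)

print(breach_report_deadline(date(2025, 5, 1), 1_200))  # 2025-06-15
print(breach_report_deadline(date(2025, 5, 1), 120))    # None
```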

The bill also amends North Dakota’s broker licensing laws to authorize the DFI to classify certain alternative financing arrangements as “loans.”
Putting It Into Practice: Of the many amendments here, North Dakota’s expansion of licensing requirements for brokers of alternative financing products may have the biggest impact for institutions, especially fintechs.

FTC Publishes Final COPPA Rule Amendments

On April 22, 2025, the Federal Trade Commission published in the Federal Register final amendments to the Children’s Online Privacy Protection Act Rule (the “Rule”). The Rule will go into effect 60 days from publication, on or about June 21, 2025, with a compliance deadline of April 22, 2026. The Rule retains many of the proposed amendments first announced in January 2025 as a result of a Notice of Proposed Rulemaking issued by the FTC in 2024 (the “2024 NPRM”), with certain differences.
Key updates to the Rule include:

Updated definitions: The Rule adds or updates several defined terms, including:

Contact information: The Rule adds to the definition of “online contact information”: mobile phone numbers, “provided the operator uses it only to send a text message.” Under COPPA, operators can use a child or parent’s contact information to provide notice and obtain parental consent without first obtaining consent to the collection of the contact information. According to the FTC, the amendment was intended to give operators another way to initiate the process of seeking parental consent quickly and effectively.
Personal information: The Rule updates the definition of “personal information” to include:

Biometric identifier: The Rule adds to the definition of “personal information”: “a biometric identifier that can be used for the automated or semi-automated recognition of an individual, such as fingerprints; handprints; retina patterns; iris patterns; genetic data, including a DNA sequence; voiceprints; gait patterns; facial templates; or faceprints[.]” Notably, the Rule does not include “data derived from voice data, gait data, or facial data,” which is language that was proposed in the 2024 NPRM.
Government-issued identifier: The Rule adds to the definition of “personal information”: “[a] government-issued identifier, such as a Social Security, [S]tate identification card, birth certificate, or passport number[.]”

Mixed audience website or online service: The FTC first developed this category in the 2013 COPPA Rule amendments, as a subset of “child-directed” websites and online services, but did not define the term. The Rule defines the term as “a website or online service that is directed to children under the criteria set forth in paragraph (1) of the definition of website or online service directed to children, but that does not target children as its primary audience, and does not collect personal information from any visitor prior to collecting age information or using another means that is reasonably calculated, in light of available technology, to determine whether the visitor is a child.” The updated definition further requires that “[a]ny collection of age information, or other means of determining whether a visitor is a child, must be done in a neutral manner that does not default to a set age or encourage visitors to falsify age information.”
Website or online service directed to children: The Rule expands the factors the FTC will consider with respect to whether a website or service is “directed to children,” to include marketing or promotional materials or plans, representations to consumers or third parties, reviews by users or third parties and the ages of users on similar websites or services.

Enhanced direct notice content requirements: The Rule expands the content required in an operator’s direct notice to parents for the purpose of obtaining parental consent where required under COPPA.

Use of personal information: The direct notice must disclose how the operator intends to use the child’s personal information (in addition to the existing requirements to include the categories of the child’s personal information to be collected and the potential opportunities for the disclosure of the child’s personal information).
Third-party disclosures: Where the operator discloses children’s personal information to third parties, the direct notice must specify: (1) the identities or specific categories of the third parties (including the public, if such data is made publicly available), (2) the purposes for such disclosure, and (3) that the parent can consent to the collection and use of the child’s personal information without consenting to the disclosure of such personal information to third parties, except to the extent such disclosure is integral to the website or online service.

Enhanced privacy notice content requirements: The Rule also expands the content required in an operator’s privacy notice displayed on the operator’s website.

Internal operations: The privacy notice must disclose: (1) the specific internal operations for which the operator has collected a persistent identifier and (2) how the operator ensures that such identifier is not used or disclosed to contact a specific individual or for any other purpose not permitted under COPPA’s “support for the internal operations” consent exception.
Audio files: If applicable, a description of how the operator collects audio files containing a child’s voice solely to respond to the child’s specific request and not for any other purpose, and a statement that the operator immediately deletes such audio files thereafter.

Verifiable parental consent methods: The Rule adds three approved methods for verifying a parent’s identity for purposes of obtaining parental consent:

Knowledge-based authentication, provided that (1) the authentication process uses dynamic, multiple-choice questions with an adequate number of possible answers and (2) the questions are difficult enough that a child under 13 could not reasonably answer them correctly.
Government-issued identification, provided that the photo ID is verified to be authentic against an image of the parent’s face using facial recognition technology (and provided that the ID and images are promptly deleted after the match is confirmed).
Text message to the parent coupled with additional steps to confirm the parent’s identity (e.g., a confirmation text to the parent following receipt of consent). (Note that this option is available only under certain enumerated circumstances).

Limited exception to parental consent for the collection of audio files containing a child’s voice: The Rule allows operators to collect audio files containing a child’s voice (and no other personal information) solely to respond to a child’s request without providing direct notice or obtaining parental consent. This exception applies only if the operator does not use the information for any other purpose, does not disclose it, and deletes the data immediately after responding to the request. This amendment codifies a 2017 FTC enforcement policy statement regarding the collection and use of children’s voice recordings.
Limits on data retention and publication of data retention policy: The Rule prevents operators from retaining children’s personal information indefinitely. The Rule specifies that operators may not retain children’s personal information for longer than necessary to fulfill the specific documented purposes for which the data was collected, after which the data must be deleted. Operators also must establish, implement and maintain a written data retention policy that specifies (1) the purposes for which children’s personal information is collected, (2) the specific business need for retaining such data, and (3) a timeline for deleting the data. The data retention policy must be published in the operator’s privacy notice required under COPPA.
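In practice, the retention requirement above pairs a documented purpose with a deletion timeline. The sketch below illustrates one way an operator might encode a written retention policy as data and test records against it; the purpose names and retention periods are invented examples, not values from the Rule:

```python
from datetime import date, timedelta

# Hypothetical written retention policy: each documented collection
# purpose maps to a deletion timeline. Periods here are illustrative only.
RETENTION_POLICY = {
    "account_management": timedelta(days=730),
    "parental_consent_records": timedelta(days=365),
}

def must_delete(purpose: str, collected: date, today: date) -> bool:
    """True once the documented retention period for the purpose has lapsed."""
    period = RETENTION_POLICY.get(purpose)
    if period is None:
        # No documented purpose means no basis to retain the data.
        return True
    return today >= collected + period

print(must_delete("parental_consent_records", date(2024, 1, 1), date(2025, 6, 1)))  # True
print(must_delete("account_management", date(2024, 1, 1), date(2025, 6, 1)))        # False
```

Treating the policy as data makes the "specific documented purposes" and "timeline for deleting the data" elements auditable against actual records.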
Written information security program: The Rule requires operators to establish, implement and maintain a written information security program that contains safeguards appropriate to the sensitivity of the children’s personal information collected and the operator’s size, complexity, and nature and scope of activities. Specifically, operators must, in connection with the written information security program, (1) designate personnel to coordinate the program, (2) at least annually, identify and assess internal and external risks to the security of children’s personal information, (3) implement safeguards to address such identified risks, (4) regularly test and monitor the effectiveness of such safeguards, and (5) at least annually, evaluate and modify the information security program accordingly.
Vendor and third-party due diligence requirements: Before disclosing children’s personal information to other operators, service providers or third parties, the Rule requires operators to “take reasonable steps” to ensure that such entities are “capable of maintaining the confidentiality, security, and integrity” of such data. Operators also must obtain written assurances that such entities will use “reasonable measures to maintain the confidentiality, security, and integrity” of the information.
Increased Safe Harbor transparency: By October 22, 2025, and annually thereafter, FTC-approved COPPA Safe Harbor programs are required to identify in their annual reports to the Commission each operator subject to the self-regulatory program (“subject operator”) and all approved websites or online services, as well as any subject operator that left the program during the time period covered by the annual report. The Safe Harbor programs also must outline their business models in greater detail and provide copies of each consumer complaint related to a member’s violation of the program’s guidelines. The report also must describe each disciplinary action taken against a subject operator and the process for determining whether a subject operator is subject to discipline. In addition, by July 21, 2025, Safe Harbor programs must publicly post (and update every six months thereafter) a list of all current subject operators and, for each such operator, list each certified website or online service. Further, by April 22, 2028, and every three years thereafter, Safe Harbor programs must submit to the FTC a report detailing the program’s technological capabilities and mechanisms for assessing subject operators’ fitness for membership in the program.

New Federal Agency Policies and Protocols for Artificial Intelligence Utilization and Procurement Can Provide Useful Guidance for Private Entities

On April 3, 2025, the Office of Management and Budget (“OMB”) issued two Memoranda (Memos) regarding the use and procurement of artificial intelligence (AI) by executive federal agencies.
The Memos—M-25-21 on “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” and M-25-22 on “Driving Efficient Acquisition of Artificial Intelligence in Government”—build on President Trump’s Executive Order 14179 of January 23, 2025, “Removing Barriers to American Leadership in Artificial Intelligence.”
The stated goal of the Memos is to promote a “forward-leaning, pro-innovation, and pro-competition mindset rather than pursuing the risk-averse approach of the previous administration.” They aim to lift “unnecessary bureaucratic restrictions” while rendering agencies “more agile, cost-effective, and efficient.” Further, they will, presumably, “deliver improvements to the lives of the American public while enhancing America’s global dominance in AI innovation.” The Memos rescind and replace the corresponding M-24-10 and M-24-18 memos on use and procurement from the Biden era.
Although these Memos relate exclusively to the activities of U.S. federal agencies with regard to AI, they contain information and guidance with respect to the acquisition and utilization of AI systems that is transferable to entities other than agencies and their AI contractors and subcontractors with respect to developing and deploying AI assets. In this connection, the Memos underscore the importance of responsible AI governance and management and, interestingly, in large measure mirror protocols and prohibitions found in current state AI legislation that governs use in AI by private companies.
Outlined below are the salient points of each Memo that will be operationalized by the relevant federal agencies throughout the year.
Memorandum M-25-21 (The “AI Use Memo”)
The new AI Use Memo is designed to encourage agency innovation with respect to AI while removing risk-averse barriers to innovation that the present administration views as burdensome. Thus, the policies appear to frame AI less as a regulatory risk and more as an engine of national competitiveness, efficiency, and strategic dominance. Nonetheless, a number of important points from the former Biden-era AI directives have been retained and further developed. The AI Use Memo retains the concept of an Agency Chief AI Officer, yet in the words of the White House, these roles “are redefined to serve as change agents and AI advocates, rather than overseeing layers of bureaucracy.” It continues a focus on privacy, civil rights, and civil liberties, yet as STATNews points out, the Memos omit some references to bias. Other key points include a strong focus on American AI and a track for AI that the administration views as “high-impact.”
Scope
The AI Use Memo applies to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies”—exclusive of, for example, regulatory actions prescribing law and policy; regulatory or law enforcement; testing and research. It does not apply to the national security community, components of a national security system, or national security actions.
Covered Agencies
The AI Use Memo applies to all agencies defined in 44 U.S.C. 3502(1), meaning executive and military departments, government corporations, government controlled corporations, and other establishments in the executive branch, with some exceptions.
Innovation
The AI Use Memo focuses on three key areas: (1) Innovation, (2) Governance, and (3) Public Trust, and contains detailed guidance on:

AI Strategy: Within 180 days, agencies must develop an AI Strategy “for identifying and removing barriers to their responsible use of AI and for achieving enterprise-wide improvements in the maturity of their applications.” The strategy should include:

Current and planned AI use cases;
An assessment of the agency’s current state of AI maturity and a plan to achieve the agency’s AI maturity goals;

Sharing of agency data and AI assets (to save taxpayer dollars);
Leveraging the use of AI products and services;
Ensuring Responsible Federal Procurement: In Executive Order 14275 of April 15, 2025, President Trump announced plans to reform the Federal Acquisition Regulation (FAR), which establishes uniform procedures for acquisitions across executive departments and agencies. E.O. 14275 directs the Administrator of the Office of Federal Procurement Policy, in coordination with the FAR Council, agency heads, and others, to amend the FAR. This will impact how federal government contractors interface with agencies with respect to AI and general procurement undertakings and obligations. With regard to effective federal procurement, the AI Use Memo instructs agencies to

Treat relevant data and improvements as critical assets for AI maturity;
Evaluate performance of procured AI;
Promote competition in federal procurement of AI.

Building an AI-ready federal workforce (training, resources, talent, accountability).

Governance
The AI Use Memo strives to improve AI governance with various roles and responsibilities, including:

Chief AI Officers: Appoint in each agency within 60 days, with specified duties;
Agency AI Governance Board: Convene in each agency within 90 days;
Chief AI Officer Council: Convene within 90 days;
Agency Strategy (described above): Develop within 180 days;
Compliance Plans: Develop within 180 days, and every two years thereafter until 2036;
Internal Agency Policies: Update within 270 days;
Generative AI Policy: Develop within 270 days;
AI Use Case Inventories: Update annually.

Public Trust: High-Impact AI Categories and Minimum Risk Management Practices
A large portion of the AI Use Memo is devoted to fostering risk management policies that impose only the minimum requirements necessary to enable the trustworthy and responsible use of AI, and to ensuring those requirements are “understandable and implementable.”
Agencies are required to implement minimum risk-management practices to manage risks from high-impact AI use cases by:

Determining “High-Impact” Agency Use of AI: The AI Use Memo sets out on pp. 21-22 a list of categories for which AI is presumed to be high impact. Under the definitions section, a use is high-impact “when its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety.” This includes AI that has a significant effect on:

Civil rights, civil liberties or privacy;
Access to education, housing, insurance, credit, employment, and other programs;
Access to critical infrastructure or public safety; or
Strategic assets or resources.

Implementing Minimum Risk Management Practices for High-Impact AI: Agencies must document implementation within 365 days, unless an exemption or waiver applies. The guidelines track closely with the National Institute of Standards and Technology (NIST) risk management framework (RMF), as well as some state AI laws, even though the AI Use Memo does not specifically reference the RMF as guidance.
With respect to high-impact AI, agencies must:

Conduct pre-deployment testing;
Complete an AI impact assessment before deployment, documenting:

Intended purpose and expected benefit;
Quality and appropriateness of relevant data and model capability;
Potential impacts of using AI;
Reassessment scheduling and procedures;
Related costs analysis; and
Results of independent review.

Conduct ongoing monitoring for performance and potential adverse impacts;
Ensure adequate human training and assessment;
Provide additional human oversight, intervention, and accountability;
Offer consistent remedies or appeals; and
Consult and incorporate feedback from end users and the public.

Memorandum M-25-22 (The “AI Procurement Memo”)
Memorandum M-25-22, entitled “Driving Efficient Acquisition of Artificial Intelligence in Government” (the “AI Procurement Memo”), applies to AI systems or services acquired by or on behalf of covered agencies and is meant to be read together with related federal policies. It shares the same applicability as the AI Use Memo, except that it does not apply to AI used incidentally by a contractor during the performance of a contract.
Covered AI
The AI Procurement Memo applies to “data systems, software, applications, tools, or utilities” that are “established primarily for the purpose of researching, developing, or implementing [AI] technology” or “where an AI capability ‘is integrated into another system or agency business process, operational activity, or technology system.’” It excludes “any common commercial product within which [AI] is embedded, such as a word processor or map navigation system.”
Requirements
Under the AI Procurement Memo, agencies are required to:

Update agency policies;
Maximize use of American AI;
Privacy: Establish policies and processes to ensure compliance with privacy requirements in law and policy;
IP Rights and Use of Government Data: Establish processes for use of government data and IP rights in procurements for AI systems or services, with standardization across contracts where possible. Address:

Scope: Scoping licensing and IP rights based on the intended use of AI, to avoid vendor lock-in (discussed below);
Timeline: Ensuring that “components necessary to operate and monitor the AI system or service remain available for the acquiring agency to access and use for as long as it may be necessary”;
Data Handling: Providing clear guidance on handling, access, and use of agency data or information to ensure that the information is only “collected and retained by a vendor when reasonably necessary to serve the intended purposes of the contract”;
Use of Government Data: Ensure that contracts permanently prohibit the use of non-public inputted and outputted results to further train publicly or commercially available AI algorithms absent explicit agency consent.
Documentation, Transparency, Accessibility: Obtain documentation from vendors that “facilitates transparency and explainability, and that ensures an adequate means of tracking performance and effectiveness for procured AI.”

Determine Necessary Disclosures of AI Use in the Fulfillment of Government Contracts: Agencies should be cognizant of risks posed by unsolicited use of AI systems by vendors.

AI Acquisition Practices Throughout the Acquisition Lifecycle
Agencies should identify requirements involved in the procurement, including convening a cross-functional team and determining the use of high-impact AI; conduct market research and planning; and engage in solicitation development, which includes AI use transparency requirements regarding high-impact use cases, provisions in the solicitation to reduce vendor lock-in, and appropriate terms relating to IP rights and lawful use of government data.
Selection and Award
When evaluating proposals, agencies must test proposed solutions to understand the capabilities and limitations of any offered AI system or service; assess proposals for potential new AI risks; and review proposals for any challenges. Contract terms must address a number of items, including IP rights and government data, privacy, vendor lock-in protection, and compliance with risk management practices as described in M-25-21, above.
Vendor Lock-In; Contract Administration and Closeout
Many provisions in the memo, including those in the “closeout” section, guard against dependency on a specific vendor. For example, if a decision is made not to extend a contract for an AI system or service, agencies “should work with the vendor to implement any contractual terms related to ongoing rights and access to any data or derived products resulting from services performed under the contract.”
M-25-22 notes that OMB will publish playbooks focused on the procurement of certain types of AI, including generative AI and AI-based biometrics. Additionally, this Memo directs the General Services Administration (“GSA”) to release AI procurement guides for the federal acquisition workforce that will address “acquisition authorities, approaches, and vehicles,” and to establish an online repository for agencies to share AI acquisition information and best practices, including language for standard AI contract clauses and negotiated costs.
Conclusion
These Memos clearly recognize the importance of an AI governance framework that ensures AI competitiveness while balancing the risks of AI systems deployed to improve agency efficiency and drive government effectiveness—a familiar balance for private companies that use or are considering using AI. As the mandates within the Memos are operationalized over the coming months, EBG will keep our readers posted with up-to-date information.
Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.

TCPA CLASS ACTIONS MORE THAN DOUBLE!!: The Pace of TCPA Filings in 2025 Continues to Skyrocket

So I was talking to a very well known TCPA Plaintiff’s attorney yesterday who told me his filing pace of TCPA class cases was “higher than ever.”
Spoke to another Plaintiff’s lawyer last week who said he had hired two attorneys recently and was looking to hire two more.
There is no question the Plaintiff’s bar is scaling their TCPA operations and there is no end in sight to it. But the results are astounding.
2024 was the peak year for TCPA class actions– with more class actions filed last year than any other year in the TCPA’s history.
2025, however, is set to blow 2024 away.
In the first three months of 2024 there were 239 TCPA class actions filed.
In 2025?
507.
That’s more than double the filings from 2024 so far YTD. 
That means TCPA class litigation is up over 112% year over year, and April’s numbers look to be catastrophically high again.
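The year-over-year figures are easy to verify; a quick arithmetic check of the numbers above:

```python
# Q1 TCPA class action filings cited above.
q1_2024 = 239
q1_2025 = 507

# Percent increase year over year.
pct_increase = (q1_2025 - q1_2024) / q1_2024 * 100

print(f"{pct_increase:.1f}% increase")   # roughly 112%
print(q1_2025 / q1_2024 > 2)             # True: more than double
```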
Indeed, nearly 80% of all TCPA cases are now being filed as class actions. This is compared to 2-5% of other consumer cases that are filed as class cases. (Thanks to WebRecon for the data sets, btw.)
There is no question, therefore, that the TCPA is the single biggest litigation threat to American businesses out there right now and it continues to be the biggest cash cow in history for the Plaintiff’s bar.
And with other law firms flat-out giving false advice with respect to the FCC’s new revocation rules–my goodness–it looks like TCPA class actions will continue to spike.
PROTECT YOURSELF FOLKS.

Cybersecurity: Salt Typhoon’s Persistence is a Cruel Lesson for Smaller Providers

In December 2024, the White House’s Deputy National Security Adviser for Cyber and Emerging Technology confirmed that foreign actors, sponsored by the People’s Republic of China, infiltrated at least nine U.S. communications companies. The attacks, allegedly conducted by China’s state-sponsored Salt Typhoon hacking group, compromised sensitive systems, and exposed vulnerabilities in critical telecommunications infrastructure.
All communications service providers across the U.S. are at risk from this threat, especially those located near a U.S. military facility. To combat it, communications service providers should adopt and implement cybersecurity best practices in alignment with the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework 2.0 and/or the Cybersecurity and Infrastructure Security Agency’s (CISA) Cross-Sector Cybersecurity Performance Goals.
In response to the Salt Typhoon threat, in January of this year, the FCC adopted a Declaratory Ruling and a Notice of Proposed Rulemaking to affirm and increase the cybersecurity obligations of communications service providers. The Declaratory Ruling clarifies that Section 105 of the Communications Assistance for Law Enforcement Act (CALEA) creates a legal obligation for telecommunications carriers to secure their networks against unlawful access and interception. Telecommunications carriers’ duties under Section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks. Such carriers must work to prevent any unauthorized interception of, or access to, their networks (and maintain records thereof). This requires basic cybersecurity hygiene practices such as:

Implementing role-based access controls;
Changing default passwords;
Requiring minimum password strength; and
Adopting multifactor authentication.
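As an illustrative sketch of the third item on the list (not drawn from the FCC ruling or any specific standard), a minimum-password-strength check might look like the following; the policy values and function name are hypothetical:

```python
import re

# Hypothetical policy: minimum length plus at least one lowercase letter,
# one uppercase letter, one digit, and one symbol.
MIN_LENGTH = 12

def meets_minimum_strength(password: str) -> bool:
    """Return True if the password satisfies the hypothetical policy."""
    if len(password) < MIN_LENGTH:
        return False
    required_patterns = [
        r"[a-z]",         # lowercase letter
        r"[A-Z]",         # uppercase letter
        r"\d",            # digit
        r"[^A-Za-z0-9]",  # symbol (anything non-alphanumeric)
    ]
    return all(re.search(p, password) for p in required_patterns)

print(meets_minimum_strength("Str0ng-Passw0rd!"))  # True
print(meets_minimum_strength("weakpass"))          # False
```

Real deployments would layer this under multifactor authentication and role-based access controls rather than rely on composition rules alone.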

Conduct that falls short of this statutory obligation may include failing to patch known vulnerabilities or failing to employ best practices that are known to be necessary in response to identified exploits.
The Notice of Proposed Rulemaking, if adopted, would require providers to adopt and implement cybersecurity and supply chain risk management plans as well as certify compliance with these plans annually to the FCC. The proposed rule would apply to a wide array of providers including facilities-based providers, broadcast stations, television stations, cable systems, AM & FM commercial radio operators, TRS providers, satellite communications providers, and all international section 214 authorization holders. Participants of the FCC’s Enhanced A-CAM Program and NTIA’s BEAD Program are already subject to this requirement.
Ultimately, more FCC regulation is coming. At the same time, cyber incidents are increasing. Communications service providers should consider creating both a cybersecurity and supply chain risk management plan as well as a cybersecurity incident response plan. Such plans should reflect industry best practices outlined in federal guidance documents as described above.
In addition, carriers should review their cybersecurity liability insurance policies to ensure they have sufficient coverage. It’s also critical to review and update vendor and partner contracts for security and supply chain risk management clauses to include provisions for incident response, liability, and retention of information.
Finally, communications service providers should also consider engaging legal counsel to assist their efforts in ensuring that they are adequately protected.
Womble Bond Dickinson has developed a cybersecurity retainer that captures the requirements and proactive procedures necessary to meet the regulations, protect your networks, and deal with the fallout of a cybersecurity breach, including insurance recovery and class action litigation arising from a data breach.

California Bill May Curb the Flood of “Abusive Lawsuits” Targeting “Standard Online Business Activities”

On February 24, 2025, Democratic State Senator Anna M. Caballero introduced Senate Bill 690 (S.B. 690), which aims to curb “abusive lawsuits” under the California Invasion of Privacy Act (“CIPA”) based on the use of cookies and other online technologies. The Bill is now scheduled to be heard by the Senate Public Safety Committee on April 29, 2025.
Over the past few years, the plaintiffs’ bar has leveraged CIPA to hold businesses ransom based on their use of everyday online technologies (e.g., cookies, pixels, beacons, chat bots, session replay and other similar technology) on their websites. The plaintiffs’ bar has claimed such technologies: (1) facilitate “wiretapping” under Section 631 of CIPA; and/or (2) constitute illegal “pen registers” or “trap and trace devices” under Section 638.50 of CIPA. Nearly every business with a public-facing website has been or may soon be targeted with threats of significant liability stemming from the availability of statutory damages under CIPA. Even those businesses that comply with the comprehensive California Consumer Privacy Act of 2018 (“CCPA”), which governs the collection and use of consumer personal information, are not immune from such threats. Faced with the threat of such aggregated statutory damages under CIPA, many businesses opt to pay out settlements to mitigate potentially enterprise-threatening risk. And those rational decisions unfortunately have spawned a cottage industry responsible for an endless stream of filed and threatened CIPA litigation that seemingly has served only to enrich the plaintiffs’ bar.
S.B. 690 might spell doom for these perceived abuses and the negative consequences they have had on online commerce. Caballero states that the bill aims to “[s]top[] the abusive lawsuits against California businesses and nonprofits under CIPA for standard online business activities that are already regulated by” the CCPA.
If enacted, S.B. 690 would exempt online technologies used for a “commercial business purpose” from wiretapping and pen register/trap-and-trace liability. Notably, the definition of “commercial business purpose” broadly encompasses the use of “personal information” in a manner already permitted by the CCPA. The exclusion of such practices from CIPA’s ambit should curb the “abusive lawsuits” cited by Caballero when she unveiled S.B. 690 and provide certainty to businesses engaged in online commerce.

Insight Into DOGE’s Access to HHS’ Systems

Becker’s Hospital Review reports that the Department of Government Efficiency (DOGE) “has access to sensitive information in 19 HHS databases and systems,” according to a court filing obtained by Wired. HHS provided the information during the discovery process in the lawsuit filed by the American Federation of Labor and Congress of Industrial Organizations against the federal government, requesting restriction of DOGE’s access to federal systems.
According to Becker’s, DOGE had not previously disclosed nine of the 19 systems, which “contain various protected health information, ranging from email and mailing addresses to Social Security numbers and medical notes.”
Some of the systems included federal employees’ data and access to Medicare recipients’ personal information. For instance, one system listed is the Integrated Data Repository Cloud system, which “stores and integrates Medicare claims data with beneficiary and provider data sources.” Other listed systems include the NIH Workforce Analytics Workbench, which “tracks current and historical data on the NIH workforce, including headcounts and retirement information,” the Office of Human Resources Enterprise Human Capital Management Investment system, which “manages personnel actions and employee benefits at HHS,” and the Business Intelligence Information System, which “stores cloud-based HHS human resources and payroll data for analysis and reporting.”

Connecticut Office of the Attorney General Issues Annual Report on CTDPA Enforcement

On April 17, 2025, the Connecticut Office of the Attorney General (“OAG”) issued a report highlighting key enforcement initiatives, complaint trends and legislative recommendations aimed at strengthening the Connecticut Data Privacy Act (“CTDPA”). Highlights from the report are summarized below.
Breach Notice Review
In 2024, the OAG received 1,900 breach notifications. Each report was reviewed for compliance with state law. The OAG issued numerous warning letters to covered businesses that failed to provide timely notice, emphasizing that the 60-day statutory clock starts at the detection of suspicious activity—not when the full scope is confirmed. In serious cases, the OAG pursued Assurances of Voluntary Compliance requiring businesses to improve incident response practices and pay penalties.
Consumer Complaints
The OAG continues to receive significant complaint volumes regarding CTDPA compliance. Issues include unfulfilled data rights requests, misleading privacy notices, vague breach notifications, and misuse of public records for online profiles.
Enforcement Actions
The report highlighted enforcement actions on several violations, including the following:

Privacy Notices: The OAG conducted “sweeps” of insufficient or inadequate privacy notices and issued over two dozen cure notices. Common issues included missing CTDPA language, unclear opt-out mechanisms, and misleading limitations on consumer rights. Most businesses took corrective steps following notice.
Facial Recognition Technology: The OAG sent a cure notice to a regional supermarket due to its use of facial recognition technology (for purposes of preventing and/or detecting shoplifting). The OAG noted that businesses using facial recognition must comply with CTDPA’s protections for biometric data. The OAG clarified that crime prevention purposes do not exempt compliance.
Marketing and Advertising Practices: The OAG investigated a complaint involving a national cremation services company that mailed a targeted advertisement to a Connecticut resident shortly after receiving medical treatment. While the data used—name, age and zip code—was not classified as sensitive, the OAG expressed concern over the context and issued a cure notice. As a result, the company updated its privacy notice to disclose its use of third-party data and specify the categories of data collected. The case underscores that for the OAG, even non-sensitive data, when used in sensitive contexts, can lead to privacy harms and warrants heightened oversight.
Dark Patterns and Opt-Out Mechanisms: The OAG has significantly expanded its enforcement efforts to address manipulative design choices—commonly known as “dark patterns”—that interfere with consumer privacy rights. In a 2024 enforcement sweep, the OAG issued cure notices to businesses employing cookie banners that made it easier to consent to data tracking than to opt out.
Minors’ Online Services: The report notes that as of October 1, 2024, the CTDPA imposes new obligations on businesses that offer an “online service, product or feature” to minors under 18 years of age. Generally, these provisions require that businesses use reasonable care to avoid causing a heightened risk of harm to minors. Further, these provisions prohibit: (1) the processing of a minor’s personal data without consent for purposes of targeted advertising, profiling, or sale; (2) using a system design feature to significantly increase, sustain, or extend a minor’s time online; and (3) collecting a minor’s precise geolocation data without consent. 
Consumer Health Data: The report notes that controllers must obtain opt-in consent for processing consumer health data and ensure proper contractual safeguards when sharing such data with processors. Two telehealth companies received letters related to potential unauthorized sharing with technology platforms.
Universal Opt-Out Preference Signals: The report also notes that as of January 1, 2025, businesses must recognize browser-based opt-out signals such as GPC. The OAG has emphasized that this requirement is key to easing consumer privacy management. The OAG also notes that going forward, it will be focused on examining whether businesses are complying with the universal opt-out preference signal provisions and that the OAG expects to engage in efforts to ensure this consumer right is upheld.
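Universal opt-out preference signals such as GPC reach a website’s server as a `Sec-GPC: 1` request header (browsers also expose the signal to page scripts). A minimal server-side sketch of detecting the signal, with a hypothetical function name and header-dictionary shape, might look like:

```python
# Minimal sketch: treat a request carrying "Sec-GPC: 1" as a universal
# opt-out signal. The function name and headers-as-dict shape are
# hypothetical; real frameworks expose headers through their own APIs.
def gpc_opt_out_requested(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal."""
    # HTTP header names are case-insensitive, so normalize before checking.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

print(gpc_opt_out_requested({"Sec-GPC": "1"}))  # True
print(gpc_opt_out_requested({}))                # False
```

A business honoring the signal would then suppress sale/sharing of that visitor’s data, which is the behavior the CTDPA provision contemplates.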

CTDPA Legislative Recommendations
The OAG reiterated eight proposed legislative changes to improve the CTDPA:

Scale Back Exemptions: Limit current entity-level exemptions for GLBA and HIPAA, narrow the FCRA data-level exemption and remove the entity-level exemption for non-profit organizations.
Lower Thresholds: Remove thresholds for businesses processing sensitive or minors’ data and scale back all other thresholds for businesses processing other types of data.
Strengthen Data Minimization: Require data processed to be strictly necessary for stated purposes.
Expand Definition of “Sensitive Data”: Add a comprehensive list of “sensitive data” elements found in other state privacy laws, such as government ID numbers, union membership and neural data.
Clarify Protections for Minors: Prohibit targeted advertising and sale of minors’ data for consumers that a business “knew or should have known” are minors.
Narrow Definition of “Publicly Available” Data: Refine and limit the scope of “publicly available” data.
Right to Know Specific Third Parties: Require businesses to name the specific entities receiving consumer data.
Enhance Opt-Out Preference Signal and Deletion Rights: Require all web browsers and mobile operating systems to include a setting that allows users to affirmatively send opt-out preference signals, and create a centralized deletion mechanism.

10th CIRCUIT EXPANDS TCPA EMERGENCY PURPOSES EXCEPTION: Calls Made to Inform Residents of Virtual Town Halls During Covid-19 Are Covered

For many of us, Covid-19 feels like a distant memory. But with the TCPA’s four-year statute of limitations, what’s in the past is rarely forgotten. The 10th Circuit Court of Appeals, in particular, has not forgotten the challenges of maintaining normalcy amidst social distancing measures and the raging pandemic.
In a recent decision, the 10th Circuit affirmed the New Mexico District Court’s dismissal of a TCPA case, holding that calls made by the City of Albuquerque to inform residents about virtual town halls during the Covid-19 pandemic are covered by the TCPA’s emergency purposes exception. This decision notably expands the scope of the emergency purposes exception – previously limited to calls conveying urgent health and safety information – to cover broader mitigation measures tied to public health emergencies.
Plaintiff Gerald Silver brought a putative class action against the City of Albuquerque, alleging that the city violated the TCPA by making pre-recorded phone calls inviting its residents to attend virtual town hall meetings during the COVID-19 pandemic. The calls were made to residents with phone numbers in the (505) area code. Silver’s complaint alleges that he received “at least seven prerecorded voice calls from the city on his cell phone” about the town halls.
Both parties agreed there was no commercial purpose to the calls, and that, during the period in which the calls were made, the federal government, the State of New Mexico, and the City of Albuquerque had all declared a state of public health emergency relating to Covid-19.
The city moved to dismiss Silver’s claim on two grounds: First, the city argued it was not subject to the TCPA because it was not a qualifying “person” under the statute; and second, the city contended that, even if it was subject to the TCPA, the calls fell under the TCPA’s exception for calls made for emergency purposes.
While the Court skirted around the issue of whether the TCPA applies to local governments, it held that Silver’s complaint did not show a violation of the TCPA. Although the TCPA generally prohibits the use of robocalls, it excepts from coverage “calls made necessary in any situation affecting the health and safety of consumers.” 47 C.F.R. § 64.1200(f)(4). The Court of Appeals undertook a two-step inquiry to determine whether the City’s calls were covered by the emergency purposes exception, looking at their (1) context and (2) content. Because the caller was a local government official, the “context” prong of the inquiry was satisfied. The “content” prong was also satisfied, because each call was informational. And because the City made the calls to inform citizens that town hall meetings would be held virtually—a mitigation measure “made necessary” by the social-distancing requirements of the pandemic—the Court of Appeals held that the calls fall squarely within the exception.
“Because a virtual town hall meeting is itself a mitigation measure, any communications regarding those town hall meetings satisfy the content prong of the emergency purposes exception.”

The Court of Appeals also rejected Silver’s argument that because he had not expressed a desire to attend the town hall meetings, the phone calls were not relevant to him. The Court held that the emergency purposes exception does not require that calls be tailored to an individual’s preferences, but rather to an emergency that is relevant to the called party. Here, the emergency was the pandemic which, along with any associated mitigation measures, was relevant to all City residents.
Silver next argued that there were less intrusive means for the city to inform residents about the town halls, or, in the alternative, that the City’s calls could not have related to the pandemic because they did not explicitly mention Covid-19. Both these arguments failed to persuade the Court, because (1) the TCPA does not require calls to use specific words to invoke the emergency purposes exception, and (2) a caller is not required to use the least intrusive means available.
You can read the Court of Appeals’ Order here: Gerald Silver v. City of Albuquerque

Re: Watch What You Say Here

The Commercial Electronic Mail Act (CEMA) is a Washington State law that prohibits sending state residents a commercial email misrepresenting the sender’s identity. A commercial email promotes real property, goods, or services for sale or lease. A recent Washington Supreme Court opinion held that this prohibition includes the use of any false or misleading information in the subject line of a commercial email and is not limited to false or misleading information about the commercial nature of the message. Brown v. Old Navy, LLC, No. 102592-1 (Wash. 4/17/25).
The case arose when the plaintiffs sued Old Navy after allegedly receiving emails with false or misleading subject lines about the retailer’s promotions. The plaintiffs categorized four types of false and misleading emails from Old Navy:

Emails that announced offers available longer than stated in the subject line;
Emails that suggested an old offer was new;
Emails that suggested the end of an offer; and
Emails that stated a promotion extension.

For example, plaintiffs claimed that they received emails with subject lines including phrases like “today only” or “three days only” when sales or promotions lasted longer. The plaintiffs also pointed to emails from Old Navy about a 50% off promotion that would supposedly end that day, but continued in the following days. Plaintiffs argued that such emails violate CEMA because of false or misleading subject lines.
The applicable CEMA provision prohibits entities from sending commercial emails that “contain false or misleading information in the subject line.” RCW 19.190.020(1)(b). While plaintiffs argued that the provision refers to any information, Old Navy asserted that the prohibition is directed at statements in the subject line that mislead the recipient as to what the email is about. The Washington Supreme Court noted that the plain meaning of Subsection 1(b), and CEMA’s general truthfulness requirements, indicate that the statute applies to any information contained in an email subject line.
Old Navy also claimed that the plaintiffs’ interpretation of the subsection would punish Old Navy for “banal hyperbole.” According to the retailer, such puffery was not intended to be in CEMA’s scope. The court noted that though this issue was not within the scope of the narrow question in the case, typical puffery, including statements such as “Best Deal of the Year,” is not misrepresentation or false because “market conditions change such that a better sale is later available.” According to the court, mere puffery differs from representations of fact, such as “the duration or availability of a promotion, its terms and nature, the cost of goods, and other facts” that are important to Washington consumers when making decisions.
Though five justices signed the majority opinion, four others dissented. The dissent notes the antispam legislative intent and history behind CEMA, arguing that the legislature was concerned about the “volume of commercial electronic mail being sent,” which suggests the narrower interpretation of Subsection (1)(b) that Old Navy proposed. The dissenting opinion points to the preceding provision of CEMA, which precludes transmitting an email that “[u]ses a third party’s internet domain name without permission of the third party, or otherwise misrepresents or obscures any information in identifying the point of origin or the transmission path of a commercial electronic mail message.” RCW 19.190.020(1)(a). According to the dissent, Subsection (1)(b) should “be read in harmony” with Subsection (1)(a) and should be interpreted to address the prevention of sending emails that hide the email’s origin and promotional purpose. In support of its position, the dissent includes the example of the Washington Attorney General’s Office website, which directs consumers to “[c]arefully examine the body of the email message as it relates to the email’s subject line” and see if “it accurately describe[s] what is contained in the email” to determine whether the subject line would violate CEMA.
Companies can expect increased CEMA litigation due to this case. Those engaging in email marketing should be mindful of their subject line language. Statements about the nature of specific offers could be subject to increased scrutiny in Washington state. When choosing between general puffery and a more targeted subject about a specific offer, businesses may want to err on the more conservative side of the line (pun intended).

FTC Settles With accessiBe For Misleading Statements About WCAG Compliance

The Federal Trade Commission (FTC) announced on April 22, 2025, that it has approved a settlement and entered into a Final Order with accessiBe, which claimed its plug-in product, accessWidget, “can make any website compliant with Web Content Accessibility Guidelines (WCAG).” The settlement includes the payment of $1 million and requires accessiBe to refrain from “making misleading claims.” The Commission approved the Final Order by a unanimous 3-0 vote.
The FTC had filed a complaint against accessiBe Ltd alleging that “despite the company’s claims, accessWidget did not make all user websites WCAG-compliant and these claims were false, misleading, or unsubstantiated.” The complaint further alleged that the company “deceptively formatted third-party articles and reviews to appear as if they were independent opinions.”
The settlement reinforces the FTC’s continued focus on misleading claims, and companies should check the accuracy of representations made on websites.