Privacy Tip #443 – Fake AI Tools Used to Install Noodlophile
Threat actors are leveraging the publicity around AI tools to trick users into downloading the malware known as Noodlophile through social media sites.
Researchers from Morphisec have observed threat actors, believed to originate from Vietnam, posting on Facebook groups and other social media sites touting free AI tools. Users lured by the promise of these free AI tools unwittingly download Noodlophile Stealer, “a new malware that steals browser credentials, crypto wallets, and may install remote access trojans like XWorm.” Morphisec observed “fake AI tool posts with over 62,000 views per post.”
According to Morphisec, Noodlophile is a previously undocumented malware that criminals sell as malware-as-a-service, often bundled with other tools designed to steal credentials.
Beware of deals that are too good to be true, and exercise caution when downloading any content from social media.
Virginia Will Add to Patchwork of Laws Governing Social Media and Children (For Now?)
Virginia’s governor recently signed into law a bill that amends the Virginia Consumer Data Protection Act. As revised, the law will include specific provisions impacting children’s use of social media. Unless contested, the changes will take effect January 1, 2026. Courts have struck down similar laws in other states (see our posts about those in Arkansas, California, and Utah) and thus opposition seems likely here as well. Of note, the social media laws that have been struck down in other states attempted to require parental consent before minors could use social media platforms. This law is different, as it allows account creation without parental consent. Instead, it places restrictions on account use for both minors and social media platforms.
As amended, the Virginia law will require social media companies to use “commercially reasonable” means to determine if a user is under 16. An example given in the law is a neutral age gate. The age verification requirement is similar to those proposed in other states’ social media laws. (And it was that requirement that was central to the court’s decision in striking down Arkansas’ law.) Use of social media by under-16s will default to one hour per day, per app. Parents can increase or decrease these time limits. That said, the bill expressly states that there is no obligation for social media companies to give those parents who give their consent “additional or special access” to, or control over, their children’s accounts or data.
The law will limit use of age verification information to only that purpose. An exception is if the social media company is using the information to provide “age-appropriate experiences” – though the bill does not explain what such experiences entail. Finally, of note, even though these provisions may increase costs on companies, the bill specifically prohibits companies from increasing costs or decreasing services for minor accounts.
Putting it Into Practice: We will be monitoring this law to see if the Virginia legislature has success in regulating children’s use of social media. This modification reflects not only a focus on children’s use of social media, but also continued changes to US State “comprehensive” privacy laws.
James O’Reilly contributed to this article
DOJ Criminal Division Updates (Part 1): DOJ’s New White Collar Crime Enforcement Plan
On May 12, DOJ’s Criminal Division head, Matthew G. Galeotti, issued a memo to all Criminal Division personnel, entitled “Focus, Fairness, and Efficiency in the Fight Against White-Collar Crime,” to “outline the Criminal Division’s enforcement priorities and policies for prosecuting corporate and white-collar crimes in the new administration.” The memo highlights 10 priority areas for investigation and prosecution, calls for a revision of the Division’s Corporate Enforcement and Voluntary Self-Disclosure Policy to provide increased incentives to corporations, and previews “streamlining corporate investigations” with an emphasis on fairness and efficiency as well as a reduction in corporate monitorships.
Ten Priority Areas for Investigation and Prosecution
The memo enumerates the following ten areas of focus:
Health care fraud;
Trade and customs fraud, including tariff evasion;
Fraud perpetrated through VIEs (variable interest entities);
Fraud that victimizes U.S. investors, such as Ponzi schemes and investment fraud;
Sanctions violations or conduct that enable transactions by cartels, TCOs, hostile nation-states, and/or foreign terrorist organizations;
Provision of material support to foreign terrorist organizations;
Complex money laundering, including schemes involving illegal drugs;
Violations of the Controlled Substances Act and the FDCA (Food, Drug, and Cosmetic Act);
Bribery and money-laundering that impact U.S. national interests, undermine U.S. national security, harm the competitiveness of U.S. business, and enrich foreign corrupt officials; and
Digital asset crimes, with high priority to cases involving cartels, TCOs, drug money-laundering or sanctions evasion.
These 10 areas of focus — and the order in which they are listed — echo the priorities laid out in the Trump administration’s enforcement-related executive orders and memos published to date.[1]
More broadly, Galeotti described the priorities as DOJ’s effort to “strike an appropriate balance between the need to effectively identify, investigate, and prosecute corporate and individuals’ criminal wrongdoing while minimizing unnecessary burdens on American enterprise.” Galeotti explained that “[t]he vast majority of American businesses are legitimate enterprises working to deliver value for their shareholders and quality products and services for customers” and therefore “[p]rosecutors must avoid overreach that punishes risk-taking and hinders innovation.” Galeotti also made clear that DOJ attorneys “are to be guided by three core tenets: (1) focus; (2) fairness; and (3) efficiency.” He also directed that the Criminal Division’s Corporate Whistleblower Awards Pilot Program be amended to reflect these priority areas of focus.[2]
Emphasis on Individuals and Leniency Toward Corporations
Galeotti emphasized the Criminal Division’s focus on prosecuting individuals and the need to give further weight to corporations’ efforts to remediate the actions of individual bad actors. Galeotti promised the Criminal Division would “investigate these individual wrongdoers relentlessly to hold them accountable” and directed the revision of the Division’s Corporate Enforcement and Voluntary Self-Disclosure Policy (CEP) to provide more opportunities for leniency where it is determined corporate criminal resolutions are necessary for companies that self-disclose and fully cooperate. These revisions include shorter terms for non-prosecution and deferred prosecution agreements, reduced corporate fines, and limited use and terms of corporate monitors.[3] Galeotti has also specifically directed a review of the terms of all current agreements with companies to determine whether they should be terminated early, and DOJ has already begun terminating agreements whose obligations it determined have been fully met.
Streamlining Corporate Investigations
Finally, Galeotti emphasized the need to minimize the unnecessary cost and disruption to U.S. businesses due to DOJ’s investigations and to “maximize efficiency.”
More Efficient Investigations
While acknowledging the complexity and frequent cross-border nature of the Division’s investigations, the memo instructs prosecutors to “take all reasonable steps to minimize the length and collateral impact of their investigation, and to ensure that bad actors are brought to justice swiftly and resources are marshaled efficiently.” The Assistant Attorney General’s office will, along with the relevant Section, track investigations to ensure they are “swiftly concluded.”
Limitation on Corporate Monitorships
DOJ will impose compliance monitorships only when it deems them necessary and has directed that those monitorships, when imposed, should be “narrowly tailored.” Building upon a previous administration’s memorandum,[4] DOJ issued a May 12 Memorandum on Selection of Monitors in Criminal Division Matters, which provides factors for considering whether a monitorship is appropriate and guidelines to ensure a monitorship is properly tailored to address the “risk of recurrence” and “reduce unnecessary costs.” In considering the appointment of a monitor, prosecutors are to consider the:
Risk of recurrence of criminal conduct that significantly impacts U.S. interests;
Availability and efficacy of other independent government oversight;
Efficacy of the compliance program and culture of compliance at the time of the resolution; and
Maturity of the company’s controls and its ability to independently test and update its compliance program.
The chief of the relevant section, as well as the Assistant Attorney General, must approve all monitorships, and the memo lays out additional details regarding the monitor’s appointment and oversight as well as the monitor selection process.
Takeaways
DOJ’s current hiring freeze and recent personnel reductions/reassignments should not be taken as a sign that white collar crime will be permitted to flourish under the current administration. Rather, Galeotti’s May 12 memo further solidifies the enforcement policies and priorities the DOJ has been previewing since day one of the Trump administration and provides more clarity on what to expect when engaging with the Criminal Division and where it will be focusing its now-more-limited resources. Companies should familiarize themselves with this memo and corresponding updates related to whistleblowers, corporate enforcement and self-disclosures, and monitorships to ensure companies are appropriately assessing their risk profile, addressing potential misconduct, and meeting government expectations.
[1] See, e.g., Executive Order 14157, Designating Cartels and Other Organizations as Foreign Terrorist Organizations and Specially Designated Global Terrorists (Jan. 20, 2025) (Cartels Executive Order); Memorandum from the Attorney General, Total Elimination of Cartels and Transnational Criminal Organizations (Feb. 5, 2025) (Cartels and TCOs AG Memorandum); Executive Order 14209, Pausing Foreign Corrupt Practices Act Enforcement to Further American Economic and National Security (Feb. 10, 2025).
[2] See “DOJ Criminal Division Updates (Part 2): Department of Justice Updates its Corporate Criminal Whistleblower Awards Pilot Program”
[3] See “DOJ Criminal Division Updates (Part 3): New Reasons for Companies to Self-Disclose Criminal Conduct”
[4] March 7, 2008 Craig Morford Memorandum (addressing selection and responsibilities of a corporate monitor).
California Privacy Protection Agency Releases Updated Regulations: What’s Next?
This month, the California Privacy Protection Agency (CPPA) Board discussed updates to the California Consumer Privacy Act (CCPA) draft regulations related to cybersecurity audits, risk assessments, automated decision-making technology (ADMT), and insurance.
The CPPA received comments on the first draft of the regulations between November 22, 2024, and February 19, 2025, and that feedback was discussed at last month’s board meeting.
Based on the discussions at last month’s meeting, the CPPA made further revisions to the draft, which include the following:
Definition of ADMT: ADMT will no longer include technology that ONLY executes a decision or substantially facilitates human decision-making; the definition will only include technology that REPLACES or substantially replaces human decision-making.
Definition of Significant Decision: Risk assessments and ADMT obligations are triggered by certain data processing activities that lead to “significant decisions” that affect a consumer; the updated draft no longer includes decisions that determine “access” to certain services as triggering events. However, financial or lending, housing, education, employment, and independent contracting services constitute services that implicate whether a significant decision is being made about a consumer; insurance, criminal justice services and essential goods and services were removed from the list of services in the latest draft.
First-Party Advertising: Under the updated draft, companies are not required to conduct risk assessments or comply with the ADMT obligations simply because they profile consumers for behavioral advertising (i.e., first-party advertising does not trigger these requirements under the new draft).
ADMT Training and Personal Information: Companies will only be required to conduct a risk assessment if they process personal information to train ADMT for specific purposes.
Sensitive Location Profiling: Companies will not be required to conduct a risk assessment simply because they profile consumers through systematic observation in publicly accessible spaces; they will only have to adhere to the risk assessment requirement if the company profiles a consumer based on the individual’s presence in a “sensitive location” (i.e., healthcare facilities, pharmacies, domestic violence shelters, food pantries, housing or emergency shelters, educational institutions, political party offices, legal services offices, and places of worship).
Artificial Intelligence: The updated draft does not refer to “artificial intelligence” (AI) and AI terminology has been removed. However, AI systems would fall under the definition of ADMT and be subject to the other requirements under the updated regulations.
Cybersecurity Audits: If a company meets the risk threshold, the first cybersecurity audit must be completed as follows:
April 1, 2028, if the business’s annual gross revenue for 2026 is more than $100 million.
April 1, 2029, if the business’s annual gross revenue for 2027 is at least $50 million but no more than $100 million.
April 1, 2030, if the business’s annual gross revenue for 2028 is less than $50 million.
Thereafter, if a company meets the risk thresholds under the law, it must conduct a cybersecurity audit annually, irrespective of gross annual revenue.
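For companies tracking these deadlines internally, the schedule above reduces to a simple threshold lookup on annual gross revenue. The sketch below is a minimal, illustrative encoding of the dates listed above, assuming the draft’s stated revenue tiers; the function name and structure are hypothetical, and the final regulations should be consulted before relying on any date.

```python
from datetime import date

# Illustrative sketch only: first cybersecurity audit deadlines under the draft
# CCPA regulations, keyed to annual gross revenue in the stated measurement year.
# Function and variable names are hypothetical; confirm against the final text.
def first_audit_deadline(revenue_by_year):
    """Return the first audit due date for a business meeting the risk threshold."""
    if revenue_by_year.get(2026, 0) > 100_000_000:
        return date(2028, 4, 1)   # >$100M in 2026 -> audit due April 1, 2028
    if 50_000_000 <= revenue_by_year.get(2027, 0) <= 100_000_000:
        return date(2029, 4, 1)   # $50M-$100M in 2027 -> audit due April 1, 2029
    return date(2030, 4, 1)       # <$50M in 2028 -> audit due April 1, 2030

# Example: a business with $120M in 2026 revenue owes its first audit by April 1, 2028.
print(first_audit_deadline({2026: 120_000_000}))  # 2028-04-01
```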
Submission of Risk Assessments: Under the updated draft, companies no longer have to submit their risk assessments to the CPPA; instead, the company must provide an attestation and a point of contact for the company. Such documentation is due to the CPPA by April 1, 2028, for risk assessments completed in 2026 and 2027; after 2027, the documentation must be submitted by April 1 of the year following any year the risk assessment was conducted.
So, what’s next?
The CPPA initiated another public comment period, ending on June 2, 2025.
The CPPA MUST finalize the draft regulations by November 25, 2025:
If the CPPA files the final regulations by August 31, 2025, then the updates will take effect on October 1, 2025;
If the CPPA files the final regulations AFTER August 31, 2025, then the updates will take effect on January 1, 2026.
Todd Snyder Fined for Technical CCPA Violations
The California Privacy Protection Agency (CPPA) Board issued a stipulated final order against Todd Snyder, Inc., a clothing retailer based in New York, requiring the company to pay a $345,178 fine and update its privacy program to settle allegations that it violated the California Consumer Privacy Act (CCPA). Specifically, Todd Snyder must update its methods for submitting and fulfilling privacy requests and provide training to its staff about CCPA requirements. Todd Snyder is also required to maintain a contract management and tracking process so that required CCPA contractual terms are included in contracts with third parties with access to or receipt of personal information.
The CPPA alleged that Todd Snyder violated the CCPA as follows:
Its consumer privacy rights request process collected much more information than necessary to fulfill privacy requests. Specifically, the privacy portal on Todd Snyder’s website used by consumers to submit privacy rights requests required consumers to provide their first and last name, email, country of residence, and a photograph of the consumer holding the consumer’s “identity document” (such as a driver’s license or passport, which is considered “sensitive information” under the CCPA), regardless of the type of privacy request. The sensitive information is unnecessary to exercise a request to opt out of the sale and/or sharing of personal information.
It failed to oversee and properly configure its third-party consumer privacy request portal for 40 days. The Todd Snyder website utilizes third-party tracking technologies, including cookies, pixels, and other trackers that automatically send data about consumers’ online behavior to third-party companies for analytics and behavioral advertising. The CPPA alleges that the opt-out mechanism on the website was not properly configured for a 40-day period. During that period, if the consumer clicked on the cookie preferences link on the website, a pop-up appeared but then immediately disappeared, making it impossible for the consumer to opt out of the sale or sharing of their personal information.
The lesson here is that a company cannot pass on its privacy compliance obligations to a third-party privacy management platform; the company itself is responsible for the functionality of such platforms. Michael Macko, head of the CPPA’s Enforcement Division, stated in a press release, “Using a consent management platform doesn’t get you off the hook for compliance [. . .] the buck stops with the businesses.” Your company cannot rely on its third-party privacy management platform for compliance and expect no accountability in the event of non-compliance; you must conduct due diligence and validate that the operation is functioning and compliant with CCPA requirements.
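That validation can be partly automated. The sketch below, which assumes the Playwright browser-automation library and uses a purely hypothetical URL and selectors (it is not drawn from the order or from any real site), illustrates one way to check that a cookie-preferences dialog actually appears and stays on screen long enough for a consumer to use it.

```python
# Illustrative sketch only: automated check that a consent dialog stays visible.
# Assumes the Playwright library; the URL and selectors below are hypothetical.
from playwright.sync_api import sync_playwright

SITE_URL = "https://www.example.com"          # hypothetical
PREFS_LINK = "text=Cookie Preferences"        # hypothetical selector
PREFS_DIALOG = "#cookie-preferences-dialog"   # hypothetical selector

def consent_dialog_stays_visible() -> bool:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(SITE_URL)
        page.click(PREFS_LINK)
        dialog = page.locator(PREFS_DIALOG)
        dialog.wait_for(state="visible", timeout=5_000)
        # The alleged defect was a dialog that appeared and immediately vanished,
        # so re-check visibility after a short delay to catch that failure mode.
        page.wait_for_timeout(3_000)
        still_visible = dialog.is_visible()
        browser.close()
        return still_visible

if __name__ == "__main__":
    print("Opt-out dialog usable:", consent_dialog_stays_visible())
```

A periodic check along these lines is no substitute for legal review, but it can surface exactly the kind of misconfiguration at issue here before a regulator does.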
This is likely only the start of the CPPA’s enforcement sweep. The time is now—assess your CCPA compliance program and processes, and ensure they are up to par.
5 Key Contracting Considerations for Digital Health Companies Working with AI Vendors
Artificial Intelligence (AI) is rapidly transforming digital health — from patient engagement to clinical decision-making, the changes are revolutionary. Contracting with AI vendors presents new legal, operational, and compliance risks. Digital health CEOs and legal teams must adapt traditional contracting playbooks to address the realities of AI systems handling sensitive and highly regulated health care data.
To assure optimal results, here are five critical areas for digital health companies to address in the contract negotiation process with potential AI vendors:
1. Define AI Capabilities, Scope, and Performance
Your contract should explicitly:
Describe what the AI tool does, its limitations, integration points, and expected outcomes.
Establish measurable performance standards and incorporate them into service-level agreements.
Include user acceptance testing and remedies, such as service credits or termination if performance standards are not met. This protects your investment in AI-driven services and aligns vendor accountability with your operational goals.
2. Clarify Data Ownership and Usage Rights
AI thrives on data, so clarity around data ownership, access, and licensing is essential. The contract should state the specific data the vendor can access and use — including whether such data includes protected health information (PHI), other personal information, or operational data — and whether it can be used to train or improve the vendor’s models. Importantly, your contract should ensure that any vendor use of data aligns with HIPAA, state privacy laws, and your internal policies, including restricting reuse of PHI or other sensitive health data for purposes other than the vendor providing the services to your company or other purposes permitted by law. There is much greater flexibility to license access for the vendor to use your de-identified data to train or develop AI models, if the company has the appetite for such data licensing.
You should also scrutinize broad data licenses. Be careful not to assume liability for how a vendor repurposes your data unless the use case is clearly authorized in the contract.
3. Demand Transparency and Explainability
Regulators and patients expect transparency in AI-driven health care decisions. Require documentation that explains how the AI model works, the logic behind outputs, and what safeguards are in place to mitigate bias and inaccuracies.
Beware of vendors reselling or embedding third-party AI tools without sufficient knowledge or flow-down obligations. The vendor should be able to audit or explain the tools it licenses from third parties if those AI tools are handling your company’s sensitive health care data.
4. Address Liability and Risk Allocation
AI-related liability, especially from errors, hallucinations, or cybersecurity incidents, can have sizable consequences. Ensure the contract includes tailored indemnities and risk allocations based on the data sensitivity and function of the AI tool.
Watch out for vendors who exclude liability for AI-generated content. This may be acceptable for internal tools but not for outputs that reach patients, payors, or regulators. Low-cost tools with high data exposure can pose a disproportionate liability risk, which is especially true if liability caps are tied only to the contract fees.
5. Plan for Regulatory Compliance and Change
With evolving rules from federal and state privacy regulators, vendors must commit to ongoing compliance with current and future requirements. Contracts should allow flexibility for future changes in law or best practices. This will better help ensure that the AI tools your company relies on will not fall behind the regulatory curve — or worse, expose your company to enforcement risk due to noncompliance or outdated model behavior.
Incorporating this AI Vendor Contracting Checklist into your vendor selection process will help CEOs systematically manage risks, compliance, and innovation opportunities when engaging with AI vendors.
AI Vendor Contracting Checklist:
Define AI scope, capabilities, and performance expectations.
Clarify data ownership, access, and privacy obligations.
Require transparency and explainability of AI processes.
Set clear liability, risk, and compliance responsibilities.
Establish terms for updates, adaptability, and exit strategy.
AI solutions in the health care space continue to rapidly evolve. Thus, digital health companies should closely monitor any new developments and continue to take necessary steps towards protecting themselves during the contracting process.
UK Government Publishes New Software and Cyber Security Codes of Practice
As cyber security continues to be headline news, it is timely that on 7 May 2025 the UK government published a new voluntary Software Security Code of Practice (Software Security Code of Practice – GOV.UK).
This Code is designed to be complementary to relevant international approaches and existing standards and, where possible, reflects internationally recognized best practice, including as outlined in the US Secure Software Development Framework (Secure Software Development Framework | CSRC) and the EU Cyber Resilience Act (Cyber Resilience Act (CRA) | Updates, Compliance, Training).
This Code consists of 14 principles split across four themes (secure design and development; build environment security; secure deployment and maintenance; and communication with customers) that software vendors are expected (though, to stress the voluntary nature of this Code, not legally obliged) to implement to establish a consistent baseline of software security and resilience across the market. These principles are stated to be relevant to any type of software supplied to business customers.
“Software Vendors” are defined under this Code as organisations that develop and sell software or software services; “Software” is code, programmes, and applications that run on hardware devices or via cloud/SaaS.
A self-assessment form is also made available (Software-Security-Code-of-Practice-Self-Assessment-Template.docx), which software vendors can use to assess and evidence compliance with this Code.
This Code follows on from the Cyber Governance Code of Practice and supporting toolkit published on 8 April 2025 (Cyber Governance Code of Practice – GOV.UK), which are designed to help boards and directors of medium and large organizations govern cyber security risks. The emphasis of that earlier Code is on supporting boards and directors in effectively governing and monitoring cyber security within their business; it is not intended for use by those whose role is the day-to-day management of cyber security.
As cyber security continues to be a high-profile and business-critical issue for many businesses, it is likely that in the coming months we will start to see compliance with these voluntary codes become a contractual obligation imposed on suppliers.
Clickbait: Actual Scope (Not Intended Scope) Determines Broadening Reissue Analysis
The US Court of Appeals for the Federal Circuit affirmed the Patent Trial & Appeal Board’s rejection of a proposed reissue claim for being broader than the original claim, denying the inventors’ argument that the analysis should focus on the intended scope of the original claim rather than the actual scope. In re Kostić, Case No. 23-1437 (Fed. Cir. May 6, 2025) (Stoll, Clevenger, Cunningham, JJ.)
Miodrag Kostić and Guy Vandevelde are the owners and listed inventors of a patent directed to “method[s] implemented on an online network connecting websites to computers of respective users for buying and selling of click-through traffic.” Click-through links are typically seen on an internet search engine or other website inviting the user to visit another page, often to direct sales. Typical prior art transactions would require an advertiser to pay the search engine (or other seller) an upfront fee in addition to a fee per click, not knowing in advance what volume or responsiveness the link will generate. The patent at issue discloses a method where the advertiser and seller first conduct a trial of click-through traffic to get more information before the bidding and sale process. The specification also discloses a “direct sale process” permitting a seller to bypass the trial and instead post its website parameters and price/click requirement so advertisers can start the sale process immediately.
The independent claim recites a “method of implementing on an online network connecting websites to computers of respective users for buying and selling of click-through traffic from a first exchange partner’s web site.” The claim requires “conducting a pre-bidding trial of click-through traffic” and “conducting a bidding process after the trial period is concluded.” A dependent claim further requires “wherein the intermediary web site enables interested exchange partners to conduct a direct exchange of click-through traffic without a trial process.”
The patent was issued in 2013, and the inventors filed for reissue in 2019. The reissue application cited an error, stating that the “[d]ependent claim [] fails to include limitations of [the independent] claim,” where the dependent claim “expressly excludes the trial bidding process referred to in the method of [the independent] claim,” which would render it invalid under 35 U.S.C. § 112. To fix the error, the inventors attempted to rewrite the dependent claim as an independent claim that omitted a trial process.
The examiner issued a nonfinal Reissue Office Action rejecting the reissue application as a broadening reissue outside of the statutory two-year period. The examiner found that the original dependent claim is interpreted to require all steps of the independent claim, including the trial period, and further to require a direct sale without its own trial, beyond the trial claimed in the independent claim. The inventors then attempted to rewrite the dependent claim as the method of the independent claim with “and/or” language regarding the trial process versus the direct sale process. The amendment was rejected for the same reasons. The Board affirmed on appeal.
Whether amendments made during reissue enlarge the scope of the claim in violation of 35 U.S.C. § 251 is a matter of claim construction. The inventors argued that the proper inquiry was not whether the scope of the proposed reissue version of the dependent claim was broader than the scope of the original dependent claim but whether the scope of the proposed reissue claim was broader than the “intended scope” of the original dependent claim.
The Federal Circuit rejected the inventors’ argument, finding that it contradicted the plain text of § 251(d), which prohibits reissue patents enlarging the scope of the claims, not reissue patents enlarging the intended scope of the claims. The Court further reasoned that “[l]ooking at the intended scope rather than the actual scope of the original claim would prejudice competitors who had reason to rely on the implied disclaimer involved in the terms of the original patent.” Finding that the text, history, and purpose of § 251 all counsel against reviewing the “intended scope” of claims on reissue, the Court affirmed the Board’s denial.
EDPB and EDPS Support GDPR Record-Keeping Simplification Proposal
On May 8, 2025, the European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) adopted a joint letter addressed to the European Commission regarding the upcoming proposal to simplify record-keeping obligations under the EU General Data Protection Regulation (“GDPR”). This proposal aims to amend Article 30(5) of the GDPR, simplifying the record-keeping requirements and reducing administrative burdens while maintaining robust data protection standards.
The European Commission proposed the following changes to Article 30(5) of the GDPR:
Exemptions for Small Mid-Cap Companies: Extending the derogation which currently applies to enterprises or organizations with fewer than 250 employees (including small and medium-sized enterprises or SMEs), to also cover “small mid-cap companies,” i.e., companies with fewer than 500 employees and with a defined annual turnover, as well as organizations such as non-profits with fewer than 500 employees.
Expansion of Application: Modifying the derogation so it would not apply if the processing is “likely to result in a high risk to the rights and freedoms of natural persons,” as opposed to the current provision, which only mentions processing likely to result in a “risk,” therefore broadening the ability to use the derogation.
Limiting Record-Keeping Exceptions: Removing certain exceptions to the record-keeping derogation, including references to occasional processing and possibly special categories of data.
Employment, Social Security or Social Protection Law Exception: Introducing a recital clarifying that the obligation to maintain records of processing activities would not apply to the processing of special categories of data to comply with legal obligations in the field of employment, social security or social protection law in accordance with Article 9(2)(b) of the GDPR.
In their joint letter, the EDPB and EDPS express “preliminary support to this targeted simplification initiative,” noting that they support the retention of a risk-based approach in respect of processing, and observing that “even very small companies can still engage in high-risk processing.” Both parties welcome the opportunity for a formal consultation to take place after the publication of the draft legislative change.
BIS Issues Four Key Updates on Advanced Computing and AI Export Controls
On May 13, 2025, the U.S. Department of Commerce’s Bureau of Industry and Security (“BIS”) announced four significant policy developments under the Export Administration Regulations (“EAR”), affecting exports, reexports, and in-country transfers of certain advanced integrated circuits (“ICs”) and related computing items with artificial intelligence (“AI”) applications. These actions reflect the Trump administration’s first moves to address national security risks associated with exports of emerging technologies, and to prevent use of such items in a manner contrary to U.S. policy. Below is a summary of each development and its practical implications.
1. Initiation of Rescission of the “AI Diffusion Rule”
As explained in a press release, BIS has begun the process to rescind the so-called “AI Diffusion Rule,” issued in the closing days of the Biden administration and slated to go into effect on May 15. That rule would have imposed sweeping worldwide controls on specified ICs and set up a three-tiered system for access to such items by countries around the world. The rescission is intended to streamline U.S. export controls and avoid “burdensome new regulatory requirements” and strain on U.S. diplomatic relations.
It will be important to monitor developments for BIS’s anticipated issuance of the formal rescission and for the control regime that BIS will likely implement in its place. In the meantime, all IC-related controls preceding the AI Diffusion Rule remain in effect.
2. New End-Use Controls for Advanced Computing Items
BIS has issued a policy statement informing the public of new end-use controls targeting the training of large AI models. Specifically, the statement provides that the EAR may impose restrictions on the export, reexport, and in-country transfer of certain advanced ICs and computing items when there is knowledge or reason to know that the items will be used for training AI models for or on behalf of weapons of mass destruction or military-intelligence end-uses in, or end-users headquartered in, China and other countries in BIS Country Group D:5. Furthermore, U.S. persons are prohibited from knowingly supporting such activity.
This development underscores the importance of robust due diligence and end-use screening for companies involved in exports, re-exports, and transfers of such items, especially to Infrastructure as a Service providers.
3. Guidance to Prevent Diversion: Newly Specified Red Flags
To assist industry in preventing unauthorized diversion of controlled items to prohibited end-users or end-uses, BIS has published updated guidance identifying new “red flags” that may indicate a risk of such diversion. The guidance provides practical examples and scenarios, such as unusual purchasing patterns, requests for atypical technical specifications, or inconsistencies in end-user information. Companies are encouraged to review and update their compliance programs to incorporate these new red flags and to ensure that employees are trained to recognize and respond to potential diversion risks.
4. Prohibition of Transactions Involving Certain Huawei “Ascend” Chips Under “General Prohibition Ten”
BIS has released guidance regarding the use of and transactions in certain Huawei “Ascend” chips meeting the parameters for control under Export Control Classification Number (“ECCN”) 3A090, clarifying the application to such activities of “General Prohibition Ten” under the EAR. This prohibition restricts all persons worldwide from engaging in a broad range of dealings in, and use of, specified Ascend chips that BIS alleges were produced in violation of the EAR.
Regarding due diligence in this context, BIS has provided the following guidance:
If a party intends to take any action with respect to a PRC 3A090 IC for which it has not received authorization from BIS, that party should confirm with its supplier, prior to performing any of the activities identified in GP10 to ensure compliance with the EAR, that authorization exists for the export, reexport, transfer (in-country), or export from abroad of (1) the production technology for that PRC 3A090 IC from its designer to its fabricator, and (2) the PRC 3A090 IC itself from the fabricator to its designer or other supplier.
Key Takeaways for Industry
It is important to keep in mind that the BIS actions focus on dealings in ICs and advanced computing items meeting the control parameters of ECCN 3A090 and related ECCNs. With that in mind, the following steps are recommended:
Review and update compliance programs: Impacted companies should promptly assess their export control policies and procedures in light of these developments, with particular attention to end-use and end-user screening.
Monitor regulatory changes: The rescission of the AI Diffusion Rule and the introduction of new end-use and General Prohibition Ten controls may require adjustments to licensing strategies.
Enhance employee training: Incorporate the newly specified red flags and guidance into training materials for relevant personnel.
BIS’s latest actions reflect a dynamic regulatory environment for national security regulation of advanced computing and AI technologies. Companies operating in these sectors should remain vigilant and proactive in managing compliance risks, as there are likely to be more developments in this area in the months ahead.
ANOTHER SLICE OF THE PIE: Serial TCPA Plaintiff Goes After Pizza Hut

It seems Joseph Brennan is no stranger to the drive-thru – or the courtroom. See FROM CORN DOGS TO COURTROOMS: Sonic’s Texts Might Cost More Than a Combo Meal – TCPAWorld. This time, he’s served a steaming hot complaint against Pizza Hut.
Joseph Brennan v. Pizza Hut, Inc. was originally filed in the Western District of Louisiana. Following an unopposed motion to transfer by Pizza Hut, the case was moved to the Northern District of Texas on May 13, 2025.
In his Complaint, Brennan states that he never gave Pizza Hut his phone number, never opted in to any of its rewards programs, and never had a business relationship with Pizza Hut. Yet, Brennan alleges that he received three unsolicited text messages from Pizza Hut on August 30, 2024, September 5, 2024, and September 27, 2024, despite placing his number on the National Do Not Call Registry.
Brennan also purports to represent the following class:
All persons throughout the United States (1) to whom Pizza Hut delivered, or directed to be delivered, more than one text message within a 12 month period for purposes of soliciting the sale of a Pizza Hut product, (2) where the person’s telephone number had been registered with the National Do Not Call Registry for at least thirty (30) days before Pizza Hut delivered or directed to be delivered at least two of the text messages within the 12 month period, (3) from four years prior to the filing of the initial complaint in this action through the date notice is disseminated to a certified class, and (4) for whom Defendant claims it obtained prior express invitation or permission in the same manner as Defendant claims it obtained prior express invitation or permission from Plaintiff.
As the case heats up in Texas, we’ll be keeping a close eye on whether Brennan’s claims rise to the occasion.
Massachusetts Court Grants Motion to Dismiss “Spy Pixel” Privacy Class Action for Lack of Standing
On January 31, 2025, in Campos v. TJX Companies, Inc., No. 24-cv-11067, the District of Massachusetts granted a motion to dismiss a class action due to the plaintiff’s lack of standing. The court concluded that the named plaintiff’s claims regarding the intrusion of her privacy by “spy pixels” could not be successful because there was no injury in fact.
TJX Pixel Software and Campos’ Privacy Claims
Arlette Campos filed a putative class action against defendant TJX Companies (“TJX”) alleging that it intruded upon her privacy through promotional emails it sent to her.
Campos claimed that TJX had embedded pixel software in its promotional emails; the software collects information about the email recipient, including when the email is opened and read, the recipient’s location, how long the recipient spends reading the email, and the email server the recipient uses.
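For context on the mechanism: a tracking pixel is typically just a tiny image hosted by the sender and embedded in the email, and the information is gleaned from the HTTP request the recipient’s email client makes when it loads that image. The sketch below, using the Flask web framework, shows a generic, minimal version of the server side; the endpoint path, parameter names, and logging are illustrative assumptions and do not describe TJX’s actual system.

```python
# Generic illustration of how an email tracking pixel works; endpoint path,
# parameters, and logging are hypothetical and describe no particular company.
from datetime import datetime, timezone
from flask import Flask, Response, request

app = Flask(__name__)

# A 1x1 transparent GIF, embedded in an email as <img src="https://.../pixel.gif?cid=...">.
TRANSPARENT_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!\xf9\x04"
    b"\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;"
)

@app.route("/pixel.gif")
def pixel():
    # The act of fetching the image reveals when the message was opened, the
    # requester's IP address (and thus rough location), and the email client
    # or device via the User-Agent header.
    app.logger.info(
        "open cid=%s at=%s ip=%s ua=%s",
        request.args.get("cid"),                   # identifies the recipient/campaign
        datetime.now(timezone.utc).isoformat(),
        request.remote_addr,
        request.headers.get("User-Agent"),
    )
    return Response(TRANSPARENT_GIF, mimetype="image/gif")
```

Because the request fires automatically when remote images load, the recipient’s main defense is an email client setting that blocks remote images by default.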
Although Campos had provided TJX with her email and subscribed to their email list, she claimed that TJX collected her private information without her consent.
TJX Challenges Whether Campos Met Article III Standing Requirements
TJX filed a motion to dismiss under Rule 12(b)(1) for lack of subject matter jurisdiction, claiming that Campos lacked standing.
Article III of the Constitution requires that litigants have standing to sue. Whether a litigant has standing to sue is an inquiry of three elements: injury in fact, traceability, and redressability.
TJX challenged Campos’ standing on the basis that she did not suffer an injury in fact.
To sufficiently plead an injury in fact, a plaintiff must allege a concrete harm. Quoting from TransUnion LLC v. Ramirez, the court highlighted that “traditional tangible harms, such as physical and monetary harms, are obvious[ly] concrete.” However, based on the holding in TransUnion, the court made clear that “[i]ntangible harms can also be concrete . . . such as reputational harms, disclosure of private information, and intrusion upon seclusion.”
Thus, even though Campos did not have a traditional, tangible harm, this did not necessarily preclude a finding of concrete harm.
Court Rejects Campos’ Intrusion Upon Seclusion Claim for Her Injury
Campos pointed to the tort of intrusion upon seclusion to argue that she was injured. The Restatement Second of Torts defines intrusion upon seclusion as the intentional intrusion “upon the solitude or seclusion of another or his private affairs or concerns.”
For this claim to be actionable, the intrusion must be “highly offensive to a reasonable person,” and the matter intruded upon must be deeply private, personal, and confidential.
Based on this, the court rejected the argument that the emails would fall within the ambit of deeply personal and private information contemplated by the tort because Campos provided her email address to TJX (which the court observed as “certainly not private”), she had consented to receive promotional emails, there was “nothing particularly private about the email’s subject or other content,” and TJX authored the contents of the emails, meaning they would have been known “with or without the pixels.”
Additionally, although the court noted that opening private mail is an example of an intrusion mentioned in the Restatement, because TJX did not peer into Campos’ inbox beyond the emails it authored, there was no intrusion here.
Even for other sensitive information that the pixels collected, such as whether, when, where, and for how long Campos read the emails, the court rejected Campos’ argument that this was private and personal information meant to be protected by the tort. The court found no precedent that “reading habits” for content authored by the defendant are “the type of private, personal information that the tort was aimed at protecting under the common law.”
The court was troubled by allegations that the pixel software tracked whether the email was forwarded, which it deemed “closest to tracking ‘unrelated personal messages,’” but faulted the absence of any allegation that “pixels could track to whom the email was forwarded or the content of that forwarded message.”
Therefore, the court held that Campos failed to adequately plead this claim, and thus, failed to establish that she was injured.
Court Rejects Campos’ Analogy to Other Privacy Harms for Her Injury
Campos also argued that use of pixel technology is similar to cases arising under the Telephone Consumer Protection Act (“TCPA”), which prohibits unsolicited marketing calls and faxes, and the Video Privacy Protection Act (“VPPA”), which prohibits the sharing of video rental records. The court, however, rejected these analogies.
In TCPA cases, standing has been found where recipients did not consent to being contacted. In this case, Campos willingly subscribed to receive emails from TJX, opened and read them, and took no steps to unsubscribe. Based on this, the court held that the TCPA was inapplicable, and Campos could not meet the standing requirement by relying upon it.
Similarly, the VPPA solely contemplates the disclosure of video rental and sale records, and because Campos did not allege any such disclosure, the court held that no harm occurred that could justify applying the VPPA and it thereby could not confer standing, either.
Based on Campos’ inability to establish standing, the court granted the motion to dismiss.
Fast Forward: Article III Standing and Class Certification
In this latest class action, the named plaintiff was unable to meet Article III’s standing requirement. However, even if Campos had, she would have had to overcome another hurdle: establishing whether the vast majority of absent class members also had standing.
The Supreme Court’s holding in TransUnion stands for the proposition that every member of a class, including absent members, must establish a concrete injury under Article III to be awarded individual damages. The Supreme Court did not, however, address the issue of class certification where the class contains absent members who lack Article III standing.
The Supreme Court is poised to answer this question in Laboratory Corporation of America Holdings v. Davis, a case in which it granted certiorari in January 2025. Courts that have answered this question have done so differently, leading to a three-way split among the circuits.
The D.C. Circuit and First Circuit permit certification of a class only if the number of uninjured members is de minimis. The Ninth Circuit permits certification even if the class includes more than a de minimis number of uninjured class members. The Eighth and Second Circuits have taken the strictest approach, rejecting certification if any members are uninjured.
Given that venue may be outcome-determinative in this regard, until the Supreme Court addresses this question, defendants should scrutinize potential standing deficiencies for class representatives and absent class members alike. The Supreme Court heard argument in Lab Corp on April 29, 2025, and should soon issue a decision that may provide important clarity to class action litigants on this question of standing.