Kalshi Acts Swiftly Across Pending Litigation After Adverse Maryland Ruling

Shortly after receiving the first adverse ruling in any of the current high-profile matters being litigated against state regulators, attorneys for KalshiEx LLC (Kalshi) filed an appeal of the unfavorable order from the Maryland District Court. On the same day, Kalshi also moved for summary judgment in its Nevada District Court matter, seeking to capitalize on a favorable ruling on its preliminary request for injunctive relief and setting the stage for potential escalation of this question to the US Supreme Court.
The Maryland Court delivered a blow to Kalshi’s preemption arguments, ruling that federal commodities law does not strip states of their authority to regulate gambling even when sports wagering, offered through what Kalshi describes as “sports-event contracts,” is conducted on CFTC-approved Designated Contract Markets.
The Court determined that while the Dodd-Frank Act and the Commodity Exchange Act (CEA) reflect a clear intention to preempt at least some state laws, that is not where the applicable preemption analysis ends, despite what Kalshi would prefer and in contrast to the conclusions of the New Jersey and Nevada district courts examining the same issue. Instead, the Maryland Court reasoned that it must determine whether the “field” Congress intended to occupy through these acts included gambling.
Applying the presumption against preemption in areas of traditional state regulation, the Court found that Congress lacked a “clear and manifest purpose” to preempt state gambling laws when it enacted the Dodd-Frank Act, despite the CEA’s general preemptive scope over commodities trading. The analysis focused on several key factors undermining Kalshi’s position: the CEA’s Special Rule expressly preserving state authority to determine what conduct is unlawful, the express preemption clauses in the CEA not encompassing Kalshi’s operations, and Congress showing no intent to override then-existing federal gambling statutes like the Wire Act, Indian Gaming Regulatory Act (IGRA), or Professional and Amateur Sports Protection Act (PASPA) (since struck down but in effect at the time) when drafting the amendments. 
The Court also noted that contemporaneous legislative statements suggested lawmakers viewed sports-event contracts as lacking commercial purpose and were concerned about potential gambling through derivatives markets, and that while the “savings clause” in the CEA’s exclusive jurisdiction provision cuts both ways, such ambiguity weighs against finding field preemption because these types of clauses generally negate an inference that Congress intended to preempt.
The Maryland ruling sets up a potential circuit split on the preemption issue. While Maryland (Fourth Circuit) ruled in favor of the regulators, district courts in New Jersey (Third Circuit) and Nevada (Ninth Circuit) determined that Kalshi was likely to succeed on the merits of its preemption claim. New Jersey regulators have already appealed the ruling to the Third Circuit, where oral arguments are expected in early September.

What You Need to Know About California’s Finalized CCPA Amendments: Part Two

On July 24, 2025, the California Privacy Protection Agency (CPPA) approved final regulations (the Rule) under the California Consumer Privacy Act of 2018 (CCPA), introducing new obligations related to automated decision-making technology (ADMT), mandatory risk assessments for high-risk data processing, and cybersecurity audits. While Part One of our coverage on the Rule focused on the ADMT requirements, this section turns to the Rule’s cybersecurity and risk assessment provisions. Assuming the Office of Administrative Law (OAL) approves these regulations (expected by late August 2025 based on its 30-day review period and past practices), certain provisions of the Rule could take effect on January 1, 2026. Key cybersecurity obligations are to be implemented in phases between 2027 and 2029, based on a business’s gross revenue, with the CPPA determining that businesses with higher gross annual revenues are better able to bear the significant cost and complexity of compliance. Risk assessment reporting likely begins April 1, 2028, with businesses required to submit details for assessments completed in 2026 and 2027.
Key Highlights of the Rule
Cybersecurity:

Requires annual cybersecurity audits for businesses meeting specific revenue and data processing thresholds.
Mandates audits be conducted by qualified, independent auditors, with strict requirements for internal auditors.
Specifies a set of 18 rigorous cybersecurity controls to be assessed in the annual audit, as applicable.
Implements a phased timeline for businesses to complete their first audit based on revenue. For example, large organizations (i.e., with annual gross revenues exceeding $100 million in 2026) must complete their first audit report by April 1, 2028, covering the 2027 calendar year.

Risk Assessments:

Imposes risk assessments before engaging in high-risk processing of personal information, such as processing sensitive data, using personal information to train artificial intelligence (AI) systems, using ADMT for significant decisions, or selling or sharing personal information.
Requires that a senior executive certify each assessment, that assessments be updated every three years or after material changes, and that records be retained for at least five years.
Requires prescribed annual reporting to the CPPA starting April 1, 2028, summarizing completed assessments and certifying compliance.

Mandatory Annual Cybersecurity Audits for Businesses with “Significant Risk”
An annual cybersecurity audit requirement applies to companies that present “significant risk” to personal information security based on their size and the type of data processing. Specifically, this applies to any business with annual gross revenue over $25 million that processes personal information of 250,000 or more consumers or households or the sensitive information of 50,000 or more consumers, or any business that derives 50% or more of its annual revenue from selling or sharing consumer personal information.
The Rule establishes a phased timeline for when covered businesses must complete and submit their first audit report. The timeline is based on the business’s annual gross revenue, giving smaller businesses more time to prepare. The current version of the Rule sets the following deadlines for the first audit:  

April 1, 2028 for a business with an annual gross revenue for 2026 of $100 million or more.
April 1, 2029 for a business with an annual gross revenue for 2027 between $50 million and $100 million.
April 1, 2030 for a business with an annual gross revenue for 2028 of less than $50 million.

After April 1, 2030, the cybersecurity audit becomes an annual obligation for any business that meets the criteria at the start of each year.
Evaluating the Effectiveness of a Cybersecurity Program
The Rule prescribes a detailed set of requirements for annual cybersecurity audits to evaluate the effectiveness of a business’s cybersecurity program. Each audit must assess the establishment, maintenance and implementation of the program over a 12-month period beginning January 1 of each year and must be conducted by an independent auditor using a recognized audit framework. Businesses do not submit the full audit report to the CPPA. Instead, they must file an annual certificate of completion, signed by management, confirming that the audit was conducted. This certificate is due by April 1 of the year following the audit.
Audits may be conducted internally or externally by qualified professionals with knowledge of cybersecurity and auditing practices, so long as they maintain independence and objectivity. The Rule stipulates that internal auditors cannot participate in any business activities they may assess in current or future audits (e.g., implementing or maintaining a cybersecurity program) nor can they report to an executive who is directly responsible for the cybersecurity program. Companies that have traditionally relied on their IT or cybersecurity teams for cybersecurity audits will need to engage independent personnel or external auditors to meet the Rule’s independence requirements.
Although assessing the “effectiveness” of a cybersecurity program involves auditor judgment, such evaluation must reflect the business’s size, complexity and processing activities and take into account both state-of-the-art practices and the cost of implementation. Auditors must evaluate a defined set of 18 core controls, as applicable, against a business’s cybersecurity program. Certain controls listed in the Rule, such as access control and authentication, encryption in transit, incident response and vulnerability patching, are commonly found in mature cybersecurity programs. Other controls, such as phishing-resistant multi-factor authentication or comprehensive secure software development, may not be fully implemented by businesses, even in well-established cybersecurity programs, given their complexity, cost and skill requirements. If a business wants to rely on an existing cybersecurity assessment (e.g., one based on the NIST Cybersecurity Framework or HIPAA Security Rule), it cannot treat that assessment as an automatic substitute for the CCPA-mandated audit. Most of those frameworks are risk-based, allowing businesses flexibility in implementing safeguards, while the Rule is more prescriptive. An auditor wanting to rely on such previous assessments will need to evaluate them against the 18 specific components identified in the Rule, determine whether supplementation is needed, and ensure compliance with reporting standards and independence (which may limit the ability to use prior internal assessments prepared by IT teams).
Risk Assessments for Processing Activities with “Significant Risk”
The Rule requires businesses to complete a risk assessment before initiating any personal information processing activity that presents “significant risk” and to review or update each assessment at least once every three years or within 45 days of any material change. Activities with significant risk include processing sensitive information, using personal data to train an ADMT or biometric system, profiling individuals in sensitive contexts, deploying ADMT for a significant decision (as discussed in Part One), or selling or sharing personal data. Once approved by the OAL, the risk assessment provisions will likely take effect on January 1, 2026. However, the Rule includes a lookback requirement: businesses must complete risk assessments for activities already in progress prior to the effective date by December 31, 2027. Rather than submitting full assessments, the Rule requires businesses to provide the CPPA with an annual certified report beginning April 1, 2028 with prescribed information. Businesses must also maintain complete documentation for each assessment, as the CPPA or California Attorney General may request copies at any time, with 30 days’ notice.
Scope of a Risk Assessment
Risk assessments must describe the activity in detail, including data categories, collection methods, retention periods and consumer disclosures; evaluate the benefits and potential harms such as discrimination or loss of control; and outline safeguards to mitigate those risks. While the CPPA dropped a ban on high-risk activities, the Rule intends to limit processing where risks to the consumer are disproportionate to benefits.
Businesses must also develop internal processes to identify activities that pose heightened privacy risks, perform structured risk assessments and maintain evidence of compliance. This marks a clear move toward preventative privacy governance, shifting away from reactive measures. Businesses that fail to complete required assessments before engaging in high-risk processing could face regulatory consequences.
What to Do Now
With the Rule introducing complex, phased compliance obligations that may require significant allocation of financial and human resources, especially for organizations without a mature cybersecurity program, businesses will want to get a head start on understanding their obligations under the Rule.
Key steps include:

Review the business’s current cybersecurity program against the 18 required controls and identify gaps.
Decide whether audits will be conducted internally or by external firms and confirm that internal auditors meet independence requirements.
If relying on NIST, HIPAA or other frameworks, map them to the 18 controls and plan for supplementation as needed.
In light of the lookback requirement for significant-risk activities predating the Rule’s effective date for risk assessments, establish procedures to identify such activities and implement necessary practices and procedures (e.g., strong documentation practices to ensure information required for assessments is available and complete).
Ensure executive buy-in, as certifications for both audits and risk assessments will require sign-off from senior leadership.
Budget for additional resources, especially for advanced cybersecurity controls (e.g., phishing-resistant MFA, penetration testing, zero trust).

California Finalizes CCPA Regulations on Cybersecurity Audits, Risk Assessments, and Automated Decisionmaking: Key Provisions and Implications

The California Privacy Protection Agency (“CPPA”) finalized a set of regulations under the California Consumer Privacy Act (“CCPA”) on July 24, 2025, that address cybersecurity audits, risk assessments, and automated decisionmaking technology (“ADMT”). These rules, which follow an extensive and contentious rulemaking process and public consultation, represent a significant evolution in California’s data privacy and security landscape, with broad implications for businesses operating in the state.
BACKGROUND AND RULEMAKING PROCESS 
The CPPA initiated the rulemaking process in November 2024. The regulations received substantial input from stakeholders, including technology companies, civil society, and government officials. Proposed rules around ADMT proved to be an especially thorny issue, with many commentators, including California Governor Gavin Newsom, urging the CPPA to be mindful of promulgating rules that may stifle innovation in the artificial intelligence (“AI”) field. The final rules narrow the scope of certain requirements with respect to ADMT by removing references to AI and behavioral advertising in the ADMT context, expanding the scope of when businesses can use ADMT, and scaling back when consumers may opt out of ADMT. The final regulations also phase in compliance obligations for cybersecurity audits over a number of years. 
Adoption of the final text of the regulations comes on the heels of the Trump administration’s release of “America’s AI Action Plan,” which seeks to promote innovation over regulation in the AI field. The AI Action Plan recommends that federal agencies consider a state’s regulatory climate when making “AI-related discretionary funding” decisions and limit funding if the state’s regulatory regime could hinder the effectiveness of the funding. Although an executive order responsive to that particular AI Action Plan policy recommendation has not yet been released, the new ADMT regulations may set up future disputes with the Trump administration over regulation in the AI space. For more information on the AI Action Plan, please see our Client Alert: Innovation Over Regulation—Trump Unveils America’s AI Action Plan.
KEY REGULATORY UPDATES AND REQUIREMENTS
Automated Decisionmaking Technology (“ADMT”)

Scope and Definitions: ADMT is defined as “any technology that processes personal information and uses computation to replace human decisionmaking or substantially replace human decisionmaking.” To substantially replace human decisionmaking means to use ADMT output to make a decision without human involvement. Human involvement requires a human reviewer to: (a) know how to interpret and use the ADMT’s output to make a decision; (b) review and analyze the output of the technology, and any other information that is relevant to make or change the decision; and (c) have the authority to make or change the decision. In general, the regulations’ requirements with respect to ADMT apply to businesses that use ADMT to make a “significant decision” about a consumer. A significant decision is one that results in the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services. ADMT expressly excludes firewalls, anti-malware, calculators, databases, spreadsheets, and certain other tools, provided they do not replace human decisionmaking. This definition could capture agentic and other types of AI used by businesses, depending on how such AI technologies are deployed.
Notice Requirement: Businesses that use ADMT to make a significant decision must provide consumers with a pre-use notice at or before the point of collection that provides a plain language explanation of the specific purpose for which the business plans to use the ADMT.
Consumer Rights: Consumers have the right to opt out of, and access information about, ADMT used for significant decisions affecting them. Businesses are not required to provide an opt-out if they provide the consumer with a method to appeal the decision to a human reviewer who has authority to overturn it and the ADMT is used for admission or acceptance, or for hiring decisions and allocation of work assignments, provided the use does not result in unlawful discrimination based on protected characteristics.
Risk Assessments for ADMT: Businesses must conduct risk assessments when using ADMT for significant decisions or for certain training purposes, with requirements to document the categories of personal information processed and the logic of the system.

Cybersecurity Audits

Applicability and Scope: The regulations require annual cybersecurity audits for businesses whose processing of personal information presents a “significant risk” to consumers’ privacy or security. Under the regulations, processing presents “significant risk” if, in the preceding calendar year, a business derived 50 percent or more of its annual revenue from selling or sharing personal information, or if a business had annual gross revenue exceeding $26,625,000 (indexed for inflation) and processed the personal information of 250,000 or more California residents or the sensitive personal information of 50,000 or more California residents.
Audit Standards and Independence: Cybersecurity audits must be conducted by qualified, objective, and independent professionals, either internal or external. Internal auditors must report to executive management not responsible for cybersecurity, rather than to the board of directors as previously proposed.
Audit Content: The audit must assess a comprehensive list of cybersecurity controls, including multifactor authentication, encryption, access controls, data inventory, secure configuration, patch management, vulnerability scanning, logging, and training. The auditor determines which controls are applicable, considering the business’s size, complexity, and processing activities.
Reporting and Certification: Businesses are not required to submit audit reports to the CPPA but must annually certify completion of the audit. The agency and the Attorney General retain authority to request audit reports during investigations.
Implementation Timeline: Compliance is phased in based on business size, with the earliest audits due by April 1, 2028, for the largest businesses with annual gross revenue of $100 million or more; by April 1, 2029, for businesses with annual gross revenue between $50 million and $100 million; and by April 1, 2030, for smaller businesses.

Risk Assessments

Triggering Activities: Risk assessments are required for activities that present a “significant risk” to consumers’ privacy. Such activities include selling or sharing personal information, processing sensitive personal information, using ADMT for “significant decisions,” using personal information to train ADMT for certain uses, and automated processing to infer attributes in employment or educational contexts.
Assessment Requirements: Businesses must perform data inventories to identify and document the personal information processed for the activities, and the specific purposes for which such data is processed. Businesses are also required to document the benefits and negative privacy impacts relating to the processing, and safeguards used by the business in connection with the processing. Businesses required to conduct risk assessments may be able to leverage risk assessments conducted for other legal regimes, provided they meet CCPA standards.
Submission and Certification: Annual submission of risk assessment information is required, including the number of risk assessments conducted during the period covered by the submission, the categories of personal information covered by the risk assessments, and attestations under penalty of perjury. 

Other Notable Provisions

Insurance Companies: The rules clarify the application of CCPA to insurance companies, providing examples of when information is or is not subject to the Act.
Definitions and Clarifications: The regulations update definitions, including “sensitive personal information” (now including neural data), “significant decision,” and “sensitive location,” and remove or revise terms such as “artificial intelligence” and “behavioral advertising” for internal consistency.

LOOKING AHEAD 
The regulations must be approved by the California Office of Administrative Law before taking effect. The CPPA has indicated that the rules may be revisited as technology and business practices evolve. 
Businesses subject to the CCPA should review the final regulations, assess their applicability, and begin preparing for phased compliance with cybersecurity audit, risk assessment, and ADMT requirements. The new cybersecurity audit provisions will help define how companies must safeguard personal information to meet their obligations under the law to provide “reasonable” security, and businesses subject to other laws impacting AI, such as the European Union’s AI Act and the Colorado AI Act, will need to determine how to craft compliance strategies that work for the business across each applicable regulatory regime.

Supreme Shift: How McLaughlin is Reshaping the TCPA

The legal landscape for businesses facing Telephone Consumer Protection Act (“TCPA”) claims may be undergoing a seismic shift. In the wake of the Supreme Court’s decision in McLaughlin Chiropractic Assocs., Inc. v. McKesson Corp. (which we discussed in a recent alert), federal courts are no longer bound to defer to the Federal Communications Commission’s (“FCC”) interpretations of the TCPA. Building upon our own case before the U.S. Supreme Court in 2019, PDR Network LLC v. Carlton & Harris Chiropractic, Inc., McLaughlin held that courts must now independently interpret the statute’s text. This has led to decisions that challenge—and in some cases, outright reject—longstanding FCC rules. Two recent cases, Jones v. Blackstone Medical Services, LLC and the ongoing Bradford v. Sovereign Pest Control of TX, Inc. appeal, exemplify this new era of judicial scrutiny and offer important lessons for businesses defending against TCPA claims.
The McLaughlin Decision: A Turning Point for TCPA Litigation
The Supreme Court’s McLaughlin decision fundamentally altered the relationship between courts and agency interpretations in TCPA enforcement actions. The Court held that district courts are not required to follow the FCC’s interpretations of the TCPA in private litigation. Instead, courts must apply ordinary principles of statutory interpretation, giving “appropriate respect” to the agency’s views but not treating them as controlling. This ruling effectively dismantled the Hobbs Act deference regime that had, for decades, required district courts to follow FCC rules, such as the requirement for “prior express written consent” for telemarketing calls, unless those rules were directly challenged through administrative channels.
For businesses, this means the statutory text of the TCPA is now the primary authority, and courts are empowered to disregard FCC rules that go beyond what Congress enacted.
District Court Rejects FCC’s View That Texts Are “Calls”
The Jones decision from the Northern District of Illinois is a striking example of how courts are now willing to depart from FCC precedent. In Jones, plaintiffs alleged Blackstone Medical Services violated the TCPA by sending them unwanted text messages, arguing such messages are “calls” under the statute and its implementing regulations—a position long supported by the FCC.
The district court, however, conducted its own analysis of the statutory text, as required by McLaughlin and Loper Bright. The Court found that the relevant TCPA provision, 47 U.S.C. § 227(c), prohibits “telephone calls” but does not mention text messages. The Court reasoned that, at the time the TCPA was enacted in 1991, “telephone call” would not have included text messages, which were not yet in use. The Court further noted that the FCC’s orders and regulations tying text messages to “calls” were based on a different section of the statute (§ 227(b)), not the do-not-call provisions at issue in the case.
Ultimately, the Court held that text messages are not “calls” under § 227(c) of the TCPA, directly contradicting years of FCC guidance and industry practice. The Court dismissed the TCPA claims, emphasizing that it is not the role of the judiciary to expand the statute beyond its plain meaning, and leaving it to Congress to address technological developments if it so chooses.
Fifth Circuit to Decide Meaning of “Express Consent”
Meanwhile, in the Fifth Circuit, the Bradford appeal shows how courts are reexamining foundational TCPA concepts in light of McLaughlin. There, the Court is set to address a fundamental question about what constitutes “prior express consent” under the TCPA. 
The case began when Radley Bradford sued Sovereign Pest Control, alleging that the company’s automated “renewal calls” to his cell phone—reminding him to schedule inspections and encouraging renewal of his pest control service plan—violated the TCPA because they were made without the required consent.
At the district court level, Sovereign argued Bradford had provided his cell phone number as part of his service agreement and never objected to the calls, thus giving “prior express consent.” The district court agreed, finding the calls were “informational” rather than telemarketing, and that Bradford’s act of providing his number was sufficient consent under then-binding FCC interpretations. The Court granted summary judgment for Sovereign.
However, while the case was on appeal, the Supreme Court decided McLaughlin. In response, the Fifth Circuit ordered a supplemental briefing on how McLaughlin and Loper Bright affect the appeal.
The defendant business argues that, after McLaughlin, courts must look to the statutory text alone, which requires only “express consent”—not “written” consent—for any call. The defendant contends that “express consent” can be given orally or by conduct, such as providing a phone number in connection with a business relationship. The plaintiff, by contrast, maintains that renewal calls were telemarketing and that more explicit, written consent is required.
The Fifth Circuit’s forthcoming decision could clarify whether businesses can rely on customers’ provision of phone numbers as sufficient consent for automated calls, or whether additional requirements—like written consent—will persist.
Implications for Businesses
Jones and Bradford are not isolated developments. Across the country, courts are now being asked to reconsider FCC interpretations that have long governed TCPA litigation and to interpret the statute independently. Litigants are encouraging courts to revisit foundational questions, such as the scope of “call” under the TCPA and the meaning of “prior express consent.”
For businesses, this new era presents both opportunities and challenges. On the one hand, companies can now rely on the plain text of the statute to challenge TCPA claims, rather than being bound by agency rules that may exceed congressional intent. On the other hand, the result in the meantime may be a patchwork of decisions, with some courts upending years of regulatory guidance and others adhering to prior interpretations, at least until higher courts weigh in. This potential for a lack of uniformity and circuit splits creates uncertainty. Indeed, the dissent in McLaughlin noted how the majority’s ruling could “disrupt even the most solid-seeming regulatory regimes,” and even expose those who took advantage of prior FCC “safe harbors” to liability. 
Businesses must closely monitor developments in the jurisdictions where they operate and be prepared to adapt compliance strategies as courts continue to reinterpret the TCPA.
Conclusion
The post-McLaughlin era is ushering in a period of rapid change and unpredictability in TCPA litigation. Courts are no longer bound to follow the FCC’s interpretations and are instead returning to the statutory text to resolve key questions about consent and coverage. The Jones decision’s rejection of FCC guidance on text messages and the Fifth Circuit’s anticipated decision in Bradford on the meaning of “express consent” are just the beginning. Businesses defending against TCPA claims should seize this moment to reassess their risk, revisit their compliance programs, and consider new defense strategies grounded in the statute’s plain language.

UK Government Publishes Commencement Dates for the UK Data (Use and Access) Act

On July 25, 2025, the UK government published the four key stages for bringing the provisions of the Data (Use and Access) Act (the “DUAA”) into effect. The DUAA received Royal Assent on June 19, 2025. The stages are as follows:

Stage 1 (August 20, 2025): On August 20, 2025, the following provisions will come into effect: technical provisions clarifying aspects of the legal framework; the new statutory objects for the Information Commissioner’s Office (“ICO”); and provisions requiring the UK government to publish an impact assessment, a report and a progress update on copyright works and artificial intelligence.
Stage 2 (3 – 4 months after Royal Assent): Provisions relating to digital verification services, and measures on the retention of information by providers of internet services in connection with the death of a child, will come into effect 3 – 4 months after Royal Assent.
Stage 3 (6 months after Royal Assent): The main changes to UK data protection legislation introduced in Part 5 of the DUAA, and the provisions on information standards for health and adult social care in England, shall come into force approximately 6 months after Royal Assent.
Stage 4 (early 2026): Provisions requiring additional steps for enforcement, such as the development of new technologies or appointment of new staff, are expected to come into force in early 2026. Such provisions include those regarding the restructuring of the ICO, the electronic system of registering births and deaths, and measures relating to the National Underground Asset Register.

Read The Data (Use and Access) Act 2025 (Commencement No.1) Regulations 2025.

RACK ROOM SHOES, INC. WIRETAP ACT CLASS ACTION SURVIVES MOTION TO DISMISS: Court Finds Crime-Tort Exception Applies

Hi CIPAWorld! In the latest development in the ongoing wave of website tracker litigation, the Northern District of California court in Smith, et al., v. Rack Room Shoes, Inc., No. 24-CV-06709-RFL, 2025 WL 2210002 (N.D. Cal. Aug. 4, 2025), has allowed a putative class action against Rack Room Shoes, Inc. to proceed on claims under the federal Wiretap Act and California’s Comprehensive Computer Data and Access Fraud Act (“CDAFA”).
The court denied Rack Room’s motion to dismiss these claims and found that the plaintiffs had plausibly alleged compensable harm and tortious intent. However, the court dismissed claims under California’s Unfair Competition Law (“UCL”) and Consumers Legal Remedies Act (“CLRA”) without leave to amend.
The lawsuit centers on Rack Room’s use of embedded code from third parties, including Meta and Attentive, on its website. Plaintiffs allege that these tools, such as the Meta Pixel and Attentive Tag, intercept their communications and personally identifiable information without their consent and transmit it to the third parties.
The data allegedly shared includes URLs revealing search queries, items viewed and placed in carts and even hashed or unencrypted personally identifiable information like names, email addresses, and phone numbers.
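For readers unfamiliar with the mechanism at issue, the sketch below is a simplified, hypothetical illustration of how a third-party tag of this general type can hash an identifier such as an email address and bundle it with the current page URL before transmission. The endpoint, parameter names, and values are invented for illustration and do not represent Meta’s or Attentive’s actual implementations.

```python
import hashlib
from urllib.parse import urlencode

# Hypothetical values a tracking tag might observe on a retail search page.
page_url = "https://www.example-shoes.invalid/search?q=running+shoes"
email = "jane.doe@example.com"

# Identifiers are commonly normalized and SHA-256 hashed before transmission,
# which is the "hashed personally identifiable information" the complaint describes.
hashed_email = hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Illustrative payload only; a real tag assembles a request to the vendor's
# collection endpoint containing the page URL (revealing the search query)
# along with the hashed identifier.
payload = urlencode({"event": "PageView", "url": page_url, "em": hashed_email})
print(f"https://tracker.example.invalid/collect?{payload}")
```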
The court had previously held in Smith I that Rack Room’s privacy policy failed to adequately disclose that third parties could collect this personally identifiable information and use it for their own commercial purposes, making any argument of user consent questionable.
Here, Rack Room argued that the claim was barred by the “party exception” under 18 U.S.C. § 2511(2)(d), as it was a party to the communications with its website visitors and consented to the interception.
However, the court found that the plaintiffs plausibly alleged that the “crime-tort exception” applied. This exception negates the party exception if the interception was done “for the purpose of committing any criminal or tortious act.”
The key question is whether the defendant had an independent prohibited purpose beyond the act of interception itself.
Here, the court identified the tortious purpose as Rack Room’s intent to use and disclose its customers’ personally identifiable information for targeted advertising—directly contradicting the promises made in its own privacy policy. This subsequent use, the court reasoned, constituted a further invasion of privacy, satisfying the requirement for an independent tortious act.
The court rejected Rack Room’s argument that its primary financial motivation shielded it from liability and held that “a monetary purpose does not insulate a party from liability under the Wiretap Act, at least at the motion to dismiss stage.”
The case will now proceed to discovery on the CDAFA and Wiretap Act claims, continuing the trend of courts allowing these website tracker cases to move past the pleading stage.

Salesforce Locks Down Slack Data: Time to Review Your Slack API Terms

Salesforce recently modified its Slack API Terms of Service (the “Terms”) to prohibit (i) bulk exporting of data accessible through the Slack application programming interfaces (“APIs”), (ii) the creation of persistent copies, archives, indexes or long-term data stores, and (iii) the usage of such data in large language models (“LLMs”).
The modifications to the Terms now prohibit use cases that were previously commonplace and important for Slack users. Many Slack customers rely on the ability to allow their third-party applications to copy, store and index data accessible through the Slack APIs on a long-term or permanent basis. Customers could then leverage their Slack data in other platforms, including LLMs and other artificial intelligence (“AI”) platforms, to analyze data and trends across their enterprise applications.
For example, Glean, an enterprise AI search platform that searches and analyzes data across an organization’s applications, previously offered a Slack integration that allowed Glean’s customers to search their Slack messages and build analytical tools using their enterprise Slack data. Glean informed its customers that the changes to the Terms will “hamp[er] your ability to use your data with your chosen enterprise AI platform” because access to Slack data is now permissible only “on a query-by-query basis . . . lock[ing] your data within Slack and limit[ing] your results to the scope and quality of Slack’s search technology and limited context[.]”
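To make the change concrete, below is a minimal, hypothetical sketch of the kind of cursor-paginated export loop that integrations have commonly built on the Slack Web API’s conversations.history method; bulk-collecting messages this way and persisting them in a long-term store is the pattern the revised Terms now restrict. The token, channel ID, and storage step are placeholders, and this is not a statement of how any particular vendor’s integration works.

```python
import requests

SLACK_TOKEN = "xoxb-your-bot-token"  # placeholder credential
CHANNEL_ID = "C0123456789"           # placeholder channel


def export_channel_history(channel_id: str) -> list[dict]:
    """Page through a channel's history via conversations.history.

    Copying and archiving messages like this into an external index or LLM
    training corpus is the long-term storage use case the updated Terms limit.
    """
    messages: list[dict] = []
    cursor = None
    while True:
        params = {"channel": channel_id, "limit": 200}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            "https://slack.com/api/conversations.history",
            headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
            params=params,
            timeout=30,
        ).json()
        messages.extend(resp.get("messages", []))
        cursor = resp.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            break
    return messages


# Under the revised Terms, an integration would instead query Slack on demand
# (query-by-query) rather than maintaining its own persistent archive.
```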
In addition to the complications presented for third-party usage of Slack data, customers that have developed internal LLMs using extensive Slack archives (and who have not negotiated limitations of unilateral amendments in their Salesforce agreements) must now redesign those programs or transition to Salesforce-approved integrations. 
Although Salesforce has stated that these changes were made to enhance data privacy and security for its customers, many in the industry believe these changes indicate Salesforce’s intent to leverage Slack’s extensive conversational data to develop its own proprietary AI solutions. By restricting third-party access to Slack data and prohibiting the use of Slack data within LLMs, Salesforce retains control over valuable enterprise data and gains an advantage in the AI space.
These changes are particularly important for any Slack customers that have not previously negotiated their Terms with Salesforce because, absent negotiation, Salesforce can unilaterally modify the Terms upon notice, and continued use of Slack following the effective date of such changes constitutes acceptance of the modified terms. If your organization utilizes third-party applications or LLMs to analyze your Slack data, or relies on the ability to obtain a continuous data feed from the Slack APIs, you should review the new Terms and assess the potential impact of these changes on your organization’s continued use of Slack.

UK ICO Publishes Draft Guidance on Profiling Tools for Online Safety

On July 30, 2025, the UK Information Commissioner’s Office (“ICO”) launched a consultation seeking feedback on its draft guidance concerning the use of profiling tools for online safety (the “Guidance”). The Guidance aims to assist organizations with their compliance with the UK Online Safety Act 2023 (“OSA”), the UK General Data Protection Regulation (the “UK GDPR”), and the UK Privacy and Electronic Communications Regulations 2003 (“PECR”), outlining the data protection and privacy considerations organizations should take into account when utilizing profiling tools in trust and safety systems.
The Guidance is divided into different sections, highlighting several critical issues that organizations should consider, such as:

PECR adherence: Profiling tools using storage and access technologies on user devices must comply with PECR, requiring prior consent in accordance with the standard of consent required by UK GDPR, unless exemptions apply.
Lawful basis for processing: Profiling activities must have a lawful basis under the UK GDPR, such as consent or legitimate interests, and must comply with any additional conditions for processing special category or criminal offense data.
Transparency: Clear information must be provided to users about how their data is being used in profiling processes. The Guidance recommends that organizations should regularly review their profiling tools to minimize the risk of unfair outcomes for users.
Data minimization: Organizations must define clear, specific purposes for collecting and processing data with profiling tools, ensuring only data that is necessary for such purposes is used.
Accuracy: Organizations should ensure profiling tools process accurate, up-to-date information, and allow users to challenge inaccuracies. As many profiling tools will likely utilize AI and automation, organizations should distinguish predictive outcomes from factual data and ensure they balance statistical accuracy with fairness, considering measures such as precision and recall (illustrated in the short sketch following this list), and the risks to users of each.
Retention: Profiling tools must not keep personal information longer than necessary. Organizations must establish retention periods and erase or anonymize personal information when it is no longer needed.
Automated decision-making: Organizations must identify if profiling tools make solely automated decisions with legal or similarly significant effects and ensure compliance with Article 22 of the UK GDPR by, for example, mapping workflows, providing transparency, and implementing safeguards such as human intervention.
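Because the Guidance’s accuracy section refers to precision and recall, here is a brief illustration of how those measures are typically computed for a binary classifier used in a trust and safety context. The counts are hypothetical and purely illustrative.

```python
# Hypothetical confusion-matrix counts for a profiling tool that flags
# accounts as "high risk"; these numbers are illustrative only.
true_positives = 80   # flagged accounts that genuinely posed a risk
false_positives = 20  # flagged accounts that did not pose a risk
false_negatives = 40  # risky accounts the tool failed to flag

# Precision: of the users the tool flagged, how many were flagged correctly?
# Low precision means more users wrongly subjected to safety interventions.
precision = true_positives / (true_positives + false_positives)

# Recall: of the users who actually posed a risk, how many were caught?
# Low recall means more harmful activity slips through.
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # 0.80, 0.67
```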

Organizations have until October 31, 2025, to provide the ICO with feedback on the Guidance.  

Amendments to the FTC COPPA Rule Now in Effect

On June 23, 2025, amendments to the FTC’s Children’s Online Privacy Protection Act (COPPA) Rule became effective. The amended COPPA Rule (the “Final Rule”) enhances obligations on many operators of websites and online services. The FTC previously amended the COPPA Rule in 2013 to address emerging technologies.
What is the FTC Children’s Online Privacy Protection (COPPA) Rule?
Congress enacted the Children’s Online Privacy Protection Act in 1998 with the intent to protect children’s privacy online and to provide parents with a mechanism to control how their children’s personal information is collected and used. COPPA applies to “Operators” of websites and online services that target children or knowingly collect personal information from children under 13 years of age.
The FTC’s original COPPA Rule became effective on April 21, 2000. The Commission published an amended Rule on January 17, 2013. The amended Rule took effect on July 1, 2013.
The primary goal of COPPA is to place parents in control over what information is collected from their young children online. The COPPA Rule was designed to protect children under age 13, while accounting for the dynamic nature of the Internet.
The Rule applies to operators of commercial websites and online services (including mobile apps and IoT devices, such as smart toys) directed to children under 13 that collect, use, or disclose personal information from children, or on whose behalf such information is collected or maintained (such as when personal information is collected by an ad network to serve targeted advertising). The COPPA Rule also applies to operators of general audience websites or online services with actual knowledge that they are collecting, using, or disclosing personal information from children under 13, and to websites or online services that have actual knowledge that they are collecting personal information directly from users of another website or online service directed to children.
Operators covered by the Rule must:

Post a clear and comprehensive online privacy policy describing their information practices for personal information collected online from children
Provide direct notice to parents and obtain verifiable parental consent, with limited exceptions, before collecting personal information online from children
Give parents the choice of consenting to the operator’s collection and internal use of a child’s information, but prohibiting the operator from disclosing that information to third parties (unless disclosure is integral to the site or service, in which case, this must be made clear to parents)
Provide parents access to their child’s personal information to review and/or have the information deleted
Give parents the opportunity to prevent further use or online collection of a child’s personal information
Maintain the confidentiality, security, and integrity of information they collect from children, including by taking reasonable steps to release such information only to parties capable of maintaining its confidentiality and security
Retain personal information collected online from a child for only as long as is necessary to fulfill the purpose for which it was collected and delete the information using reasonable measures to protect against its unauthorized access or use, and
Not condition a child’s participation in an online activity on the child providing more information than is reasonably necessary to participate in that activity.

Who is covered by the FTC COPPA Rule?
The Rule applies to operators of commercial websites and online services (including mobile apps and IoT devices) directed to children under 13 that collect, use, or disclose personal information from children. It also applies to operators of general audience websites or online services with actual knowledge that they are collecting, using, or disclosing personal information from children under 13. The Rule also applies to websites or online services that have actual knowledge that they are collecting personal information directly from users of another website or online service directed to children.
What are Key Changes to the Final COPPA Rule?
Key changes to the COPPA Rule include, without limitation:

Additional examples of issues to consider with respect to whether an online platform is “directed to children”
Revisions to COPPA’s existing consent requirements
Additional methods by which verifiable parental consent may be obtained
New notice obligations for Operators
New requirements governing information security programs and data retention policies.

There are four examples of the type of evidence that the FTC will consider when evaluating the “intended audience” of an online property to assess whether it meets the definition of “website or online service directed to children.” These include marketing or promotional materials; representations to consumers or to third parties; reviews by users or third parties; and the age of users on similar websites or services.
The Final Rule amends COPPA’s existing parental consent requirements, adding that Operators must obtain a “separate” consent for the disclosure of personal information to third parties, with a narrow exception regarding the nature of the online service. It does not, however, set forth details regarding how and when Operators should obtain such parental consent. FTC commentary in this regard is instructive.
There are three new methods that Operators can employ to obtain verifiable parental consent. The Final Rule also revises one of the COPPA Rule’s existing methods of obtaining verifiable consent.
Similar to the existing “email plus” method, the Final Rule contemplates a “text plus” mechanism of obtaining verifiable parental consent. Operators may also utilize a knowledge-based authentication method (multiple choice questions), facial recognition technology and qualifying online payments to obtain verifiable parental consent.
The Final Rule imposes two new notice requirements on Operators. The first applies to Operators that rely on the “support for internal operations” exception to obtaining verifiable parental consent. The second pertains to a requirement that Operators identify third parties that receive children’s data (in a clear and conspicuous privacy policy and via direct notice to parents).
Operators are also required pursuant to the Final Rule to establish, implement and maintain a “written information security program” for personal information subject to the COPPA Rule. Operators must designate at least one employee as the program coordinator; conduct an annual risk assessment pertaining to the “confidentiality, security and integrity” of personal information collected from children and safeguards that have been implemented; design, implement and maintain safeguards to control risks; test and monitor the safeguards; and evaluate and modify the program to address risks on an annual basis.
Pursuant to the Amended Rule, Operators are also required to establish and publish online a “written data retention policy” that discloses the purposes of collecting personal information from children; the business justification for retention of such information; and a deletion time frame for that information. Personal information may not be retained indefinitely.
Why did the FTC Amend the COPPA Rule?
Following public comments, the Federal Trade Commission adopted amendments designed to increase transparency with respect to the collection and usage of children’s information, and to increase the obligations and restrictions related to the security and sharing of such data. Following a Notice of Proposed Rulemaking in January 2024, the final rule was published on January 16, 2025.
Notably, the amended COPPA Rule was published following a regulatory freeze policy announced by President Trump.
FTC Declines to Implement Numerous Advertising-Related Proposals
While the Final Rule illustrates the FTC’s commitment to safeguarding the use of children’s data for advertising purposes, the FTC declined to implement a number of proposals that would have further restricted advertising under the amended COPPA Rule, including, but not limited to, limits on contextual advertising and personalization.
What is the Effective Date of the Amended FTC COPPA Rule?
The amended COPPA Rule went into effect on June 23, 2025, and those subject to the amended Rule are required to fully comply with it by April 22, 2026.
What is the COPPA Safe Harbor Program?
The COPPA Rule’s Safe Harbor provision allows industry groups or others to submit self-regulatory guidelines to the FTC for approval. To obtain approval, those guidelines must meet or exceed the COPPA Rule’s protections. Now, the COPPA Safe Harbor program requires the public disclosure of membership lists and reporting of disciplinary actions. Safe Harbor programs have until October 22, 2025, to comply with applicable disclosure requirements.

US Healthcare Offshoring: Navigating Patient Data Privacy Laws and Regulations

Unlike other sectors, US healthcare businesses must reconcile cost-saving strategies with stringent compliance obligations, especially when patient data crosses national borders or is accessed overseas.

In Depth

As healthcare companies in the United States seek sustainable strategies to reduce administrative costs, offshoring administrative, non-clinical functions has emerged as an increasingly attractive option. Global labour markets offer access to skilled professionals at wages that may be lower than in the US, which enables cost efficiencies for US providers and health plans.
However, because of patient data privacy concerns, healthcare offshoring presents a unique legal and regulatory challenge. Companies must navigate the web of US state restrictions on the access or storage of patient data outside the US.
HIPAA’s Extraterritorial Flexibility
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is the primary federal law that protects patients’ health information. HIPAA establishes standards for the privacy and security of a patient’s health and related information, known as protected health information or PHI, as handled by healthcare providers, health plans, and healthcare clearinghouses, as well as their subcontractors who provide services involving PHI on behalf of healthcare providers or health plans. These subcontractors, known as “business associates”, are required to enter into Business Associate Agreements that contain provisions designed to protect PHI, and have independent obligations to protect PHI under HIPAA.
HIPAA doesn’t prohibit PHI from being accessed or stored outside the US, despite the potential risks. If a foreign vendor violates HIPAA or experiences a data breach, there is limited recourse unless there are strong, binding, international arbitration provisions, or the foreign vendor maintains a substantial US-based presence.
In light of this risk, many US state governments, healthcare providers, and health plans have sought to limit or prohibit the offshoring of US patient data.
Data Localisation Clauses
The most common means by which states seek to control risk is through data localisation provisions within contracts with state agencies, and through Medicaid regulatory restrictions.
In some cases, these contractual provisions require both the storage of patient data and the performance of the services to occur in the US. For example, Wisconsin prohibits contractors and subcontractors from performing work outside the US that involves access to or disclosure of patient health and related information. Similarly, Texas’ Uniform Managed Care Contract requires Managed Care Organisations (MCOs) to provide all services within the US. It further requires that all information obtained by the MCO or a subcontractor pursuant to the Managed Care Contract be “stored and maintained within the United States”. Examples of other states with regulatory or executive order restrictions that prohibit offshoring include Arizona’s Health Care Cost Containment System programme, and executive orders in Ohio, Missouri, and New Jersey.
Legislative Prohibitions on US Healthcare Offshoring
The most direct manner in which regulation has sought to prevent the storage of patient data outside the US is state legislation that simply prohibits it.
Florida’s Electronic Health Records Exchange Act, for example, requires healthcare providers that are using certified electronic health record technology to ensure that patient information in qualified electronic health records stored in an offsite physical or virtual environment is physically maintained in the continental US, its territories, or Canada. A healthcare provider’s license is conditioned on submission of an affidavit regarding compliance with this requirement, and a failure to maintain compliance could result in professional discipline.
In Texas, Senate Bill 1188 has passed the Senate and, if enacted, will require that certain medical facilities, healthcare providers, and governmental entities ensure that electronic health record information of Texas residents is stored in the US.
Both states are attempting to balance the risks associated with exporting US patient data to other jurisdictions while also permitting access to that information abroad, as long as the storage or physical maintenance of the information remains inside the US.
Comprehensive Consumer Privacy Laws Impacting Healthcare Offshoring
A growing number of US states have enacted, or are in the process of enacting, broad-based consumer privacy laws that, although not focused on the offshoring of patient data, could spill over to prohibit or restrict those sorts of data transfers. These comprehensive privacy laws may be broad enough to encompass practices relating to the treatment of patient health information and, in some cases, specifically provide that health information is within scope.
California Civil Code Section 1798.140(ae)(2), for example, defines “sensitive personal information” to include “personal information collected and analyzed concerning a consumer’s health”. Recent bills in Connecticut, Iowa, and Montana reflect the wave of broad privacy protections that states have enacted, and the trend includes more than 20 other states.
Healthcare companies should therefore consider the applicability of comprehensive state privacy regimes to the potential offshoring of patient information.
CMS Guidance and Federal Contractual Oversight Relating to Medicare Advantage Plans
The Centers for Medicare & Medicaid Services (CMS) has not prohibited offshoring but has issued guidance that increases compliance expectations for federal healthcare contractors and Medicare Advantage Plans.
CMS requires Medicare Advantage Organisations (MAOs) to obtain from healthcare providers who use offshore vendors detailed information regarding the offshore vendors’ safeguards protecting patient information. The provider must submit a signed attestation certificate to the MAO, to meet CMS requirements under 42 C.F.R. § 422.503. CMS maintains the authority to audit compliance and may penalise MAOs for failing to manage offshore risks adequately. These provisions are included in downstream contracts with healthcare providers and require the healthcare providers to ensure their downstream subcontractors or business associates comply with them.
Contractual Barriers to Healthcare Offshoring
Even where HIPAA and state law permit offshoring, many healthcare providers’ and health plans’ contracts include restrictions on their business associates. Payers and provider networks may include terms that prohibit PHI from leaving US territory or being accessed outside the US, or that require subcontractors to meet specific additional security requirements. Such contract clauses often impose stricter obligations than those otherwise mandated by law.
Best Practices for Offshoring Patient Data
US healthcare companies can mitigate risk and maximise value from offshore operations by adopting the following best practices.
Adopt an Offshore Policy
Healthcare organisations and their vendors should adopt measures to collect, document, and maintain relevant information to identify offshore arrangements, impose appropriate measures in a consistent and orderly way, monitor compliance, and take action if problems arise.
Enter Into an Offshore Business Associate Agreement
Healthcare businesses should enter into a Business Associate Agreement with any offshore vendors that will access or store patient information, and develop and implement appropriate measures to address privacy and security issues not addressed by HIPAA that are unique to offshore entities.
In addition to the standard Business Associate Agreement requirements, healthcare businesses should consider including robust international arbitration clauses and requirements around cyber liability insurance coverage.
Establish Minimum Necessary Access, Encryption, and Data Retention Policies
Offshore contracting arrangements should prohibit the offshore contractor from accessing data not required to perform its services, essentially extending HIPAA’s minimum necessary rule to a broader range of protected information, and should limit the ability of offshore personnel to print and archive data locally.
Additionally, healthcare companies should confirm that the offshore entity’s data retention periods are set forth in the contract and do not permit the offshore entity to store data for longer than needed. Healthcare providers may also consider requiring the offshore entity to encrypt data at rest or in transit using appropriate encryption standards.
Prepare for Data Breaches
Offshoring contracts should include policies and procedures addressing the offshore organization’s response to data breaches or other instances of non-compliance. These should cover, for example, the time frame for reporting, the expected level of cooperation, and who is responsible for determining whether a reportable data breach has occurred.
HIPAA requires that a Business Associate contract be subject to termination if the business associate violates a material contractual term. US healthcare organizations should consider whether expanded termination rights are appropriate when offshoring is involved, such as permitting termination following a data breach, even absent proof that a violation of the Business Associate contract caused the breach.
Comply With All Applicable Laws
Contracts with offshore vendors should include all language required by applicable laws and regulations, including the HIPAA Privacy and Security Rules and, if applicable, the Medicare Advantage downstream provider requirements and state Medicaid requirements for offshoring. Healthcare companies should also include a broad indemnity covering non-compliance with applicable law and ensure the indemnity is not subject to low limitations of liability.
Undertake Annual Audits
US healthcare organizations should audit offshore subcontracts at least annually and use the audit results to evaluate whether to continue the vendor relationship or to take corrective action or other measures as appropriate.
Considerations for Non-US Vendors
Offshore vendors seeking to work with US healthcare companies must adapt to this fragmented and evolving landscape.
Vendors should conduct jurisdictional analyses to identify the states and client types (e.g., commercial versus Medicare/Medicaid) in which offshoring is viable. Proposals by vendors to US healthcare companies should be tailored to reflect this legal context and should appropriately consider the type of information being offshored, as well as the payor type.
Offshore vendors must be able to demonstrate HIPAA compliance via documented policies and training, robust security, and experience working within multi-jurisdictional legal environments.
In higher-risk jurisdictions, vendors might consider establishing US-based operations or collaborating with domestic intermediaries to minimize risk. Many offshore vendors have obtained third-party certifications, such as through the Health Information Trust Alliance (HITRUST), as a means of demonstrating their commitment to the appropriate handling of patient information.

CFTC Launches Listed Spot Crypto Trading Initiative

The Commodity Futures Trading Commission (CFTC) has launched its Listed Spot Crypto Trading Initiative, which aims to establish a framework for retail trading of leveraged, margined, or financed spot crypto asset contracts (Retail Crypto Contracts) on CFTC-registered designated contract markets (DCMs).
The initiative would utilize existing authority under the Commodity Exchange Act (CEA), which requires that retail trading of commodities involving leverage, margin, or financing be conducted on DCMs. The CFTC is exploring how to implement such a framework using current regulatory tools. As background, retail commodity transactions involving leverage, margin, or financing are subject to the CFTC’s plenary jurisdiction unless they result in “actual delivery” within 28 days. In 2020, the CFTC issued interpretive guidance clarifying what constitutes “actual delivery” for digital assets. Despite this guidance, no DCM has listed Retail Crypto Contracts in the five years since its publication.
This development is part of the CFTC’s “crypto sprint” following the White House’s Digital Asset Policy Report.[1] The report, developed by the President’s Working Group on Digital Asset Markets under Executive Order 14178, directs federal agencies to use existing authorities to immediately facilitate digital asset trading. In March, the CFTC withdrew a pair of advisories that had reflected the agency’s heightened review approach to digital asset derivatives.[2]
Implementation Process and Stakeholder Input
The CFTC is seeking comprehensive industry feedback to ensure effective implementation. The agency has invited stakeholders to submit comments on listing spot crypto asset contracts on DCMs, with particular focus on section 2(c)(2)(D) of the CEA and Part 40 of CFTC regulations. The agency is also requesting input on potential securities law implications.
The public comment period closes on August 18, indicating the CFTC’s intent to move quickly on implementation. This compressed timeline suggests that operational frameworks could be established relatively soon, pending resolution of technical or legal issues identified through the comment process.
If implemented, this initiative could change how spot crypto assets are traded in the United States by clarifying how existing regulated venues may be used for Retail Crypto Contracts.
As the CFTC develops this initiative, businesses operating in the spot crypto asset space should monitor developments closely and consider participating in the comment process. 
Footnotes 
[1] See Katten’s Quick Reads coverage of the White House Report here.
[2] See Katten’s Quick Reads coverage of the withdrawal of the two advisories here.

Synthetic Media Creates New Authenticity Concerns for Legal Evidence

When a high school principal’s voice went viral making racist and antisemitic comments, the audio seemed authentic enough to destroy careers and inflame community tensions. Only later did forensic analysis reveal that the recording was a deepfake created by the school’s athletic director. The incident, which required two forensic analysts to determine the true nature of the recording, illustrates a fundamental challenge facing the legal system: as AI-generated content becomes indistinguishable from human-created content, how do courts determine authenticity?
This challenge extends beyond theoretical concerns. Courts nationwide are grappling with synthetic evidence in real cases, from criminal defendants claiming prosecution videos are deepfaked to civil litigants using AI-generated content to support false claims.
Current Legal Framework Challenges
Technologies designed to detect AI-generated content have proven unreliable and biased, while humans demonstrate poor ability to distinguish between real and fake digital content. No foolproof method currently exists to classify text, audio, video, or images as authentic or AI-generated.
Recent cases reveal how this crisis manifests in practice. In USA v. Khalilian, defense counsel moved to exclude voice recordings on the grounds that they could be deepfaked. When prosecutors argued that a witness’s familiarity with the defendant’s voice could authenticate the recording, the court responded that this was “probably enough to get it in,” a standard that likely affords insufficient scrutiny where deepfakes are alleged.
In Wisconsin v. Rittenhouse, the defense successfully challenged prosecution efforts to zoom in on iPad video evidence, arguing that Apple’s pinch-to-zoom function uses AI that could manipulate footage. The court required expert testimony that the zoom function would not alter the underlying video, testimony the prosecution could not provide on short notice.
Federal Evidence Rule Developments
On May 2, 2025, the US Judicial Conference’s Advisory Committee on Evidence Rules considered proposals to amend the Federal Rules of Evidence to address challenges posed by AI-generated evidence. Among the proposals being considered by the Committee are changes to Rule 901, which governs authentication of evidence in legal proceedings. Rule 901(a) provides that evidence is authentic if the proponent produces “evidence sufficient to support a finding that the item is what the proponent claims it is.” Rule 901(b) provides examples of evidence that satisfies the Rule 901(a) requirement.
One proposal would modify Rule 901(b)(9), which addresses items generated by a “process or system,” by requiring the proponent to provide evidence describing the process or system and showing that it produces a “valid and reliable result.”
Other proposals focus on a new section specific to deepfakes, Rule 901(c). One version of the proposed 901(c) would provide that “[i]f a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.”
Another version of the proposed 901(c) would establish a two-step process for evaluating deepfake challenges: First, parties challenging evidence authenticity on grounds of AI fabrication must present evidence sufficient to support a finding of fabrication warranting court inquiry. Mere assertions that evidence is deepfaked would be insufficient. Second, if opponents meet this requirement, evidence would be admissible only if proponents demonstrate it is more likely than not authentic — a higher standard than traditional “sufficient to support a finding” requirements.
This approach attempts to balance preventing baseless “deepfake defense” strategies while ensuring adequate scrutiny of potentially fabricated evidence. However, it leaves unresolved how courts will determine authenticity when detection technology proves unreliable.
The Committee determined that a rule amendment was not necessary at this time and decided to keep proposed Rule 901(c) on the agenda for its fall 2025 meeting without releasing it for public comment, noting a reduced need for public input on deepfake issues.
State Evidence Rule Developments
Some states are paving the way on AI-generated evidentiary issues. Louisiana HB 178, which went into effect August 1, 2025, became the first statewide framework designed to address AI-generated evidence by expanding and specifying attorneys’ role in exercising reasonable diligence to verify the authenticity of evidence. The law revises Louisiana Code of Civil Procedure article 371 to provide, in part, that “[a]n attorney shall exercise reasonable diligence to verify the authenticity of evidence before offering it to the court. If an attorney knew or should have known through the exercise of reasonable diligence that evidence was false or artificially manipulated, the offering of that evidence without disclosure of that fact shall be considered a violation of this Article [providing for contempt of court and further disciplinary action].”
Federal and State Responses
President Trump signed the Take It Down Act into law on May 19, 2025, criminalizing the nonconsensual publication of intimate images, including AI-generated deepfakes, with FTC enforcement beginning within one year. This federal action supplements a complex state landscape in which many states have enacted laws addressing nonconsensual sexual deepfakes and limiting deepfakes in political campaigns.
Tennessee has enacted the ELVIS Act (which went into effect on July 1, 2024), becoming the first state with deepfake legislation outside intimate imagery and political content categories, specifically protecting musicians’ voices from AI manipulation. New York’s digital replica law requires written consent, clear contracts and compensation for AI-created likeness use, while Minnesota’s updated criminal code penalizes non-consensual deepfakes with misdemeanor or felony charges.
This state-by-state approach creates significant complications through inconsistencies that can lead to unpredictable outcomes for those seeking legal redress under newly enacted laws.
Judicial Gatekeeping Challenges
The authenticity crisis forces courts to confront fundamental questions about their role in the digital age. Traditional evidence authentication under Federal Rule 901 requires only evidence “sufficient to support a finding that the item is what the proponent claims it is” — a deliberately low threshold designed to let juries weigh evidence credibility.
This approach worked when authentication disputes involved questions like whether photographs accurately depicted crime scenes or whether signatures were genuine. Deepfakes shatter this framework by creating content that can fool both human observers and technological detection systems.
Some scholars propose expanding judicial gatekeeping authority, moving authenticity determinations from juries to judges. This approach would parallel how courts handle complex technical evidence under Daubert standards, requiring judges to evaluate evidence reliability before it reaches juries.
Access to Justice Implications
Synthetic media creates troubling access-to-justice problems. Hiring digital forensic experts costs from hundreds of dollars for hourly consulting to several thousand dollars per project, with higher fees for high-profile cases. This financial burden falls heaviest on those least able to bear it, with wealthy litigants affording comprehensive forensic analysis while individuals and small businesses may lack resources to challenge sophisticated deepfakes.
This disparity is particularly concerning in criminal cases where stakes include liberty and life. Current practice often places financial burden on defendants who may lack resources for adequate defense.
First Amendment Considerations
Synthetic media regulation faces significant constitutional hurdles, particularly regarding political speech. In 2024, a federal judge blocked California’s law prohibiting deceptive election-related deepfakes over First Amendment concerns, finding that the law “unconstitutionally stifles the free and unfettered exchange of ideas … vital to American democratic debate.”
This tension between preventing harm and preserving free expression complicates legislative responses, with most lawmakers opting for lighter-touch disclosure policies not yet blocked in federal court.
Industry and Technology Responses
Private sector responses reflect both promise and limitations of technological solutions. Reputable synthetic media services typically prohibit malicious deepfake creation, requiring users to certify permission for uploaded content. However, users can misrepresent rights and circumvent guardrails.
Some platforms embed watermarks or digital signatures within AI-generated content for enhanced traceability, but these methods are far from foolproof, with evidence that watermarks can be removed easily.
Emerging Legal Standards
Courts are developing practical approaches to synthetic media challenges without formal rule changes, including enhanced burden requirements for video and audio evidence in high-stakes cases, pretrial evidentiary hearings to resolve authenticity disputes, expert testimony requirements for deepfake allegations and heightened scrutiny for celebrity content.
Practical Guidance
Given current legal uncertainty, practitioners should adopt proactive strategies, including:

Maintaining detailed records of content creation processes with timestamps and source materials;
Including specific inquiries about AI-generated materials in discovery requests;
Identifying qualified digital forensics experts early in cases involving audiovisual evidence;
Advising clients about reputational and legal risks associated with AI-generated content; and
Including specific provisions addressing AI-generated content in contracts.

Looking Ahead
The synthetic media revolution represents more than a technological challenge; it fundamentally questions how legal systems establish truth in the AI age. The legal system’s response demonstrates remarkable adaptability, with courts developing new authenticity approaches, legislatures crafting targeted responses and the legal profession building expertise in digital forensics.
The authenticity crisis requires coordinated responses across multiple legal domains. Federal evidence rules need updating while preserving adversarial testing, state legislation must balance harm prevention with constitutional protections, and the legal profession must develop technological literacy adequate to the digital age. The institutions that successfully adapt to these challenges will preserve judicial proceeding integrity and remain relevant in an era where reality itself can be artificially generated.