Colorado’s AI Task Force Proposes Updates to State’s AI Law

The Colorado Artificial Intelligence Impact Task Force recently issued a report outlining potential areas where Colorado’s Consumer Protections in Interactions with Artificial Intelligence Systems Act (the Act), which will impose obligations on developers and deployers of artificial intelligence (AI), can be “clarified, refined[,] and otherwise improved.”
The Task Force’s mission is to review issues related to AI and automated decision systems (ADS) affecting consumers and employees. The Task Force met on several occasions and prepared a report summarizing its findings, including recommendations to:

Revise the Act’s definition of the types of decisions that qualify as “consequential decisions,” as well as the definitions of “algorithmic discrimination,” “substantial factor,” and “intentional and substantial modification;”
Revamp the list of exemptions to what qualifies as a “covered decision system;”
Change the scope of the information and documentation that developers must provide to deployers;
Update the triggering events and timing for impact assessments, and revise the requirements for deployer risk management programs;
Reconsider the duty of care standard for developers and deployers (i.e., consider whether that standard should be more or less stringent);
Consider whether to narrow or expand the small business exemption (the current exemption under the Act is for businesses with fewer than 50 employees);
Consider whether businesses should be provided a cure period for certain types of non-compliance before Attorney General enforcement under the Act; and
Revise the trade secret exemptions and provisions related to a consumer’s right to appeal.

As of today, the requirements for AI developers and deployers under the Act go into effect on February 1, 2026. However, the Task Force recommends reconsidering the law’s implementation timing. We will continue to track this first-of-its-kind AI law. 

The BR Privacy & Security Download: February 2025

STATE & LOCAL LAWS & REGULATIONS
New York Legislature Passes Comprehensive Health Privacy Law: The New York state legislature passed SB-929 (the “Bill”), providing for the protection of health information. The Bill broadly defines “regulated health information” as “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual.” Regulated health information includes location and payment information, as well as inferences derived from an individual’s physical or mental health. The term “individual” is not defined. Accordingly, the Bill contains no terms restricting its application to consumers acting in an individual or household context. The Bill would apply to regulated entities, which are entities that (1) are located in New York and control the processing of regulated health information, or (2) control the processing of regulated health information of New York residents or individuals physically present in New York. Among other things, the Bill would restrict regulated entities to processing regulated health information only with a valid authorization, or when strictly necessary for certain specified activities. The Bill also provides for individual rights and requires the implementation of reasonable administrative, physical, and technical safeguards to protect regulated health information. The Bill would take effect one year after being signed into law and currently awaits New York Governor Kathy Hochul’s signature.
New York Data Breach Notification Law Updated: Two bills, SO2659 and SO2376, that amended the state’s data breach notification law were signed into law by New York Governor Kathy Hochul. The bills change the timeframe within which notice must be provided to New York residents, add data elements to the definition of “private information,” and add the New York Department of Financial Services to the list of regulators that must be notified. Previously, New York’s data breach notification statute did not have a hard deadline within which notice must be provided. The amendments now require affected individuals to be notified no later than 30 days after discovery of the breach, except for delays arising from the legitimate needs of law enforcement. Additionally, as of March 25, 2025, “private information” subject to the law’s notification requirements will include medical information and health insurance information.
California AG Issues Legal Advisory on Application of California Law to AI: California’s Attorney General has issued legal advisories to clarify that existing state laws apply to AI development and use, emphasizing that California is not an AI “wild west.” These advisories cover consumer protection, civil rights, competition, data privacy, and election misinformation. AI systems, while beneficial, present risks such as bias, discrimination, and the spread of disinformation. Therefore, entities that develop or use AI must comply with all state, federal, and local laws. The advisories highlight key laws, including the Unfair Competition Law and the California Consumer Privacy Act. The advisories also highlight new laws effective on January 1, 2025, which include disclosure requirements for businesses, restrictions on the unauthorized use of likeness, and regulations for AI use in elections and healthcare. These advisories stress the importance of transparency and compliance to prevent harm from AI.
New Jersey AG Publishes Guidance on Algorithmic Discrimination: On January 9, 2025, New Jersey’s Attorney General and Division on Civil Rights announced a new civil rights and technology initiative to address the risks of discrimination and bias-based harassment in AI and other advanced technologies. The initiative includes the publication of a Guidance Document, which addresses the applicability of New Jersey’s Law Against Discrimination (“LAD”) to automated decision-making tools and technologies. It focuses on the threats posed by automated decision-making technologies in the housing, employment, healthcare, and financial services contexts, emphasizing that the LAD applies to discrimination regardless of the technology at issue. Also included in the announcement is the launch of a new Civil Rights Innovation lab, which “will aim to leverage technology responsibly to advance [the Division’s] mission to prevent, address, and remedy discrimination.” The Lab will partner with experts and relevant industry stakeholders to identify and develop technology to enhance the Division’s enforcement, outreach, and public education work, and will develop protocols to facilitate the responsible deployment of AI and related decision-making technology. This initiative, along with the recently effective New Jersey Data Protection Act, shows a significantly increased focus from the New Jersey Attorney General on issues relating to data privacy and automated decision-making technologies.
New Jersey Publishes Comprehensive Privacy Law FAQs: The New Jersey Division of Consumer Affairs Cyber Fraud Unit (“Division”) published FAQs that provide a general summary of the New Jersey Data Privacy Law (“NJDPL”), including its scope, key definitions, consumer rights, and enforcement. The NJDPL took effect on January 15, 2025, and the FAQs state that controllers subject to the NJDPL are expected to comply by such date. However, the FAQs also emphasize that until July 1, 2026, the Division will provide notice and a 30-day cure period for potential violations. The FAQs also suggest that the Division may adopt a stricter approach to minors’ privacy. While the text of the NJDPL requires consent for processing the personal data of consumers between the ages of 13 and 16 for purposes of targeted advertising, sale, and profiling, the FAQs state that when a controller knows or willfully disregards that a consumer is between the ages of 13 and 16, consent is required to process their personal data more generally.
CPPA Extends Formal Comment Period for Automated Decision-Making Technology Regulations: The California Privacy Protection Agency (“CPPA”) extended the public comment period for its proposed regulations on cybersecurity audits, risk assessments, automated decision-making technology (“ADMT”), and insurance companies under the California Privacy Rights Act. The public comment period opened on November 22, 2024, and was set to close on January 14, 2025. However, due to the wildfires in Southern California, the public comment period was extended to February 19, 2025. The CPPA will also be holding a public hearing on that date for interested parties to present oral and written statements or arguments regarding the proposed regulations.
Oregon DOJ Publishes Toolkit for Consumer Privacy Rights: The Oregon Department of Justice announced the release of a new toolkit designed to help Oregonians protect their online information. The toolkit is designed to help families understand their rights under the Oregon Consumer Privacy Act. The Oregon DOJ reminded consumers how to submit complaints when businesses are not responsive to privacy rights requests. The Oregon DOJ also stated it has received 118 complaints since the Oregon Consumer Privacy Act took effect last July and has sent notices of violation to businesses identified as non-compliant.
California, Colorado, and Connecticut AGs Remind Consumers of Opt-Out Rights: California Attorney General Rob Bonta published a press release reminding residents of their right to opt out of the sale and sharing of their personal information. The California Attorney General also cited the robust privacy protections of Colorado and Connecticut laws that provide for similar opt-out protections. The press release urged consumers to familiarize themselves with the Global Privacy Control (“GPC”), a browser setting or extension that automatically signals to businesses that they should not sell or share a consumer’s personal information, including for targeted advertising. The Attorney General also provided instructions for the use of the GPC and for exercising opt-outs by visiting the websites of individual businesses.
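For readers on the implementation side, the GPC signal described above reaches a website as a simple HTTP request header. The sketch below shows, under the GPC proposal’s definition of a `Sec-GPC: 1` header, how a server might treat that signal as an opt-out of sale or sharing; the helper names are illustrative, not drawn from any statute or official SDK.

```python
# Minimal sketch of honoring a Global Privacy Control signal server-side.
# The GPC proposal defines a "Sec-GPC" request header whose only valid
# value is "1" (browsers also expose navigator.globalPrivacyControl).
# Function names here are hypothetical, for illustration only.

def gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries a valid GPC opt-out signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

def may_sell_or_share(headers: dict, user_opted_out: bool = False) -> bool:
    """Treat a GPC signal like any other opt-out of sale/sharing."""
    return not (user_opted_out or gpc_opt_out(headers))
```

In practice the header check would sit alongside a business’s existing opt-out records, since a consumer may opt out either via GPC or directly on the website.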

FEDERAL LAWS & REGULATIONS
FTC Finalizes Updates to COPPA Rule: The FTC announced the finalization of updates to the Children’s Online Privacy Protection Rule (the “Rule”). The updated Rule makes a number of changes, including requiring opt-in consent to engage in targeted advertising to children and to disclose children’s personal information to third parties. The Rule also adds biometric identifiers to the definition of personal information and prohibits operators from retaining children’s personal information for longer than necessary for the specific documented business purposes for which it was collected. Operators must maintain a written data retention policy that documents the business purpose for data retention and the retention period for data. The Commission voted 5-0 to adopt the Rule, but new FTC Chair Andrew Ferguson filed a separate statement describing “serious problems” with the rule. Ferguson specifically stated that it was unclear whether an entirely new consent would be required if an operator added a new third party with whom personal information would be shared, potentially creating a significant burden for businesses. The Rule will be effective 60 days after its publication in the Federal Register.
Trump Rescinds Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: President Donald Trump took action to rescind former President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“AI EO”). According to a Biden administration statement released in October, many action items from the AI EO have already been completed. Recommendations, reports, and opportunities for research that were completed prior to revocation of the AI EO may continue in place unless replaced by additional federal agency action. It remains unclear whether the Trump Administration will issue its own executive orders relating to AI.
U.S. Justice Department Issues Final Rule on Transfer of Sensitive Personal Data to Foreign Adversaries: The U.S. Justice Department issued final regulations to implement a presidential Executive Order regarding access to bulk sensitive personal data of U.S. citizens by foreign adversaries. The regulations restrict transfers involving designated countries of concern – China, Cuba, Iran, North Korea, Russia, and Venezuela. At a high level, transfers are restricted if they could result in bulk sensitive personal data access by a country of concern or a “covered person,” which is an entity that is majority-owned by a country of concern, organized under the laws of a country of concern, has its principal place of business in a country of concern, or is an individual whose primary residence is in a country of concern. Data covered by the regulation includes precise geolocation data, biometric identifiers, genetic data, health data, financial data, government-issued identification numbers, and certain other identifiers, including device or hardware-based identifiers, advertising identifiers, and demographic or contact data.
First Complaint Filed Under Protecting Americans’ Data from Foreign Adversaries Act: The Electronic Privacy Information Center (“EPIC”) and the Irish Council for Civil Liberties (“ICCL”) Enforce Unit filed the first-ever complaint under the Protecting Americans’ Data from Foreign Adversaries Act (“PADFAA”). PADFAA makes it unlawful for a data broker to sell, license, rent, trade, transfer, release, disclose, or otherwise make available specified personally identifiable sensitive data of individuals residing in the United States to North Korea, China, Russia, Iran, or an entity controlled by one of those countries. The complaint alleges that Google’s real-time bidding system data includes personally identifiable sensitive data, that Google executives were aware that data from its real-time bidding system may have been resold, and that Google’s public list of certified companies that receive real-time bidding bid request data includes multiple companies based in foreign adversary countries.
FDA Issues Draft Guidance for AI-Enabled Device Software Functions: The U.S. Food and Drug Administration (“FDA”) published its January 2025 Draft Guidance for Industry and FDA Staff regarding AI-enabled device software functionality. The Draft Guidance provides recommendations regarding the contents of marketing submissions for AI-enabled medical devices, including documentation and information that will support the FDA’s evaluation of their safety and effectiveness. The Draft Guidance is designed to reflect a “comprehensive approach” to the management of devices through their total product life cycle and includes recommendations for the design, development, and implementation of AI-enabled devices. The FDA is accepting comments on the Draft Guidance, which may be submitted online until April 7, 2025.
Industry Coalition Pushes for Unified National Data Privacy Law: A coalition of over thirty industry groups, including the U.S. Chamber of Commerce, sent a letter to Congress urging it to enact a comprehensive national data privacy law. The letter highlights the urgent need for a cohesive federal standard to replace the fragmented state laws that complicate compliance and stifle competition. The letter advocates for legislation based on principles to empower startups and small businesses by reducing costs and improving consumer access to services. The letter supports granting consumers the right to understand, correct, and delete their data, and to opt out of targeted advertising, while emphasizing transparency by requiring companies to disclose data practices and secure consent for processing sensitive information. It also focuses on the principles of limiting data collection to essential purposes and implementing robust security measures. While the principles aim to override strong state laws like that in California, the proposal notably excludes data broker regulation, a previous point of contention. The coalition cautions against legislation that could lead to frivolous litigation, advocating for balanced enforcement and collaborative compliance. By adhering to these principles, the industry groups seek to ensure legal certainty and promote responsible data use, benefiting both businesses and consumers.
Cyber Trust Mark Unveiled: The White House launched a labeling scheme for internet-of-things devices designed to inform consumers when devices meet certain government-determined cybersecurity standards. The program has been in development for several months and involves collaboration between the White House, the National Institute of Standards and Technology, and the Federal Communications Commission. UL Solutions, a global safety and testing company headquartered in Illinois, has been selected as the lead administrator of the program along with 10 other firms as deputy administrators. With the main goal of helping consumers make more cyber-secure choices when purchasing products, the White House hopes to have products with the new cyber trust mark hit shelves before the end of 2025.

U.S. LITIGATION
Texas Attorney General Sues Insurance Company for Unlawful Collection and Sharing of Driving Data: Texas Attorney General Ken Paxton filed a lawsuit against Allstate and its data analytics subsidiary, Arity. The lawsuit alleges that Arity paid app developers to incorporate its software development kit that tracked location data from over 45 million consumers in the U.S. According to the lawsuit, Arity then shared that data with Allstate and other insurers, who would use the data to justify increasing car insurance premiums. The sale of precise geolocation data of Texans violated the Texas Data Privacy and Security Act (“TDPSA”) according to the Texas Attorney General. The TDPSA requires the companies to provide notice and obtain informed consent to use the sensitive data of Texas residents, which includes precise geolocation data. The Texas Attorney General sued General Motors in August of 2024, alleging similar practices relating to the collection and sale of driver data. 
Eleventh Circuit Overturns FCC’s One-to-One Consent Rule, Upholds Broader Telemarketing Practices: In Insurance Marketing Coalition, Ltd. v. Federal Communications Commission, No. 24-10277, 2025 WL 289152 (11th Cir. Jan. 24, 2025), the Eleventh Circuit vacated the FCC’s one-to-one consent rule under the Telephone Consumer Protection Act (“TCPA”). The court found that the rule exceeded the FCC’s authority and conflicted with the statutory meaning of “prior express consent.” The court held that the rule’s requirements of separate consent for each individual seller, and of limiting calls to topics logically related to the original consent, went beyond what the statute demands. This decision allows businesses to continue using broader consent practices, maintaining shared consent agreements. The ruling emphasizes that consent should align with common-law principles rather than be restricted to a single entity. While the FCC’s next steps remain uncertain, the decision reduces compliance burdens and may challenge other TCPA regulations.
California Judge Blocks Enforcement of Social Media Addiction Law: The California Protecting Our Kids from Social Media Addiction Act (the “Act”) has been temporarily blocked. The Act was set to take effect on January 1, 2025. The law aims to prevent social media platforms from using algorithms to provide addictive content to children. Judge Edward J. Davila initially declined to block key parts of the law but agreed to pause enforcement until February 1, 2025, to allow the Ninth Circuit to review the case. NetChoice, a tech trade group, is challenging the law on First Amendment grounds. NetChoice argues that restricting minors’ access to personalized feeds violates the First Amendment. The group has appealed to the Ninth Circuit and is seeking an injunction to prevent the law from taking effect. Judge Davila’s decision recognized the “novel, difficult, and important” constitutional issues presented by the case. The law includes provisions to restrict minors’ access to personalized feeds, limit their ability to view likes and other feedback, and restrict third-party interaction.

U.S. ENFORCEMENT
FTC Settles Enforcement Action Against General Motors for Sharing Geolocation and Driving Behavior Data Without Consent: The Federal Trade Commission (“FTC”) announced a proposed order to settle FTC allegations against General Motors that it collected, used, and sold drivers’ precise geolocation data and driving behavior information from millions of vehicles without adequately notifying consumers and obtaining their affirmative consent. The FTC specifically alleged General Motors used a misleading enrollment process to get consumers to sign up for its OnStar-connected vehicle service and Smart Driver feature without proper notice or consent during that process. The information was then sold to third parties, including consumer reporting agencies, according to the FTC. As part of the settlement, General Motors will be prohibited from disclosing driver data to consumer reporting agencies, required to allow consumers to obtain and delete their data, required to obtain consent prior to collection, and required to allow consumers to limit data collected from their vehicles.
FTC Releases Proposed Order Against GoDaddy for Alleged Data Security Failures: The Federal Trade Commission (“FTC”) has announced it had reached a proposed settlement in its action against GoDaddy Inc. (“GoDaddy”) for failing to implement reasonable and appropriate security measures, which resulted in several major data breaches between 2019 and 2022. According to the FTC’s complaint, GoDaddy misled customers about its data security practices, through claims on its websites and in email and social media ads, and by representing it was in compliance with the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks. However, the FTC found that GoDaddy failed to inventory and manage assets and software updates, assess risks to its shared hosting services, adequately log and monitor security-related events, and segment its shared hosting from less secure environments. The FTC’s proposed order prohibits GoDaddy from misleading its customers about its security practices and requires GoDaddy to implement a comprehensive information security program. GoDaddy must also hire a third-party assessor to conduct biennial reviews of its information security program.
CPPA Reaches Settlements with Additional Data Brokers: Following its announcement of a public investigative sweep of data broker registration compliance, the CPPA has settled with additional data brokers PayDae, Inc. d/b/a Infillion (“Infillion”), The Data Group, LLC (“The Data Group”), and Key Marketing Advantage, LLC (“KMA”) for failing to register as a data broker and pay an annual fee as required by California’s Delete Act. Infillion will pay $54,200 for failing to register between February 1, 2024, and November 4, 2024. The Data Group will pay $46,600 for failing to register between February 1, 2024, and September 20, 2024. KMA will pay $55,800 for failing to register between February 1, 2024, and November 5, 2024. In addition to the fines, the companies have agreed to injunctive terms. The Delete Act imposes fines of $200 per day for failing to register by the deadline.
Mortgage Company Fined by State Financial Regulators for Cybersecurity Breach: Bayview Asset Management LLC and three affiliates (collectively, “Bayview”) agreed to pay a $20 million fine and improve their cybersecurity programs to settle allegations from 53 state financial regulators. The Conference of State Bank Supervisors (“CSBS”) alleged that the mortgage companies had deficient cybersecurity practices and did not fully cooperate with regulators after a 2021 data breach. The data breach compromised data for 5.8 million customers. The coordinated enforcement action was led by financial regulators in California, Maryland, North Carolina, and Washington State. The regulators said the companies’ information technology and cybersecurity practices did not meet federal or state requirements. The firms also delayed the supervisory process by withholding requested information and providing redacted documents in the initial stages of a post-breach exam. The companies also agreed to undergo independent assessments and provide three years of additional reporting to the state regulators.
SEC Reaches Settlement over Misleading Cybersecurity Disclosures: The SEC announced it has settled charges with Ashford Inc., an asset management firm, over misleading disclosures related to a cybersecurity incident. This enforcement action stemmed from a ransomware attack in September 2023, compromising over 12 terabytes of sensitive hotel customer data, including driver’s licenses and credit card numbers. Despite the breach, Ashford falsely reported in its November 2023 filings that no customer information was exposed. The SEC alleged negligence in Ashford’s disclosures, citing violations of the Securities Act of 1933 and the Exchange Act of 1934. Without admitting or denying the allegations, Ashford agreed to a $115,231 penalty and an injunction. This case highlights the critical importance of accurate cybersecurity disclosures and demonstrates the SEC’s commitment to ensuring transparency and accountability in corporate reporting.
FTC Finalizes Data Breach-Related Settlement with Marriott: The FTC has finalized its order against Marriott International, Inc. (“Marriott”) and its subsidiary Starwood Hotels & Resorts Worldwide LLC (“Starwood”). As previously reported, the FTC entered into a settlement with Marriott and Starwood for three data breaches the companies experienced between 2014 and 2020, which collectively impacted more than 344 million guest records. Under the finalized order, Marriott and Starwood are required to establish a comprehensive information security program, implement a policy to retain personal information only for as long as reasonably necessary, and establish a link on their website for U.S. customers to request deletion of their personal information associated with their email address or loyalty rewards account number. The order also requires Marriott to review loyalty rewards accounts upon customer request and restore stolen loyalty points. The companies are further prohibited from misrepresenting their information collection practices and data security measures.
New York Attorney General Settles with Auto Insurance Company over Data Breach: The New York Attorney General settled with automobile insurance company, Noblr, for a data breach the company experienced in January 2021. Noblr’s online insurance quoting tool exposed full, plaintext driver’s license numbers, including on the backend of its website and in PDFs generated when a purchase was made. The data breach impacted the personal information of more than 80,000 New Yorkers. The data breach was part of an industry-wide campaign to steal personal information (e.g., driver’s license numbers and dates of birth) from online automobile insurance quoting applications to be used to file fraudulent unemployment claims during the COVID-19 pandemic. As part of its settlement, Noblr must pay the New York Attorney General $500,000 in penalties and strengthen its data security measures such as by enhancing its web application defenses and maintaining a comprehensive information security program, data inventory, access controls (e.g., authentication procedures), and logging and monitoring systems.
FTC Alleges Video Game Maker Violated COPPA and Engaged in Deceptive Marketing Practices: The Federal Trade Commission (“FTC”) has taken action against Cognosphere Pte. Ltd and its subsidiary Cognosphere LLC, also known as HoYoverse, the developer of the game Genshin Impact (“HoYoverse”). The FTC alleges that HoYoverse violated the Children’s Online Privacy Protection Act (“COPPA”) and engaged in deceptive marketing practices. Specifically, the company is accused of unfairly marketing loot boxes to children and misleading players about the odds of winning prizes and the true cost of in-game transactions. To settle these charges, HoYoverse will pay a $20 million fine and is prohibited from allowing children under 16 to make in-game purchases without parental consent. Additionally, the company must provide an option to purchase loot boxes directly with real money and disclose loot box odds and exchange rates. HoYoverse is also required to delete personal information collected from children under 13 without parental consent. The FTC’s actions aim to protect consumers, especially children and teens, from deceptive practices related to in-game purchases.
OCR Finalizes Several Settlements for HIPAA Violations: Prior to the inauguration of President Trump, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) brought enforcement actions against four entities, USR Holdings, LLC (“USR”), Elgon Information Systems (“Elgon”), Solara Medical Supplies, LLC (“Solara”) and Northeast Surgical Group, P.C. (“NESG”), for potential violations of the Health Insurance Portability and Accountability Act’s (“HIPAA”) Security Rule due to the data breaches the entities experienced. USR reported that between August 23, 2018, and December 8, 2018, a database containing the electronic protected health information (“ePHI”) of 2,903 individuals was accessed by an unauthorized third party who was able to delete the ePHI in the database. Elgon and NESG each discovered a ransomware attack in March 2023, which affected the protected health information (“PHI”) of approximately 31,248 individuals and 15,298 individuals, respectively. Solara experienced a phishing attack that allowed an unauthorized third party to gain access to eight of Solara’s employees’ email accounts between April and June 2019, resulting in the compromise of 114,007 individuals’ ePHI. As part of their settlements, each of the entities is required to pay a fine to OCR: USR $337,750, Elgon $80,000, Solara $3,000,000, and NESG $10,000. Additionally, each of the entities is required to implement certain data security measures such as conducting a risk analysis, implementing a risk management plan, maintaining written policies and procedures to comply with HIPAA, and distributing such policies or providing training on such policies to its workforce.  
Virginia Attorney General Sues TikTok for Addictive Features and Allowing Chinese Government to Access Data: Virginia Attorney General Jason Miyares announced his office had filed a lawsuit against TikTok and ByteDance Ltd, the Chinese-based parent company of TikTok. The lawsuit alleges that TikTok was intentionally designed to be addictive for adolescent users and that the company deceived parents about TikTok content, including by claiming the app is appropriate for children over the age of 12, in violation of the Virginia Consumer Protection Act.

INTERNATIONAL LAWS & REGULATIONS
UK ICO Publishes Guidance on Pay or Consent Model: On January 23, the UK’s Information Commissioner’s Office (“ICO”) published its Guidance for Organizations Implementing or Considering Implementing Consent or Pay Models. The guidance is designed to clarify how organizations can deploy ‘consent or pay’ models in a manner that gives users meaningful control over the privacy of their information while still supporting their economic viability. The guidance addresses the requirements of applicable UK laws, including PECR and the UK GDPR, and provides extensive guidance as to how appropriate fees may be calculated and how to address imbalances of power. The guidance includes a set of factors that organizations can use to assess their consent models and includes plans to further engage with online consent management platforms, which are typically used by businesses to manage the use of essential and non-essential online trackers. Businesses with operations in the UK should carefully review their current online tracker consent management tools in light of this new guidance.
EU Commission to Pay Damages for Sending IP Address to Meta: The European General Court has ordered the European Commission to pay a German citizen, Thomas Bindl, €400 in damages for unlawfully transferring his personal data to the U.S. This decision sets a new precedent regarding EU data protection litigation. The court found that the Commission breached data protection regulations by operating a website with a “sign in with Facebook” option. This resulted in Bindl’s IP address, along with other data, being transferred to Meta without ensuring adequate safeguards were in place. The transfer happened during the transition period between the EU-U.S. Privacy Shield and the EU-U.S. Data Protection Framework. The court determined that this left Bindl in a position of uncertainty about how his data was being processed. The ruling is significant because it recognizes “intrinsic harm” and may pave the way for large-scale collective redress actions.
European Data Protection Board Releases AI Bias Assessment and Data Subject Rights Tools: The European Data Protection Board (“EDPB”) released two AI tools as part of the “AI: Complex Algorithms and Effective Data Protection Supervision” project. The EDPB launched the project in the context of the Support Pool of Experts program at the request of the German Federal Data Protection Authority. The Support Pool of Experts program aims to help data protection authorities increase their enforcement capacity by developing common tools and giving them access to a wide pool of experts. The new documents address best practices for bias evaluation and the effective implementation of data subject rights, specifically the rights to rectification and erasure when AI systems have been developed with personal data.
European Data Protection Board Adopts New Guidelines on Pseudonymization: The EDPB released new guidelines on pseudonymization for public consultation (the “Guidelines”). Although pseudonymized data still constitutes personal data under the GDPR, pseudonymization can reduce the risks to the data subjects by preventing the attribution of personal data to natural persons in the course of the processing of the data, and in the event of unauthorized access or use. In certain circumstances, the risk reduction resulting from pseudonymization may enable controllers to rely on legitimate interests as the legal basis for processing personal data under the GDPR, provided they meet the other requirements, or help guarantee an essentially equivalent level of protection for data they intend to export. The Guidelines provide real-world examples illustrating the use of pseudonymization in various scenarios, such as internal analysis, external analysis, and research.
CJEU Issues Ruling on Excessive Data Subject Requests: On January 9, the Court of Justice of the European Union (“CJEU”) issued its ruling in the case Österreichische Datenschutzbehörde (C‑416/23). The primary question before the Court was when a European data protection authority may deny consumer requests due to their excessive nature. Rather than specifying an arbitrary numerical threshold of requests received, the CJEU found that authorities must consider the relevant facts to determine whether the individual submitting the request has “an abusive intention.” While the number of requests submitted may be a factor in determining this intention, it is not the only factor. Additionally, the CJEU emphasized that Data Protection Authorities should strongly consider charging a “reasonable fee” for handling requests they suspect may be excessive prior to simply denying them.
Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganz, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin contributed to this article.

Illinois Employers: Navigating New E-Verify and I-9 Compliance Requirements

Effective January 1, 2025, Illinois employers face updated regulations under Public Act 103-0879, altering the landscape of E-Verify and Form I-9 compliance. This law applies to companies located in Illinois and any employer with employees working in Illinois, regardless of where the company is headquartered. It does not extend to employees working outside of Illinois for an Illinois-based company. The Illinois Department of Labor has clarified that the law does not prohibit private employers from using E-Verify. However, it does reaffirm current federal E-Verify requirements and impose several additional obligations to protect workers and ensure fair practices.
Key Compliance Updates:
Expanded Employee Protections:

Employers must notify employees when their work eligibility or documentation is questioned.
Employers are required to inform the entire workforce in the event of a federal I-9 audit.

Prohibited Practices:

E-Verify cannot be used to prescreen applicants.
Employers cannot act on tentative non-confirmations without following federal procedures.
If an employer receives a tentative non-confirmation notification, it must follow specific state and federal guidelines to ensure fair treatment and due process for all employees.

State Penalties for Non-Compliance:

Violations of the law may lead to penalties, emphasizing the importance of strict adherence.

Conclusion:
Contrary to some misconceptions, the law does not prevent private employers from using E-Verify. Rather, it regulates its use to uphold anti-discrimination policies, safeguard worker rights, and protect businesses. A number of the law’s provisions mirror those that have always been part of the E-Verify program, but employers (especially those in Illinois or with employees working in Illinois) should review their practices to ensure compliance with the state and federal requirements.

Corporate Transparency Act Recent Update

As previously reported, in early December, the District Court for the Northern District of Texas issued a nationwide injunction against the enforcement of the CTA [1]. The government quickly appealed. Just a few weeks later, on December 23, 2024, the Fifth Circuit Court of Appeals granted the government’s emergency motion to stay the nationwide injunction — effectively lifting the injunction and allowing the enforcement of the CTA to proceed. Given there was a January 1, 2025, deadline for millions of small business owners to file, FinCEN graciously decided to extend the filing deadline to January 13, 2025.
Then, just three days later, on December 26, 2024, in a short, one-page order, a different panel of judges from the same Fifth Circuit Court of Appeals reinstated the injunction, again placing the CTA and its enforcement provisions on hold. The government again quickly responded, petitioning the U.S. Supreme Court to lift the injunction. On January 23, 2025, the Supreme Court did precisely that — granting the government’s motion. The Supreme Court’s order, however, only applied to the injunction issued by the federal judge in Texas. Since a separate nationwide order issued by a different federal judge in Texas [2] was still in place, FinCEN posted a new update to its website one day later, stating:
“Reporting companies are not currently required to file beneficial ownership information with FinCEN despite the Supreme Court’s action in Texas Top Cop Shop. Reporting companies also are not subject to liability if they fail to file this information while the Smith order remains in force. However, reporting companies may continue to voluntarily submit beneficial ownership information reports.”[3]
Opinions vary regarding whether reporting companies should file voluntarily. At the very least, reporting companies should be prepared to file quickly if and when the “red light” turns green once again. In the meantime, we continue to watch for any additional rulings. To stay up to date, please check our website regularly or contact a member of our Corporate Transparency Team for advice.
[1] Texas Top Cop Shop, Inc. v. McHenry
[2] Smith v. U.S. Department of the Treasury
[3] https://www.fincen.gov/boi (last accessed February 3, 2025)

False Claims Act Exposure in Focus: President Trump Signs Executive Order Targeting DEI Programs

On January 21, 2025, President Trump issued an executive order titled “Ending Illegal Discrimination and Restoring Merit-Based Opportunity” (the “EO”), which aims to eliminate diversity, equity, and inclusion (DEI) policies and programs across the federal government and within companies that do business with the federal government.
Importantly, the EO revokes Executive Order 11246, which, since 1965, has mandated affirmative action in employment from government contractors and required implementation of affirmative action programs.[i]
Federal contractors and grant recipients have until April 21, 2025 (90 days from the issuance of the EO) to comply with the EO’s provisions. 
Below, we summarize the False Claims Act (FCA) implications of the EO.[ii] Briefly stated, federal contractors and grant recipients, including certain health care organizations, should pay close attention to the EO’s required certifications since they directly tie to potential FCA liability premised on false certification of compliance with the federal anti-discrimination laws.
Key Provisions of the EO

Directs that federal contractors “shall not consider race, color, sex, sexual preference, religion, or national origin in ways that violate the Nation’s civil rights laws.”
Instructs the Director of the Office of Management and Budget to (1) review and revise, as appropriate, all governmentwide processes, directives, and guidance; (2) remove references to DEI and diversity, equity, inclusion, and accessibility (DEIA) from federal acquisition, contracting, grants, and financial assistance procedures; and (3) terminate all “diversity,” “equity,” and analogous mandates, requirements, programs, or activities, as appropriate.
Directs the head of each agency to include “in every contract or grant award” a (1) “term requiring the contractual counterparty or grant recipient to agree that its compliance in all respects with all applicable Federal anti-discrimination laws is material to the government’s payment decisions for purposes of [the FCA]” and (2) “to certify that it does not operate any programs promoting DEI that violate any applicable Federal anti-discrimination laws.”
Instructs the Attorney General, within 120 days of the EO (by May 21, 2025), in consultation with other agency heads, to submit a report containing a “proposed strategic enforcement plan” that outlines, among other things, “the most egregious and discriminatory DEI practitioners in each sector of concern” and “specific steps or measures to deter DEI programs or principles … that constitute illegal discrimination or preferences.”

Pertinent FCA Background
Unlike other federal laws that are enforceable only by the federal government, the FCA is unique in that it also allows private whistleblowers, known as relators, to file qui tam actions on behalf of the government in exchange for a share of the recovery (ranging between 15 and 30 percent of the recovery). The FCA imposes mandatory per-claim statutory penalties that are adjusted annually (currently ranging from $13,946 to $27,894 for each false claim) as well as treble damages.
There are a variety of actionable theories under the FCA beyond the scenario where a company bills the government for products or services that were never provided. One such theory, known as “false certification,” occurs when a party certifies compliance with a required contractual provision, statute, regulation, or governmental program in connection with the submission of a claim.
In false certification cases, noncompliance with applicable legal requirements must be “material” to the government’s payment decision. Materiality is often a contested, focal issue in FCA cases. The U.S. Supreme Court clarified in Universal Health Services, Inc. v. U.S. ex rel. Escobar that the materiality standard is “rigorous” and “demanding” because the FCA is not “a vehicle for punishing garden-variety breaches of contract or regulatory violations.”[iii]
FCA Implications
The mandates set forth in the EO will require a clause in all contracts and grant awards with the federal government where the contractor or grant recipient certifies that it does not have any programs promoting DEI that violate any applicable federal anti-discrimination laws and acknowledges that such compliance is material to the government’s payment decision.
With the new certification and materiality requirements, whistleblowers are likely to be further incentivized to bring FCA actions on the belief that it may be easier to prove a violation. It is unclear how that will play out in the courts. For example, while the EO will require that contracts and grant awards contain a clause stating that compliance with the federal anti-discrimination laws is “material” to the government’s payment decision, that does not end the materiality inquiry. The U.S. Supreme Court in Escobar noted how “the Government’s decision to expressly identify a provision as a condition of payment is relevant, but not automatically dispositive.”[iv]  
Additionally, it remains to be seen how uniformly courts will apply the “rigorous” and “demanding” materiality standard in FCA cases predicated on DEI programs while adhering to Escobar’s direction that “the False Claims Act is not a means of imposing treble damages and other penalties for insignificant regulatory or contractual violations.”[v] Indeed, federal contractors, particularly certain health care organizations, that submit many claims to the federal government could face billions of dollars in potential exposure—largely due to the FCA’s per-claim penalties—stemming from a particular program that was indisputably lawful prior to the second Trump administration and unrelated to the nature of the contracted items or services.
While it is not clear precisely which specific DEI/DEIA programs or initiatives would be prohibited, the Trump administration’s position is clear that contractors or grant recipients found to have submitted requests for payment while maintaining unlawful DEI programs could be subject to significant FCA liability.
Best Practices for Mitigating FCA Risk 

DEI and DEIA initiatives, including policies, programs, and plans, should be promptly and carefully evaluated to determine whether they may violate federal anti-discrimination laws, as federal contractors and grant recipients will need to certify compliance with those laws. Remedial measures should be promptly implemented, as appropriate, to the extent any initiatives are likely to violate federal anti-discrimination laws.
Companies should monitor agency publications for guidance on which initiatives remain permissible under the EO. Courts are also expected to play an important role in clarifying the reach of the anti-discrimination laws, especially following the Supreme Court’s recent decision in Loper Bright Enterprises v. Raimondo, where it held that “agency interpretations of statutes—like agency interpretations of the Constitution—are not entitled to deference.”[vi] This is especially true here, where the new EO interpretation of DEI activities as unlawful is a radical shift from the Biden administration’s position as expressed in both guidance and regulations.
Documentation of compliance with anti-discrimination laws is essential. Records reflecting policy reviews, trainings, and remedial program changes, as appropriate, will be critical in the event of a government investigation or whistleblower claim.
Because the FCA’s anti-retaliation provisions prohibit adverse employment actions against employees for engaging in protected activity, which could include investigating perceived violations of the FCA stemming from unlawful DEI programs, anti-retaliation compliance protocols and training programs to address this heightened whistleblower risk are recommended.
While the EO is not binding on private-sector organizations that do not contract or do business with the federal government, the EO is still valuable insofar as it shows the Trump administration’s view that various DEI programs and policies may be considered illegal under the anti-discrimination laws.
Private-sector organizations should promptly review any DEI/DEIA plans, programs, and policies, as well as their affirmative action programs, to determine whether they contain any aspects that could be deemed unlawful under Title VII of the Civil Rights Act of 1964 or any other federal, state, or local civil rights law, and consider whether to take any action to modify such plans, programs, or policies, including the names of such plans, programs, or policies.

ENDNOTES
[i] Exec. Order 11246, 3 C.F.R. § 339 (1964–1965).
[ii] Members of our labor and employment team have prepared an employment law-focused analysis of the EO in this blog post.
[iii] See 579 U.S. 176, 194 (2016). More information on materiality and how courts have grappled with Escobar over the years is available in our prior blog post.
[iv] Id. at 178.
[v] Id. at 196.
[vi] See 603 U.S. 369, 392 (2024).

In First Major Speech as Acting CFTC Chairman, Pham Describes Policy Priorities

In a fireside chat at the ABA Futures & Derivatives Law Committee Winter Meeting on January 30th, Acting Chairman Caroline Pham shared her industry-friendly agenda for the Commodity Futures Trading Commission (CFTC). She described specific goals for the agency in the coming months (discussed below) and encouraged listeners to read her past dissenting statements for even more context about her priorities.
Back to Basics
To achieve her first priority—back to basics—Pham will hold weekly senior staff meetings and focus on “the 3 Ms” when considering an initiative:

Mission: Does it serve the CFTC’s mission?
Markets: Does it serve the markets?
Mindset: Is the CFTC looking at the initiative with a growth mindset?

Reorganizing the CFTC
Years ago, one division was responsible for overseeing clearing houses and intermediaries. After Dodd-Frank, this division was split into two, and Pham is now proposing to reunite them (with futures commission merchants (FCMs) and derivatives clearing organizations (DCOs) under shared oversight). She also contemplated the creation of a Division of Examinations, which would engage with registrants on issues of non-compliance and remediation, rather than the Division of Enforcement.
Industry Roundtables
Industry engagement was a clear priority for Pham, and she announced that the CFTC will host roundtables on three key areas of interest: (1) Digital Assets; (2) Affiliation and Conflicts of Interest; and (3) Prediction Markets. Pham clarified that she does not intend to act on digital assets until receiving direction from President Trump.
Enforcement Priorities
Acting Chair Pham stated unequivocally that her top enforcement priority will be fraud and misconduct that leads to actual consumer harm, where there is a reasonable probability of making consumers whole again. In a more recent statement, she announced the creation of two new task forces: one on Complex Fraud, and one on Retail Fraud and General Enforcement, reflecting the agency’s new priorities and replacing the prior task forces.

Looking Back at the False Claims Act in 2024 as the Government Keeps its Sights on Cybersecurity in 2025

In 2024, the government and whistleblowers were party to 558 settlements and judgments collecting over $2.9 billion. The government continued its effort to combat cybersecurity threats through its Civil Cyber-Fraud Initiative, which is dedicated to using the FCA to ensure that federal contractors and grantees are compliant with cybersecurity requirements. Settlements in 2024 included allegations against companies for failure to provide secure systems to customers, failure to provide secure hosting of personal information, and failure to properly maintain, patch, and update software systems. The Justice Department has made clear that cybersecurity is one of its key enforcement priorities in 2025 and moving forward, meaning all federal contractors must be particularly mindful of federal cybersecurity requirements. To keep you apprised of the current enforcement trends and the status of the law, Bradley’s Government Enforcement & Investigations Practice Group is pleased to present the False Claims Act: 2024 Year in Review, our 13th annual review of significant FCA cases, developments, and trends.
 

 

The False Claims Act in 2024: A Government Enforcement Update

This past year, the False Claims Act (FCA) continued to be a key tool for the Justice Department and whistleblowers to bring suits against companies, including those in the financial services sector. The Justice Department secured 558 FCA settlements and judgments and collected $2.9 billion in fiscal year 2024. Whistleblowers were responsible for 979 qui tam suits — a record number — and collected over $400 million for filing actions to expose fraud and false claims. With a constant focus on FCA enforcement, the risk to corporations of huge financial penalties under the FCA remains. Companies in the financial services sector must continue to take the necessary steps to prevent FCA violations and be particularly mindful of potential whistleblowers who stand to have significant paydays in the event of a successful FCA claim. To keep you apprised of the current enforcement trends and the status of the law, Bradley’s Government Enforcement & Investigations Practice Group is pleased to present the False Claims Act: 2024 Year in Review, our 13th annual review of significant FCA cases, developments, and trends.

 

False Claims Act: 2024 Year in Review

In 2024, the government and whistleblowers were party to 558 False Claims Act (“FCA”) settlements and judgments, just slightly fewer cases than last year’s record. As a result, collections under the FCA exceeded $2.9 billion, confirming that the FCA remains one of the government’s most important tools to root out fraud, safeguard government programs, and ensure that public funds are used appropriately. As in recent years, the healthcare industry was the primary focus of FCA enforcement, with over $1.67 billion recovered from matters involving managed care providers, hospitals, pharmacies, physicians, laboratories, and long-term acute care facilities. Other areas of focus in 2024 were government procurement fraud, pandemic fraud, and enforcement through the government’s Cyber-Fraud Initiative.
To keep you apprised of the current enforcement trends and the status of the law, Bradley’s Government Enforcement and Investigations Practice Group is pleased to present the False Claims Act: 2024 Year in Review, our thirteenth annual review of significant FCA cases, developments, and trends.
 
Giovanni P. Giarratana, Gregory G. Marshall, Jack W. Selden, Erin K. Sullivan, Rico Falsone, Lyndsay E. Medlin, Tara S. Sarosiek, Anna M. Lashley, Ocasha O. Musah, Brianna Rhymes, and Virginia C. Wright contributed to this article.

More Whistleblower Suits Filed Than Ever Before: The False Claims Act in 2024

As in recent years, the False Claims Act (FCA) continued to serve as a tool utilized by the federal government against government contractors in 2024. The government collected more than $2.9 billion as a result of 558 FCA settlements and judgments. Although procurement fraud was not as large a driver of the government’s recoveries as it has been in prior years, matters involving the military’s purchase of goods and services, including allegations related to the procurement process, noncompliance with contract requirements, and the payment of kickbacks, have been and will continue to be a significant concern for the government. In addition, the government’s effort to root out COVID-19-related fraud resulted in more than 250 FCA settlements and judgments, with the government collecting more than $250 million. To keep you apprised of the current enforcement trends and the status of the law, Bradley’s Government Enforcement & Investigations Practice Group is pleased to present the False Claims Act: 2024 Year in Review, our 13th annual review of significant FCA cases, developments, and trends.
 

 

Netflix Content Becomes Federal Evidence: EDNY’s OneTaste Prosecution Faces Scrutiny Amid DOJ Transition

Recent developments in the Eastern District of New York’s prosecution of wellness company OneTaste in U.S. v. Cherwitz have raised novel questions about the intersection of streaming content and criminal evidence.1 Defense motions filed in December 2024 and January 2025 challenge the government’s use of journal entries originally created for a Netflix documentary as key evidence in its forced labor conspiracy case. These developments come amid a sea change in DOJ priorities at the start of a new presidential administration.
After a five-year investigation, EDNY prosecutors in April 2023 filed a single-count charge of forced labor conspiracy against OneTaste founder Nicole Daedone and former sales leader Rachel Cherwitz. The government alleges the conspiracy unfolded over a fourteen-year span but, in a prosecutorial first, did not charge a substantive crime. Over the course of the prosecution, the defendants filed repeated motions asking the court to order the government to specify the nature of the offense. Most recently, Celia Cohen, newly appointed defense counsel for Rachel Cherwitz, highlighted the case’s unusual nature in a January 18 motion: “The government has charged one count of a forced labor conspiracy…without providing any critical details about the force that occurred and how it specifically induced any labor.”
Recent defense filings have brought mounting scrutiny to the authenticity of journal entries attributed to key government witness Ayries Blanck. Prosecutors had moved in October 2024 for the court to admit the journal entries as evidence at trial in their case-in-chief. In a December 30 motion, Jennifer Bonjean, defense counsel for Nicole Daedone, revealed that civil discovery had exposed that the journal entries presented by the government as contemporaneous accounts from 2015 were actually created and extensively edited for Netflix’s 2022 documentary about OneTaste, “Orgasm Inc.”
“Through metadata and edit histories, we can watch entertainment become evidence,” Bonjean argued in her motion. Technical analysis from a court-ordered expert showed the entries underwent hundreds of revisions by multiple authors, including Netflix production staff, before being finalized in March 2023 – just days before a sealed indictment was filed against defendants Cherwitz and Daedone. The defense has argued that this Netflix content was presented to the grand jury to secure an indictment.
The government’s handling of these journal entries took a dramatic turn during a January 23 meet-and-confer session. After defense counsel challenged the authenticity of handwritten journals matching the Netflix content, prosecutors abruptly withdrew them from their case-in-chief. Although prosecutors maintained the journals’ legitimacy, their retreat from evidence previously characterized as central to the case prompted new defense challenges.
“This prosecution is a house of cards,” argued defense counsel Celia Cohen and Michael Roboti of Ballard Spahr in a January 24 motion to dismiss. Cohen and Roboti, who joined Rachel Cherwitz’s defense team earlier this month, highlighted how the government’s withdrawal of the handwritten journals “exemplifies the serious problems with this prosecution.” Their motion notes that defense witnesses in a parallel civil case have exposed government witnesses as “perjurers” who “have received significant benefits from the government and from telling their ‘stories’ in the media.”
The matter came to a head during a January 24 hearing before Judge Diane Gujarati, who had previously denied prosecutors’ request to grant anonymity to ten potential witnesses. When Cohen attempted to address unresolved issues regarding the journals, she was sharply rebuked by the court, which had indicated it would not address the new filing during the scheduled hearing. Gujarati stated that she did not intend to schedule any further conferences before trial. The trial is scheduled for May 5, 2025.
The case’s challenges coincide with significant changes at DOJ and EDNY under the new Trump administration. EDNY Long Island Division Criminal Chief John J. Durham was sworn in as Interim U.S. Attorney for EDNY on January 21, following former U.S. Attorney Breon Peace’s January 10 resignation. Peace spearheaded the OneTaste prosecution. Durham will serve until the Senate confirms President Trump’s nominee, Nassau County District Court Judge Joseph Nocella Jr.
The timing is particularly significant given President Trump’s January 20 executive order “Ending The Weaponization of The Federal Government.” The order specifically cites the EDNY prosecution of Douglass Mackey as an example of “third-world weaponization of prosecutorial power.” This reference carries special weight as EDNY deployed similar strategies in both the Mackey and Cherwitz cases – single conspiracy charges without substantive crimes, supported by media narratives rather than traditional evidence.
As Durham takes the helm at EDNY, this case presents an early test of how the office will handle prosecutions that blend entertainment with evidence, and whether novel theories of conspiracy without specified crimes will survive increased scrutiny under new leadership. The transformation of Netflix content into federal evidence may face particular challenges as the Attorney General reviews law enforcement activities of the prior four years under the new executive order’s mandate.
The government’s position faces further scrutiny as mainstream media begins to question its narrative. A January 24 Wall Street Journal profile by veteran legal reporter Corinne Ramey presents Daedone as a complex figure whose supporters call her a “visionary,” while examining the unusual nature of prosecuting wellness education as forced labor. The piece’s headline – “She Made Orgasmic Meditation Her Life. Not Even Prison Will Stop Her” – captures both the prosecution’s gravity and Daedone’s unwavering commitment to her work despite federal charges.

1 U.S. v. Cherwitz, et al., No. 23-cr-146 (DG).

Human Trafficking Monitoring for Telehealth Providers

Overview: Telehealth providers are uniquely positioned to monitor for human trafficking when interacting with patients. Survivor records indicate that health services are among the most common points of access to help trafficked persons, and nearly 70% of human trafficking survivors report having had access to health services at some point during their exploitation. While there is limited data regarding trafficked persons’ use of telehealth services, empirical evidence demonstrates that a greater proportion of trafficked persons completed telehealth appointments during the early period of the COVID-19 pandemic than pre-pandemic. To enable telehealth providers to assist trafficked patients, this article discusses the legal landscape surrounding human trafficking and lays out best practices for telehealth providers.
Background: Telehealth providers are subject to a patchwork of legal requirements aimed at reducing human trafficking. If the patient is under the age of 18 or is disabled, many states require telehealth providers to report instances in which they know or reasonably believe the patient has experienced or is experiencing abuse, mistreatment, or neglect. Some states, such as Florida and New Jersey, also require that telehealth providers partake in anti-trafficking education.
Online platforms that offer telehealth services are also subject to federal legislation regarding sex trafficking monitoring. In 2018, US Congress passed the Allow States and Victims to Fight Online Trafficking Act of 2017 (FOSTA). The law was enacted primarily in response to unsuccessful litigation against Backpage.com, a website accused of permitting and even assisting users in posting advertisements for sex trafficking. Before FOSTA’s enactment, Section 230 of the Communications Decency Act essentially shielded online platforms from liability for such conduct. FOSTA, however, effectively created an exception to Section 230 by establishing criminal penalties for those who promote or facilitate sex trafficking through their control of online platforms. These penalties, generally limited to a fine, imprisonment of up to 10 years, or both, may be heightened for aggravated violations, which are violations involving reckless disregard of sex trafficking or the promotion or facilitation of prostitution of five or more people. State attorneys general and, in cases of aggravated violations, injured persons also may bring civil actions against those who control online platforms in violation of the law.
Since FOSTA’s enactment, the US Department of Justice (DOJ) has brought at least one criminal charge under the law. In 2021, after being charged by DOJ, the owner of the online platform CityXGuide.com pleaded guilty to one count of promotion of prostitution and reckless disregard of sex trafficking, a violation of FOSTA’s aggravated violations provision. According to DOJ officials, additional charges have not been brought under FOSTA because the law is relatively new and because federal prosecutors have had success pursuing those who control online platforms through racketeering and money laundering charges. Nonetheless, it is possible that prosecutors will pursue FOSTA violations more regularly during the Trump administration, particularly because US President Donald Trump signed the law during his first term in office, calling it “crucial legislation.”
Best Practices for Telehealth Providers
Telehealth providers and online platforms that offer telehealth services should consider adhering to the following best practices when monitoring for human trafficking:

Complete Anti-Trafficking Training. Telehealth providers should complete an anti-trafficking training or educational program on a regular basis, regardless of whether they are legally required to do so. One such program is the US Department of Health and Human Services’ Stop, Observe, Ask, and Respond (SOAR) to Health and Wellness Training program. Telehealth providers may attend SOAR trainings in person or online and, depending on the program, may receive continuing education credit for their participation.
Implement a Referral Network. Before screening patients, telehealth providers should prepare a comprehensive referral list with detailed procedures for assisting identified individuals who have been trafficked or are vulnerable to trafficking. Referral lists should help patients access services that meet their immediate, intermediate, and long-term needs, and should include information about how to connect with both national and local anti-trafficking resources.
Be Aware of Indicators of Human Trafficking for Adults. The National Human Trafficking Training and Technical Assistance Center has developed indicators of adult human trafficking. Indicators that may arise during a telehealth visit include instances where:

The patient is not in control of personal identification or does not have valid identification as part of the visit
The patient does not know where they live (or their geolocation does not match their stated location)
The patient’s story does not make sense or seems scripted
The patient seems afraid to answer questions
The patient appears to be looking at an unidentified person offscreen after speaking
The patient’s video background suggests an unusual living or work space (e.g., tinted windows, security cameras, barbed wire, or people sleeping or living at a worksite)
The patient exhibits or indicates signs of physical abuse, drug or alcohol misuse, or malnourishment.

Be Aware of Indicators of Human Trafficking for Children. Indicators of human trafficking for children often differ from those for adults. The National Center for Missing & Exploited Children (NCMEC) has issued a list of risk factors useful for identifying possible indicators of child sex trafficking. Although NCMEC cautions that no single indicator can accurately identify a child as a sex trafficking victim, the presence of multiple factors increases the likelihood of identifying victims. Indicators that may arise during a telehealth visit include the following:

The child avoids answering questions or lets others speak for them
The child lies about their age or identity, or otherwise responds to the provider in a manner that does not align with their telehealth profile or account information
The child appears to be looking at an unidentified person offscreen after speaking
The child uses prostitution-related terms, such as “daddy,” “the life,” and “the game”
The child has no identification (or their identification is held by another person)
The child displays evidence of travel in their video background (living out of suitcases, at motels, or in a car)
The child references traveling to cities or states that do not match their geolocation
The child has numerous unaddressed medical issues.