Ninth Circuit Upholds DFPI’s Commercial Financing Disclosure Rules
On September 30, 2018, California enacted SB 1235, codified at California Financial Code sections 22800–22805. See California Will Soon Require Novel Disclosure Requirements For Providers Of Commercial Financings. SB 1235 requires that an offer of commercial financing for $500,000 or less be accompanied by disclosures of: (1) the amount of funds provided, (2) the total dollar cost of the financing, (3) the term or estimated term, (4) the method, frequency, and amount of payments, (5) a description of prepayment policies, and (6) the total cost of the financing expressed as an annualized rate. Cal. Fin. Code §§ 22802(b) & 22803(a). Four years after SB 1235 was enacted, the Office of Administrative Law approved regulations implementing the disclosure requirements, 10 CCR § 900 et seq. See OAL Approves DFPI Commercial Financing Disclosure Rules – But Who Got Stuck With The Check? A few months later, the Small Business Finance Association filed a complaint challenging the validity of those regulations as unconstitutional compelled commercial speech. Small Bus. Finance Ass’n v. Hewlett, 2023 WL 8711078 (C.D. Cal. Dec. 4, 2023).
In December 2023, U.S. District Court Judge R. Gary Klausner granted the Department of Financial Protection & Innovation’s motion for summary judgment, finding that the regulations do not violate the First Amendment under the Supreme Court’s test for compelled commercial speech established in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985). In an unpublished decision last week, the Ninth Circuit Court of Appeals affirmed Judge Klausner’s ruling. Small Bus. Finance Ass’n v. Mohseni, 2025 WL 1111493 (9th Cir. Apr. 15, 2025).
The UK’s Failure To Prevent Fraud Act
On September 1, 2025, the UK’s Failure to Prevent Fraud offense will go into effect as part of the UK’s Economic Crime and Corporate Transparency Act 2023 (the ECCTA). The law significantly expands corporate liability for fraud committed by employees and other associated persons of in-scope companies and will require compliance refinements for any business within the scope of the offense operating in connection with the UK. The UK government’s Home Office published guidance in 2024 (the “Guidance”) to help companies navigate this corporate criminal fraud offense and take appropriate action to help prevent fraud.
As companies continue to grapple with recent developments regarding enforcement of the FCPA, international efforts to curb bribery and corruption have not waned. Foreign governments continue to prioritize anti-corruption enforcement, as reflected in the European Commission’s May 2023 proposed directive to combat corruption, the ECCTA and its Failure to Prevent Fraud offense, and the recently announced International Anti-Corruption Prosecutorial Task Force among the UK, France, and Switzerland. These cross-border initiatives demonstrate why a temporary pause in U.S. enforcement of the FCPA should not lead companies to move away from maintaining robust and effective compliance programs.
The Failure to Prevent Fraud Offense
You can see more detail on the new offense in this article from our UK colleagues (Failure to prevent fraud: get ready for September | Womble Bond Dickinson). In summary, a “large organization” can be held criminally liable where an employee, agent, subsidiary, or other “associated person” commits a fraud offense intending to benefit the organization or its clients and the organization failed to have reasonable fraud prevention procedures in place. Employees, agents, and subsidiaries are “associated persons,” as are business partners and smaller organizations that provide services for or on behalf of large organizations. The underlying fraud offense encompasses a range of existing offenses under fraud, theft, and corporate law, which the UK’s Home Office describes as including “dishonest sales practices, the hiding of important information from consumers or investors, or dishonest practices in financial markets.”
A “large organization” for purposes of the fraud offense is one that meets at least two of the following three thresholds: (1) more than 250 employees; (2) more than £36 million (approximately US$47.6 million) in turnover; or (3) more than £18 million (approximately US$23.8 million) in total assets. The definition includes groups where the resources across the group meet the thresholds. Further, the fraud offense has extraterritorial reach, meaning that non-UK companies may be liable if there is a UK nexus. This could play out in several scenarios: for example, if the fraud took place in the UK or the gain or loss occurred in the UK, or if a UK-based employee commits fraud, the employing organization could be prosecuted regardless of where it is based.
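To make the scope test concrete, here is a minimal sketch of the two-of-three threshold check, using the statutory GBP figures quoted above; the function name and example numbers are our own illustration, not anything prescribed by the ECCTA.

```python
# Minimal sketch of the ECCTA "large organization" two-of-three test,
# using the statutory thresholds described above. Illustrative only.

def is_large_organization(employees: int,
                          turnover_gbp: float,
                          total_assets_gbp: float) -> bool:
    """Return True if at least two of the three thresholds are met."""
    criteria_met = sum([
        employees > 250,
        turnover_gbp > 36_000_000,      # more than GBP 36 million turnover
        total_assets_gbp > 18_000_000,  # more than GBP 18 million in assets
    ])
    return criteria_met >= 2

# Example: 300 employees and GBP 40M turnover but only GBP 10M in assets
# still meets two of the three thresholds, so the organization is in scope.
assert is_large_organization(300, 40_000_000, 10_000_000)
assert not is_large_organization(100, 40_000_000, 10_000_000)
```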
What Companies Can Do Now
The Failure to Prevent Fraud offense is an important consideration in corporate compliance, extending beyond UK-based companies to non-UK companies with operations or connections in the UK. The only available defense is for the company to demonstrate that it “had reasonable fraud prevention measures in place at the time that the fraud was committed” or, more riskily, that it was not reasonable under the circumstances to expect the organization to have any prevention procedures in place. To that end, the Guidance outlines six core principles that should underpin any effective fraud prevention framework: (1) top-level commitment; (2) risk assessment; (3) proportionate and risk-based procedures; (4) due diligence; (5) communication and training; and (6) ongoing monitoring and review. Notably, the Guidance makes clear that even “strict compliance” with its terms will not be a “safe harbor” and that failure to conduct a risk assessment will “rarely be considered reasonable.” These principles mirror the now well-established principles that apply to the UK offenses of failure to prevent bribery under the UK Bribery Act 2010 and failure to prevent the facilitation of tax evasion under the UK Criminal Finances Act 2017.
Companies should consider the following proactive steps:
Determining whether they fall within the scope of the ECCTA’s fraud offense.
Identifying individuals who qualify as “associated persons.”
Conducting and documenting a comprehensive fraud risk assessment to determine whether the company’s internal controls adequately address potential fraudulent activity involving the company.
Ensuring due diligence procedures (for example, those governing external commercial partner engagements and other transactions) address the risk of fraud in those higher-risk activities.
Reviewing and updating existing policies and procedures to address the risks of fraud.
Communicating the company’s requirements around preventing fraud and providing targeted training to employees and other associated persons, including subsidiaries and business partners, to make clear the company’s expectations around managing the risk of fraud.
Establishing fraud related monitoring and audit protocols, including in relation to third party engagements, for ongoing oversight and periodic review.
Ensuring these policies and procedures are aligned with other financial crime prevention policies and procedures and relevant regulatory expectations.
The months ahead are a critical window to align internal policies and procedures not only with the UK’s elevated enforcement expectations, as evidenced by the ECCTA and the Failure to Prevent Fraud offense, but also with those of other foreign regulators, for whom bribery and corruption remain a mainstay priority. Companies should continue to prioritize the design, implementation, and assessment of their internal compliance controls. A well-designed and effective compliance program will leave a company better equipped to adapt as regulatory landscapes shift and emerging risks develop, and to respond more efficiently to new enforcement trends.
State Privacy Enforcement Updates: CPPA Extracts Civil Penalties in Landmark Case; State Regulators Form Consortium for Privacy Enforcement Collaboration

Companies in all industries take note: regulators are scrutinizing how companies offer and manage privacy rights requests, including whether they apply the proper verification standards and how they manage cookies, and are looking into the nature of vendor processing connected to those requests. Last month, the California Privacy Protection Agency (“CPPA” or “Agency”) provided yet another example of this regulatory focus in a March 2025 Stipulated Final Order (“Order”) against a global vehicle manufacturer (referred to throughout this blog as “the Company”). We discuss the case in detail and provide practical takeaways below.
On the heels of the CPPA’s landmark case against the Company, various state AGs and the CPPA announced a formal agreement to promote collaboration and information sharing in the bipartisan effort to safeguard the privacy rights of consumers. The announcement by Attorney General Bonta of California can be found here. The consortium includes the CPPA and the State Attorneys General of California, Colorado, Connecticut, Delaware, Indiana, New Jersey and Oregon. According to an announcement by the CPPA, the participating regulators established the consortium to share expertise and resources and to coordinate in investigating potential violations of their respective privacy laws. With the establishment of a formal enforcement consortium, we can expect cross-jurisdictional collaboration on privacy enforcement by the participating states’ regulators. On the plus side, perhaps we will see the promotion of consistent interpretation of these seven states’ laws, which make up almost a third of the current patchwork of U.S. privacy legislation.
CPPA Case – Detailed Summary
In the case against the Company, the CPPA alleged that the Company violated the California Consumer Privacy Act (“CCPA”) by:
requiring Californians to verify themselves where verification is not required or permitted (the right to opt-out of sale/sharing and the right to limit) and provide excessive personal information to exercise privacy rights subject to verification (know, delete, correct);
using an online cookie management tool (often known as a CMP) that failed to offer Californians their privacy choices in a symmetrical or equal way and was confusing;
requiring Californians to verify that they gave their agents authority to make opt-out of sale/sharing and right to limit requests on their behalf; and
sharing consumers’ personal information with vendors, including ad tech companies, without having in place contracts that contain the necessary terms to protect privacy in connection with their role as either a service provider, contractor or third party.
This Order illustrates the potential fines and financial risks associated with non-compliance with state privacy laws. Of the $632,500 administrative fine lodged against the Company, the Agency spelled out that $382,500 accounts for 153 violations – $2,500 per violation – alleged to have occurred in the Company’s consumer privacy rights processing between July 1 and September 23, 2023. It is worth emphasizing that the Agency lodged the maximum administrative fine available to it for non-intentional violations – “up to two thousand five hundred ($2,500)” – for each incident in which verification standards were wrongly applied to consumer opt-out/limit requests. It is unclear what the remaining $250,000 in fines was attributed to, but it presumably covers the other violations alleged in the Order, such as disclosing PI to third parties without contracts containing the necessary terms, confusing cookie management and other consumer privacy request methods, and requiring excessive personal data to make a request. The number of incidents involving those infractions is unclear, but based on likely web traffic and vendor data processing, the fines reflect only a fraction of the personal information processed in a manner alleged to be non-compliant.
The Agency and the Office of the Attorney General of California (which enforces the CCPA alongside the Agency) have yet to seek the truly jaw-dropping fines that have become common under the UK/EU General Data Protection Regulation (“GDPR”). However, this Order demonstrates California regulators’ willingness to demand more than remediation. It is also significant that the Agency imposed the maximum administrative penalty on a per-consumer basis for the clearest violations that resulted in denial of specific consumers’ rights. This was a relatively modest number of consumers:
“119 Consumers who were required to provide more information than necessary to submit their Requests to Opt-out of Sale/Sharing and Requests to Limit;
20 Consumers who had their Requests to Opt-out of Sale/Sharing and Requests to Limit denied because the Company required the Consumer to Verify themselves before processing the request; and
14 Consumers who were required to confirm with the Company directly that they had given their Authorized Agents permission to submit the Request to Opt-out of Sale/Sharing and Request to Limit on their behalf.”
The fines would have likely been greater if applied to all Consumers who accessed the cookie CMP, or that made requests to know, delete or correct. Further, it is worth noting that many companies receive thousands of consumer requests per year (or even per month), and the statute of limitations for the Agency is five years; applying the per-consumer maximum fine could therefore result in astronomical fines for some companies.
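For readers tracking the math, a quick back-of-the-envelope check (ours, not the Agency’s) ties the consumer counts quoted above to the fine amounts in the Order; the $250,000 remainder is an inference from the Order’s totals rather than an itemization the Agency provided.

```python
# Back-of-the-envelope check of the Order's fine structure, per the
# figures quoted above. Illustrative only.
per_violation_max = 2_500          # CCPA max for non-intentional violations
consumers = 119 + 20 + 14          # the three groups quoted from the Order
assert consumers == 153

itemized = consumers * per_violation_max
assert itemized == 382_500         # the portion the Agency itemized

total_fine = 632_500
remainder = total_fine - itemized  # balance not itemized in the Order
assert remainder == 250_000
```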
Let us also not forget that regulators have injunctive relief at their disposal. Although the injunctive relief in this Order was effectively limited to fixing alleged deficiencies, it included “fencing in” requirements, such as use of a UX designer to evaluate consumer request “methods – including identifying target user groups and performing testing activities, such as A/B testing, to assess user behavior” – and reporting of consumer request metrics for five years. More drastic relief, such as disgorgement or prohibiting certain data or business practices, is also available. For instance, in a recent data broker case brought by the Agency, the business was barred from engaging in business as a data broker in California for three years.
We dive into each of the allegations in the present case further below and provide practical takeaways for in-house legal and privacy teams to consider.
Requiring consumers to provide more information than necessary to exercise verifiable requests and requiring verification of CCPA sale/share opt-out and sensitive PI limitation requests
The Order alleges two main issues with the Company’s rights request webform:
The Company’s webform required too many data points from consumers (e.g., first name, last name, address, city, state, zip code, email, phone number). The Agency contends that requiring all of this information forces consumers to provide more information than necessary to exercise their verifiable rights, given the Agency’s allegation that the Company “generally needs only two data points from the Consumer to identify the Consumer within its database.” The CCPA and its regulations allow a business to seek additional personal information if necessary to verify to the requisite degree of certainty required under the law (which varies depending on the nature of the request, the sensitivity of the data, and the potential harm of disclosure, deletion or change), or to reject the request and provide alternative rights responses that require lesser verification (e.g., treat a request for a copy of personal information as a request to know categories of personal information). However, the regulations prohibit requiring more personal data than is necessary under the particular circumstances of a specific request. Proposed amendments to Section 7060 of the CCPA regulations also demonstrate the Agency’s concern about requiring more information than is necessary to verify the consumer.
The Company required consumers to verify their Requests to Opt-Out of Sale/Sharing and Requests to Limit, which the CCPA prohibits.
In addition to these two main issues, the Agency also alluded to (without directly stating) the possibility that the consumer rights processes amounted to dark patterns. The CPPA cited the policy reasons behind the differential requirements for Opt-Out of Sale/Sharing and Right to Limit requests: consumers should be able to exercise those requests without undue burden, particularly because there is minimal or nonexistent potential harm to consumers if such requests are not verified.
In the Order, the CPPA goes on to require the Company to ensure that its personnel handling CCPA requests are trained on the CCPA’s requirements for rights requests, which is an express obligation under the law, and to confirm to the Agency that it has provided such training within 90 days of the Order’s effective date.
Practical Takeaways
Configure consumer rights processes, such as rights request webforms, to require only the minimum information needed to initiate and verify (if permitted) the specific type of request; a schematic sketch follows this list. This may be difficult for companies that have developed their own webforms, but most privacy tech vendors that offer webforms and other consumer rights-specific products allow for customization. If customization is not possible, companies may have to implement processes to collect minimum information to initiate the request and follow up to seek additional personal information if necessary to meet CCPA verification standards as applicable to the specific consumer and the nature of the request.
Do not require verification of do not sell/share and sensitive PI limitation requests (note, there are narrow fraud prevention exceptions here, though, that companies can and should consider in respect of processing Opt-Out of Sale/Sharing and Right to Limit requests).
Train personnel handling CCPA requests (including those responsible for configuring rights request “channels”) to properly intake and respond to them.
Include instructions on how to make the various types of requests that are clear and understandable, and that track what the law permits and requires.
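As a schematic illustration of the first takeaway above: the request types, field names, and field sets below are hypothetical, and what counts as “minimum” information (and what verification is permitted) depends on the business and the specific request.

```python
# Hypothetical per-request-type intake configuration illustrating the
# "minimum necessary" principle. All names and field sets are examples.
INTAKE_FIELDS: dict[str, list[str]] = {
    # Opt-out of sale/sharing and right to limit: verification is not
    # permitted, so collect only what is needed to locate the consumer.
    "opt_out_sale_sharing": ["email"],
    "limit_sensitive_pi": ["email"],
    # Verifiable requests: start minimal and escalate only if matching
    # to the requisite degree of certainty fails.
    "know": ["email", "last_name"],
    "delete": ["email", "last_name"],
    "correct": ["email", "last_name"],
}

def fields_for(request_type: str) -> list[str]:
    """Return the initial intake fields for a given request type."""
    return INTAKE_FIELDS[request_type]

print(fields_for("opt_out_sale_sharing"))  # ['email']
```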
Requiring consumers to directly confirm with the Company that they had given permission to their authorized agent to submit opt-out of sale/sharing and sensitive PI limitation requests
The CPPA’s Order also outlines that the Company allegedly required consumers to directly confirm with the Company that they gave permission to an authorized agent to submit Opt-Out of Sale/Sharing and Right to Limit requests on their behalf. The Agency took issue with this because under the CCPA, such direct confirmation with the consumer regarding authority of an agent is only permitted as to requests to delete, correct and know.
Practical Takeaways
When processing authorized agent requests to Opt-Out of Sale/Sharing or Right to Limit, avoid directly confirming with the consumer or verifying the identity of the authorized agent (the latter is also permitted in respect of requests to delete, correct and know). Keep in mind that what agents may request, and agent authorization and verification standards, differ from state-to-state.
Failure to provide “symmetry in choice” in its cookie management tool
The Order alleges that, for a consumer to turn off advertising cookies on the Company’s website (cookies which track consumer activity across different websites for cross-context behavioral advertising and therefore require an Opt-out of Sale/Sharing), consumers must complete two steps: (1) click the toggle button to the right of Advertising Cookies and (2) click the “Confirm My Choices” button.
The Order compares this opt-out process to the process for opting back into advertising cookies following a prior opt-out. There, the Agency alleged that if consumers return to the cookie management tool (also known as a consent management platform or “CMP”) after turning “off” advertising cookies, an “Allow All” choice appears. This is likely a standard configuration of the CMP, and one that can be modified to match the toggle-and-confirm approach used for the opt-out. Thus, the CPPA alleged, consumers need take only one step to opt back into advertising cookies when two steps are needed to opt out, in violation of the CCPA’s express requirement that opting in require no more steps than opting out.
The Agency took issue with this because the CCPA requires businesses to implement request methods that provide symmetry in choice, meaning the more privacy-protective option (e.g., opting-out) cannot be longer, more difficult or more time consuming than the less privacy protective option (e.g., opting-in).
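A toy sketch of the step-count comparison at issue (the flow and step names are our own labels, purely illustrative):

```python
# Toy comparison of the step counts alleged in the Order.
flows = {
    "opt_out": ["toggle_advertising_cookies", "confirm_my_choices"],  # 2 steps
    "opt_back_in": ["allow_all"],                                     # 1 step
}
# Symmetry in choice requires the more privacy-protective path to take
# no more steps than the less protective one.
symmetric = len(flows["opt_out"]) <= len(flows["opt_back_in"])
print("symmetry in choice:", symmetric)  # False -> the asymmetry alleged
```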
The Agency also addressed the need for symmetrical choice in the context of “website banners,” also known as cookie banners, pointing to an example of insufficient symmetry in choice cited in the CCPA regulations – i.e., using “‘Accept All’ and ‘More Information,’ or ‘Accept All’ and ‘Preferences’” – which “is not equal or symmetrical” because it suggests that the company is seeking and relying on consent (rather than opt-out) to cookies, and where consent is sought, acceptance and denial must be equally easy to choose. The regulations further explain that “[a]n equal or symmetrical choice” in the context of a website banner seeking consent for cookies “could be between ‘Accept All’ and ‘Decline All.’” Of course, under the CCPA, consent is not required even for cookies that involve a Share/Sale, but the Agency is making clear that where consent is sought there must be symmetry in acceptance and denial of consent.
The CPPA’s Order also details other methods by which the company should modify its CCPA requests procedures including:
separating the methods for submitting sale/share opt-out requests and sensitive PI limitation requests from verifiable consumer requests (e.g., requests to know, delete, and correct);
including the link to manage cookie preferences within the Company’s Privacy Policy, Privacy Center and website footer; and
applying global privacy control (“GPC”) preference signals for opt-outs to known consumers consistent with CCPA requirements (a minimal server-side sketch follows this list).
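On the GPC point, compliant browsers transmit the signal as a “Sec-GPC: 1” request header. The sketch below assumes a Flask application; record_opt_out() is a hypothetical helper, not a real library call.

```python
# Minimal server-side sketch of honoring a GPC opt-out signal.
from flask import Flask, request

app = Flask(__name__)

def record_opt_out(consumer_id: str) -> None:
    """Hypothetical: persist an opt-out of sale/sharing for this consumer
    and apply it to all personal information linkable to them."""
    ...

@app.before_request
def honor_gpc() -> None:
    # Browsers with GPC enabled send the "Sec-GPC: 1" header.
    if request.headers.get("Sec-GPC") == "1":
        consumer_id = request.cookies.get("session_id")  # a known consumer
        if consumer_id:
            record_opt_out(consumer_id)
```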
Practical Takeaways
It is unclear whether the Company configured the cookie management tool in this manner deliberately or whether the “Allow All” button in the preference center was simply a default configuration of the CMP – a common issue with CMPs built on a (UK/EU) GDPR consent model. Companies should pay close attention to the configuration of their cookie management tools, in both the cookie banner (or first layer), if used, and the preference center, and avoid default settings and configurations that are inconsistent with state privacy laws. Doing so will help mitigate the risk of choice asymmetry presented in this case, as well as the risks discussed in the following three bullets.
State privacy laws like the CCPA are not the only reason to pay close attention and engage in meticulous legal review of cookie banner and preference center language, and proper functionality and configuration of cookie management tools.
Given the onslaught of demands and lawsuits from plaintiffs’ firms under the California Invasion of Privacy Act (CIPA) and similar laws – based on cookies, pixels and other tracking technologies – many companies turn to cookie banner and preference center language to establish an argument for a consent defense and thereby mitigate litigation risk. In doing so, it is important to bear in mind the symmetry-of-choice requirements of state consumer privacy laws. One approach is to make clear that acceptance is of the site terms and privacy practices, which include use of tracking by the operator and third parties, subject to the ability to opt out of some types of cookies. This can help establish consent to the use of cookies through use of the site after notice of cookie practices, while not suggesting that cookies are opt-in or creating a lack of symmetry in choice.
In addition, improper wording and configuration of cookie tools – such as indicating an opt-in approach (“Accept Cookies”) when cookies have in fact already fired upon the user’s site visit, or implying that “Reject All” opts the user out of all cookies when functional and necessary cookies remain “on” after rejection – present risks under state unfair and deceptive acts and practices (UDAP) and unfair competition laws, and make the cookie banner notice defense to CIPA claims potentially vulnerable because the cookies fire before the notice is given.
Address CCPA requirements for GPC, linking to the business’s cookie preference center, and separating methods for exercising verifiable vs. non-verifiable requests. Where the business can tie a GPC signal to other consumer data (e.g., the account of a logged in user), it must also apply the opt-out to all linkable personal information.
Strive for clear and understandable language that explains what options are available and the limitations of those options, including cross-linking between the CMP for cookie opt-outs and the main privacy rights request intake for non-cookie privacy rights, and explain and link to both in the privacy policy or notice.
Make sure that the “Your Privacy Choices” or “Do Not Sell or Share My Personal Information” link gets the consumer to both methods. Also make sure the opt-out process is designed so that the number of steps required to opt out is no greater than the number required to opt back in. For example, linking first to the CMP, which then links to the consumer rights form or portal, rather than the other way around, is more likely to avoid the additional-steps issue just discussed.
Failure to produce contracts with advertising technology companies
The Agency’s Order goes on to allege that the Company did not produce contracts with advertising technology companies despite collecting and selling/sharing PI via cookies on its website to/with these third parties. The CPPA took issue with this because the CCPA requires a written contract meeting certain requirements to be in place between a business and PI recipients that are a CCPA service provider, contractor or third party in relation to the business. We have seen regulators request copies of contracts with all data recipients in other enforcement inquiries.
Practical Takeaways
Vendor and contract management are a growing priority of privacy regulators, in California and beyond, and should be a priority for all companies. Be prepared to show that you have properly categorized all personal data recipients and have implemented and maintained processes to ensure proper contracting practices with vendors, partners and other data recipients. This should include a diligence and assessment process to ensure that the proper contractual language is in place based on the recipient’s data processing role. To state it another way, it may not be proper as to certain vendors to simply put in place a data processing agreement or addendum with service provider/processor language; for instance, vendors that process for cross-context behavioral advertising cannot qualify as a service provider/contractor. This determination is also necessary to correctly categorize cookie and other vendors as subject to opt-out or not.
Attention to contracting is particularly important under the CCPA because the statute requires different contracting terms depending on whether the data recipient constitutes a “third party,” a “service provider” or a “contractor.” Further, in California, the failure to include all of the required service provider/contractor contract terms will convert the recipient into a third party and the disclosure into a sale.
Conclusion
This case demonstrates the need for businesses to review their privacy policies and notices and to audit their privacy rights methods and procedures to ensure compliance with applicable state privacy laws, which have some material differences from state to state. We are aware of enforcement actions in progress not only in California but also in other states, including Oregon, Texas and Connecticut, and these states are focused on whether residents have clarity as to what specific rights they have and how to exercise them. Further, it can be expected that regulators will start looking beyond obvious notice and rights request program errors to data knowledge and management, risk assessment, minimization, and purpose and retention limitation obligations – potentially in the multi-state actions that have become common in other consumer protection matters. Compliance with those requirements means going beyond “check the box” compliance as to public-facing privacy program elements and toward a mature, comprehensive and meaningful information governance program.
Comments on Minnesota’s Proposed Rule for Reporting Products Containing Intentionally Added PFAS Are Due May 21, 2025
With the January 1, 2026, deadline fast approaching for reporting on products containing intentionally added per- and polyfluoroalkyl substances (PFAS), on April 21, 2025, the Minnesota Pollution Control Agency (MPCA) published a proposed rule intended to clarify the reporting requirements, specify how and what to report, and establish fees. Written comments on the proposed rule are due May 21, 2025, at 4:30 p.m. (CDT). On May 22, 2025, at 2:00 p.m. (CDT), MPCA will hold a public hearing during which it will accept oral comments on the proposed rule. The hearing will end at 5:00 p.m. (CDT), but additional days of hearings may be scheduled if necessary. The procedural rulemaking documents available include:
Proposed Permanent Rules Relating to PFAS in Products; Reporting and Fees (c-pfas-rule1-06) (proposed rule);
Statement of Need and Reasonableness for PFAS in products reporting and fees rulemaking (c-pfas-rule1-07) (SONAR); and
Notice of intent to adopt rules with a hearing (c-pfas-rule1-05).
Definitions
The proposed rule includes definitions not included in Minnesota’s statute, including:
Component: A distinct and identifiable element or constituent of a product. Component includes packaging only when the packaging is inseparable or integral to the final product’s containment, dispensing, or preservation.
Distribute for sale: To ship or otherwise transport a product with the intent or understanding that the product will be sold or offered for sale by a receiving party after the product is delivered.
Function: The explicit purpose or role served by PFAS when intentionally incorporated at any stage in the process of preparing a product or its constituent components for sale, offer for sale, or distribution for sale.
Homogenous material: One material of uniform composition throughout or a material, consisting of a combination of materials, that cannot be disjointed or separated into different materials by mechanical actions.
Packaging: The meaning given under Minnesota Statutes, Section 115A.03 — “‘Packaging’ means a container and any appurtenant material that provide a means of transporting, marketing, protecting, or handling a product. ‘Packaging’ includes pallets and packing such as blocking, bracing, cushioning, weatherproofing, strapping, coatings, closures, inks, dyes, pigments, and labels.”
Significant change: A change in the composition of a product that results in the addition of a specific PFAS not previously reported in a product or component or a measurable change in the amount of a specific PFAS from the initial amount reported that would move the product into a different concentration range.
Substantially equivalent information: Information that the MPCA commissioner can identify as conveying the same information required under Part 7026.0030 and Minnesota Statutes, Section 116.943, Subdivision 2. Substantially equivalent information includes an existing notification by a person who manufactures a product or component when the same product or component is offered for sale under multiple brands.
For some definitions, the proposed rule expands on definitions included in the statute. The statute defines manufacturer, but MPCA proposes additional language to clarify the definition:
Manufacturer: The person that creates or produces a product, that has a product created or produced, or whose brand name is legally affixed to the product. In the case of a product that is imported into the United States when the person that created or produced the product or whose brand name is affixed to the product does not have a presence in the United States, manufacturer means either the importer or the first domestic distributor of the product, whichever is first to sell, offer for sale, or distribute for sale the product in the state.
According to the SONAR, MPCA inserted the phrase “has a product created or produced” to clarify the parties responsible for reporting. MPCA states that “[s]imilarly, the definition encompasses parties that either import or are the first domestic distributor of the product, whichever is first to sell, offer for sale, or distribute the product for sale in the state.” MPCA intends the revisions to clarify that companies that do not manufacture their own products are subject to the reporting and fee requirements.
Parties Responsible for Reporting
Under the proposed rule, a manufacturer or a group of manufacturers must submit a report for each product or component that contains intentionally added PFAS. Manufacturers in the same supply chain may enter into an agreement to establish their reporting responsibilities. The proposed rule allows a manufacturer to submit information on behalf of another manufacturer if the following requirements are met:
The reporting manufacturer must notify any other manufacturer that is a party to the agreement that the reporting manufacturer has fulfilled the reporting requirements;
All manufacturers must maintain documentation of a reporting responsibility agreement and must provide the documentation to MPCA upon request;
All manufacturers must verify that the data submitted on their behalf are accurate and complete; and
For the verification to be considered complete, all manufacturers must submit the required fee, as applicable.
MPCA states in the SONAR that “[i]t is reasonable to allow a manufacturer to submit the reporting requirements for another manufacturer because of the large overlap in common components used throughout the manufacturing of complex products.” According to MPCA, it will provide detailed guidance on how reporting entities can submit on behalf of multiple manufacturers in the reporting system instructions or in supplemental guidance.
Information Required
Under the statute, the following information must be reported:
(1) A brief description of the product, including a universal product code (UPC), stock keeping unit (SKU), or other numeric code assigned to the product;
(2) The purpose for which PFAS are used in the product, including in any product components;
(3) The amount of each PFAS, identified by its Chemical Abstracts Service Registry Number® (CAS RN®), in the product, reported as an exact quantity determined using commercially available analytical methods or as falling within a range approved for reporting purposes;
(4) The name and address of the manufacturer and the name, address, and phone number of a contact person for the manufacturer; and
(5) Any additional information requested by the commissioner as necessary to implement the requirements of this section.
Rather than requiring information regarding the purpose for which PFAS are used in the product, the proposed rule would require that manufacturers provide “the function that each PFAS chemical provides to the product or its components.” Under the proposed rule’s definition of function (“the explicit purpose or role served by PFAS when intentionally incorporated at any stage in the process of preparing a product or its constituent components for sale, offer for sale, or distribution for sale”), manufacturers would be required to report not only any PFAS intentionally added to the product, but also PFAS used during the manufacturing process even if the PFAS are not present in the final product.
A manufacturer would be allowed to group similar products composed of homogenous materials if the following criteria are met:
The PFAS chemical composition is the same;
The PFAS chemicals fall into the same reporting concentration ranges;
The PFAS chemicals provide the same function; and
The products have the same basic form and function and only differ in size, color, or other superficial qualities that do not impact the composition of the intentionally added PFAS.
If the product consists of multiple PFAS-containing components, the manufacturer would be required to report each component under the product name provided in the brief description of the product. Similar components listed within a product could be grouped together if they meet the criteria listed above; a schematic sketch of the grouping test follows.
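The sketch below restates the four grouping criteria as a simple check; the attribute names and example values are our own illustration, not terminology from the proposed rule.

```python
# Schematic check of the proposed rule's product-grouping criteria.
def can_group(products: list[dict]) -> bool:
    """True if all products share the four grouping attributes: PFAS
    composition, concentration ranges, PFAS function, and basic form
    and function (superficial qualities like size or color may differ)."""
    first = products[0]
    keys = ("pfas_composition", "concentration_ranges",
            "pfas_function", "basic_form_and_function")
    return all(p[k] == first[k] for p in products for k in keys)

jacket_s = {"pfas_composition": {"355-46-4"},  # example CAS RN
            "concentration_ranges": {"range_1"},
            "pfas_function": "water repellency",
            "basic_form_and_function": "coated jacket"}
jacket_xl = dict(jacket_s)  # differs only in size, a superficial quality
assert can_group([jacket_s, jacket_xl])
```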
The proposed rule would allow manufacturers to report the concentration of PFAS using the following ranges:
Practical detection limit to less than (
CPPA to Hold Board Meeting on Proposed CCPA Regulations
The California Privacy Protection Agency (“CPPA”) Board will hold a Board meeting on May 1, 2025, at 9:00 am PT. The public is invited to attend the meeting in person or virtually. The agenda includes a legislative update on the CPPA’s positions on pending legislation and a discussion of the adoption of proposed CCPA regulations addressing automated decision-making, risk assessments, insurance and cybersecurity audits. The meeting also will cover proposed revisions to existing CCPA regulations.
Q1 2025 New York Artificial Intelligence Developments: What Employers Should Know About Proposed and Passed Artificial Intelligence Legislation
In the first part of 2025, New York joined other states, such as Colorado, Connecticut, New Jersey, and Texas,1 seeking to regulate artificial intelligence (AI) at the state level. Specifically, on 8 January 2025, bills focused on the use of AI decision-making tools were introduced in both the New York Senate and State Assembly. As discussed further below, the New York AI Act Bill S011692 (the NY AI Act) focuses on addressing algorithmic discrimination by regulating and restricting the use of certain AI systems, including in employment. The NY AI Act would allow for a right of private action, empowering citizens to bring lawsuits against technology companies. Additionally, the New York AI Consumer Protection Act Bill A007683 (the Protection Act) would amend the general business law to prevent the use of AI algorithms to discriminate against protected classes, including in employment.
This alert discusses these two pieces of legislation and provides recommendations for employers as they navigate the patchwork of proposed and enacted AI legislation and federal guidance.
Senate Bill 1169
On 8 January 2025, New York State Senator Kristen Gonzalez introduced the NY AI Act because “[a] growing body of research shows that AI systems that are deployed without adequate testing, sufficient oversight, and robust guardrails can harm consumers and deny historically disadvantaged groups the full measure of their civil rights and liberties, thereby further entrenching inequalities.” The NY AI Act would cover all “consumers,” defined as any New York state resident, including residents who are employees and employers.4 The NY AI Act states that “[t]he legislature must act to ensure that all uses of AI, especially those that affect important life chances, are free from harmful biases, protect our privacy, and work for the public good.”5 It further asserts that, as the “home to thousands of technology start-ups,” including those that experiment with AI, New York must prioritize safe innovation in the AI sector by providing clear guidance for AI development, testing, and validation both before a product is launched and throughout the product’s life.6
Setting the NY AI Act apart from other proposed and enacted state AI laws,7 the NY AI Act includes a private right of action allowing New York state residents to file claims against technology companies for violations. The NY AI Act also provides for enforcement by the state’s attorney general. In addition, under the proposed law, consumers have the right to opt out of automated decision-making or appeal its results.
The NY AI Act defines “algorithmic discrimination” as any condition in which the use of an AI system contributes to unjustified differential treatment or impacts, disfavoring people based on their actual or perceived age, race, ethnicity, creed, religion, color, national origin, citizenship or immigration status, sexual orientation, gender identity, gender expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, pregnancy, pregnancy outcomes, height, weight, reproductive health care or autonomy, status as a victim of domestic violence, or other classification protected under state or federal laws.8
The NY AI Act demands that “deployers” using a high-risk AI system9 for a consequential decision10 comply with certain obligations. “Deployers” is defined as “any person doing business in [New York] state that deploys a high-risk artificial intelligence decision system.”11 This includes New York employers. For instance, deployers must disclose to the end user in clear, conspicuous, and consumer-friendly terms that they are using an AI system that makes consequential decisions at least five business days prior to the use of such system. The deployer must allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for a human representative to make the decision. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer must render a decision to the consumer within 45 days.12
Further, any deployer that employs a high-risk AI system for a consequential decision must inform the end user within five days in a clear, conspicuous, and consumer-friendly manner if a consequential decision has been made entirely by or with assistance of an automated system. The deployer must then provide and explain a process for the end user to appeal the decision, which must at minimum allow the end user to (a) formally contest the decision, (b) provide information to support their position, and (c) obtain meaningful human review of the decision.13
Additionally, deployers must complete an audit before using a high-risk AI system, again six months after deployment, and at least every 18 months thereafter for each calendar year the system is in use. Regardless of the findings, deployers must deliver all audits conducted to the attorney general.
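A small sketch of that audit cadence, assuming a hypothetical deployment date and in-use window (the month arithmetic is simplified for illustration):

```python
# Illustrative audit schedule under the cadence described above: an audit
# before use, six months after deployment, then at least every 18 months
# while the system is in use. Dates and the in-use window are hypothetical.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift d forward by whole months (day clamped to 28 to keep this
    sketch simple)."""
    total = d.year * 12 + (d.month - 1) + months
    year, month = divmod(total, 12)
    return date(year, month + 1, min(d.day, 28))

deployment = date(2026, 1, 15)        # hypothetical go-live
audits = [deployment, add_months(deployment, 6)]
while audits[-1] < date(2031, 1, 1):  # hypothetical end of use
    audits.append(add_months(audits[-1], 18))
print([a.isoformat() for a in audits])
```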
As mentioned above, the NY AI Act may be enforced by the attorney general or through a private right of action. If a violation occurs, the attorney general may seek an injunction to enjoin and restrain its continuance.14 Where the court determines that a violation occurred, it may impose a civil penalty of not more than US$20,000 for each violation. Further, any person harmed by a violation of the NY AI Act may bring a private action, and the court shall award compensatory damages and legal fees to the prevailing party.15
The NY AI Act also offers whistleblower protections, prohibits social scoring AI systems, and prohibits waiving legal rights.16
Assembly Bill 768
Also on 8 January 2025, New York State Assembly Member Alex Bores introduced the Protection Act. Like the NY AI Act, the Protection Act seeks to prevent the use of AI algorithms to discriminate against protected classes.
The Protection Act defines “algorithmic discrimination” as any condition in which the use of an AI decision system results in any unlawful differential treatment or impact that disfavors any individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, English language proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected pursuant to state or federal law.17
The Protection Act requires a “bias and governance audit” consisting of an impartial evaluation by an independent auditor, which shall include, at a minimum, the testing of an AI decision system to assess such system’s disparate impact on employees because of such employee’s age, race, creed, color, ethnicity, national origin, disability, citizenship or immigration status, marital or familial status, military status, religion, or sex, including sexual orientation, gender identity, gender expression, pregnancy, pregnancy outcomes, and reproductive healthcare choices.18
If enacted, beginning 1 January 2027, the Protection Act would require each deployer of a high-risk AI decision system to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.19 Specifically, deployers would be required to implement and maintain a risk management policy and program that is regularly reviewed and updated. The Protection Act references external sources employers can look to for guidance and compliance, such as the “AI Risk Management Framework” published by the National Institute of Standards and Technology and the ISO/IEC 42001 of the International Organization for Standardization.20
On 1 January 2027, employers deploying a high-risk AI decision system that makes or is a substantial factor in making a consequential decision concerning a consumer would also have to:
Notify the consumer that the deployer has used a high-risk AI decision system to make, or be a substantial factor in making, a consequential decision.
Provide to the consumer a statement disclosing: (I) the purpose of the high-risk AI decision system; and (II) the nature of the consequential decision.21
Make available a statement summarizing the types of high-risk AI decision systems that are currently used by the deployer.
Explain how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination.
Notify the consumer of the nature, source, and extent of the information collected and used by the deployer.22
New York City Council Local Law Int. No. 1894-A
While the NY AI Act and Protection Act are not yet enacted, New York City employers should ensure they are following Local Law Int. No. 1894-A (the NYC AI Law), which became effective on 5 July 2023. The NYC AI Law aims to protect job candidates and employees from unlawful discriminatory bias based on race, ethnicity, or sex when employers and employment agencies use automated employment decision-making tools (AEDTs) as part of employment decisions.
Compared to the proposed state laws, the NYC AI Law narrowly applies to employers and employment agencies in New York City that use AEDTs to screen candidates or employees for positions located in the city. Similar to the proposed state legislation, bias audits and notice are required whenever an AEDT is used. Notice must be provided to candidates and employees of the use of AEDTs at least 10 business days in advance. Under the NYC AI Law, an AEDT is:
[A]ny computational process, derived from machine learning, statistical modeling, data analytics, or [AI], that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.
The NYC AI Law requires audits to be completed by an independent auditor who details the sources of data (testing or historical) used in the audit. The results of the bias audit must be published on the website of the employer or employment agency, or an active hyperlink to a website with this information must be provided, for at least six months after the latest use of the AEDT for an employment decision. The summary of results must include (i) the date of the most recent bias audit of the AEDT; (ii) the source and explanation of the data; (iii) the number of individuals the AEDT assessed that fall within an unknown category; and (iv) the number of applicants or candidates, the selection or scoring rates, as applicable, and the impact ratios for all categories.23 Penalties for noncompliance range from US$500 to US$1,500 per violation, with no cap on civil penalties. Further, the NYC AI Law authorizes a private right of action, in court or through administrative agencies, for aggrieved candidates and employees.
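To illustrate the impact-ratio metric in the published summary of results: under the DCWP rules implementing the law, a selection impact ratio generally compares each category’s selection rate to the rate of the most-selected category. All counts below are invented for illustration.

```python
# Hypothetical bias-audit arithmetic for an AEDT's selection impact ratios.
selected = {"category_a": 120, "category_b": 45}   # candidates selected
assessed = {"category_a": 400, "category_b": 250}  # candidates assessed

rates = {c: selected[c] / assessed[c] for c in selected}
top_rate = max(rates.values())
impact_ratios = {c: round(r / top_rate, 2) for c, r in rates.items()}
print(rates)          # {'category_a': 0.3, 'category_b': 0.18}
print(impact_ratios)  # {'category_a': 1.0, 'category_b': 0.6}
```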
Takeaways for Employers
Employers should work to be in compliance with the existing NYC AI Law and prepare for future state legislation.24
Employers should:
Assess AI Systems: Identify any AI systems your company develops or deploys, particularly those used in consequential decisions related to employment.
Review Data Management Policies: Ensure your data management policies comply with data security protection standards.
Prepare for Audits: Familiarize yourself with the audit requirements and begin preparing for potential audits of high-risk AI systems.
Develop Internal Processes: Establish internal processes for employee disclosures related to AI system violations.
Monitor Legislation: Stay informed about proposed bills, such as AB326525 and AB3356,26 and continually review guidance from federal agencies.
Our Labor, Employment, and Workplace Safety lawyers regularly counsel clients on a wide variety of concerns related to emerging issues in labor, employment, and workplace safety law and are well-positioned to provide guidance and assistance to clients on AI developments.
Footnotes
1 Please see the following alert for more information on the proposed Texas legislation: Kathleen D. Parker, et al., The Texas Responsible AI Governance Act and Its Potential Impact on Employers, K&L GATES HUB (Jan. 13, 2025), https://www.klgates.com/The-Texas-Responsible-AI-Governance-Act-and-Its-Potential-Impact-on-Employers-1-13-2025.
2 S. 1169, 2025-2026 Gen. Assemb., Reg. Sess., § 85 (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/S1169.
3 A.B. 768, 2025-2026 Gen. Assemb., Reg. Sess., § 1550 (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A768.
4 S. 1169, supra note 2, § 85.
5 Id. § 2(b).
6 Id. § 2(c).
7 Please see the following alert for more information on state AI laws: Michael J. Stortz, et al., Litigation Minute: State Generative AI Statutes and the Private Right of Action, K&L GATES HUB (Jun. 17, 2024), https://www.klgates.com/Litigation-Minute-State-Statutes-and-the-Private-Right-of-Action-6-17-2024
8 S. 1169, supra note 2, § 85(1).
9 Id. § 85(12) “High-Risk AI System” means any AI system that, when deployed: (A) is a substantial factor in making a consequential decision; or (B) will have a material impact on the statutory or constitutional rights, civil liberties, safety, or welfare of an individual in the state.
10 Id. § 85(4) “Consequential Decision” means a decision or judgment that has a material, legal or similarly significant effect on an individual’s life relating to the impact of, access to, or the cost, terms, or availability of, any of the following: (A) employment, workers’ management, or self-employment, including, but not limited to, all of the following: (i) pay or promotion; (ii) hiring or termination; and (iii) automated task allocation. (B) education and vocational training, including, but not limited to, all of the following: (i) assessment or grading, including, but not limited to, detecting student cheating or plagiarism; (ii) accreditation; (iii) certification; (iv) admissions; and (v) financial aid or scholarships. (C) housing or lodging, including rental or short-term housing or lodging. (D) essential utilities, including electricity, heat, water, internet or telecommunications access, or transportation. (E) family planning, including adoption services or reproductive services, as well as assessments related to child protective services. (F) health care or health insurance, including mental health care, dental, or vision. (G) financial services, including a financial service provided by a mortgage company, mortgage broker, or creditor. (H) law enforcement activities, including the allocation of law enforcement personnel or assets, the enforcement of laws, maintaining public order or managing public safety. (I) government services. (J) legal services.
11 A.B. 768, supra note 3, § 1550(7).
12 S. 1169, supra note 2, § 86(a).
13 Id. § 86(2).
14 Id. § 89(b)(1).
15 Id. § 89(b)(2).
16 Id. §§ 86(b), 89(a), 86(4).
17 A.B. 768, supra note 3, § 1550(1).
18 Id. § 1550(3).
19 Id. § 1552(1)(a).
20 Id. § 1552(2)(a).
21 Id. § 1552(5)(a).
22 Id. § 1552(6)(a).
23 N.Y.C. Dep’t of Consumer & Worker Prot., Automated Employment Decision Tools (AEDT) – Frequently Asked Questions, https://www.nyc.gov/assets/dca/downloads/pdf/about/DCWP-AEDT-FAQ.pdf.
24 Please see the following alert for more information: Maria Caceres-Boneau, et al., New York Proposal to Protect Workers Displaced by Artificial Intelligence, K&L GATES HUB (Feb. 20, 2025), https://www.klgates.com/New-York-Proposal-to-Protect-Workers-Displaced-by-Artificial-Intelligence-2-18-2025
25 A.B. 3265, 2025-2026 Gen. Assemb., Reg. Sess., (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A3265
26 A.B. 3356, 2025-2026 Gen. Assemb., Reg. Sess., (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A3356
RUSSIA DISCOVERS THE TCPA?: Russian Appellate Court Allows Consumer to Sue Bank for $61.00 Over Unwanted Calls
While the idea of suing over unwanted phone calls is nothing new for litigious Americans, it’s quite novel elsewhere in the world– and a man in Russia might be the first to have invented the claim across the pond.
Apparently a Russian appellate court has recognized a constitutional right to privacy that can be invaded when a bank sends unwanted marketing messages after being asked to stop.
In the case, a Russian guy asked the bank to stop calling, but it ignored him. He sued for “moral damage” of 5,000 rubles– about $61.00. The lower court threw out the case, but the appellate court found the claim to have merit and ordered a trial on the issue of the calls.
Here in America, of course, consumers can–and often do– sue for unwanted phone calls under the Telephone Consumer Protection Act (TCPA). And unlike the limited damages recognized in Russia, the TCPA allows consumers to collect $500-$1,500.00 per unwanted call or text.
But there are limits in America as there are in Russia.
As one Russian authority stated in response to the ruling:
“Unfortunately, people themselves often forget that they gave consent to the processing of their data and to receive advertising information. In such cases, advertising is distributed legally. And consent has no statute of limitations if the contract did not specify its term, even if you signed it 20 years ago.”
True in Russia as it is in America.
Many websites collect consent for advertising and contact and then sell those consents far and wide as permitted in the fine print. As a result, many companies will buy these “leads” and make totally legal phone calls that the consumer had forgotten– or perhaps never really understood– they requested.
While this is fascinating, we will have to wait and see whether the idea of suing over unwanted calls catches on anywhere else.
Source : https://m.realnoevremya.com/articles/8741-russians-allowed-to-punish-banks-for-spam?_url=%2Farticles%2F8741-russians-allowed-to-punish-banks-for-spam#from_desktop
DENIED!!: Eleventh Circuit Refuses to Permit Intervention in IMC Case– Is This The End For One-to-One? (Probably)
So big news today.
This morning the Eleventh Circuit Court of Appeals entered an order denying the efforts of several parties– including the National Consumers League–to intervene and defend the FCC’s TCPA one-to-one consent ruling.
This development comes after the Eleventh Circuit had previously struck down the ruling and the FCC stated it would not pursue it further.
With this latest denial, the fate of the TCPA one-to-one rule appears sealed. Theoretically, the proposed intervenors could seek Supreme Court review of the Eleventh Circuit’s denial, but the chances of success on such an effort are too low to merit discussion.
So, unless something insane happens (and these days, who knows!), the FCC’s TCPA one-to-one consent rule is officially dead!
Yay.
R.E.A.C.H. will be updating its standards in light of this change so be on the lookout for that.
Although the FCC’s TCPA one-to-one consent rule is dead, the FCC’s TCPA revocation rule is not–in fact, it is very much alive and in effect RIGHT NOW.
Over the weekend I saw some folks on LinkedIn suggesting the “reasonable means” provision of the ruling was stayed–ABSOLUTELY FALSE. The ONLY part of the rule that was stayed was the scope provisions–so be sure to get it right!
Speaking of getting it right, Telnyx CEO Dave Casem is now set to speak at LCOC III, along with Quote Velocity, Tree, Everquote, and a ton of other big names. You CANNOT miss this show, folks. Ticket prices jump soon, so get in now.
Chat soon.
Crossing Borders with Electronics: Know Your Rights and Risks
With the increasing digitalization of our lives and businesses, border searches of phones, laptops, and tablets are a growing concern for professionals, executives, and frequent international travelers. U.S. Customs and Border Protection (CBP) has broad authority to inspect travelers and their belongings at ports of entry. This includes electronic devices, which may be searched without a warrant under what’s known as the “border search exception” to the Fourth Amendment.
In 2024, CBP conducted approximately 47,047 border searches of electronic devices: 42,725 were basic searches and 4,322 were advanced searches, and the searches involved 36,506 non-U.S. citizens and 10,541 U.S. citizens. Notably, there has been a recent string of lawful U.S. residents (people with work visas, permanent resident cards, etc.) facing deportation or visa revocation based on information discovered on their electronic devices.
In one case, a Lebanese physician was deported after CBP officers found photos and videos related to Hezbollah on her cellphone. In another, an Indian Ph.D. student at Columbia University had her visa revoked following scrutiny of her social media activity and participation in campus protests. Last month, a French scientist was denied entry to the U.S. after the Department of Homeland Security (DHS) alleged he was carrying confidential information from an American lab; the French government, however, claimed he was targeted for expressing political opinions about the U.S. government.
Although U.S. citizens generally cannot be denied reentry for refusing to unlock a phone, CBP agents can detain a device for further inspection. As for non-citizens, they may face additional consequences, including delays, detention or denial of entry. The line between what is permissible and what is excessive remains unsettled, as federal courts across the country have issued conflicting rulings.
CBP classifies device searches into two categories: a basic search and an advanced search. A basic search is a manual inspection of an unlocked device and can be conducted without suspicion. An advanced search involves connecting the device to external equipment for forensic review and requires “reasonable suspicion” that the device contains unlawful material. Although CBP and ICE policies remain in effect, some courts have begun to push back on this authority, particularly in cases involving U.S. citizens.
Border Search Cases and 2018 DHS Policy
Courts consistently uphold the “border search exception,” reasoning that the government’s interest in controlling who and what enters the country is at its highest at the border. As the Supreme Court explained in United States v. Ramsey and later reaffirmed in United States v. Flores-Montano, routine, suspicionless searches of persons and property at the border are generally considered “reasonable” by virtue of the location.
In the past two decades, as digital privacy concerns have grown, courts have increasingly grappled with how these principles apply to smartphones, laptops, and other electronic devices. To address this evolving legal landscape, the DHS issued a policy directive in 2018 requiring that forensic or advanced searches of electronic devices be supported by reasonable suspicion. However, in general, border searches of electronic devices do not require a warrant or suspicion.
United States v. Smith: Change From Reasonable Suspicion to Probable Cause?
The reasonable suspicion framework was disrupted in May 2023, when Judge Jed Rakoff of the Southern District of New York issued a groundbreaking decision in United States v. Smith, holding that the government must obtain a warrant supported by probable cause before searching and copying an American citizen’s phone at the border, absent exigent circumstances.
In this case, Jatiek Smith was stopped by CBP officers at Newark Liberty International Airport in March 2021. He had been under investigation by DHS Investigations and the FBI before his arrival, and federal agents used CBP’s border authority as a means to conduct a search without seeking a warrant. Agents forced Smith to reveal his phone password under the threat of indefinite detention, copied the contents of his device, and returned it. Weeks later, the government obtained a search warrant for the data it had already reviewed and later secured a wiretap based in part on the findings from the initial search.
Judge Rakoff ruled that this process violated Smith’s Fourth Amendment rights. Relying heavily on the Supreme Court’s reasoning in Riley v. California, which held that law enforcement must obtain a warrant before searching an arrestee’s cell phone, Judge Rakoff reasoned that the vast quantity and sensitivity of digital information carried on modern devices demands greater constitutional protection and that the border search exception did not justify the warrantless search and forensic copying of Smith’s phone.
Despite finding a constitutional violation, the court declined to suppress the evidence under the “good faith” exception. Judge Rakoff concluded that the agents had reasonably relied on existing CBP policy and that a subsequently obtained warrant covered much of the forensic review.
This case is currently on appeal before the Second Circuit. Smith remains a bold but isolated ruling: Judge Rakoff’s decision has not gained traction in other jurisdictions, and in 2023 the Fifth Circuit declined to adopt his reasoning in a similar case. To date, CBP has not issued any new guidance or directives in response. Whether Smith signals the start of a broader judicial shift or remains a cautionary outlier will likely be determined by future decisions. In the meantime, individuals should assume that their devices may be subject to search or seizure at the border, even without a warrant, and should prepare accordingly to preserve their digital privacy. If a device is seized or an individual is detained, they should promptly contact a lawyer knowledgeable in border search and digital privacy law.
Key Takeaways
Travel light, digitally. Travelers should consider carrying “clean” devices that contain only the data needed for the trip. If a device is seized, having only limited data can help ensure a faster review and return, with less risk of compromising privacy or confidentiality.
Device searches are not limited to phones and laptops. Border agents may search any electronic storage device, including flash drives, portable hard drives, SIM cards, and other accessories. Travelers should consider removing or securing peripheral media before traveling.
Encryption and shutdown protocols matter. Ensure all devices are protected with full-disk encryption and power them off before arrival. (CBP is reportedly able to remotely access powered-on devices in customs areas.)
All documentation must be updated and valid. Non-citizens who need a visa or work permit for entry should ensure all documentation is valid (i.e., not expired or incomplete). Otherwise, the traveler may be turned away or possibly detained.
Protect your confidential and sensitive information. While device seizure at the border does not automatically signal a criminal investigation, information obtained during a border search may later be used in criminal, civil, or immigration proceedings. Importantly, many travelers carry sensitive or protected data on their devices, such as trade secrets, privileged communications, HIPAA-protected medical information, or confidential financial information. These categories of data may not be adequately protected from disclosure during a border search. Consulting with counsel in advance of travel can help protect this information appropriately.
Organizations should develop internal guidance. Employers, universities, and other institutions whose personnel travel internationally should consider developing clear protocols for cross-border travel with sensitive information. Consulting with counsel in advance of travel, particularly for individuals in sensitive roles, can help mitigate legal and reputational risks. It is important to know your risks and rights at the border.
New Seventh Circuit Decision Signals Greater Flexibility for Healthcare Marketing Services
On April 14, 2025, the United States Court of Appeals for the Seventh Circuit issued a decision in a case involving the federal Anti-Kickback Statute (“AKS”) and marketing services that the court framed as an appeal “test[ing] some of the outer boundaries of the [AKS]….”
In United States v. Sorensen, the Court of Appeals overturned the judgment of conviction against Mark Sorensen from the United States District Court for the Northern District of Illinois. In the district court case, Sorensen, the owner of SyMed Inc., a durable medical equipment (“DME”) distributor, was found guilty of one count of conspiracy and three counts of offering and paying kickbacks in return for the referral of Medicare beneficiaries to his DME company, which the United States claimed resulted in SyMed’s fraudulently billing $87 million and receiving $23.6 million in payments from Medicare. The district court judge denied Sorensen’s post-trial motions for acquittal and for a new trial, finding that the evidence regarding willfulness allowed the jury to find beyond a reasonable doubt that Sorensen “knew from the beginning of the agreement in 2015 that the percentage fee structure and purchase of the [doctors’] orders violated the law.” He was sentenced to 42 months in prison and ordered to forfeit $1.8 million.
The charges against Sorensen stemmed from SyMed’s arrangements with several advertising and marketing companies, a DME manufacturer, and a billing company. Under the business model for which Sorensen was convicted, the marketing companies published advertisements for orthopedic braces, to which interested patients could respond using an electronic form providing their names, addresses, and doctors’ contact information. This information was forwarded to call centers, where sales agents from the marketing companies would contact the patients to discuss ordering braces and generate prescription forms. After collecting additional information, and with consent from patients to proceed, the sales agents faxed the prefilled, but unsigned, prescription forms to patients’ physicians. The prescription forms contained SyMed’s name and corporate logo and listed the devices to be ordered. SyMed paid the DME manufacturer 79 percent of the payments it received from Medicare or another payor and kept 21 percent, from which it paid the billing company for its services. The DME manufacturer paid the marketing companies out of its 79 percent share based on the number of leads each company generated. The government argued that the payments to these marketing companies constituted illegal kickbacks under the AKS because they were intended to induce the referral of Medicare beneficiaries.
According to the Seventh Circuit, a critical fact leading to its reversal of Sorensen’s conviction was that the physicians who received these unsigned prescription forms got to decide whether to sign and return the forms to SyMed and the billing company for review—or to ignore them. According to the court’s decision, physicians declined 80 percent of the orders from one of the marketing companies and “regularly ignored forms sent by” the other marketing company.
The appellate court reversed the district court’s decision for insufficient evidence, noting that “[t]he other individuals and businesses Sorensen paid were advertisers and a manufacturer. They were neither physicians in a position to refer their patients nor other decisionmakers in positions to ‘leverage fluid, informal power and influence’ over healthcare decisions.” The Seventh Circuit characterized the marketing companies’ communications to physicians as “proposals for care, not as referrals,” noting that to the extent they could be considered “recommendations” to physicians, “they were frequently overruled.” The appellate court further stated, “[t]he key point is that, on this record, physicians always had ultimate control over their patients’ healthcare choices and applied independent judgment in exercising that control.” Consequently, the appellate court concluded that “Sorensen’s payments thus were not made for ‘referring’ patients within the meaning of the statute.” Interestingly, the court focused more on the percentage payments to the DME manufacturer and less on the “per lead” payments to the marketing companies. This was likely due to the low rate at which physicians converted the marketing companies’ prefilled forms into orders for orthopedic braces, which, to the court, demonstrated that the physicians retained and exercised control over whether an orthopedic brace would be ordered for their patients.
In considering whether the 80 percent declination rate experienced by the one marketing company was dispositive, the Seventh Circuit declined to adopt a bright-line rule. Instead, the appellate court noted in a footnote to its decision that “[o]ur focus is on whether a payee exerts informal but substantial influence so that a physician’s choice of care becomes a formality rather than an exercise of independent medical judgment.”
The Department of Health and Human Services (“HHS”) Office of Inspector General (“OIG”) has previously considered pay-per-lead arrangements with advertising companies in advisory opinion (“AO”) 08-19. In AO 08-19, the HHS-OIG allowed a pay-per-lead arrangement involving chiropractors under the limited circumstances presented in that AO: the advertising company was not a health care provider, the advertising did not target only Federal health care program beneficiaries, fees paid by the chiropractors would not depend on whether the lead became an actual patient, and the advertisers would not steer patients to a particular chiropractor. The OIG’s analysis in AO 08-19 also relied on the fact that the advertising company did not collect any health care information, such as payor information, medical history, or diagnosis information, about prospective patients using the advertiser’s platform.
The Seventh Circuit’s decision signals that payments to marketing firms for services like advertising and lead generation are less likely to be considered illegal kickbacks, provided that (1) the marketing firms do not exert direct influence over prescribing physicians, and (2) physicians retain genuine autonomy in their medical decisions; one factor bearing on that autonomy may be the rate at which marketing leads convert to physician orders. While the Seventh Circuit did not provide an explicit definition of the term “referrals” for purposes of the AKS, the court’s emphasis on the physicians’ independent decision-making suggests a potential limit on what actions by a third party can be considered to trigger the AKS’s prohibition against payments for referrals. This could create a clearer path forward for legitimate marketing activities while still prohibiting direct inducements to healthcare providers for specific referrals. We will monitor how other Circuits treat similar issues and report back on our findings.
Mexico Overhauls Federal Data Protection Law
Isabel Davara F. de Marcos of Davara Abogados S.C. reports that on March 20, 2025, the Mexican Congress approved a new Federal Law on the Protection of Personal Data Held by Private Parties (“LFPDPPP”), replacing the previous 2010 federal data protection law. The LFPDPPP, which became effective March 21, 2025, represents a substantial change in Mexico’s data protection framework, impacting the scope of application, legal bases for data processing, and individual rights. Relevant updates and considerations for companies operating in Mexico include:
expanded definition of personal data;
broader legal bases for processing;
stricter privacy notice requirements;
enhanced individual rights over automated processing; and
increased fines and a new judicial structure (i.e., the creation of specialized data protection courts to handle legal proceedings, including constitutional rights lawsuits).
The LFPDPPP dissolves the National Institute for Transparency, Access to Information and Personal Data Protection, transferring its authority to a newly created Secretariat of Anti-Corruption and Good Governance. This body will oversee compliance, conduct investigations, and impose sanctions.
Implementing regulations are expected to be issued within 90 days of the law’s effective date and to clarify the law’s scope and operational details.
LEAD LESSON: Court Gives Final Approval to $6.5MM PillPack TCPA Settlement And It’s All About Lead Buying
One of the biggest TCPA class certification rulings in recent years was the order certifying the suit against PillPack a few years back.
Another TCPA Certification Disaster: Business Practice in Danger After TCPA Case Certified Against PillPack in Suit Involving Oral Consent to Transfer Calls
The case involved the common practice of a lead generator obtaining consent to call a consumer and then pitching a third party’s product–in this case, PillPack’s. However, PillPack’s name was not on the consent form, just the lead generator’s.
The original certification order was just brutal and massive, but then Fluent came forward and assisted with a decertification effort. Still, enough risk remained that PillPack went forward with a classwide resolution and agreed to pay $6.5MM.
FLUENT TO THE RESCUE!: Popular Lead Supplier Bails Out Seller With Critical Consent Data Sufficient to Decertify Class
Well, in Williams v. PillPack, 2025 WL 1149710 (W.D. Wash. Apr. 18, 2025), the court approved the settlement and awarded $2.1MM to class counsel. Each class claimant will receive between $212.00 and $350.00.
$6.5MM is an expensive lesson for lead buyers, and one others in the space are still learning. Lead generators cannot just place calls and transfer to lead buyers unless the buyer is also on the form! It doesn’t matter that the call was placed by the lead generator and the generator was on the form–if the lead buyer isn’t, there is significant risk.