DELETE, DELETE, DELETE: FCC Looking For Public Comment on “Unnecessary Regulatory Burdens” And Boy Oh Boy Does The Czar Have Some Ideas

So Americans have watched, mostly in horror, as something called DOGE has dismantled critical government services, seemingly cutting jobs and, at times, entire functions without really even understanding what they were doing.
Deregulation is an incredibly sexy thing when done well. And pretty doggone ugly when done poorly.
The FCC, it would appear, is leaning into the sexy side of deregulation by actually seeking to educate itself as to which regulations are causing unnecessary regulatory burden and then getting rid of them. Hooray! And given the title of the notice, "delete, delete, delete," I suspect we are going to see some really bold (read: useful) changes to the tome of FCC regulation weighing down American enterprise.
Nowhere are the FCC’s regs more needlessly oppressive in my view than those implementing the TCPA.
The new revocation rule (my goodness, what a disaster) jumps immediately to mind.
But a ton of other ticky-tack and sometimes entirely unworkable regulations also exist out there under the TCPA.
While the bones of the DNC rule ought to stick around, basically all of the Commission's rules in 47 CFR 64.1200 should be reevaluated to promote desired contact between businesses and consumers. This is a great opportunity to restore the "balanced" approach to regulation that the 1992 FCC promised but that the 2008-2024 FCC stole away.
And let's not forget the most important regulations: those the FCC has handed to the carriers (without Congressional authority) to block, censor, throttle and label our speech without guardrails or redress. It flies DIRECTLY in the face of the FCC's core mission to "make available, so far as possible, . . . a rapid, efficient, Nation-wide, and world-wide wire and radio communication service with adequate facilities at reasonable charges." That has to end entirely.
I expect we will all have fun writing our wish list, like a kid writing to Santa Claus.
R.E.A.C.H. will be discussing this at its next meeting. But for now, send me any suggestions you have for TCPA regulations that ought to be rolled back, as we will certainly be submitting a comment.
DEADLINES: 
Comments Due: Friday, April 11, 2025
Reply Comments Due: Monday, April 28, 2025
Full notice here: DA-25-219A1.pdf
This is a really big deal folks and I expect we will see some really big changes. So don’t be shy in sending in suggestions.

AI in Business: The Risks You Can’t Ignore

Artificial Intelligence (AI) is revolutionizing business operations, offering advancements in efficiency, decision-making, and customer engagement. However, its rapid integration into business processes brings forth a spectrum of legal and financial risks that enterprises must navigate to ensure compliance and maintain trust.
The Broad Legal Definition of AI and Its Implications
In the United States, the legal framework defines AI far more expansively than the average person might expect, potentially encompassing a wide array of software applications. Under 15 U.S.C. § 9401(3), AI is any machine-based system that:

makes predictions, recommendations, or decisions,
uses human-defined objectives, and
influences real or virtual environments.

This broad definition implies that even commonplace tools like Excel macros could be subject to AI regulations. As Neil Peretz of Enumero Law notes, such an expansive definition means that businesses across various sectors must now re-appraise all of their software usage to ensure compliance with new AI laws.
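To make that breadth concrete, consider how little software it takes to satisfy all three statutory prongs. The sketch below is a hypothetical illustration (the scenario, names, and threshold are invented, not drawn from any regulation or guidance):

```python
# A deliberately trivial inventory script of the kind many businesses already
# run. Note how it arguably satisfies each prong of 15 U.S.C. § 9401(3):
#   1. it makes a recommendation,
#   2. it pursues a human-defined objective (avoid stockouts), and
#   3. it influences a real environment (what the purchasing team orders).

REORDER_THRESHOLD = 20  # human-defined objective: keep at least 20 units on hand

def recommend_reorder(inventory: dict[str, int]) -> list[str]:
    """Recommend which SKUs to reorder based on current stock levels."""
    return [sku for sku, on_hand in inventory.items() if on_hand < REORDER_THRESHOLD]

if __name__ == "__main__":
    stock = {"widget-a": 5, "widget-b": 120, "widget-c": 17}
    print(recommend_reorder(stock))  # ['widget-a', 'widget-c']
```

If logic this simple can be read to fall within the definition, the compliance perimeter plainly extends well beyond machine learning systems.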
Navigating the Evolving Regulatory Landscape
The regulatory environment for AI is rapidly evolving. The European Union’s AI Act, for instance, classifies AI systems into risk categories, imposing strict compliance requirements on high-risk applications. In the United States, various states are introducing AI laws, requiring companies to stay abreast of changing regulations.
According to Jonathan Friedland, a partner with Much Shelist, P.C., who represents boards of directors of PE-backed and other privately owned companies, developments in artificial intelligence are happening so quickly that many companies of even modest size are spending significant time developing compliance programs to ensure adherence to applicable laws.
One result, according to Friedland, is that "[a]s one might expect, the sheer number of certificate programs, online courses, and degrees now offered in AI is exploding. Everyone seems to be getting into the game," Friedland continues, "for example, the International Association of Privacy Professionals, a global organization previously focused on privacy and data protection, recently started offering its 'Artificial Intelligence Governance Professional' certification." The challenge for companies, according to Friedland, is "to invest appropriately without overdoing it."
Navigating Bias and Discrimination in AI Systems
Legal challenges centered on algorithmic bias and accountability typically claim that the historical data used to train AI reflects societal inequalities, which AI systems can then perpetuate.
Sean Griffin, of Longman & Van Grack, highlights cases where AI tools have led to allegations of discrimination, such as a lawsuit against Workday, where an applicant claimed the company’s AI system systematically rejected Black and older candidates. Similarly, Amazon discontinued an AI recruiting tool after discovering it favored male candidates, revealing the potential for AI to reinforce societal biases.
To mitigate these risks, businesses should implement regular audits of their AI systems to identify and address biases. This includes diversifying training data and establishing oversight mechanisms to ensure fairness in AI-driven decisions.
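One concrete shape such an audit can take is an adverse-impact check on system outcomes. The sketch below is a minimal, hypothetical example applying the "four-fifths" ratio familiar from EEOC adverse-impact analysis; the group labels, data, and function names are invented for illustration and are not a compliance standard:

```python
# Minimal adverse-impact audit: compare each group's selection rate to the
# highest group's rate and flag ratios below 0.8 (the "four-fifths" rule).
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the fraction of candidates selected per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate falls below threshold x the top rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical audit data: (group, did the AI select the candidate?)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(adverse_impact_flags(selection_rates(audit)))  # {'B': 0.5} -> investigate
```

A flagged ratio is a starting point for investigation, not proof of unlawful discrimination, but running such checks regularly creates the documentation trail regulators increasingly expect.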
Addressing Data Privacy Concerns
AI’s reliance on vast datasets, often containing personal and sensitive information, raises significant data privacy issues. AI-powered tools might be able to infer sensitive information, such as health risks from social media activity, potentially bypassing traditional privacy safeguards.
Because AI systems potentially have access to a wide range of data, compliance with data protection regulations like the GDPR and CCPA is crucial. Businesses must ensure that data used in AI systems is collected and processed lawfully, with explicit consent where necessary. Implementing robust data governance frameworks and anonymizing data can help mitigate privacy risks.
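As one small illustration of what that data-minimization work can look like in code, here is a hypothetical pseudonymization sketch (the key handling and field names are invented; real deployments need proper key management). Note that pseudonymized data generally remains personal data under the GDPR, so this reduces risk rather than eliminating it:

```python
# Replace direct identifiers with keyed tokens before records enter an AI
# pipeline. HMAC (keyed hashing) rather than a plain hash resists simple
# dictionary attacks against common values like email addresses.
import hashlib
import hmac

SECRET_KEY = b"example-only-store-real-keys-in-a-vault"  # placeholder, not for production

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, keyed token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "zip": "60601", "purchases": 14}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)  # the email is now a token; analytic fields are untouched
```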
Ensuring Transparency and Explainability
The complexity of AI models, particularly deep learning systems, often results in "black box" scenarios where decision-making processes are opaque. This lack of transparency can lead to challenges in accountability and trust. Businesses should be mindful of the risks associated with engaging third parties to develop or operate their AI solutions. In many areas of decision-making, explainability is required, and a black-box approach will not suffice. For example, when denying someone consumer credit, specific adverse action reasons must be provided to the applicant.
To address this, businesses should strive to develop AI models that are interpretable and can provide clear explanations for their decisions. This not only aids in regulatory compliance but also enhances stakeholder trust.
Managing Cybersecurity Risks
AI systems are both targets and tools in cybersecurity. Alex Sharpe points out that cybercriminals are leveraging AI to craft sophisticated phishing attacks and automate hacking attempts. Conversely, businesses can employ AI for threat detection and rapid incident response.
The legal risks associated with AI in financial services highlight the importance of managing cybersecurity risks. Implementing robust cybersecurity measures, such as encryption, access controls, and continuous monitoring, is essential to protect AI systems from threats. Regular security assessments and updates can further safeguard against vulnerabilities.
Considering Insurance as a Risk Mitigation Tool
Given the multifaceted risks associated with AI, businesses should evaluate the extent to which certain types of insurance can help them manage and reduce risks. Policies such as commercial general liability, cyber liability, and errors and omissions insurance can offer protection against various AI-related risks.
Businesses can benefit from auditing business-specific AI risks and considering insurance as a risk mitigation tool. Regularly reviewing and updating insurance coverage ensures that it aligns with the evolving risk landscape associated with AI deployment.
Conclusion
While AI offers transformative potential for businesses, it also introduces significant legal and financial risks. By proactively addressing issues related to bias, data privacy, transparency, cybersecurity, and regulatory compliance, enterprises can harness the benefits of AI while minimizing potential liabilities.
AI tends to tell the prompter what they want to hear, whether it’s true or not, underscoring the importance of governance, accountability, and oversight in its adoption. Organizations that establish clear policies and risk management strategies will be best positioned to navigate the AI-driven future successfully.

To learn more about this topic view Corporate Risk Management / Remembering HAL 9000: Thinking about the Risks of Artificial Intelligence to an Enterprise. The quoted remarks referenced in this article were made either during this webinar or shortly thereafter during post-webinar interviews with the panelists. Readers may also be interested to read other articles about risk management and technology.
©2025. DailyDAC™, LLC d/b/a Financial Poise™. This article is subject to the disclaimers found here.

Data Processing Evaluation and Risk Assessment Requirements Under California’s Proposed CCPA Regulations

As we have previously detailed here, the latest generation of regulations under the California Consumer Privacy Act (CCPA), drafted by the California Privacy Protection Agency (CPPA), has advanced beyond public comments and is closer to becoming final. These include regulations on automated decision-making technology (ADMT), data processing evaluation and risk assessment requirements, and cybersecurity audits.
Assessments and Evaluations Overview
The new ADMT notice, opt-out, and access and appeal obligations and rights go into immediate effect upon the regulation's effective date, which follows California Office of Administrative Law (OAL) approval and would either be subject to the quarterly regulatory implementation schedule in the Government Code or, as has been the case with prior CCPA regulations, arrive immediately on OAL sign-off. We will not know whether the CPPA will again seek a variance from the schedule until it submits the final rulemaking package.
Moving on to evaluations and risk assessments, the draft regulations do propose a phase-in, but only in part. Evaluations must be undertaken beginning on the regulation's effective date. Assessment requirements likewise apply to practices commencing on the effective date, but there is a 24-month period to complete assessments, file certifications and abridged versions, and make them available for inspection.
However, Colorado (which, like California, has very detailed requirements for conducting and documenting assessments), New Hampshire, Oregon, Texas, Montana, Nebraska, and New Jersey already require assessments; Delaware and Minnesota will this summer; and Indiana, Rhode Island, and Kentucky will by the new year. Query, then, whether the California phase-in is of much use. Of the 20 state consumer privacy laws, all but Utah and Iowa require assessments.
Further, without at least a cursory assessment, how can you determine if the notice, opt-out and access and appeal rights apply?
So, what is the difference between an evaluation and an assessment?
First, they are required by different provisions. Evaluations are required by Section 7201, and risk assessments by Section 7150.
Next, there is no phase-in for evaluations as there is for risk assessments.
Risk assessments are much more complex and prescribed, sit at the core of a risk-benefit judgment, and must be made available for inspection, with abridged summaries filed.
The content of the evaluation, which need not be published or made subject to an inspection demand outside of discovery, need only address whether the process and technology are effective (in other words, materially error free) and whether they discriminate against a protected class (in other words, free of material bias). As such, evaluations have similarities to assessments under the Colorado AI Act, effective next year but likely to be amended before then, and the recently passed Virginia HB 2094 AI bill, which may or may not get signed by Governor Youngkin.
Thus, an evaluation alone won't help you determine if the ADMT notice, opt-out, and access and appeal rights apply, nor will it satisfy the risk assessment requirements. While it is a separate analysis, it can be incorporated into assessments, assuming a company begins those immediately.
Also, evaluations are not required for the selling and processing of sensitive personal information (PI), as assessments are, and assessments are only required for identification processing to the extent AI is trained to do so, whereas any processing for identification is subject to an evaluation. Since cross-context behavioral advertising (CCBA) is part of behavioral advertising, which is part of extensive profiling, sharing needs to be addressed in both evaluations and assessments.
Finally, under Section 7201, a business must implement policies, procedures, and training to ensure that the physical or biological identification or profiling works as intended for the business’s proposed use and does not discriminate based on protected classes.
So, on to assessments: what activities need to be assessed?
First, selling or sharing. All 18 states that require assessments require them for this, though for the non-California states the trigger is processing for targeted advertising rather than "sharing," a trigger that is broader than sharing for CCBA; the California regulations catch up by way of the new concept of behavioral advertising.
Next, the processing of sensitive personal information. The same 18 states require assessments for the processing of sensitive data, with differing definitions. For instance, what is considered children's personal data differs considerably. Notably, the California draft regulation amendments would raise the age from 13 to 16, and in Florida it is under 18. There is also variation in the definition of health data.
Note that while the Nevada and Washington (and potentially New York) consumer health laws do not explicitly require assessments, they are practically needed, and Vermont's data broker law requires initial risk assessments and a process for evaluating and improving the effectiveness of safeguards.
Other Risk Assessment Triggers
Assessments are mandatory before using ADMT to make or assist in making a significant decision, which is "profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer." This is a General Data Protection Regulation (GDPR) and European Data Protection Board (EDPB) inspired provision. The other states that require assessments have a similar obligation, although the definitions may differ somewhat. In California, "decisions that produce legal or similarly significant effects concerning a consumer" means decisions that result in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, healthcare services, or access to essential goods or services. Only California gives guidance on what essential goods or services are, by means of a parenthetical: "(e.g., groceries, medicine, hygiene products, or fuel)." Critics are concerned that this limits algorithmic custom pricing, sometimes derogatorily referred to as surveillance pricing, or even AI analysis of consumer behavior to decide where to open or close stores, though aggregate or de-identified data should suffice for that. There is considerable guidance out of the EU, which can be looked to, though it is clearly not binding. The EU approach is quite broad.
Speaking of looking to the EU, beware that the California and Colorado regulations diverge considerably from what is required under GDPR assessments, and keep in mind the material differences between GDPR with its lawful basis and legitimate interest tests and the US laws with opt-out concepts.
Uniquely amongst the states, California proposes the concept of extensive profiling, which covers any:
1) work or educational profiling;
2) public profiling; or
3) behavioral advertising.
Note, however, that whilst behavioral advertising is said to include CCBA, it is broader and is defined as "the targeting of advertising to a consumer based on the consumer's personal information obtained from the consumer's activity—both across businesses, distinctly-branded websites, applications, or services, (i.e., CCBA) and within the business's own distinctly-branded websites, applications, or services." Significantly, this closes the gap between CCBA and the non-California regulation of targeted advertising by including entirely first-party behavioral advertising.
There is a carve-out for "nonpersonalized advertising" as defined by CCPA Section .140(t), which means advertising and marketing that is based solely on a consumer's personal information derived from the consumer's current interaction with the business, with the exception of the consumer's precise geolocation. Note, though, that here the exception is specifically limited to where the PI is not disclosed to third parties (i.e., not a processor or contractor). This has led some to argue that this guts the carve-out. However, if personal data were disclosed to a third party, that would likely be a sale, especially given the breadth of the concept of "other valuable consideration" in the eyes of the regulators. So the approach really is not inconsistent with the current treatment of contextual advertising.
PI to train ADMT or AI
Assessments are also proposed to be required for processing of PI to train ADMT or AI. This is another uniquely California concept, at least under state consumer privacy laws, and the California Chamber of Commerce and others, including some members of the legislature, have argued that, like other aspects of the proposed regulation's treatment of ADMT, it goes beyond the Agency's statutory authority. It is interesting to note that one of the topics included in the US House Committee on Energy and Commerce's request for information to inform federal privacy legislation this week is the role of privacy and consumer protection standards in AI regulation, and specifically the impact of state privacy law regulation of ADMT and profiling on US AI leadership. Another topic of focus is "the degree to which US privacy protections are fragmented at the state level and costs associated with fragmentation," which seems to be inviting a preemption scope debate. So by the time at least this part of the regulation requires action, it may possibly be curtailed by federal law. That said, evaluations and assessments are practically necessary to guide compliance and information governance, and to date repeated attempts at federal consumer privacy legislation have been unsuccessful.
Assessment Details
Most state laws do not have any specifics regarding how to conduct or document risk assessments, with the notable exception of Colorado. When it started assessment rulemaking, the Agency stated that it would try to create interoperability with Colorado and would also look to guidance from the EDPB. While both can be seen to have influenced California's proposed requirements, California adds to them.
Some of the content requirements are factual, such as purposes of processing and categories of PI. Others are more evaluative, such as the quality of the PI and the expected benefits and potential negative impacts of the processing, and how safeguards may mitigate those risks of harm. Nine examples are included in Section 7152(a)(5) to guide analysis.
Section 7152(a)(3) calls for analysis of specific operational elements for the processing.
Focus on Operational Elements
These operational elements are listed here[1] and can be seen as not only getting under the hood of the processing operations but also informing consumer expectations and the risk and benefit analysis that is the heart of an assessment. Note, in particular, the inquiries into retention and logic, the latter meaning "built-in" assumptions, limitations, and parameters that inform, power, or constrain the processing, particularly as concerns ADMT.
Analysis and Conclusions
The assessment must document not only those processing details and the risk/benefit and risk mitigation analysis, but also the conclusions and what was approved and/or disapproved.
The draft regulations call for participation by all relevant stakeholders, who must be specifically named, as must the person responsible for the analysis and conclusions.
Filing and Certification
California diverges from the other states with respect to reporting requirements. Annually, a responsible executive must certify to the CPPA that the business assessed all applicable processing activities, and an abridged assessment must be filed for each processing activity actually initiated. This will make it very apparent which businesses are not conducting assessments.
Further, the draft regulations limit what is required in the abridged assessments to largely factual statements:

The triggering processing activity;
The purposes;
The categories of personal information, including any sensitive categories; and
The safeguards undertaken.

Note that the risk / benefit analysis summary is not a part of the filing.
Inspection and Constitutional and Privilege Issues
Contrast that with the detailed risk / benefit analysis required by the full assessment, which, like all of the other states that require or will require assessments, is subject to inspection upon request.
This GDPR-inspired approach to showing how you made decisions calls for publication of value judgments, which, as I have opined in an article that is in your materials (see a synopsis here), is likely unconstitutional compelled speech. While the 9th Circuit in the X Corp and NetChoice cases struck down harm assessment and transparency requirements in the context of children’s online safety, the Court distinguished compelling disclosure of subjective opinions about a company’s products and activities from requiring disclosure of merely product facts. There is no 1st Amendment in GDPR-land, so we will have to wait and see if the value judgment elements of assessments can really be compelled for inspection.
Inspections also raise serious questions about attorney-client and work product privilege. Some states specifically provide that inspection of assessments is not a waiver of privilege, and/or that assessments will be maintained as confidential and/or are not subject to public records access requests. The draft regulations do not; however, the CCPA itself provides that the Act shall not operate to infringe on evidentiary privileges. In any event, consider labeling legal analysis and counsel as such and maintaining them apart from what is maintained for inspection.[2]

[1] Planned method for using personal information; disclosures to the consumer about processing; retention period for each category of personal information; categories of third parties with access to consumers' personal information; relationship with the consumer; technology to be used in the processing; number of consumers whose personal information will be processed; and the logic used.
[2] Note – Obtaining educational materials from Squire Patton Boggs Services Ireland, Limited, or our resellers, does not create an attorney-client relationship with any Squire Patton Boggs entity and should be used under the direction of legal counsel of your choice.

U.S. Consumer Privacy Laws Taking Effect in 2025 and Ensuing Compliance Complexities

The United States continues to operate without a comprehensive federal consumer privacy law as the American Privacy Rights Act remains subject to further amendments and uncertainty. Consequently, nineteen states have enacted comprehensive consumer privacy legislation, of which eight laws are becoming or have become effective in 2025, and some existing state privacy laws have been amended since their enactment. This fragmented approach creates compliance complexities and operational considerations for organizations operating at state and national levels.
Comprehensive consumer privacy laws taking effect in 2025

Effective date / State comprehensive consumer privacy laws

January 1, 2025
• Delaware Personal Data Privacy Act
• Iowa Consumer Data Protection Act
• Nebraska Data Privacy Act
• New Hampshire Senate Bill 255

July 1, 2025
• Tennessee Information Protection Act

July 31, 2025
• Minnesota Consumer Data Privacy Act

October 1, 2025
• Maryland Online Data Privacy Act

General requirements across each law
Each state law mandates distinct, jurisdiction-specific obligations on regulated organizations, which generally include the following:
Consumer rights: Each state law grants consumers certain privacy rights. While consumers' privacy rights vary from state to state, consumers may be granted the right, subject to certain exceptions, to: (1) access, correct, and delete data that an organization collects from or about them; (2) opt out of further data processing; (3) obtain their data in a portable format and direct the transfer of their personal information; and (4) restrict and limit the use and disclosure of sensitive personal information.
Organizational compliance obligations: Each state law also imposes certain obligations on regulated entities acting as a data controller (i.e., an entity that controls the purpose and means of processing personal data) and data processors (i.e., third parties that process data under the direction and control of data controllers, such as service providers or vendors). Regulated organizations acting as data controllers may be obligated to, among other things, respond to consumer privacy requests, implement reasonable technical and organizational security measures, provide consumers with a notice of privacy practices, and offer a mechanism through which consumers may opt out of data processing.
Key compliance considerations
In light of the complexities highlighted above, organizations should reflect on the following compliance considerations:

Whether your organization’s corporate policies are compliant with new privacy legislation.

With several new legislative updates, organizational corporate policies, such as privacy policies and privacy notices, may become dated and/or noncompliant with the most recent and looming updates. It is recommended practice for organizations to routinely evaluate their corporate policies to ensure compliance with any updated regulatory requirements and implement changes to the extent necessary.

Whether your organization is equipped to respond to consumer privacy requests.

Responding to consumer privacy requests may be problematic for organizations operating across multiple states due to variance among consumer privacy rights, related nuances and exceptions. Organizations should evaluate the various privacy rights and exceptions, if any, in states in which they operate and establish a playbook to implement an efficient and effective response.

Whether your organization is exempt from compliance.

Some privacy laws provide for entity-level and data-level exemptions, subject to certain nuances. An entity-level exemption generally exempts an organization based on the type of entity. For example, some states include an entity-level exemption for not-for-profit corporations or entities regulated by certain federal laws, such as the Health Insurance Portability and Accountability Act (HIPAA). A data-level exemption exempts certain data that is subject to regulation under certain federal laws, such as HIPAA and the Gramm-Leach-Bliley Act.
In addition, some states have an operational threshold that an organization must meet or exceed to be subject to the relevant act. For example, in Delaware, an organization must (1) do business in the state or produce products or services that are targeted to Delaware residents, and (2) meet one of the following: (i) control or process personal data of 35,000 or more consumers, excluding personal data controlled or processed solely for the purpose of completing a payment transaction; or (ii) (a) control or process personal data of 10,000 or more consumers and (b) derive more than 20% of its gross revenue from the sale of personal data.
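As a rough illustration of how that threshold test composes, here is a minimal, hypothetical sketch (the function and field names are invented; actual applicability turns on the statutory definitions and exemptions):

```python
# Rough check of the Delaware Personal Data Privacy Act thresholds described
# above. Illustrative only; not legal advice.

def delaware_act_applies(
    does_business_or_targets_de: bool,
    consumers_processed: int,            # excluding payment-transaction-only data
    revenue_pct_from_data_sales: float,
) -> bool:
    if not does_business_or_targets_de:
        return False
    return consumers_processed >= 35_000 or (
        consumers_processed >= 10_000 and revenue_pct_from_data_sales > 20.0
    )

print(delaware_act_applies(True, 12_000, 25.0))  # True: 10k+ consumers and >20% revenue
print(delaware_act_applies(True, 12_000, 5.0))   # False: neither prong satisfied
```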
Organizations should evaluate whether they may be exempt from certain state laws and, if so exempt, how that might impact their corporate policies and go-to-market strategies.

Speed Bump: CPPA Pulls Over Honda for Privacy Practices

It’s no surprise that the California Privacy Protection Agency (“CPPA”) has been active. They are making a strong case for being the most active state agency in the privacy arena.
Well, they just strengthened that claim with a Stipulated Final Order against American Honda Motor Company, Inc. ("Honda") from last week. The CPPA claims that Honda's practices violated the California Consumer Privacy Act, and the claims are pretty surprising.
Not because they are egregious. But, mostly because it’s demonstrative of the fact that the CPPA is not giving points for “effort”.

Honda required too much information from consumers to opt-out of sale/sharing of consumer data

The CCPA grants consumers certain rights. Included in these rights are the right to opt out of the sale or sharing of personal information, the right to limit the use of sensitive personal information, and the right to delete personal information.
Honda had created a Privacy Center page to allow consumers to manage how their personal data was handled. Because Honda needed to be able to verify the information from the consumer, certain questions were asked in an attempt to identify the consumer and manage their personal data.
However, the CPPA felt that Honda was asking too many questions. From the order: “although Honda generally needs only two data points from the Consumer to identify the Consumer within its database, Honda’s verification process for Verifiable Consumer Requests requires the matching of more than two data points.” (emphasis in original)
According to the CPPA, the design of Honda’s Privacy Center “impairs or interferes with the Consumer’s ability to exercise those rights. The CCPA prohibits businesses from designing methods for submitting CCPA Requests that substantially subverts or impairs the Consumer’s autonomy, decisionmaking, or choice.”

Honda required too much information to allow third-party agents to opt-out on behalf of consumers

Consumers can allow third-party agents to exercise their privacy rights under the CCPA. And businesses can require the agents to provide a written authorization from the consumer allowing the third-party agents to do so.
But, businesses cannot "require the Consumer to directly confirm that they have provided the Authorized Agent permission to submit the request. Businesses may contact the Consumer directly in that manner only for Verifiable Consumer Requests."
Honda, apparently, was treating all third-party requests the same and not limiting its outreach to the consumer to Verifiable Consumer Requests.

Honda’s cookie management tool was not offering symmetrical choices

And now we enter the nit-picking section of the program.
(This is John’s opinion, not necessarily the opinion of TCPAWorld, but hey, I’m writing this, so I get to interject my opinion.)
Honda uses a third-party cookie management tool. It's one of, if not THE, industry-leading cookie management tools.
The cookie management menu pops up, and the consumer has two clicks to turn off the advertising cookies: (1) click the toggle button, and (2) click the "Confirm my choices" button.
Seems reasonable.
However, if the consumer goes back to the cookie management tool at a later point, there is one button – an “Allow All” button. This button allows all the cookies to be turned back on in a single click.
THE HORROR.
Excuse me, while I clutch my pearls.
The CPPA said the single opt-in was not symmetrical in choice. "Symmetry in choice means that the path for a Consumer to exercise a more privacy-protective option cannot be longer or more difficult or time-consuming than the path to exercise a less privacy-protective option because that would impair or interfere with the Consumer's ability to make a choice."
I get it. That’s the law. But, two clicks versus one click is absurd.
Especially when the consumer can still opt out of individual categories of cookies in two clicks; it's just that the opt-in takes one click. (However, the user has to go back into the cookie management tool somehow, so arguably that's an additional click.)

Honda couldn’t provide the CPPA with their contracts with advertising vendors

The CCPA requires companies that collect and disclose personal information to vendors to have specific requirements in their contracts around personal information.
However, per the Stipulated Order, Honda could not produce the contracts.
OK, so that one’s clearly on Honda.
The big takeaways from this order:

The CPPA is not joking. They are NOT going to give you points for trying to comply.
The CPPA is very aggressive. The administrative fine totals $632,500. Of that amount, $382,500 is attributable to Honda's conduct toward just 153 consumers.

Read that again.
One Hundred Fifty Three Consumers accounted for a fine of $382,500.

Reliance on vendors is not going to save you from CCPA violations.
Basic contract management = Keep copies of contracts and produce them.

It’s a Wrap—The Latest from the Ninth Circuit on “Sign-In Wrap” Agreements

On February 27, 2025, in Chabolla v. ClassPass Inc., the U.S. Court of Appeals for the Ninth Circuit, in a split 2-1 decision, held that website users were not bound by the terms of a “sign-in wrap” agreement.
ClassPass sells subscription packages that grant subscribers access to an assortment of gyms, studios and fitness and wellness classes. The website requires visitors to navigate through several webpages to complete the purchase of a subscription. After the landing page, the first screen (“Screen 1”) states: “By clicking ‘Sign up with Facebook’ or ‘Continue,’ I agree to the Terms of Use and Privacy Policy.” The next screen (“Screen 2”) states: “By signing up you agree to our Terms of Use and Privacy Policy.” The final checkout page (“Screen 3”) states: “I agree to the Terms of Use and Privacy Policy.” On each screen, the words “Terms of Use” and “Privacy Policy” appeared as blue hyperlinks that took the user to those documents.
The court described four types of Internet contracts based on distinct “assent” mechanisms:

Browsewrap – users accept a website’s terms merely by browsing the site, although those terms are not always immediately apparent on the screen (courts consistently decline to enforce).
Clickwrap – the website presents its terms in a “pop-up screen” and users accept them by clicking or checking a box expressly affirming the same (courts routinely enforce).
Scrollwrap – users must scroll through the terms before the website allows them to click manifesting acceptance (courts usually enforce).
Sign-in wrap – the website provides a link to the terms and states that some action will bind users but does not require users to actually review those terms (courts often enforce depending on certain factors).

The court analyzed ClassPass’ consent mechanism as a sign-in wrap because its website provided a link to the company’s online terms but did not require users to read them before purchasing a subscription. Accordingly, the court held that user assent required a showing that: (1) the website provides reasonably conspicuous notice of the terms to which users will be bound; and (2) users take some action, such as clicking a button or checking a box, that unambiguously manifests their assent to those terms.
The majority found Screen 1 was not reasonably conspicuous because of the notice’s “distance from relevant action items” and its “placement outside of the user’s natural flow,” and because the font is “timid in both size and color,” “deemphasized by the overall design of the webpage,” and not “prominently displayed.”
The majority did not reach a firm conclusion on whether the notice on Screen 2 and Screen 3 is reasonably conspicuous. On one hand, Screen 2 and Screen 3 placed the notice more centrally, the notice interrupted the natural flow of the action items on Screen 2 (i.e., it was not buried on the bottom of the webpage or placed outside the action box but rather was located directly on top of or below each action button), and users had to move past the notice to continue on Screen 3. On the other hand, the notice appeared as the smallest and grayest text on the screens and the transition between screens was somewhat muddled by language regarding gift cards, which may not be relevant to a user’s transaction; thus, a reasonable user could assume the notice pertained to gift cards and hastily skim past it. 

Even if the notice on Screen 2 and Screen 3 was reasonably conspicuous, the majority deemed the notice language on both screens ambiguous. Screen 2 explained that “[b]y signing up you agree to our Terms of Use and Privacy Policy,” but there was no “sign up” button—rather, the only button on Screen 2 read “Continue.” Screen 3 read, “I agree to the Terms of Use and Privacy Policy,” and the action button that follows is labeled “Redeem now”; it does not specify the user action that would constitute assent to the terms. In other words, the notice needs to clearly articulate an action by the user that will bind the user to the terms, and there should be no ambiguity that the user has taken such action. For example, clicking a “Place Order” button unambiguously manifests assent if the user is notified that “by making a purchase, you confirm that you agree to our Terms of Use.” 
Accordingly, the court held that Screen 1 did not provide reasonably conspicuous notice and, even if Screen 2 and Screen 3 did, progress through those screens did not give rise to an unambiguous manifestation of assent.
The dissent noted that the majority opinion “sows great uncertainty” in the area of internet contracts because “minor differences between websites will yield opposite results.” Similarly, the dissent argued that the majority opinion will “destabilize law and business” because companies cannot predict how courts are going to react from one case to another. Likewise, the dissent expressed concern that the majority opinion will drive websites to the only safe harbors available to them—clickwrap or scrollwrap agreements.
While ClassPass involved user assent to an arbitration provision in the company’s online terms, the issue of user assent runs far deeper, extending to issues like consent to privacy and cookie policies—a formidable defense to claims involving alleged tracking technologies and wiretapping theories. Notwithstanding the majority’s opinion, many businesses’ sign-in wrap agreements will differ from the one at issue in the lawsuit and align more closely with the types of online agreements that courts have enforced. Nonetheless, as the dissent noted, use of a sign-in wrap agreement carries some degree of uncertainty. Scrollwrap and clickwrap agreements continue to afford businesses the most certainty.

Common Privacy Pitfalls in M&A Deals

Many expect that deal activity will increase in 2025. As we approach the end of the first quarter, it is helpful to keep in mind privacy and data security issues that can potentially derail a deal. We discussed this in a webinar last week, where we highlighted issues from the buyer’s perspective. We recap the highlights here:

Take a Smart Start Approach: Often, when privacy "specialists" are brought into deals, it is without a clear understanding of the goal of the deal and post-acquisition plans. Keeping these in mind (along with having a clear understanding of the structure of the deal) can be crucial to conducting appropriate, risk-based diligence. Questions to ask include the extent to which the target will be integrated into the buyer, and whether privacy assets (e.g., mailing lists) are important to the deal.
Conducting Diligence: Diligence can happen on a piecemeal basis. There are facts about the target that can be discovered even before the data room opens. What information has it shared about operations and products on its website? Has there been significant press? Any publicly announced data breaches? What about privacy or data security related litigation? When submitting diligence question lists, keep the scope of the deal in mind. What are the priority items that can be gathered, and how can that be done without overwhelming the target?
Pre-Closing Considerations: There are some obvious things that will need to happen before closing, like reviewing and finalizing deal documents and schedules. There may also be privacy-specific issues, such as addressing potential impediments to personal information transfers.
Post-Closing Integration: In many deals, the privacy and cybersecurity team is not involved in the integration process, or a different team handles these steps. Issues that might arise, and can be anticipated during the deal process, include understanding the data and processes that will be needed post-integration, and the personnel who can help (whether at the target or buyer).

Putting It Into Practice: Keeping track of the intent of the deal and the key risks can help the deal flow more smoothly. This checklist can help with your next transaction.

Enforcement Update: Regulatory Attention Focused on Deletion Requests

Data protection authorities worldwide are intensifying their focus on individuals’ rights to have their personal data deleted. This heightened regulatory attention underscores the importance of organizations implementing robust compliance mechanisms to handle deletion requests effectively. For example:

In October 2023, California enacted pioneering legislation to strengthen consumer data protection: the California Delete Act (Senate Bill 362) establishes a centralized mechanism for consumers to request the deletion of their personal information held by data brokers. Under this law, data brokers are mandated to register annually with the California Privacy Protection Agency (CPPA) starting January 2024 and to process deletion requests submitted through the centralized platform beginning August 2026. This legislation aims to simplify the process for consumers to manage their personal data and imposes stringent requirements on data brokers to ensure compliance. Since November 2024, the CPPA has fined seven data brokers for failing to register and pay the annual fee required under the California Delete Act.
In March 2025, Oregon released an enforcement report highlighting that “the number one right consumers have requested and been denied, is the right to delete their data.”
In March 2025, the European Data Protection Board (EDPB) initiated its Coordinated Enforcement Framework (CEF) action, centering on the right to erasure, commonly known as the “right to be forgotten,” as stipulated in Article 17 of the General Data Protection Regulation (GDPR). This initiative involves 32 Data Protection Authorities (DPAs) across Europe collaborating to assess and enhance compliance with erasure requests. Participating DPAs will engage with various data controllers, either by launching formal investigations or conducting fact-finding exercises, to scrutinize how these entities manage and respond to erasure requests, including the application of relevant conditions and exceptions. The findings from these national actions will be collectively analyzed to facilitate targeted follow-ups at both the national and EU level.

These developments reflect a broader global trend toward empowering individuals with greater control over their personal data and ensuring that organizations uphold these rights. For businesses, this signifies a need to evaluate and, if necessary, enhance their data management practices to comply with evolving regulatory standards concerning data deletion requests.
Given the intensified regulatory focus on data deletion rights, organizations worldwide should consider proactively assessing and strengthening their data protection practices. By implementing robust mechanisms to handle deletion requests effectively, businesses may not only ensure compliance with current regulations but also build trust with consumers who are increasingly concerned about their privacy rights.

KEEPING UP: Kardashian Brand Sued in TCPA Call Timing Class Action

When Kim Kardashian said, “Get up and work”, the TCPA plaintiff’s bar took that seriously. And another Kardashian sibling may be facing the consequences.
We at TCPAWorld were the first to report on the growing trend of lawsuits filed under the TCPA’s Call Timing provisions, which prohibit the initiation of telephone solicitations to residential telephone subscribers before 8 am and after 9 pm in the subscriber’s time zone. Call it a self-fulfilling prophecy or just intuition honed by decades of combined experience, but these lawsuits show no signs of slowing down.
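As context for how senders try to avoid these claims, a compliance gate can be as simple as checking the recipient's local clock before a message goes out. Here is a minimal, hypothetical sketch (the time-zone lookup and names are assumptions, not any defendant's actual system):

```python
# TCPA quiet-hours gate: block telephone solicitations before 8:00 a.m. or
# after 9:00 p.m. in the recipient's local time zone. Mapping a phone number
# to a time zone is assumed to happen elsewhere.
from datetime import datetime
from zoneinfo import ZoneInfo

def within_calling_hours(recipient_tz: str, now_utc: datetime | None = None) -> bool:
    """Return True if a solicitation may be initiated now (8 a.m. to 9 p.m. local)."""
    now_utc = now_utc or datetime.now(ZoneInfo("UTC"))
    local = now_utc.astimezone(ZoneInfo(recipient_tz))
    return 8 <= local.hour < 21  # conservatively treats 9:00 p.m. itself as off-limits

print(within_calling_hours("America/Chicago"))  # depends on the current time in Chicago
```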
In Melissa Gillum v. Good American, LLC (C.D. Cal. Mar. 11, 2025), Plaintiff alleges that Khloe Kardashian's clothing brand Good American sent text messages to her residential telephone number at 07:15 and 06:30 (military time).

Of course, Plaintiff alleges she never authorized Good American to send her telephone solicitations before 8 am or after 9 pm.
Plaintiff also seeks to represent the following class:
All persons in the United States who from four years prior to the filing of this action through the date of class certification (1) Defendant, or anyone on Defendant’s behalf, (2) placed more than one marketing text message within any 12-month period; (3) where such marketing text messages were initiated before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location).
The consensus here on TCPAWorld is that calls or text messages made with prior express consent are not “telephone solicitations” and likely not subject to Call Time restrictions. We’ll have to see how these play out but stay tuned for the latest updates!

NO SMOKING UNTIL 8 AM: R.J. Reynolds Burned By TCPA Time-Of-Day Class Action Lawsuit

Hi TCPAWorld! R. J. Reynolds Tobacco Company—the powerhouse behind Camel, Newport, Doral, Eclipse, Kent, and Pall Mall—is back in court. This time, though, it isn't about the usual allegations against Big Tobacco. Instead, the plaintiff accuses the company of violating the TCPA's time-of-day restrictions and causing "intrusion into the peace and quiet in a realm that is private and personal to Plaintiff and the Class members." Vallejo v. R. J. Reynolds Tobacco Company, No. 8:25-cv-00466 (C.D. Cal.).
Under the TCPA, telemarketing calls or texts can’t be made before 8 a.m. or after 9 p.m. (local time for the recipient). We’ve been seeing a lot of these time-of-day cases pop up lately:

 IN HOT WATER: Louisiana Crawfish Company Sued Over Early-Morning Text Messages – TCPAWorld
IT WAS A MATTER OF TIME: Another Company Allegedly Violated TCPA Time Restrictions. – TCPAWorld
TIME OUT!: NFL Team Tampa Bay Buccaneers Hit With Latest in A Series of Time Restriction TCPA Class Action – TCPAWorld
SOUR MORNING?: For Love and Lemons Faces TCPA Lawsuit Over Timing Violations – TCPAWorld
TOO LATE: 7-Eleven Sued in TCPA Class Action for Allegedly Failing to Comply With Call Time Limitations–And This Is Crazy If its True – TCPAWorld

Here, in Vallejo v. R. J. Reynolds Tobacco Company, however, the plaintiff claims he received early-morning marketing texts around 7:15 a.m. and 7:36 a.m., local time. The complaint further alleges that he “never signed any type of authorization permitting or allowing Defendant to send them telephone solicitations before 8 am or after 9 pm,” though it doesn’t actually say he withheld consent entirely for these messages.
The plaintiff seeks to represent the following class:
All persons in the United States who from four years prior to the filing of this action through the date of class certification (1) Defendant, or anyone on Defendant’s behalf, (2) placed more than one marketing text message within any 12-month period; (3) where such marketing text messages were initiated before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location).
As I’ve said before, from my reading of the TCPA, these time-of-day restrictions apply specifically to “telephone solicitations,” meaning calls or texts made with the recipient’s prior consent or within an existing business relationship might be exempt. Since the plaintiff doesn’t deny consenting to these texts in the first place, we’ll have to keep an eye on this lawsuit to see if the Central District of California agrees with that interpretation.

COMPLAINTS ABOUT COMPLAINTS: Defendant Granted Leniency from Burdensome Discovery Production

Discovery disputes are a big part of TCPA cases and, practically speaking, it can be exceptionally difficult for defendants to produce all documents requested by TCPA plaintiffs… for several reasons. Requests for production and interrogatories tend to be worded as broadly as possible (generally to seek class information). Then, even with discovery requests that are agreed upon by the parties, the practical difficulty of obtaining and producing the requested material can range from difficult to nearly impossible.
In Nock v. PalmCo Administration, LLC, No. 1:24-CV-00662-JMC, 2025 WL 750467 (D. Md. Mar. 10, 2025), the District of Maryland showed leniency to the defendant, although it still ordered the defendant to at least attempt to produce nearly all of the material that the plaintiff had requested.
For some context, the plaintiff alleged that the defendant had violated 47 U.S.C. § 227(c), the Do Not Call ("DNC") provision of the TCPA, and Md. Com. Law § 14-320, Maryland's analogous DNC law. Id. at *1. An informal discovery dispute was brought before the court based on the defendant's purportedly incomplete responses to the plaintiff's discovery requests. Id.
Firstly, the court found that an interrogatory seeking “all complaints ‘regarding [the defendant’s] marketing practices’” unreasonably burdened the defendant—since complaints relating to all marketing practices would clearly turn up material unrelated to the case’s subject matter. Id. at *2. However, the court still ordered production of all complaints related to the case’s subject matter. Id. at *3.
Secondly, the plaintiff sought production of documents that had previously been ordered by the court. Id. However, one of the categories of documents was outside the defendant’s possession—data from one of its vendors. Id. As the defendant demonstrated “reasonable efforts to obtain the requested information,” the court allowed the defendant to send one more email request to furnish missing data from the third-party vendor to fulfill the defendant’s obligations under the previous court order. Id.
Although this specific request did not fall under retention requirements, it is worth remembering that the Telemarketing Sales Rule was recently expanded as to what records must be kept for all telemarketing calls.
Thirdly, the plaintiff sought records of all communications between the defendant and a third-party vendor. Id. Similarly, the court was lenient with the defendant, even though the defendant had already missed a court ordered production deadline on those communications. Id. The defendant was still ordered to produce the communications within thirty days, but the court was understanding of the practical difficulties in producing all said communications. Id. at *3-4.
That is all for this order. However, the TCPA keeps seeing new rules and requirements. Most urgently, we are now less than a month away from new revocation rules coming into effect. Be ready for those changes as they are set to be implemented on April 11, 2025!

Even With FCC 1:1 Gone, the CMS 1:1 Rule is Still Standing

Obviously, a lot going on in the lead gen space over the last six weeks. The biggest change of all is the FCC’s one-to-one rule being vacated. The pivot the industry had to make immediately after that ruling affected so many businesses.
But, one thing that did not change was CMS’s requirement for one-to-one consent to share personal beneficiary data between TPMOs. This is true even though CMS’s guidance throughout the summary of the rule was all based on the FCC’s one-to-one rule.
As a reminder:

CMS requires individualized consent: Beneficiary consent for data sharing must be obtained on a specific, one-to-one basis, with clear and easily understood disclosures.
The key to obtaining consent is transparency: CMS mandates that beneficiaries understand

Where their personal data is being shared,
The specific purpose of the contact they are consenting to, and
The identity of the entity that will be contacting them.

CMS Consent is Broader than the FCC's proposed 1:1 consent: The CMS consent rule has a wider scope than the proposed 1:1 consent under the TCPA because it also applies to manually dialed calls.
Opt-In Consent is Mandatory: CMS requires an opt-in consent model, meaning the default should be that data is not shared, and the beneficiary must affirmatively choose to allow sharing.
Separate Legal Entities Require Explicit Consent: TPMOs cannot share beneficiary data with a TPMO that is a different legal entity without the beneficiary’s prior express written consent. This applies even to affiliated agents within the same marketing organization.

While the industry breathed a collective sigh of relief when the TCPA's 1:1 rule was vacated, those TPMOs under CMS's purview must remain diligent. And new CMS rules should be announced within the next few weeks, so stay tuned.