Mass. Chapter 93A Clarifications: Understanding Demand Letters and Contract Breaches in Dworman v. PHH Mortgage

In Dworman v. PHH Mortg. Servs., the District of Massachusetts recently issued a decision that deals with various aspects of Chapter 93A jurisprudence. Some of the court’s statements about Chapter 93A, however, may benefit from clarification.
As to the dispute at issue, the plaintiff (a mortgagor) alleged that the defendants (mortgage servicers) breached a contract to forgive mortgage debt, and that defendants’ alleged failures were unfair or deceptive under Chapter 93A, Section 9. The defendants countered with allegations that the plaintiff breached their contract, and the court granted their motion for summary judgment against the plaintiff.
When addressing Chapter 93A, the court discussed the Chapter 93A “Legal Landscape” in its decision. In particular, the court concluded that, although sending a demand letter is a prerequisite to a Section 9 suit, the “failure to respond or an inadequate response to a demand letter is not itself a violation of Chapter 93A.” Two clarifications are in order. First, a 30-day demand letter is required in most instances; however, a claimant does not need to send a demand letter to trigger Chapter 93A jurisdiction if the claim “is asserted by way of counterclaim or cross-claim, or if the prospective respondent does not maintain a place of business or does not keep assets within the commonwealth,” as set forth in Section 9(3). Second, as to failing to respond to a demand letter or providing an inadequate response, a bad faith refusal to grant relief in response to a demand letter “with knowledge or reason to know that the act or practice complained of violated said section two” may expose a defendant to double or treble damages, also as set forth in Section 9(3). Responses to demand letters may not only limit multiple damages, but may also cut off a plaintiff’s attorneys’ fees and costs.
Also, when explaining that a mere breach of contract without more does not violate Chapter 93A, the court stated that a defendant’s action must “attain a level of rascality that would raise an eyebrow of someone inured to the rough and tumble of the world of commerce.” However, the Massachusetts Supreme Judicial Court (SJC) abandoned the rascality language as uninstructive in Massachusetts Employers Ins. Exch. v. Propac-Mass, Inc., 420 Mass. 39 (1995). Instead, according to the SJC, courts should focus on the nature of the challenged conduct and on the purpose and effect of that conduct as the crucial factors in making a Chapter 93A fairness determination. That SJC standard has been used by the First Circuit Court of Appeals, along with an additional evaluation of “the equities between the parties,” the “plaintiff’s conduct,” and “[w]hat a defendant knew or [reasonably] should have known.” (Schuster v. Wynn MA, LLC, 118 F.4th 30 (1st Cir. 2024)). As to when a breach of contract would violate Chapter 93A, there must be a “plus factor” accompanying the breach. For example, conduct in disregard of known contractual arrangements and intended to secure benefits for the breaching party may violate Chapter 93A. In other words, conduct used as leverage to destroy another party’s rights is viewed as commercial extortion and may violate Section 2. A good faith contractual dispute regarding whether money is owed, or performance of some kind is due, may not.

Stall on Automated Decision-Making Technology Rules from the California Privacy Protection Agency

This week, the California Privacy Protection Agency (CPPA) board held its April meeting to discuss the latest set of proposed regulations, including automated decision-making technology (ADMT) regulations. Instead of finalizing these rules, the board continued its debate and considered further amendments to the draft regulations. Notably, some members proposed changing the definition of ADMT and removing behavioral advertising from ADMT and risk assessment requirements. The board also directed CPPA staff to remove several categories from the scope of the provisions covering significant decisions. The board conditionally approved these changes, but the final (we think) vote will occur at the next meeting.
These continued discussions likely mean that the final rules related to ADMT, risk assessments, and cybersecurity audits are still a long way away. The CPPA raised six topics on which it wants additional feedback before presenting the final set of amendments next month:
1. The definition of “ADMT;”
2. The definition of “significant decision;”
3. The “behavioral advertising” threshold;
4. The “work or educational profiling” and “public profiling” thresholds;
5. The “training” threshold; and
6. Risk assessment submissions to the CPPA.
If the changes are substantial enough, the CPPA would open up another 45-day comment period. During the last comment period, CPPA staff reported that over 1,600 pages of comments were received, and hours of testimony were given during the public hearing. The board has until November 2025 to submit the final regulatory package to the California Office of Administrative Law.
Board member Alastair Mactaggart argues that the draft regulations go beyond the scope of the CPPA’s authority to regulate privacy by also attempting to regulate artificial intelligence. He said, “We are now on notice that if we pass these regulations, we will be sued repeatedly and by many parties.” We will continue to monitor these discussions and proposed regulations.

Yahoo ConnectID Faces Class Action Over Email Address Tracking as Alleged Wiretap Violation

Yahoo’s ConnectID is a cookieless identity solution that allows advertisers and publishers to personalize, measure, and deliver ad campaigns by leveraging first-party data and 1-to-1 consumer relationships. ConnectID uses consumer email addresses (instead of third-party tracking cookies) to produce and monetize consumer data. A lawsuit filed in the U.S. District Court for the Southern District of New York alleges that this use and monetization is occurring without consumer consent. The complaint alleges that ConnectID allows user-level tracking across websites by utilizing the individual’s email address—i.e., ConnectID tracks users via their email addresses without consent. The complaint further alleges that this tracking allows Yahoo to create consumer profiles with its “existing analytics, advertising, and AI products” and to collect user information even if a user isn’t a subscriber to a Yahoo product.
The complaint states, “Yahoo openly tells publishers that they need not concern themselves with obtaining user consent because it already provides ‘multiple mechanisms’ for users to manage their privacy choices. This is misleading at best.” Further, the complaint alleges that Yahoo’s Privacy Policy “makes no mention of sharing directly identifiable email addresses and, in fact, represents that email addresses will not be shared.”
The named plaintiff seeks to certify a nationwide class of all individuals with a ConnectID whose web communications have been intercepted by Yahoo. The plaintiff asserts this class will be “well over a million individuals.” The complaint seeks relief under New York’s unfair and deceptive business practices law, the California Invasion of Privacy Act, and California’s Comprehensive Computer Data Access and Fraud Act.
These “wiretap” violation lawsuits are popping up all across the country. The lawsuits allege violations of state and federal wiretap statutes, often focusing on website technologies like session replay, chatbots, and pixel tracking, arguing that these trackers (and here, the tracking of email addresses) allow for unauthorized interception of communications. For more information on these predatory lawsuits, check out our recent blog post, here.
The lawsuit seeks statutory, actual, compensatory, punitive, nominal, and other damages, as well as restitution, disgorgement, injunctive relief, and attorneys’ fees. Now is the time to assess your website and the tracking technologies it uses to avoid these types of claims.

WhatsApp Patches Vulnerability That Facilitates Remote Code Execution

WhatsApp users should update the application for vulnerability CVE-2025-30401, which Meta recently patched in WhatsApp for Windows version 2.2450.6.
Meta cautions Windows users to update to the latest version due to the vulnerability, which it is calling a “spoofing” issue that could allow attackers to execute malicious code on devices. Attackers exploit the vulnerability by sending maliciously crafted files with altered file types that “cause the recipient to inadvertently execute arbitrary code rather than view the attachment when manually opening the attachment inside WhatsApp.”
If you haven’t updated your WhatsApp application, now’s the time.

The FTC BOTS Act – Leveling the Ticketing Field

On March 31, 2025, President Trump signed an executive order (EO 14254) titled “Combating Unfair Practices in the Live Entertainment Market.” EO 14254 directs the Federal Trade Commission (FTC) to, among other things, rigorously enforce the Better Online Ticket Sales Act (BOTS Act or the Act) and address unfair ticket scalping practices.
Overview of the BOTS Act
Enacted in 2016, the BOTS Act aims to prevent ticket brokers from buying large numbers of event tickets and reselling them at inflated prices. The Act applies to tickets for public concerts, theater performances, sporting events, and similar activities at venues that seat over 200 and prohibits an entity from circumventing access controls or security measures used by online ticket sellers (such as Ticketmaster) to enforce ticket-purchasing limits. It also prevents the resale of tickets obtained by knowingly circumventing access controls. Violations of the Act are considered violations of Section 5 of the FTC Act, which prohibits unfair or deceptive practices. Violators are subject to fines of up to $53,088 per violation.
Under the Act, the circumvention of access controls or security measures is construed broadly and applies to automated ticket bots and certain human actions. A ticket bot is a software program designed to rapidly purchase large quantities of tickets the moment they become available. Scalper bots specifically automate tasks like filling out forms, refreshing web pages, and completing the checkout process. Since scalper bots can complete the checkout process much faster than human users, they can buy thousands of limited-edition tickets as soon as they go on sale. Scalped tickets are then resold for higher profit because they are no longer available from the original ticket seller – this practice is known as ticket scalping.
Sellers often set limits on the number of tickets each buyer can purchase. Bots can bypass this limit by rapidly purchasing tickets across multiple accounts or using fake online profiles and IP addresses. Bots may bypass CAPTCHA and other security measures or manage multiple browser sessions at once to purchase large volumes of tickets. These tactics may run afoul of the BOTS Act if the seller has access controls or security measures to prevent such activity. The BOTS Act is not limited to bot activity, though. A person who buys tickets by creating multiple accounts or using proxies and VPNs to disguise their IP address may also be circumventing a seller’s security measures, which may also violate the Act.
Enforcement Action Under the BOTS Act
In January 2021, the FTC filed complaints against three ticket brokers for allegedly using bots to buy tens of thousands of event tickets and then reselling them at inflated prices. The FTC alleged that the defendants violated the Act in multiple ways, including using bots to search for and automatically reserve tickets, using software to conceal their IP addresses, and using bots to bypass CAPTCHA security measures. The complaint also alleged that the defendants had created hundreds of Ticketmaster accounts in the names of friends, family, and fictitious individuals and used hundreds of credit cards to bypass ticket limits. In total, the brokers were subject to a judgment of over $31 million, but due to their inability to pay, they were ultimately liable for $3.7 million in civil penalties.
The BOTS Act also empowers state attorneys general to enforce the Act if they determine that their states’ residents have been threatened or adversely affected by violations of the Act. Though there has been little notable state enforcement action to date, lawmakers from both political parties have moved to enable stronger enforcement of the Act. For instance, in May 2024, Arizona Governor Katie Hobbs signed into law a bill often referred to as the “Taylor Swift bill,” authorizing the state’s attorney general to investigate unlawful uses of bots to purchase multiple event tickets or circumvent waiting periods and presale codes.
Looking Forward
The executive order instructs the FTC to “rigorously enforce” the BOTS Act and to provide state attorneys general and consumer protection officers with information and evidence to further this directive. The EO also directs the FTC to take additional actions, such as proposing regulations and enforcing against unfair methods of competition and unfair or deceptive acts and practices.
EO 14254 follows on the heels of a December 2024 FTC Rule – the Junk Fees Rule – banning junk ticket and hotel fees, which goes into effect on May 10, 2025. Under the Junk Fees Rule, businesses must clearly and conspicuously disclose the total price, including all mandatory fees, whenever they offer, display, or advertise any price of live-event tickets or short-term lodging. According to the FTC, the Junk Fees Rule enables the agency to “rigorously pursue” bait-and-switch pricing tactics, such as drip pricing and misleading fees.
Following the release of EO 14254, on April 8, 2025, two members of Congress, Diana Harshbarger (R-TN) and Troy Carter (D-LA), co-sponsored a House bill titled the “Mitigating Automated Internet Networks for Event Ticketing Act” (the MAIN Event Ticketing Act). The bill is a companion to one initially introduced in the Senate by Marsha Blackburn (R-TN) and Ben Ray Luján (D-NM). It would require online ticket sellers to report successful bot attacks to the FTC. The proposed legislation would also create a complaint database for consumers to share their experiences with the FTC, which would, in turn, be required to share the information with state attorneys general. According to Congresswoman Harshbarger’s press release, the legislation aims to build on the BOTS Act and codify EO 14254.
In light of EO 14254, the FTC’s Junk Fees Rule, and the introduction of the MAIN Event Ticketing Act, there is strong bipartisan support for live-event industry regulation, and it is safe to say that both state and federal authorities are focused on regulating the live entertainment industry, particularly in the ticket sale context. BOTS Act enforcement may increase in the coming years, and ticket scalpers should beware.

Arkansas’ Kids Social Media Law: Another One Bites the Dust

Arkansas’ second attempt at regulating minors’ access to social media – in the form of the Social Media Safety Act (SB 689) – has again been struck down as unconstitutional, and the court permanently enjoined the state from enforcing the law. The law was a modified version of Arkansas’ 2023 SB 396, which was also blocked. The plaintiff in both challenges was NetChoice, a group familiar to anyone following kids’ social media laws. As a result of NetChoice’s efforts, similar laws have been blocked in California, Utah, Maryland, Mississippi, Ohio, and Texas. Courts in those states, as in Arkansas, found that the laws were unduly burdensome on free speech, with overly broad content restrictions not tailored to prevent harm to minors.
Like prior social media laws, the Arkansas Social Media Safety Act would have required social media companies to verify that users were at least 18 years old or to obtain verifiable parental consent for a minor to create an account. Companies that did not implement such checks would have faced monetary penalties. Social media companies would also have been required to use third-party vendors to perform reasonable age verification, which could include digitized identification cards, government-issued identification, or any commercially reasonable method. Social media companies would also have been prohibited from retaining any identifying information after access to the platform was granted.
The story on children’s privacy and social media does not end here. States have continued to pass laws attempting to regulate kids’ use of social media. The Virginia legislature is seeking to amend the state’s data privacy law to restrict users under 16 to one hour of social media use a day and to require age screening mechanisms. The amendment is awaiting signature. Arkansas has also rolled out additional legislation targeting social media companies and children. Utah recently implemented app store age limits, with effective dates under the law ranging from May 2025 to December 2026. And Texas, despite prior social media challenges, has introduced House Bill 186, which, if passed, would require age verification to create accounts. Florida has also introduced legislation (SB 868) that would, among other things, permit law enforcement to view messages relevant to an investigation, allow parents to read all messages in a minor’s account, and prohibit minors from using accounts that have “disappearing” messages.
Putting it into Practice: While these laws have not thus far been successful, state legislatures continue to propose laws to regulate kids’ use of social media. We anticipate this flurry continuing, both from state lawmakers and from those pushing back on overly broad provisions.

Ophthalmology Group Cannot Turn Blind Eye to TCPA Requirements

The “healthcare exemption” to the TCPA’s consent requirements is one of the more misunderstood parts of the TCPA.  And a recent case in North Carolina just gave a lesson to an ophthalmology group to help them see the requirements of the exemption a little more clearly.
Before discussing the case, let’s look at the healthcare exemption.  The healthcare exemption exempts certain calls from the consent requirements found in 47 C.F.R. § 64.1200(a)(1)(iii), which require prior express consent when using an ATDS or an artificial or prerecorded voice to dial cell phones.
The healthcare exemption only applies in certain circumstances.  First, the calls must be made by, or on behalf of, healthcare providers.  Second, there are several conditions that the calls being made by healthcare providers are required to meet, including but not limited to:

Calls or texts must be sent only to the wireless telephone number provided by the patient;
Calls or texts must be limited to the “following purposes: appointment and exam confirmations and reminders, wellness checkups, hospital pre-registration instructions, pre-operative instructions, lab results, post-discharge follow-up intended to prevent readmission, prescription notifications, and home healthcare instructions”;
Calls or texts must not include any telemarketing, solicitation, or advertising;
Calls or texts must be limited to one message (either by call or text message) per day and no more than three messages combined per week; and
The healthcare provider must honor opt-out requests immediately.

In Hicks v. Raleigh Ophthalmology, P.C., 2025 WL 1047708 (E.D.N.C. Apr. 8, 2025), Deanna Hicks visited her optometrist about some vision issues.  Her optometrist referred her to Raleigh Ophthalmology (“Raleigh”).  Hicks had never provided Raleigh with any paperwork and had no prior patient relationship with Raleigh.
However, according to the complaint, Raleigh called Hicks’s cell phone using a prerecorded call that stated the call was from Raleigh.  Hicks used the automated menu to speak with an employee and told the employee she was not interested in booking an appointment.  She then received a text message from Raleigh that also asked her to book an appointment, and she responded “STOP” to the text message.
This did not end the communication carousel that Hicks found herself on with Raleigh.  She received several more calls, talked to live employees, and asked to be removed from their list.  Even after speaking to a manager, the calls continued.  Unsurprisingly, Hicks sued, and the remaining count addressed in the opinion is that Raleigh called Hicks without her consent.
Raleigh raised three arguments in its motion to dismiss.  The first was that it had Hicks’s consent because she gave her number to the optometrist, and that this was consent for Raleigh to call her.  Hicks’s complaint states that she did not provide consent to Raleigh or provide it with paperwork.  The Court stated that even if providing her number implied consent for her optometrist to call Hicks, it would not be reasonable to extend that consent to Raleigh, a third-party healthcare provider with no preexisting relationship with Hicks.  Furthermore, consent is a fact issue and not suitable for resolution on a motion to dismiss.
Raleigh’s second argument related to the healthcare exemption.  The Court stated that the healthcare exemption is limited to calls about a certain number of topics, and Raleigh “has not identified any authority to support that the TCPA authorizes an entity to make a prerecorded call to an individual to book an appointment prior to establishing a treatment relationship with that individual, and the court is unable to locate any.”
[SIDE NOTE:  The Court did not address this, but I would also call out that these calls could be considered telemarketing under the TCPA because they were initiated for the purpose of encouraging the purchase of Raleigh’s services.  Therefore, they would fall outside the healthcare exemption for that reason as well.]
The Court stated that even if the first call qualified under the exemption, the exemption requires the healthcare provider to “honor opt-out requests immediately.”  Clearly, Raleigh failed to do so.  Therefore, the second argument was insufficient for dismissal.
The third argument related to the proposed class definition, but the Court said that argument is better left for opposing class certification.  Hicks’s claim therefore survived the motion to dismiss.
This case illustrates the power of the healthcare exemption.  But, much like Peter Parker, with great power comes great responsibility.  To rely on the healthcare exemption, a healthcare provider, such as Raleigh, must not turn a blind eye to the requirements of the exemption.  Because if the requirements are not met completely, future reliance on the exemption for TCPA purposes could get very hazy.

OH THE HUMANITY: Humana Crushed in TCPA Class Certification Order Over Wrong Number Robocalls

I told you Humana was in trouble.
The Medicare giant is facing massive TCPA exposure following a ruling by a federal court in Kentucky certifying a class containing over 23,000 individuals.
In Elliot v. Humana, 2025 WL 1065755 (W.D. Ky. Apr. 9, 2025), the Court certified a wrong number TCPA class defined as follows:
[a]ll persons or entities throughout the United States (1) to whom Humana placed, or caused to be placed, a call (2) directed to a number assigned to a cellular telephone service, but not assigned to a current account holder of Humana (3) in connection with which Humana used an artificial or prerecorded voice, (4) four years from the filing of this action through the date of class certification.
According to Plaintiff, Humana’s own data set shows wrong number designations for 23,682 individuals who received at least one prerecorded call from Humana after the wrong number designation was recorded, a violation of the TCPA.
Not good.
Humana countered that the Plaintiff had failed to identify any actual called parties who were not Humana customers, but the Court stated “common sense” confirmed that accounts coded “wrong #” or the like were sufficiently included in the numerosity count.
The Court next identified three common issues it asserted Humana did not adequately challenge: (1) whether Humana initiated non-emergency prerecorded calls to non-customers; (2) whether Humana used a prerecorded voice to make calls to class members; and (3) whether the numbers called were cellular telephones. The Court appeared to acknowledge that whether a given call was made to a wrong number is not a common issue, but it refused to deny certification on that basis.
Humana argued the case should not be certified because “wrong number designations could refer to typographical errors, calls where a Humana team member called the wrong number, intentional contact with individuals who said, ‘wrong number because they didn’t want to talk,’ and Humana members who incorrectly reported a wrong number” but the Court was unmoved.  The Court would not “ignore the plain meaning of ‘wrong number’ to accommodate several errors or inconsistencies Humana found in self reviewing their own record.”
Eesh.
At bottom, the court acknowledged that not every call recipient noted in the Humana data set might be a wrong number call recipient, but it intends to use that data set as a starting point: certify a class based on the data, then use declarations as part of the claims process to determine who was, and was not, a class member.
Not good. At all.
The interesting thing is that it is very tough to determine potential exposure here.
Per the Plaintiff, every member of the class received at least one call after the wrong number notation. At the TCPA’s statutory $500 per call, or $1,500 for willful or knowing violations, that would appear to set damages at $11,500,000 to $34,500,000.
However, TCPA damages for wrong number claims don’t work that way. Calls made even before the wrong number note are also actionable, if the number was a real wrong number. So the exposure might be much higher, depending on how many calls Humana was placing here.
Then again, if very few of the 23,000 or so individuals with wrong number notations were “real” wrong numbers, the exposure could be much, much lower. The Court’s order simply does not provide enough information to assess these issues.
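For readers who want to see where the quoted range comes from, it is simple per-call statutory damages arithmetic. The sketch below is my own back-of-the-envelope illustration (not from the court’s order), assuming exactly one actionable call per class member and rounding the class to 23,000:

```python
# Illustrative TCPA exposure math. The TCPA provides $500 in statutory
# damages per violating call, trebled to up to $1,500 for willful or
# knowing violations (47 U.S.C. § 227(b)(3)).
PER_CALL = 500
PER_CALL_TREBLED = 1_500

class_size = 23_000  # the ~23,682 identified individuals, rounded

low = class_size * PER_CALL            # one non-willful call each
high = class_size * PER_CALL_TREBLED   # one willful/knowing call each

print(f"${low:,} to ${high:,}")  # $11,500,000 to $34,500,000
```

Every additional actionable call per person adds another $500 to $1,500 per class member, which is why exposure balloons quickly if pre-notation calls are also in play.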
What we do know is that Humana is now facing a certified TCPA class action, and it is fair to say there are at least 8 figures on the line here. A few obvious takeaways:

This is another prerecorded call case. Using such technology is the easiest way to get yourself caught up in a TCPA class action.
Wrong number TCPA class actions continue to be the most dangerous sort of case to defend against. These cases are far more likely to be certified than other types of TCPA cases. You MUST defend yourself by engaging in SMART data practices (Humana was hung on its own records, folks) and also by using the FCC’s Reassigned Numbers Database (they just lowered their rates)!

Notably, the Plaintiff’s counsel in this case was Tom Alvord of LawHQ (Dr. Evil and his crew). These guys are good folks.

CPPA Proposes Key Updates to Cybersecurity, Risk Assessment and ADMT CCPA Regulations Following Public Comment

The California Privacy Protection Agency (“CPPA”) recently released modified draft California Consumer Privacy Act regulations (“CCPA Regs.”) in response to public feedback, with a focus on the sections addressing cybersecurity audits, risk assessments, automated decisionmaking technology (“ADMT”) and sensitive data.
Cybersecurity Audits
New Definition of “Cybersecurity Audit Report”: The updated CCPA Regs. add a definition of “cybersecurity audit report,” clarifying that the term refers to the specific documentation required as part of a business’s annual cybersecurity audit required under Article 9 of the CCPA Regs. (CCPA Regs. Sect. 7001(n)).
Expanded Scope of “Information Systems”: The definition of “information system” was revised to explicitly include service providers’ or contractors’ systems used on behalf of the business. This ensures businesses account for third-party environments when assessing cybersecurity risk. (CCPA Regs. Sect. 7001(w)).
Deadline-Specific Requirements for First Cybersecurity Audit: The updated CCPA Regs. include deadlines by which a business must complete its first cybersecurity audit, which is dependent upon when a business determines its processing activities present significant risk to consumers’ security. (CCPA Regs. Sect. 7121(a)).
Scope and Documentation of Cybersecurity Audits: The updated CCPA Regs. clarify that auditors may include recommendations “separate from articulating audit findings,” helping delineate reporting expectations. (CCPA Regs. Sect. 7122(a)(1)). The Regs. also include examples of how a cybersecurity audit report must describe the audit’s scope (e.g., the processes, activities and components of the information system assessed), and extend the requirement to retain documents to both the business and the business’s auditor. (CCPA Regs. Sects. 7122(d), (j)).
Risk Assessments: Updates to Roles, Disclosures and Safeguards
Responsible Personnel and Decisionmakers: The updated CCPA Regs. now require that risk assessments identify not only the individuals who reviewed or approved the assessment but also the individual with the authority to determine whether the business will proceed with the associated data processing activity. (CCPA Regs. Sect. 7152(a)(9)).
Timing for Risk Assessments: The updated CCPA Regs. establish a calendar deadline for conducting risk assessments for data processing activities that began prior to the updated regulations’ effective date but continue afterward. (CCPA Regs. Sect. 7155(c)).
Automated Decisionmaking Technology (“ADMT”)
Consumer Rights and Business Obligations: The updated CCPA Regs. specify that, upon a consumer’s opt-out request, businesses must notify service providers and contractors of the specific ADMT use for which the consumer opted out. (CCPA Regs. Sect. 7221(n)(2)).
Removed Requirement to Document “Quality of Personal Information” in ADMT Risk Assessments: The CPPA removed a provision that would have required businesses to assess and document the quality of personal information (e.g., the accuracy, reliability and consistency of the information) used in ADMT or artificial intelligence (“AI”) systems. (CCPA Regs. Sect. 7152(a)(2)(B)).
Expanded Definitions of Sensitive Data Categories
Addition of “Neural Data”: The definition of “sensitive personal information” was revised to include “neural data,” aligning with other states’ emerging concerns around brain-computer interface technologies. (CCPA Regs. Sect. 7001(ddd)).
Exemption for Non-Identifiable Physical/Biological Traits: The CPPA added an exception to the definition of “physical or biological identification or profiling,” excluding traits that cannot be linked to a specific consumer. (CCPA Regs. Sect. 7001(hh)).
Notice at Collection – AR/VR and Device-Based Interactions
Timing Flexibility for Notice in New Environments: The updated CCPA Regs. allow businesses to provide notice either before or “at the time” a device begins collecting personal information that the business sells or shares. (CCPA Regs. Sects. 7013(e)(3)(C), (D)). This change accommodates more dynamic and immersive tech environments.
Enforcement and Simplification Measures
Removed Requirements for Agency Complaint Information: The CPPA removed the obligation to inform consumers about their ability to file complaints with the CPPA or Attorney General.
Deleted Redundancies and Technical Burdens: Several provisions were struck to streamline requirements, including § 7023(f)(3), which previously required businesses to notify others that a consumer contests certain personal data accuracy claims.
Businesses subject to the CCPA should review these changes carefully, ensure internal alignment with new deadlines and definitions, and prepare for continued rulemaking as the CPPA moves toward finalizing these updates.

California’s Wait Is Nearly Over: New AI Employment Discrimination Regulations Move Toward Final Publication

The California Civil Rights Council has advanced new regulations regarding employers’ use of artificial intelligence (AI) and automated decision-making systems, clearing the way for them to take effect later this year. The new regulations will make the state one of the first to adopt comprehensive regulations regarding the growing use of such technologies to make employment decisions.

Quick Hits

The California Civil Rights Department finalized modified regulations for employers’ use of AI and automated decision-making systems.
The regulations confirm that the use of such technology to make employment decisions may violate the state’s anti-discrimination laws and clarify limits on such technology, including in conducting criminal background checks and medical/psychological inquiries.

On March 21, 2025, the Civil Rights Council, a branch of the California Civil Rights Department (CRD), voted to approve the final and modified text of California’s new “Employment Regulations Regarding Automated-Decision Systems.” The regulations were filed with the Office of Administrative Law, which must approve them before they take effect. At this time, it is not clear when the finalized modifications will take effect, although they are likely to become effective this year.
The CRD has been considering automated-decision system regulations for years amid concerns over employers’ increasing use of AI and other automated decision-making systems, or “Automated-Decision Systems,” to make or facilitate employment decisions, such as recruitment, hiring, and promotions.
While the final regulations have some key differences from the proposed regulations released in May 2024, they clarify that it is unlawful to use AI and automated decision-making systems to make employment decisions that discriminate against applicants or employees in a way prohibited by the California Fair Employment and Housing Act (FEHA) or other California antidiscrimination laws.
Here are some key aspects of the final regulations.
Automated-Decision Systems
The final regulations define “automated-decision system[s]” as “[a] computational process that makes a decision or facilitates human decision making regarding an employment benefit,” including processes that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” This definition is narrower than the one in the proposed regulations, which would have included any computational process that “screens, evaluates, categorizes, or otherwise makes a decision….”
Covered systems include a range of technological processes, including tests, games, or puzzles used to assess applicants or employees; processes for targeting job advertisements or screening resumes; processes to analyze “facial expression, word choice, and/or voice in online interviews”; and processes to “analyz[e] employee or applicant data acquired from third parties.”
Automated-decision systems do not include typical software or programs such as word processors, spreadsheets, map navigation systems, web hosting, firewalls, and common security software, “provided that these technologies do not make a decision regarding an employment benefit.” Notably, the final regulations do not include language from the proposed rule’s excluded technology provision that would have excluded systems used to “facilitate human decision making regarding” an employment benefit.
Other Key Terms

“Agent”—The final regulations consider an employer’s “agent” to be an “employer” under the FEHA regulations. An “agent” is defined as “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity … including when such activities and decisions are conducted in whole or in part through the use of an automated decision system.” (Emphasis added.)
“Automated-Decision System Data”—The regulations define such data as “[a]ny data used to develop or customize an automated-decision system for use by a particular employer or other covered entity.” However, the final regulations narrow what is included as “automated-decision system data,” removing language from the proposed regulations that would have included “[a]ny data used in the process of developing and/or applying machine learning, algorithms, and/or artificial intelligence” used in an automated-decision system, including “data used to train a machine learning algorithm.” (Emphasis added.)
“Artificial Intelligence”—The regulations define AI as “[a] machine-based system that infers, from the input it receives, how to generate outputs,” which can include “predictions, content, recommendations, or decisions.” The proposed regulations had included “machine learning system[s] that can, for a given set of human defined objectives, make predictions, recommendations, or decisions.”
“Machine Learning”—The term is defined as the “ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”

Unlawful Selection Criteria
Potentially discriminatory hiring tools have long been unlawful in California, but the final regulations confirm that those antidiscrimination laws apply to potential discrimination on the basis of protected class or disability that is carried out by AI or automated decision-making systems. Specifically, the regulations state that it is “unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on a basis protected” by FEHA.
Removal of Disparate Impact
However, the final regulations do not include the proposed definition of “adverse impact” caused by an automated-decision system under the FEHA regulations. The prior proposed regulations had specified that an adverse impact includes “disparate impact” theories and may be the result of a “facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by” FEHA. Further, the final regulations do not include similar language defining automated-decision systems to include systems that screen out or make decisions related to employment benefits.
Pre-Employment Practices
The final regulations further clarify that the use of online application technology that “screens out, ranks, or prioritizes applicants based on” scheduling restrictions “may discriminate against applicants based on their religious creed, disability, or medical condition,” unless it is job-related and required by business necessity and there is a mechanism for the applicant to request an accommodation.
The regulations specify that this would apply to automated-decision systems. The regulations state that use of such a system “that, for example, measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other characteristics protected under the Act” absent reasonable accommodation. Similarly, a system that “analyzes an applicant’s tone of voice, facial expressions or other physical characteristics or behavior may discriminate against individuals based on race, national origin, gender, disability, or other” protected characteristics.
Criminal Records
California law provides that before employers deny applicants based on a criminal record, the employer “must first make an individualized assessment of whether the applicant’s conviction history has a direct and adverse relationship with the specific duties of the job” that would justify denying the applicant. The final regulations state that “prohibited consideration” of criminal records “includes, but is not limited to, inquiring about criminal history through an employment application, background check, or the use of an automated-decision system.” (Emphasis added.)
However, the final regulations do not include the proposed language that would have clarified that the use of an automated-decision system alone, “in the absence of additional processes or actions,” is not a sufficient individualized assessment. The final regulations further do not include the proposed language that would have required employers to provide “a copy or description” of a report generated that is used to withdraw a conditional job offer.
Unlawful Medical or Psychological Inquiries
The final regulations state that rules against asking job applicants about their medical or psychological histories include “through the use of an automated-decision system.” The regulations state that such an inquiry “includes any such examination or inquiry administered through the use of an automated-decision system,” including puzzles or games that are “likely to elicit information about a disability.”
Third-Party Liability
The final regulations clarify that the prohibitions on aiding and abetting unlawful employment practices apply to the use of automated decision-making systems, potentially implicating third parties that design or implement such systems. Still, the regulations specify that “evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results” is relevant to a claim of unlawful discrimination. However, the final regulations do not include the proposed language that would have created third-party liability for the design, development, advertising, promotion, or sale of such systems.
Next Steps
Once effective, the final regulations will make California one of the first jurisdictions to promulgate comprehensive regulations concerning AI and/or automated decision-making technologies, along with Colorado, Illinois, and New York City. The regulations also come as President Donald Trump is seeking to reshape federal AI policy, focusing on removing barriers to the United States being a leader in the development of the technology. The new policy shifts away from the Biden administration’s focus on safeguarding employees and consumers from potential negative impacts of such technology, particularly the possibility of unlawful employment discrimination and harassment. It is expected that states and localities will continue to regulate AI to fill the gap.

10 Ideas for MSHA Leaders in the Trump Administration to Consider

The Trump administration has made a number of changes to the Mine Safety and Health Administration (MSHA) already, and more are sure to come. So now is as good a time as ever to list some ideas for the new agency leadership to consider.

Quick Hits

The Trump administration has already implemented several changes to MSHA, with more expected, prompting a call for new leadership to consider ideas such as improving inspector consistency and deemphasizing noncritical safety standards.
Enhancing compliance outreach, especially for small or new mines, and advocating for rule changes are key priorities for mine operators.
To promote safety and health, MSHA should increase transparency in inspector training, issue more policy guidance, emphasize compliance assistance, manage inspector professionalism, continue issuing safety alerts, and hold all stakeholder meetings with online participation options.

Improving consistency in how inspectors interpret MSHA standards is at the top of many operators’ lists. Also high on the list of priorities is reducing inspection time spent on standards that do not appreciably affect safety.
Yet another priority is enhancing compliance outreach for all operators—especially small or new mines. Operators may also want to advocate for certain rules to be changed.
Ten Ideas for MSHA to Explore
Mine operators will likely have many more good ideas to add to this list. Once a new assistant secretary at MSHA is in place, the mining industry should be ready to offer these suggestions—and more. Here are ten suggestions to get things rolling:
1. Establish a means via MSHA’s website for operators to submit to agency headquarters specific examples of misinterpretations of standards and other issues that occur in the field.
This will provide an opportunity for headquarters to weigh in. It will also allow headquarters to be informed directly by operators about what’s happening in the districts.
A specific occurrence may end up being resolved at the local level, but it is important for headquarters to know and track trends. This would present opportunities to enhance inspector training or issuance of guidance to operators.
2. Improve transparency and fair notice to operators by providing more information on how new and existing inspectors are being trained on the Mine Act and MSHA’s standards. This will help compliance, as well as increase the overall understanding regarding how inspectors are to apply the law.
3. MSHA should generally issue more policy guidance for new and existing standards.
4. End the growing practice of issuing citations for workplace examinations based on other violations being found in an area. MSHA’s “Program Policy Manual” states that the agency will not do this, yet it has become a common practice.
5. Increase the agency’s emphasis on compliance assistance to mine operators. This directly contributes to the agency’s mission of promoting safety and health by not only providing operators with information to help reduce exposure to hazards, but by enhancing the agency’s goodwill with operators and miners.
Such goodwill can help operators be fully receptive to the compliance assistance and lessons learned when enforcement is necessary. It can also facilitate further positive interactions with the agency that can help to improve safety.
6. As for specific standards to de-emphasize, we are not suggesting reinstating MSHA’s “Rules to Live By.” That list was largely used to justify heightened enforcement.
What we mean here is that certain standards are cited more often than they should be, given how minor the conditions typically are. Inspection time, which will be more critical given the shortage of inspectors, could be better spent if inspectors are not devoting energy to looking for things like whether switches are labeled in electrical boxes or whether portable extension cords have had a continuity check done.
7. MSHA needs to manage its inspector workforce to promote professionalism and good use of official time on duty. Inspectors who spend time berating the mine operator and its managers and using aggressive tactics to intimidate foremen and miners are hurting the agency’s effectiveness.
8. MSHA should continue to issue fatalgrams and other best practices alerts. The agency should provide updates on its fatal accident reports to note when citations issued in the investigation were modified or vacated.
MSHA’s ability to change its current final rules is constrained by the Mine Act, but there are certainly opportunities to do so without diminishing safety. Among the obvious rule changes needed are the crystalline silica rule’s prohibitions on the use of respirators and the rotation of miners as means of compliance.
10. On a positive note, MSHA should continue its stakeholder meetings at the district level and at the headquarters level. District-level meetings should always offer an online participation option, but not all do.
Stakeholder meetings are a great opportunity for the agency to provide updates and safety information, as well as answer questions from the audience.

What’s Next for MSHA Amid Government Dismantlement?

As everyone in the country watches the Trump administration’s dramatic downsizing of the federal government, there have been many comments and opinions about how this process will impact Mine Safety and Health Administration (MSHA) enforcement.

Quick Hits

Despite the Trump administration’s downsizing of the federal government, mine operators may want to continue their efforts to maintain safe workplaces due to the Mine Act’s inspection requirements and political support for MSHA’s mission.
The possibility of reduced MSHA inspections is tempered by legal requirements and political considerations, though manpower restrictions and hiring freezes may complicate the agency’s ability to meet its obligations.
While the new administration’s deregulation efforts may halt the introduction of new MSHA regulations, existing regulations like the silica rule are unlikely to be eliminated due to procedural and legal constraints.

Specifically, some have opined that this development will produce fewer inspections, eliminate unpopular regulations, and create a much more sympathetic enforcement environment for mine operators.
While we will clearly acknowledge that the current government dismantlement project is unprecedented, our past experience with other government overhauls, including the first Trump administration’s efforts, somewhat tamps down that enthusiasm. At a minimum, that experience tells us mine operators will want to continue their good-faith, effective efforts to maintain safe workplaces for their personnel.
The idea that inspections might slow down during the new administration is tempered by two important points. First, the Mine Act, passed by the U.S. Congress in 1977, requires that all surface mines receive at least two annual inspections and that underground mines receive at least four annual inspections. That’s the law, and it is theoretically very tough to get around.
Second, the Trump campaign heavily courted voters in the coalfield states, which contain a large mining community constituency that strongly supports MSHA’s mission. Undercutting that mission would be politically difficult.
Inspection Impacts
Still, the idea of reduced inspections is not complete fantasy.
It is no secret that MSHA was already—due to manpower restrictions—having a hard time completing its annual “twos” and “fours.” The deferred resignation program, sweeping layoffs, and hiring freezes affecting the entire government will certainly complicate the agency’s efforts to complete its inspection obligations.
As of this writing, there is no indication of whether the Department of Government Efficiency (DOGE) will allow the Office of Personnel Management to issue a hiring freeze waiver to MSHA. Without that waiver, MSHA can only add one employee for every four that leave the agency.
It is worth noting that, in the past, when MSHA has not met its “twos” and “fours” commitment, the U.S. Department of Labor’s Office of Inspector General (OIG) has stepped in to force increased efforts. The administration’s firing of this inspector general, among many others, and the reduction of the agency’s OIG watchdog department have made it unlikely there will be similar pressure on inspections in the near term.
To say the situation is fluid is an understatement. Nonetheless, inspector visits should still be anticipated.
Next for Rulemaking
We have also heard a good deal of speculation that the new administration’s deregulation bent could eliminate some of the more onerous regulations confronting mine operators.
On many people’s lists are the Biden administration’s silica rule and surface mobile equipment rule, and the Obama administration’s enhanced workplace examination rule. Again, a reality check is necessary.
While it is a virtual certainty that no new regulations will be promulgated at MSHA during the next four years, given an executive order requiring ten regulations to be repealed for every regulation added, it is unlikely that any existing regulation—silica or otherwise—will be eliminated.
The reason is that a formal procedure must be followed to repeal or amend a regulation. Essentially, it requires full notice-and-comment rulemaking, with an opportunity for stakeholders to weigh in on proposed changes.
Anecdotally, the group within the agency that would conduct and manage such a repeal or amendment process has been decimated by the current cuts. In effect, the resources necessary to carry such a process through may not be available.
Even more significantly, Section 101(a)(9) of the Mine Act states that “[n]o mandatory health or safety standard promulgated under this title shall reduce the protection afforded miners by an existing mandatory health or safety standard.” This is a difficult requirement to overcome. It is the provision that prevented the Trump administration from making any inroads with regard to existing MSHA standards in its first four years.
Most mine operators are simply ignoring “the noise” at this point and continuing to advance their safety performance. Their safety efforts have always been based on ensuring the welfare of their miners rather than on meeting government benchmarks.