New Leader At The California Department Of Financial Protection & Innovation

Last month, Governor Gavin Newsom appointed Khalil “KC” Mohseni as Commissioner of the California Department of Financial Protection and Innovation. Commissioner Mohseni is not entirely new to the DFPI: he had served as its Chief Deputy Director since 2023. Before that, he was Chief Operating Officer at the State Controller’s Office and Deputy Director of Administration at the California Department of Housing and Community Development. Commissioner Mohseni earned a Juris Doctor degree from the University of California, Davis School of Law, and a Bachelor of Arts degree in Political Science from the University of California, Irvine.
Although Commissioner Mohseni assumed office immediately, he will lose his position if the California Senate fails or refuses to confirm his appointment within 365 days after the day on which he first began performing the duties of the office. Cal. Gov’t Code § 1774(c).
How much will Commissioner Mohseni make in his new position? $224,868 per year.

Data Breach Class Action Settlement Approval Affirmed by Ninth Circuit with Attorneys’ Fee Award Reversed and Remanded

Some data breach class actions settle quickly, with one of two settlement structures: (1) a “claims made” structure, in which the total amount paid to class members who submit valid claims is not capped, and attorneys’ fees are awarded by the court and paid separately by the defendant; or (2) a “common fund” structure, in which the defendant pays a lump sum that is used to pay class member claims, administration costs and attorneys’ fees awarded by the court. A recent Ninth Circuit decision affirmed the district court’s approval of a “claims made” settlement but reversed and remanded the attorneys’ fee award. The decision highlights how approval of the settlement terms should be evaluated independently of the attorneys’ fees, although some courts seem to merge the two inquiries.
In re California Pizza Kitchen Data Breach Litigation, – F.4th –, 2025 WL 583419 (9th Cir. Feb. 24, 2025) involved a ransomware attack that compromised data, including Social Security numbers, of the defendant’s current and former employees. After notification of the breach, five class action lawsuits were filed, four of which were consolidated and proceeded directly to mediation. A settlement was reached providing class members reimbursement for expenses and lost time, compensation for actual identity theft, credit monitoring, and $100 statutory damages for a California subclass. The defendant agreed not to object to attorneys’ fees and costs for class counsel of up to $800,000. The plaintiffs estimated the total value of the settlement at $3.7 million.
The plaintiffs who had brought the fifth (non-consolidated) case objected to the settlement. The district court held an unusually extensive preliminary approval hearing, at which the mediator testified. The court preliminarily approved the settlement, deferring its decision on attorneys’ fees until the information regarding claims submitted by class members was available. At that point, the district court, after estimating the total value of the class claims at $1.16 million (the claim rate was 1.8%), awarded the full $800,000 of attorneys’ fees and costs requested, which was 36% of the total class benefit of $2.1 million (including the $1.16 million plus settlement administration costs and attorneys’ fees and costs).
On appeal, the Ninth Circuit majority concluded that the district court did not abuse its discretion in approving the settlement. Based on the mediator’s testimony, the district court reasonably concluded that the settlement was not collusive. The Ninth Circuit explained that “the settlement offers real benefits to class members,” “the class’s standing rested on questionable footing—there is no evidence that any CPK employee’s compromised data was misused,” and “courts do not have a duty to maximize settlement value for class members.”
The attorneys’ fee award, however, was reversed and remanded. The Ninth Circuit explained that the class claims were properly valued at $950,000 (due to a miscalculation by the district court), and the fee award was 45% of the settlement value, “a significant departure from our 25% benchmark.” In remanding, the Ninth Circuit noted that a “downward adjustment” would likely be warranted on remand.
Judge Collins concurred in part and dissented in part. He would have reversed the approval of the settlement, concluding that the district court failed to adequately address the objections and the low claims rate, and citing “the disparity between the size of the settlement and the attorney’s fees.”
From a defendant’s perspective, this decision demonstrates how it can be important to convey to the court that approval of the proposed settlement should be evaluated independently of the attorneys’ fee application. If the court finds the proposed fee award too high, that should not warrant disapproval of the settlement if the proposed relief for the class members is fair and reasonable. This is true of both “claims made” and “common fund” settlement structures.

NYDFS Annual Compliance Submissions Due April 15, 2025, and New Compliance Requirements Effective May 1, 2025

As we previously reported, in 2023 the New York State Department of Financial Services (NYDFS) amended its cybersecurity regulation, 23 NYCRR 500 (or Part 500). As of November 1, 2024, Class A Companies and Covered Entities were required to comply with numerous Part 500 compliance obligations outlined here. 
April 15, 2025 Compliance Certification Deadline
Covered Entities have been required to certify their compliance with Part 500 annually since the regulation’s adoption; however, since 2024, Covered Entities have had the option to submit either a Certification of Material Compliance (certifying they materially complied with the regulation’s requirements that applied to them in the prior year) or an Acknowledgement of Noncompliance (identifying all sections of the regulation with which they have not complied and providing a remediation timeline).
The deadline for Covered Entities to submit annual compliance notifications for the 2024 calendar year is April 15, 2025. Submissions can be made through the NYDFS Portal. Covered Entities that qualify for full exemptions from Part 500 do not have to submit annual compliance notifications. For more information on the April 15 compliance deadline, guidance on which form to file, and step-by-step instructions, see NYDFS’s Submit a Compliance Filing section in the Cybersecurity Resource Center or contact your Katten attorney.
May 1, 2025 Compliance Obligations
On May 1, 2025, Covered Entities are required to meet additional requirements under Part 500, including:

Access Privileges and Management

Implement enhanced requirements regarding limiting user access privileges, including privileged account access.
Review access privileges and remove or disable accounts and access that are no longer necessary.
Disable or securely configure all protocols that permit remote control of devices.
Promptly terminate access following personnel departures.
Implement a reasonable written password policy to the extent passwords are used. 

Covered Entities and Class A Companies must also address the following items:

Vulnerability Management: Conduct automated scans of information systems, and a manual review of systems not covered by such scans, to discover, analyze, and report vulnerabilities at a frequency determined by their risk assessment and promptly after any material system changes.
Malicious Code: Implement controls to protect against malicious code.

Class A Companies must further update their information security programs to include:

Monitoring and Training: Implement (1) an endpoint detection and response solution to monitor anomalous activity and (2) a centralized logging and security event alerting solution. CISOs may approve reasonably equivalent or more secure compensating controls, but the approval must be in writing.

Virginia Poised to Become Second State to Enact Comprehensive AI Legislation

Go-To Guide:

Virginia’s HB 2094 applies to high-risk AI system developers and deployers and focuses on consumer protection. 
The bill covers AI systems that autonomously make or significantly influence consequential decisions without meaningful human oversight. 
Developers must document system limits, ensure transparency, and manage risks, while deployers must disclose AI usage and conduct impact assessments. 
Generative AI outputs must be identifiable, with limited exceptions. 
The attorney general would oversee enforcement, with penalties up to $10,000 per violation and a discretionary 45-day cure period. 
HB 2094 is narrower than the Colorado AI Act (CAIA), with clearer transparency obligations and trade secret protections, and differs from the EU AI Act, which imposes stricter, risk-based compliance rules.

On Feb. 20, 2025, the Virginia General Assembly passed the High-Risk Artificial Intelligence (AI) Developer and Deployer Act (HB 2094). If Gov. Glenn Youngkin signs the bill, Virginia would become the second U.S. state to implement a broad framework regulating AI use, particularly in high-risk applications.1 The bill is closely modeled on the CAIA and would take effect on July 1, 2026.
This GT Alert covers to whom the bill applies, important definitions, key differences with the CAIA, and potential future implications.
To Whom Does HB 2094 Apply?
HB 2094 applies to any person doing business in Virginia that develops or deploys a high-risk AI system. “Developers” are organizations that offer, sell, lease, give, or otherwise make high-risk AI systems available to deployers in Virginia. The requirements HB 2094 imposes on developers would also apply to a person who intentionally and substantially modifies an existing high-risk AI system. “Deployers” are organizations that deploy or use high-risk AI systems to make consequential decisions about Virginians.
How Does HB 2094 Work?
Key Definitions
HB 2094 aims to protect Virginia residents acting in their individual capacities. It would not apply to Virginia residents who act in a commercial or employment context. Furthermore, HB 2094 defines “generative artificial intelligence systems” as AI systems that incorporate generative AI, which includes the capability of “producing and [being] used to produce synthetic content, including audio, images, text, and videos.”
HB 2094’s definition of “high-risk AI” would apply only to machine-learning-based systems that (i) serve as the principal basis for consequential decisions, meaning they operate without meaningful human oversight, and (ii) are explicitly intended to autonomously make or substantially influence such decisions.
High-risk applications include parole, probation, pardons, other forms of release from incarceration or court supervision, and determinations related to marital status. As the bill would not apply to government entities, it is not yet clear which private sector decisions might be in scope of these high-risk applications.
Requirements
HB 2094 places obligations on AI developers and deployers to mitigate risks associated with algorithmic discrimination and ensure transparency. It establishes a duty of care, disclosure, and risk management requirements for high-risk AI system developers, along with consumer disclosure obligations and impact assessments for deployers. Developers must document known or reasonably known limitations in AI systems. Generated or substantially modified synthetic content from high-risk generative AI systems must be made identifiable and detectable using industry-standard tools, comply with applicable accessibility requirements where feasible, and be identified at the time of generation, with exceptions for low-risk or creative applications such that identification “does not hinder the display or enjoyment of such work or program.” The bill references established AI risk frameworks such as the NIST AI RMF and ISO/IEC 42001.
Exemptions
Certain exclusions apply under HB 2094, including AI use in response to a consumer request or to provide a requested service or product under a contract. There are also limited exceptions for financial services and broader exemptions for healthcare and insurance sectors.
Enforcement
The bill grants enforcement authority to the attorney general and establishes penalties for noncompliance. Violations may result in fines up to $1,000 per occurrence, with attorney fee shifting, while willful violations may carry fines up to $10,000 per occurrence. Each violation would be considered separately for penalty assessment. The attorney general must issue a civil investigative demand before initiating enforcement action, and a discretionary 45-day right to cure period is available to address violations. There is no private right of action under HB 2094.
Key Differences With the CAIA
While HB 2094 is closely modeled on the CAIA, it introduces notable differences. HB 2094 limits its definition of consumers to individual and household contexts, and explicitly excludes commercial and employment contexts. It defines “high-risk AI” more narrowly, focusing only on systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions. It also provides clearer guidelines on when a developer becomes a deployer, imposes more specific documentation and transparency obligations, and enhances trade secret protections. Unlike the CAIA, HB 2094 does not require reporting algorithmic discrimination to the attorney general and allows a discretionary 45-day right to cure violations. Additionally, it expands the list of high-risk uses to include decisions related to parole, probation, pardons, and marital status.
While HB 2094 aligns with aspects of the CAIA, it differs from the broader and more stringent EU AI Act, which imposes risk-based AI classifications, stricter compliance obligations, and significant penalties for violations. HB 2094 also does not contain direct incident reporting requirements, public disclosure requirements, or a small business exception. Finally, HB 2094 upholds a higher threshold than CAIA for consumer rights when a high-risk AI makes a negative decision relating to a consumer, requiring that the AI system must have processed personal data beyond what the consumer directly provided.
Conclusion
If signed into law, HB 2094 would make Virginia the second U.S. state to implement comprehensive AI regulations, setting guidelines for high-risk AI systems while seeking to address concerns about transparency and algorithmic discrimination. With enforcement potentially beginning in 2026, businesses developing or deploying AI in Virginia should proactively assess their compliance obligations and prepare for the new regulatory framework, including where the organization is also subject to obligations under the CAIA.

1 See also GT’s blog post on the Colorado AI Act. Other states have regulated specific uses of AI or associated technologies, such as California and Utah, which, respectively, regulate interaction with bots and Generative AI.

UK ICO Publishes 2025 Tech Horizons Report

On February 20, 2025, the UK Information Commissioner’s Office (“ICO”) published its annual Tech Horizons Report (the “Report”), which explores four key technologies expected to play a significant role in society over the next two to seven years. These technologies include connected transport, quantum sensing and imaging, digital diagnostics and therapeutics, and synthetic media. The Report also discusses the ongoing work of the ICO in addressing data protection and privacy concerns related to the emerging technologies featured in their previous Tech Horizons reports.
The Report provides an overview of how key innovations are seeking to reshape industries and everyday life, the privacy and data protection implications of such innovations, and the ICO’s proposed recommendations and next steps. Below are examples of some of the potential privacy and data protection implications identified by the ICO, along with certain recommendations:
Connected Transport

Connected vehicles collect extensive and wide-ranging personal data for various purposes in a “complex ecosystem” of controllers and processors. Those organizations with transparency obligations must ensure they provide clear, concise and accessible privacy notices to individuals (including passengers); however, the ICO acknowledges that providing privacy notices in the connected transport environment may be a challenge.
Organizations should identify the correct lawful bases for processing personal data and remember that, in addition to the UK General Data Protection Regulation (“UK GDPR”), the Privacy and Electronic Communications Regulations also may apply in the context of connected transport and may require consent for certain activities.
Biometric technology may be used in connected transport for purposes such as fingerprint scanning to unlock vehicles. This technology requires the processing of biometric data, which must comply with the requirements for processing special category data.
When vehicles are shared, privacy concerns arise regarding access to data from previous users, such as location or smartphone pairings.

The ICO recommends embedding privacy by design into hardware and services related to connected vehicles to demonstrate compliance with the UK GDPR and other data protection legislation.
Quantum Sensing and Imaging
The ICO acknowledges that in the case of novel quantum sensing and imaging for medical or research purposes, a key benefit is the extra detail and insights provided by the technology. This could be deemed to conflict with the principle of data minimization. The ICO states that the principle “does not prevent healthcare organisations processing more detailed information about people where necessary to support positive health outcomes,” but that organizations must have a justification for collecting and processing additional information, such as a clear research benefit.
The ICO states that it will continue to find opportunities to engage with industry in this area and to explore any potential data protection risks. The ICO also encourages embedding privacy by design and default when testing and deploying quantum technologies that involve processing personal information.
Digital Diagnostics and Therapeutics

Organizations working in health care are a target for cyber attacks for a number of reasons, including the nature of data held by such organizations. The adoption of digital diagnostics and therapeutics will only increase this risk. Organizations engaged in this space must comply with all applicable security obligations, including the obligation to ensure the confidentiality, security and integrity of the personal information they process in accordance with the UK GDPR.
According to the ICO, while the use of artificial intelligence (“AI”) and automated decision-making (“ADM”) “could improve productivity and patient outcomes,” there is a risk that their use to make decisions could “adversely affect some patients.” For example, bias is a key risk when considering AI and ADM. Organizations should use appropriate technical and organizational measures to prevent AI-driven discrimination. Another material risk is the lack of transparency regarding how AI tools process patient data. The ICO states that lack of transparency in a medical context could result in patient harm, and that the use of AI does not reduce an organization’s responsibility to comply with transparency obligations under the UK GDPR.

The ICO recommends providers implement privacy by design and ensure that any third parties they engage have appropriate privacy measures and safeguards in place. Providers should also ensure they follow guidance regarding fairness, bias and unlawful discrimination.
Synthetic Media

Data protection laws apply to personal data used in creating synthetic media, even if the final product does not contain identifiable information.
If automated moderation is used, the ICO confirms that organizations must comply with the ADM requirements of the UK GDPR.

The ICO intends to develop its understanding of synthetic media, including how personal data is processed in this context. The ICO also will work with other regulators and continue to engage with other stakeholders, such as the public and interest groups.

Virginia Legislature Passes AI Bill

On February 20, 2025, the Virginia legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”).
The Act is a comprehensive bill that is focused on accountability and transparency in AI systems. The Act would apply to developers and deployers of “high-risk” AI systems that do business in Virginia. An AI system would be considered high-risk if it is intended to autonomously make, or be a substantial factor in making, a consequential decision. Under the Act, a consequential decision means a “decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer” of: (1) parole, probation, a pardon, or any other release from incarceration or court supervision; (2) education enrollment or an education opportunity; (3) access to employment; (4) a financial or lending service; (5) access to health care services; (6) housing; (7) insurance; (8) marital status; or (9) a legal service. The Act excludes a number of activities from what is considered a high-risk AI system, such as where the system is intended to perform a narrow procedural task or improve the result of a previously completed human activity.
The Act includes requirements that differ depending on whether the covered business is an AI system developer or deployer. The requirements are generally aimed at avoiding algorithmic discrimination, ensuring impact assessments, promoting AI risk management frameworks, and ensuring transparency and protection against adverse decisions. 
The Virginia Attorney General has exclusive authority to enforce the Act. Violations of the Act are subject to a civil penalty of up to $1,000, plus reasonable attorney fees, expenses and costs. The penalty can be increased up to $10,000 for willful violations. Notably, the Act states that each violation is a separate violation. The Act also provides a 45-day cure period. 
Virginia Governor Glenn Youngkin has until March 24, 2025 to sign, veto or return the bill with amendments. If enacted, the law would take effect July 1, 2026.

California’s AI Revolution: Proposed CPPA Regulations Target Automated Decision Making

On November 8, 2024, the California Privacy Protection Agency (the “Agency” or the “CPPA”) Board met to discuss and commence formal rulemaking on several regulatory subjects, including California Consumer Privacy Act (“CCPA”) updates (“CCPA Updates”) and Automated Decisionmaking Technology (ADMT).
Shortly thereafter, on November 22, 2024, the CPPA published several rulemaking documents for a public review and comment period that ended February 19, 2025. If adopted, these proposed regulations would make California the next state to regulate AI on a broad and comprehensive scale, in line with Colorado’s SB 24-205, which contains similarly sweeping consumer AI protections. Upon consideration of the comments received, the CPPA Board will decide whether to adopt or further modify the regulations at a future Board meeting. This post summarizes the proposed ADMT regulations, which businesses should review closely and be prepared to act on to ensure future compliance.
Article 11 of the proposed ADMT regulations outlines actions intended to increase transparency and consumers’ rights related to the application of ADMT. The proposed rules define ADMT as “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.” The regulations further define ADMT as a technology that includes software or programs, uses the output of technology as a key factor in a human’s decisionmaking (including scoring or ranking), and includes profiling. ADMT does not include technologies that do not execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking (this includes web hosting, domain registration, networking, caching, website-loading, data storage, firewalls, anti-virus, anti-malware, spam and robocall-filtering, spellchecking, calculators, databases, spreadsheets, or similar technologies). The proposed ADMT regulations would require businesses to notify consumers about their use of ADMT, along with their rationale for its implementation. Businesses also would have to provide explanations of ADMT output, in addition to a process for consumers to request to opt out of such ADMT use.
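To make the definitional test concrete, the sketch below shows one way a business might triage its tools against the proposed definition. It is a minimal illustration, not a compliance tool: the Tool fields, the is_admt helper, and the example are our own constructions, and the regulatory text controls.

```python
# A minimal sketch (not legal advice) of a first-pass triage against the
# proposed ADMT definition; all field and function names are illustrative.

from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    processes_personal_info: bool
    executes_decision: bool          # technology itself executes a decision
    replaces_human_decision: bool    # replaces human decisionmaking
    key_factor_for_human: bool       # output is a key factor for a human decision

def is_admt(tool: Tool) -> bool:
    """First-pass test tracking the proposed definition of ADMT.

    The proposed rules carve out technologies such as web hosting, caching,
    firewalls, spellchecking, calculators, and spreadsheets, but only because
    those tools do not execute, replace, or substantially facilitate a
    decision -- the same condition this function checks directly.
    """
    decision_role = (tool.executes_decision
                     or tool.replaces_human_decision
                     or tool.key_factor_for_human)
    return tool.processes_personal_info and decision_role

# Example: a resume-scoring tool whose ranking a recruiter treats as a key factor.
screener = Tool("resume ranker", processes_personal_info=True,
                executes_decision=False, replaces_human_decision=False,
                key_factor_for_human=True)
print(is_admt(screener))  # True -> Article 11 obligations would likely apply
```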
It is important to note that the CCPA Updates will be applicable to organizations that meet the thresholds of California Civil Code section 1798.140(d)(1)(A), (B), or (C). These provisions apply to organizations that: (A) make more than $25,000,000 in gross annual revenues; (B) alone or in combination, annually buy, sell, or share the personal information of 100,000 or more consumers or households; or (C) derive 50% or more of their annual revenues from selling or sharing consumers’ personal information. While not exhaustive of the extensive rules and regulations described in the proposed CCPA Updates, the following are the notable changes and potential business obligations under the new ADMT regulations.
Scope of Use
Businesses that use ADMT for making significant decisions concerning consumers must comply with the requirements of Article 11. “Significant decisions” include decisions that affect financial or lending services, housing, insurance, education, employment, healthcare, essential goods services, or independent contracting. “Significant decisions” may also include ADMT used for extensive profiling (including, among others, profiling in work, education, or for behavioral advertising), and for specifically training AI systems that might affect significant decisions or involve profiling.
Providing a Pre-Use Notice
Businesses that use ADMT must provide consumers with a pre-use notice that informs consumers about the use of ADMT, including its purpose, how ADMT works, and their CCPA consumer rights. The notice must be easy to read, available in the languages in which the business customarily provides documentation to consumers, and accessible to those with disabilities. Businesses must also clearly present the notice to the consumer in the manner in which the business primarily interacts with the consumer, and they must do so before they use any ADMT to process the consumer’s personal information. Exceptions to these requirements will apply to ADMT used for security, fraud prevention, or safety, where businesses may omit certain details.
According to Section 7220 of the CCPA Updates, pre-use notice must contain:

A plain language explanation of the business’s purpose for using ADMT.
A description of the consumer’s right to opt-out of ADMT, as well as directions for submitting an opt-out request.
A description of the consumer’s right to access ADMT, including information on how the consumer can request access to the business’s ADMT.
A notice that the business may not retaliate against a consumer who exercises their rights under the CCPA.
Any additional information (via a hyperlink or other simple method), in plain language, that discusses how the ADMT works.

Consumer Opt-Out Rights
Consumers must be able to opt out of ADMT use for significant decisions, extensive profiling, or training purposes. Exceptions to opt-out rights include where businesses use ADMT for safety, security, or fraud prevention, or for admission, acceptance, or hiring decisions, so long as the use is necessary and its efficacy has been evaluated to ensure it works as intended. Businesses must provide consumers at least two methods of opting out, one of which should reflect the way the business mainly interacts with consumers (e.g., email, internet hyperlink, etc.). Any opt-out method must be easy to execute and should require minimal steps that do not involve creating accounts or providing unnecessary information. Businesses must process opt-out requests within 15 business days, and they may not retaliate against consumers for opting out. Businesses must wait at least 12 months before asking consumers who have opted out of ADMT to consent again to its use.
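Because the opt-out mechanics turn on concrete timing rules, a small scheduling sketch may help. This is a hypothetical helper, assuming weekends are the only non-business days (the proposed rules do not define the business-day calendar) and using 365 days as a simple stand-in for the 12-month waiting period; the function names are ours.

```python
# Hypothetical deadline helper for the proposed opt-out rules: 15 business
# days to process a request, and at least 12 months before asking an
# opted-out consumer to consent again.

from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days, skipping weekends (holidays ignored)."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

def opt_out_deadlines(request_received: date) -> dict:
    return {
        # Requests must be processed within 15 business days.
        "process_by": add_business_days(request_received, 15),
        # Earliest date to ask the consumer to consent to ADMT use again;
        # 365 days used here as a simplification of "12 months".
        "earliest_reconsent_ask": request_received + timedelta(days=365),
    }

print(opt_out_deadlines(date(2025, 3, 3)))
```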
Providing Information on the ADMT’s Output
Consumers have the right to access information about the output of a business’s ADMT. The CPPA regulations do not define “output,” but the term likely includes outcomes produced by ADMT and the key factors influencing them.
When consumers request access to ADMT, businesses must provide information on how they use the output concerning the consumer and any key parameters affecting it. If they use the output to make significant decisions about the consumer, the business must disclose the role of the output and any human involvement. For profiling, businesses must explain the output’s role in the evaluation.
Output information includes predictions, content, recommendations, and aggregate statistics. Depending on the ADMT’s purpose, intended results, and the consumer’s request, the information provided can vary. Businesses must carefully consider these nuances to avoid over-disclosure.
Human Appeal Exception
The CPPA proposes a “human appeal exception,” by which consumers may appeal a decision to a human reviewer who has the authority to overturn the ADMT decision. Businesses can choose to offer a human appeal exception in lieu of providing the ability to opt out when using ADMT to make a significant decision concerning access to, denial, or provision of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or independent contracting opportunities or compensation, healthcare services, or essential goods or services.
To take advantage of the human appeal exception, the business must designate a human reviewer who is able to understand the significant decision the consumer is appealing and the effects of the decision on the consumer. The human reviewer must consider the relevant information provided by the consumer in their appeal and may also consider any other relevant source of information. The business must design a method of appeal that is easy for consumers to execute, requiring minimal steps, and that it clearly describes to the consumer. Communications and disclosures with appealing consumers must be easy to read and understand, written in the applicable language, and reasonably accessible.
Risk Assessments
Under the CPPA’s proposed rules, every business that processes consumer personal information must conduct a risk assessment before initiating that processing, especially if the business is using ADMT to make significant decisions concerning a consumer or for extensive profiling. Businesses must conduct risk assessments to determine whether the risks to consumers’ privacy outweigh the benefits to consumers, the business, and other stakeholders.
When conducting a risk assessment, businesses must identify and document: the categories of personal information to be processed and whether they include sensitive personal information; the operational elements of its ADMT processing (e.g., collection methods, length of collection, number of consumers affected, parties who can access this information, etc.); the benefits that this processing provides to the business, its consumers, other stakeholders, and the public at large; the negative impacts to consumers’ privacy; the safeguards that it plans to implement to address said negative impacts; information on the risk assessment itself and those who conducted it; and whether the business will initiate the use of ADMT despite the identified risks.
A business will have 24 months from the effective date of these new regulations to submit the results of the risk assessments it conducted between the effective date and the date of submission. After completing its first submission, a business must submit subsequent risk assessments every calendar year. In addition, a business must review and update its risk assessments for accuracy at least once every three years, conveying updates through the required annual submission. If there is any material change to a business’s processing activity, it must immediately conduct a new risk assessment. A business must retain all information collected as part of its risk assessments for as long as the processing continues, or for five years after completion of the assessment, whichever is later.
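The retention rule reduces to a “whichever is later” comparison. Below is a minimal sketch under two assumptions of ours: dates are tracked at whole-day granularity, and ongoing processing means no disposal date can be set yet.

```python
# Sketch of the proposed retention rule for risk assessment records: keep
# them while the processing continues, or for five years after the
# assessment is completed, whichever is later.

from datetime import date
from typing import Optional

def retention_end(assessment_completed: date,
                  processing_ends: Optional[date]) -> Optional[date]:
    """Return the earliest date the assessment records may be discarded."""
    five_years_out = assessment_completed.replace(
        year=assessment_completed.year + 5)
    if processing_ends is None:
        return None  # processing ongoing: no disposal date can be set yet
    return max(processing_ends, five_years_out)

# Assessment finished Jan 15, 2026; processing wound down June 30, 2028.
print(retention_end(date(2026, 1, 15), date(2028, 6, 30)))  # 2031-01-15
```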
What Businesses Should Do Now
The CPPA’s proposed ADMT regulations under the CCPA emphasize the importance of transparency and consumer rights. By requiring businesses to disclose how they use ADMT outputs and the factors influencing the outputs, the regulations aim to ensure that consumers are well-informed, and safeguards exist to protect against discrimination. As businesses incorporate ADMT, including AI tools, for employment decision making, they should follow the proposed regulations’ directive to conduct adequate risk assessments. Regardless of the form in which these regulations go into effect, preparing a suitable AI governance program and risk assessment plan will protect the business’s interests and foster employee trust.
Please note that the information provided in the above summary is only a portion of the rules and regulations proposed by the CCPA Updates. Now that the comment period has closed, the CPPA will deliberate and finalize the CCPA Updates within the year. These proposed regulations will require more action by businesses to remain compliant. While waiting for the CPPA’s finalized update, businesses should use this time to plan and prepare for these regulations in advance.

CASE OF THE STOLEN LEADS?: Court Refuses to Enforce Lending Tree Lead That Was Not Transferred to the Mortgage Company That Called Plaintiff

So a loan officer leaves the mortgage company he was with and seemingly steals leads, taking them to another mortgage company (maybe this was allowed, but I doubt it).
While at the new mortgage company, he sends out robocalls to the leads he obtained from the prior company–including leads submitted on lendingtree.com.
One of the call recipients sues under the TCPA claiming she had consented to receive calls from the first mortgage company but not the second because Lending Tree had only transferred the lead to the first company.
The second mortgage company–Fairway Independent Mortgage Company– moved for summary judgment in the case arguing that because it too was on the vast Lending Tree partners list, the consumer’s lead was valid for the calls placed by the LO while employed by it as well.
Well in Shakih v. Fairway, 2025 WL 692104 (N.D. Ill. March 2025), the Court determined a jury would have to decide the issue.
Although Plaintiff submitted the Lending Tree form and thereby agreed to be contacted by over 2,000 companies–including both of the mortgage companies at issue–the Lending Tree website stated the information would be provided to only five of those companies.
In the Court’s view a jury could easily determine the consumer’s agreement to provide consent was limited to only the five companies Lending Tree selected on the consumer’s behalf to receive calls– not to all 2,000 companies.
Lending Tree itself submitted a brief explaining that sharing leads between partners is not permitted, and the Court found this submission valuable in assessing the scope of the consent the consumer was presumed to have given.
The Court was also unmoved that the LO had previously spoken to the plaintiff while employed at his previous mortgage company–the mere fact that the LO changed jobs did not expand the scope of the consent that was previously given.
So like I said, absolutely fascinating case. The jury will have to sort it out and we will pay close attention to this one.

Raw Milk: State Legislative Updates and Challenges

Several states have recently introduced or passed legislation related to raw milk, reflecting a growing interest in unpasteurized milk despite the fact that raw milk can carry harmful bacteria such as Salmonella, E. coli, and Listeria, posing serious health risks. The U.S. Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) strongly advise against consuming raw milk due to these dangers and have implemented regulations to limit its sale.
Despite the long-standing position of both agencies, the new Secretary of Health and Human Services (HHS), Robert F. Kennedy Jr., has been a vocal advocate for raw milk, promoting its benefits and criticizing regulatory restrictions. His support has brought renewed attention to the raw milk movement, influencing legislative efforts.

Arkansas Bill HB 1048: This bill would allow the sale of raw goat milk, sheep milk, and whole milk directly to consumers at the farm, at farmers’ markets, or via delivery by the farm.
Utah Bill HB 414: This bill, which has passed the House and is now before the Senate, establishes enforcement steps for raw milk suspected in foodborne illness outbreaks, aiming to protect consumers.
Other States’ Legislation: States including Iowa, Minnesota, West Virginia, Maryland, Rhode Island, Oklahoma, New York, Missouri, and Hawaii have introduced various raw milk-related bills, with efforts ranging from expanding sales to implementing stricter safety regulations.

Meat Industry Pushes Back on Cultivated Meat Bans

While several states are taking legislative action to restrict or ban the sale of cultivated meat, with legislators arguing that the bans would protect the meat industry, there is a different message coming from many groups in the industry itself. Critics of the bans argue that they would “restrict free trade and threaten food safety benefits.”
Nebraska, a state that ranks among the top 10 producers of beef and pork, is among the many states that have proposed a cultivated meat ban, and the state’s governor issued an executive order in August 2024 barring state agencies from buying cultivated meat. However, ranchers and meat industry groups are pushing back on the ban, saying that “it’s up to the consumer to make the decision about what they buy and eat.” Industry groups say that they are “not worried about competition” from cultivated meat but prefer a different approach that would require the products to be clearly labeled as lab-grown.
The North American Meat Institute has similarly opposed cultivated meat bans, writing a letter in opposition to the Florida ban in February 2024. In its letter, the organization says that the bills would be preempted by the Federal Meat Inspection Act, which regulates the processing and distribution of meat products in interstate commerce. Further, the Meat Institute argued that the bans are “bad public policy that would restrict consumer choice and stifle innovation” and that USDA oversight of cell cultivated meat products places the products on a level playing field in terms of food safety and labeling requirements.
In addition, legislators in Wyoming and South Dakota have voted against cultivated meat bans in their states, citing free trade manipulation and urging instead for more packaging and labeling regulations to support informed decisions.

Safety Perspectives From the Dallas Region: Understanding the New Two-Step OSHA Settlement Process [Podcast]

In this episode of our Safety Perspectives From the Dallas Region podcast series, shareholders John Surma (Houston) and Frank Davis (Dallas) discuss the new settlement process implemented by the Dallas Regional Office of the Occupational Safety and Health Administration (OSHA) and the Dallas Regional Solicitor’s Office. John and Frank emphasize that this process now includes a second round of negotiations following the contesting of citations. This change could lead to more favorable outcomes for employers, including reductions in penalties and the possibility of having citations withdrawn. The speakers also touch on the potential reasons behind this new approach, such as reducing the workload for the Solicitor’s Office and addressing recent legal challenges faced by administrative bodies.

Proposed HIPAA Security Rule Updates May Significantly Impact Covered Entities and Business Associates

As we noted in our previous blog here, on January 6, 2025, the U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR) published a Notice of Proposed Rulemaking (NPRM) proposing substantial revisions to the Health Insurance Portability and Accountability Act (HIPAA) Security Rule (45 C.F.R. Parts 160 and 164) (the “Security Rule”).
This NPRM is one of several recent actions taken at the federal level to improve health data security. A redline showing the NPRM’s proposed revisions to the existing Security Rule language is available here. Comments on this NPRM must be submitted to OCR by March 7, 2025. Over 2,800 comments have been submitted thus far. These comments include opposition from several large industry groups raising concerns about the costs of compliance, asserting that the NPRM would impose an undue financial burden without a clear need for such changes to the existing framework. Some commentators expressed concerns regarding the burden on smaller or solo practitioners, while others wrote in support of the effort to improve cybersecurity and suggested alterations to particular elements of the rulemaking. Although the Trump administration apparently has not publicly commented on the NPRM and the final outcome of the rulemaking remains unclear, this Insight details important changes in the NPRM and potential widespread impacts on both covered entities and business associates (collectively, “Regulated Entities”).
The NPRM, if finalized as drafted, would establish new prescriptive cybersecurity and documentation requirements. This represents a significant change for a rule whose hallmark has historically been a flexible approach based upon cybersecurity risk, considering the size and complexity of an organization’s operations. Notably, the background to the NPRM is that the Security Rule already applies to Regulated Entities, including health-related information technology (IT) and artificial intelligence (AI) organizations that process health data on behalf of covered entities. The overall impact of the proposed changes may vary because certain Regulated Entities may already have in place the more robust safeguards prescribed by the NPRM. However, for those Regulated Entities that have not previously taken all such steps, including complying with the enhanced documentation requirements, the burden of the new compliance requirements may be significant.
OCR pointed to several justifications for the proposed revisions to the Security Rule, including:

the need for strong security standards in the health care industry to improve the efficiency and effectiveness of the health care system;
the continuous evolution of technology since the Security Rule was last updated in 2013;
inconsistent and inadequate compliance with the Security Rule among Regulated Entities; and
the need to strengthen the Security Rule to address changes in the health care environment, including the increasing number of cybersecurity incidents resulting from a proliferation of evolving cyber threats.

Although not discussed in detail by OCR, the growing number of state privacy and data protection laws, risk management frameworks related to data protection, and court decisions have also contributed to the impetus for greater specificity in the Security Rule with its focus on protecting identifiable patient health information.
Notably, many of the substantive requirements in the NPRM are already incorporated in various guidelines and safeguards for protecting sensitive information, such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework and HHS’s cybersecurity performance goals (CPGs). Voluntary compliance with these recognized guidelines has been incentivized pursuant to the Health Information Technology for Economic and Clinical Health (HITECH) Act’s 2021 amendment because a Regulated Entity that adopts “recognized security practices” is entitled to have its adoption considered by OCR in determining fines and other consequences if the agency conducts a review of the Regulated Entity’s HIPAA compliance. Accordingly, OCR noted that these standards and other similar guidelines were considered in the development of the NPRM requirements. Moreover, even if they have already implemented these practices, Regulated Entities will be faced with significantly increased administrative requirements, such as regular review and enhanced documentation requirements.
Key Proposed Changes
The NPRM includes the following key revisions:
New/Updated Definitions Clarify Electronic Systems Within the Rule’s Protections
The NPRM includes 10 new definitions and 15 changed definitions. Some of the new definitions address basic concepts that OCR had not defined previously, including “risk,” “threat,” and “vulnerability.” These definitions are not groundbreaking but will help guide Regulated Entities in establishing a more uniform standard for what they should be evaluating when considering data security.
Another change to the definitions section involves OCR’s proposed update to the definition of “information systems,” as well as new definitions for “electronic information system” and “relevant electronic information system.” Throughout the NPRM, OCR clarifies when all electronic information systems must abide by a rule versus only the relevant electronic information systems. In effect, each definition narrows the preceding one, with “relevant electronic information systems” encompassing the smallest group of systems.
The NPRM defines an “electronic information system” as an “interconnected set of electronic information resources under the same direct management control that shares common functionality” and “generally includes technology assets such as hardware, software, electronic media, information and data.” By contrast, “relevant electronic information systems” are only those electronic information systems that create, receive, maintain, or transmit electronic protected health information (ePHI) or that otherwise affect the confidentiality, integrity, or availability of ePHI. The catchall phrasing broadens the definition significantly, requiring Regulated Entities to consider electronic systems they rely on that do not contain any ePHI but may affect access to and/or the confidentiality or integrity of ePHI.
“Addressable” Security Implementation Specifications Would Become “Required”
The Security Rule sets forth three categories of safeguards an organization must address: (1) physical safeguards, (2) technical safeguards, and (3) administrative safeguards. Each set of safeguards comprises a number of standards, and, beyond that, each standard consists of a number of implementation specifications, which is an additional detailed instruction for implementing a particular standard.
Currently, the Security Rule categorizes implementation specifications as either “addressable” (i.e., which give Regulated Entities flexibility in how to approach them) or “required” (i.e., they must be implemented by Regulated Entities). In meeting standards that contain addressable implementation specifications, a Regulated Entity currently has the option to (1) implement the addressable implementation specifications, (2) implement one or more alternative security measures to accomplish the same purpose, or (3) not implement either an addressable implementation specification or an alternative. In any event, the Regulated Entity’s choice and rationale must be documented.
According to the NPRM, OCR has become concerned that Regulated Entities view addressable implementation specifications as optional, thereby reducing the ultimate effectiveness of the Security Rule. The NPRM proposes to remove the distinction between “addressable” and “required” specifications, making all implementation specifications required, except for a few narrow exemptions.
Technology Asset Inventories and Information System Maps Are Required
The current Security Rule requires Regulated Entities to assess threats, vulnerabilities, and risks but stops short of prescribing particular methods or means of doing so. Certain recognized security practices generally include assessing technology assets and reviewing the movement of ePHI through technological systems to ensure there are no blatant vulnerabilities or overlooked risks.
The NPRM proposes to turn these practices into explicit requirements to create a technology asset inventory and a network map. The technology asset inventory would require written documentation identifying all technology assets, including location, the person accountable for such assets, and the version of each asset. The network map must illustrate the movement of ePHI through electronic information systems, including how ePHI enters, exits, and is accessed from outside systems. Additionally, the network map must account for the technology assets used by business associates to create, receive, maintain, or transmit ePHI. Both the technology asset inventory and network map would need to be reviewed and updated at least once every 12 months.
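As a rough illustration, the sketch below structures an inventory entry around the elements the NPRM names (location, accountable person, version) and flags the 12-month review requirement. The schema is our own reading, not a prescribed format.

```python
# One way (ours, not OCR's) to structure proposed technology asset
# inventory entries and flag the required 12-month review.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TechnologyAsset:
    asset_id: str
    description: str          # e.g., "EHR application server"
    location: str             # physical or cloud location
    accountable_person: str   # person responsible for the asset
    version: str
    handles_ephi: bool        # creates, receives, maintains, or transmits ePHI
    last_reviewed: date

def review_overdue(asset: TechnologyAsset, today: date) -> bool:
    """The NPRM would require review/update at least once every 12 months."""
    return today - asset.last_reviewed > timedelta(days=365)

server = TechnologyAsset("A-0042", "EHR application server", "AWS us-east-1",
                         "J. Rivera", "v12.3", True, date(2024, 1, 10))
print(review_overdue(server, date(2025, 3, 1)))  # True -> update the inventory
```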
More Specific Risk Analysis Elements and Frequency Requirements Are Imposed
The Security Rule currently requires Regulated Entities to conduct a risk analysis assessing the potential risks and vulnerabilities to the confidentiality, integrity, and availability of ePHI held by such entities. As mentioned above, the Security Rule itself does not actually define “risk,” leaving some latitude for Regulated Entities to determine what should be included and considered in their risk analyses. While NIST (e.g., SP 800-30), the CPGs, and other authoritative sources have, over time, developed practices for conducting risk analyses, the current Security Rule (last updated in 2013) does not reflect what many now consider to be “best practices,” nor does it provide a specific methodology for Regulated Entities to consider in analyzing risks.
The NPRM imposes specific requirements that must be included in a risk analysis and its documentation, including:

a review of the aforementioned technology asset inventory and network map;
identification of all reasonably anticipated threats to the ePHI created, received, maintained, or transmitted by the Regulated Entity;
identification of potential vulnerabilities to the relevant electronic information systems of the Regulated Entity;
an assessment and documentation of the security measures the Regulated Entity uses to ensure that the measures protect the confidentiality, integrity, and availability of the ePHI;
a reasonable determination of the likelihood that “each” of the identified threats will exploit the identified vulnerabilities; and
if applicable, a reasonable determination of the potential impact of such exploitation and the risk level of each threat.

OCR notes in its preamble that there is still flexibility in determining risk based on the specific type of Regulated Entity and that entity’s specific circumstances. A high or critical risk to one Regulated Entity might be low or moderate to another. OCR is attempting to draw a fine line between telling Regulated Entities more explicitly what they should consider as risks (and what classification of risk should be assigned) while staying true to the hallmark flexibility of the Security Rule in allowing Regulated Entities to determine criticality.
The NPRM requires that risk analyses be reviewed, verified, and updated at least once every 12 months or in response to environmental or operational changes impacting ePHI. In addition to the risk analysis, the NPRM also proposes a separate evaluation standard wherein the Regulated Entity must create a written evaluation to determine whether any and all proposed changes in environment or operations would affect the confidentiality, integrity, or availability of ePHI prior to making that change.
Patch Management Is Now Subject to Mandated Timing Requirements
The NPRM proposes a new patch management standard that requires Regulated Entities to implement policies and procedures for identifying, prioritizing, and applying software patches throughout their relevant electronic information systems. The NPRM proposes specific timing requirements for patching, updating, or upgrading relevant electronic information systems based on the criticality of the patch in question:

15 calendar days for a critical risk patch,
30 calendar days for a high-risk patch, and
a reasonable and appropriate period of time based on the Regulated Entity’s policies and procedures for all other patches.

The NPRM contains limited exceptions for patch requirements where a patch is not available or would adversely impact the confidentiality, integrity, or availability of ePHI. Regulated Entities must document if/when they rely on such an exception, and they must also implement reasonable and appropriate compensating controls to address the risk until an appropriate patch becomes available.
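The proposed patch windows translate directly into deadline arithmetic. The following sketch assumes, for illustration, that the entity’s own policies set a 90-day window for patches that are neither critical nor high-risk (the NPRM leaves that period to the entity):

```python
# Illustrative patch-deadline calculator for the NPRM's proposed windows:
# 15 calendar days for critical risk patches, 30 for high-risk patches,
# and a policy-defined period for everything else (90 days assumed here).

from datetime import date, timedelta

PATCH_WINDOWS = {
    "critical": 15,   # calendar days per the proposed standard
    "high": 30,       # calendar days per the proposed standard
    "other": 90,      # placeholder; set by the entity's own policies
}

def patch_due(identified: date, criticality: str) -> date:
    return identified + timedelta(days=PATCH_WINDOWS[criticality])

print(patch_due(date(2025, 4, 1), "critical"))  # 2025-04-16
```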
Workforce Controls Are Tightened, Including Training and Terminating Access
The Security Rule currently has general workforce management requirements, including procedures for reviewing system activity, policies for ensuring workforce members have appropriate access, and required security awareness training. Although Regulated Entities are currently required to identify the security official responsible for the development and implementation of the security policies and technical controls, the NPRM would require the identification to be in writing.
Despite the current rules relative to workforce security, OCR noted that many Regulated Entities are not in full compliance with such requirements. OCR cited an investigation involving unauthorized access by a former employee of a Regulated Entity as an example of Regulated Entities not tightly controlling and securing access to their systems. The NPRM addresses that issue by outlining more explicit requirements for workforce control policies, which must be written and reviewed at least once every 12 months.
In addition, the NPRM proposes strict timing requirements for workforce access and training:

Terminated employees’ access to systems must end no later than one hour after termination.
Other Regulated Entities must be notified after a change in or termination of a workforce member’s authorization to access ePHI of those other Regulated Entities no later than 24 hours after the change or termination.
New employees must receive training within 30 days of establishing access and at least once every 12 months thereafter.
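These timing rules are concrete enough to schedule programmatically. Below is a hypothetical helper translating them into timestamps; the function and key names are our own.

```python
# Hypothetical helper for the proposed workforce timing rules: access ends
# one hour after termination, other Regulated Entities are notified within
# 24 hours, and new workforce members are trained within 30 days of access.

from datetime import datetime, timedelta
from typing import Optional

def workforce_deadlines(terminated_at: Optional[datetime] = None,
                        access_granted_at: Optional[datetime] = None) -> dict:
    deadlines = {}
    if terminated_at is not None:
        # Access must end no later than one hour after termination.
        deadlines["disable_access_by"] = terminated_at + timedelta(hours=1)
        # Other Regulated Entities must be notified within 24 hours.
        deadlines["notify_other_entities_by"] = terminated_at + timedelta(hours=24)
    if access_granted_at is not None:
        # Training is due within 30 days of establishing access.
        deadlines["train_by"] = access_granted_at + timedelta(days=30)
    return deadlines

print(workforce_deadlines(terminated_at=datetime(2025, 5, 1, 9, 0)))
```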

Verifying Business Associate Compliance Is Required to Protect Against Supply Chain Risks
The NPRM also includes a new requirement for verifying business associate technical safeguards. Under the NPRM, Regulated Entities must obtain written verification of the technical safeguards used by business associates/subcontractors that create, maintain, or transmit ePHI on their behalf at least every 12 months. Such verification must be written by a person with appropriate knowledge of, and experience with, generally accepted cybersecurity principles and methods, which the HHS website refers to as a “subject matter expert.”
Multi-Factor Authentication and Other Technical Controls Are Mandatory
While the Security Rule has significant overlap with the NIST Cybersecurity Framework and CPGs, the NPRM would further align the Security Rule with these frameworks relative to technical controls. For example, the NPRM would require Regulated Entities to implement minimum password strength requirements that are consistent with NIST. Additionally, the NPRM proposes multi-factor authentication requirements that are consistent with the CPGs, which identify multi-factor authentication as an “essential goal” to address common cybersecurity vulnerabilities. Under the NPRM, multi-factor authentication will require verification through at least two of the following categories:

Information known by the user, such as a password or personal identification number (PIN);
Items possessed by the user, including a token or a smart identification card; and
Personal characteristics of the user, such as a fingerprint, facial recognition, gait, typing cadence, or other biometric or behavioral characteristics.

The NPRM would permit limited exceptions from multi-factor authentication where (1) current technology assets do not support multi-factor authentication, and the Regulated Entity implements a plan to migrate to a technology asset that does; (2) an emergency or other occurrence makes multi-factor authentication infeasible; or (3) the technology asset is a device approved by the U.S. Food and Drug Administration.
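To make the “two of three categories” concept concrete, the following is a minimal sketch that checks whether a set of presented factors spans at least two of the NPRM’s categories. The factor labels and their mapping are illustrative assumptions, not a real authentication API.

```python
# Mapping of illustrative factor labels to the NPRM's three categories:
# knowledge (something known), possession (something held), and
# inherence (personal or behavioral characteristics).
FACTOR_CATEGORY = {
    "password": "knowledge", "pin": "knowledge",
    "hardware_token": "possession", "smart_card": "possession",
    "fingerprint": "inherence", "typing_cadence": "inherence",
}

def satisfies_mfa(presented_factors):
    """True only if the factors span at least two distinct categories."""
    categories = {FACTOR_CATEGORY[f] for f in presented_factors}
    return len(categories) >= 2

print(satisfies_mfa(["password", "pin"]))             # False: both knowledge
print(satisfies_mfa(["password", "hardware_token"]))  # True: two categories
```

As the first example shows, stacking two factors from the same category, such as a password plus a PIN, would not satisfy the requirement.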
Other proposed minimum technical safeguards in the NPRM include:

segregation of roles with increased privileges,
automatic logoff,
log-in attempt controls,
network segmentation,
encryption of ePHI at rest and in transit,
anti-malware protection,
standardized configurations for operating systems and software,
disabling of network ports,
audit trails and logging,
vulnerability scanning at least every six months, and
penetration testing at least every 12 months.

Contingency/Disaster Planning Is Required to Ensure Resiliency
The Security Rule requires contingency planning for responding to emergencies or occurrences that damage systems containing ePHI, including periodic testing and revision of those plans.
The NPRM outlines more concrete obligations relative to contingency planning, including a requirement to identify critical electronic information systems. It also proposes tight timing requirements: procedures must restore critical electronic information systems and data within 72 hours of a loss, and business associates must notify covered entities within 24 hours after activating their contingency plans.
Regulated Entities retain the ability to define which electronic information systems are critical when conducting their criticality analysis and should weigh the short 72-hour restoration window when making those determinations.
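As a rough illustration, the sketch below flags systems that an entity has designated critical but whose documented recovery time objective exceeds the proposed 72-hour window; the worksheet structure and field names are hypothetical.

```python
RESTORE_WINDOW_HOURS = 72  # proposed NPRM window for critical systems

def rto_gaps(systems):
    """Return the names of critical systems whose documented recovery
    time objective (RTO) exceeds the proposed 72-hour restore window.
    `systems` is a hypothetical criticality-analysis worksheet."""
    return [s["name"] for s in systems
            if s["critical"] and s["rto_hours"] > RESTORE_WINDOW_HOURS]

systems = [
    {"name": "ehr-core", "critical": True, "rto_hours": 48},
    {"name": "imaging-archive", "critical": True, "rto_hours": 120},  # gap
    {"name": "intranet-wiki", "critical": False, "rto_hours": 240},
]
print(rto_gaps(systems))  # ['imaging-archive']
```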
Impact of the Proposed Changes
Regardless of what security framework, controls, and processes Regulated Entities may already have in place, there are three areas where all organizations can expect a significant impact in terms of planning and implementation: (1) an increased documentation burden; (2) increased compliance obligations; and (3) business associate agreement (BAA) compliance. The compliance burden will certainly be significant, as many commentators have pointed out, but given the breadth of the NPRM, its full extent will not be known until the rulemaking process is resolved.
Increased Documentation Burden
While the Security Rule already requires that Regulated Entities develop and maintain security policies and procedures, the NPRM would expressly require that those policies and procedures, as well as proposed additional plans (e.g., security incident response plans), be documented in writing. As a result, if/when OCR assesses a Regulated Entity’s compliance with the Security Rule, it will likely have a longer checklist of written policies and procedures it expects to see. In addition, the technology asset inventory, network map, written verification of the technical safeguards used by business associates, and all of the analyses and evaluations required by the NPRM would need to be memorialized in writing, and many of these documents would require review at least once per year. Regulated Entities may find that the new documentation requirements impose a substantial administrative burden, and those that lack sufficient internal expertise or resources to implement these proposed requirements will likely need to engage third-party legal and IT experts.
Increased Compliance Obligations
With the additional written policies and procedures come additional obligations to test and review those procedures. Policies cannot simply be established and stored away until OCR asks to review them; rather, security policies must be revisited and reviewed at least every 12 months. The NPRM also requires that some of these policies be put to the test at least once every 12 months to determine the adequacy of the procedures in place. This will require dedicating additional time and resources on an ongoing basis, and, again, Regulated Entities may need to engage third-party legal and IT experts to support these efforts.
The NPRM also contains some new timing requirements that may necessitate the development and implementation of new processes to meet these tight deadlines:

A former employee’s access must end within one hour of the termination of the individual’s employment.
Business associates must report to covered entities within 24 hours of activating contingency plans.
Disaster plans must restore critical electronic information systems and data within 72 hours of a loss.
Critical and high-risk patches not exempted from the rule must be deployed within 15 and 30 days, respectively.

Business Associate Agreements Compliance
Because business associates are directly regulated under the Security Rule, they too will be subject to the enhanced requirements of the NPRM. In addition, as a result of many of the NPRM’s proposed changes, covered entities and business associates will owe one another new obligations.
As a result, these new requirements under the NPRM will likely impact what is memorialized in BAAs. For example, Regulated Entities must obtain written verification from their business associates that they have implemented the required technical safeguards, not only upon contracting but at least once a year thereafter. Regulated Entities should also consider revising their existing BAAs to make more explicit the security safeguard requirements that the NPRM imposes, such as multi-factor authentication and patch management. Further, in light of the potentially significant changes to security obligations under the NPRM, parties may also wish to revisit other BAA provisions regarding risk allocation and indemnification rights, audit rights, third-party certification obligations, offshoring, and reporting triggers and timelines, among others. Depending on the volume of BAAs a Regulated Entity maintains, renegotiating them could become a costly and time-consuming endeavor.
Recognition of New/Emerging Technologies
Finally, OCR acknowledged the constantly evolving nature of technology, including quantum computing, AI, and virtual and augmented reality. OCR reiterated its position that the Security Rule, as written, is meant to be technology-neutral; Regulated Entities must therefore comply with the rule regardless of whether they are using new and emerging technologies. Nevertheless, OCR discussed how the Security Rule may apply to the use of quantum computing, AI, and virtual and augmented reality, and included a request for information from industry stakeholders and others regarding:

whether HHS’s understanding of how the Security Rule applies to new technologies involving ePHI is incomplete and, if so, what additional issues should be considered;
whether there are technologies that currently harm, or may in the future harm, the security and privacy of ePHI in ways that the Security Rule cannot mitigate without modification and, if so, what modifications would be required; and
whether there are additional policy or technical tools that HHS may use to address the security of ePHI in new technologies.

* * * * *
Whether the NPRM will be finalized under the second Trump administration remains uncertain. While efforts to strengthen cybersecurity protections across the health care sector have gained bipartisan support, including under the first Trump administration, the estimated cost of compliance and the heightened regulatory obligations under the NPRM may face challenges in light of the second Trump administration’s stated position against increased federal regulation.
Alaap B. Shah also contributed to this article.