Warby Parker Settles Data Breach Case with OCR for $1.5M
Eyeglass manufacturer and retailer Warby Parker recently settled a 2018 data breach investigation by the Office for Civil Rights (OCR) for $1.5 million. According to OCR’s press release, Warby Parker self-reported that between September and November of 2018, unauthorized third parties had access to customer accounts following a credential stuffing attack. The names, mailing and email addresses, payment card information, and prescription information of 197,986 patients were compromised.
Following its investigation, OCR alleged three violations of the HIPAA Security Rule, “including a failure to conduct an accurate and thorough risk analysis to identify the potential risks and vulnerabilities to ePHI in Warby Parker’s systems, a failure to implement security measures sufficient to reduce the risks and vulnerabilities to ePHI to a reasonable and appropriate level, and a failure to implement procedures to regularly review records of information system activity.” The settlement reiterates the importance of conducting an annual security risk assessment and implementing a risk management program.
Privacy Tip #434 – Use of GenAI Tools Escaping Corporate Policies
According to a new LayerX report, most users are logging into GenAI tools through personal accounts that are not supported or tracked by an organization’s single sign-on policy. These logins to AI SaaS applications are unknown to the organization and are “not subject to organizational privacy and data controls by the LLM tool.” This is because most GenAI users are “casual, and may not be fully aware of the risks of GenAI data exposure.” As a result, a small number of users can expose large volumes of data. LayerX concludes that “[a]pproximately 18% of users paste data to GenAI tools, and about 50% of that is company information.” LayerX also finds that ChatGPT accounts for 77% of online LLM tool usage.
We have outlined on several occasions the risk of data leakage with GenAI tools, and this report confirms that risk.
In addition, the report notes that “most organizations do not have visibility as to which tools are used in their organizations, by whom, or where they need to place controls.” Further, “AI-enabled browser extensions often represent an overlooked ‘side door’ through which data can leak to GenAI tools without going through inspected web channels, and without the organization being aware of this data transfer.”
LayerX provides solid recommendations to CISOs, including:
Audit all GenAI activity by users in the organization
Proactively educate employees and alert them to the risks of GenAI tools
Apply risk-based restrictions “to enable employees to use AI securely”
Employees must do their part as well. CISOs can implement operational measures to attempt to mitigate the risk of data leakage, but employees should follow organizational policies around the use of GenAI tools, collaborate with employers on the appropriate and authorized use of GenAI tools within the organization, and take responsibility for securing company data.
TCPA CLASS ACTION FILINGS EXPLODE: The MASSIVE Final Numbers Are In for 2024–And January, 2025 Numbers Are INSANE
These numbers are just insane.
85.3% of all TCPA filings in December, 2024 were class actions.
85.3%.
I can pretty much guarantee that is the highest percentage of cases to be filed as a class action under any statute at any point in the history of our country.
Oh wait, I lied.
In November, 2024 an INSANE 95.5% (yes, 95.5%!!!) of TCPA filings were class actions.
Welcome to TCPAWorld.
To add to the fun, a total of 2,788 TCPA cases were filed last year, UP 67% from 2023.
A 67% rise in TCPA suits year-over-year folks.
And considering that class actions now account for over 80% of those filings, 2024 saw the highest number of TCPA class actions in history!
Insanity.
Want more fun?
The numbers for January, 2025 are also in.
172 TCPA class actions in January, 2025!
In January, 2024 there were 64.
64 to 172.
That’s a 169% rise in TCPA class actions to start 2025 over 2024 (172 is nearly 2.7 times 64)!!!!!
Yes, it is only one month but considering how steep a rise 2024 saw the fact that 2025 is already SMOKING 2024 filings is real cause for concern.
So to summarize:
TCPA cases were up 67% last year;
Over 80% of filings are now class actions;
TCPA class action filings were up nearly 170% in January, 2025 YOY.
Wow.
And each one of these lawsuits has the ability to END a company. No wonder I have dedicated my career to keeping people safe in the TCPAWorld!
My goodness.
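For readers who want to verify the math behind these figures (all counts are those reported above, via WebRecon), a quick sketch:

```python
# Sanity-check the year-over-year TCPA figures cited in this post.
jan_2024 = 64      # TCPA class actions filed in January 2024
jan_2025 = 172     # TCPA class actions filed in January 2025

pct_rise = (jan_2025 - jan_2024) / jan_2024 * 100
print(f"January YoY rise: {pct_rise:.1f}%")        # January YoY rise: 168.8%
print(f"January multiple: {jan_2025 / jan_2024:.2f}x")  # January multiple: 2.69x

total_2024 = 2788  # total TCPA cases filed in 2024
# Back out the implied 2023 total from the "up 67%" figure:
implied_2023 = total_2024 / 1.67
print(f"Implied 2023 total: {implied_2023:.0f}")   # roughly 1669
```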
Credit to WebRecon for all the stats!
AI Meets HIPAA Security: Understanding HHS’s Risk Strategies and Proposed Changes
In this final blog post in the Bradley series on the HIPAA Security Rule notice of proposed rulemaking (NPRM), we examine how the U.S. Department of Health and Human Services (HHS) Office for Civil Rights interprets the application of the HIPAA Security Rule to artificial intelligence (AI) and other emerging technologies. While the HIPAA Security Rule has traditionally been technology-agnostic, HHS now explicitly addresses security measures for these evolving technologies. The NPRM provides guidance for incorporating AI considerations into compliance strategies and risk assessments.
AI Risk Assessments
In the NPRM, HHS would require a comprehensive, up-to-date inventory of all technology assets that identifies AI technologies interacting with ePHI. HHS clarifies that the Security Rule governs ePHI used in both AI training data and the algorithms developed or used by regulated entities. As such, HHS emphasizes that regulated entities must incorporate AI into their risk analysis and management processes and regularly update their analysis to address changes in technology or operations. Entities must assess how the AI system interacts with ePHI, considering the type and amount of data accessed, how the AI uses or discloses ePHI, and who receives AI-generated outputs.
HHS expects entities to identify, track, and assess reasonably anticipated risks associated with AI models, including risks related to data access, processing, and output. Flowing from the proposed data mapping safeguards discussed in previous blog posts, regulated entities would document where and how the AI software interacts with or processes ePHI to support risk assessments. HHS would also require regulated entities to monitor authoritative sources for known vulnerabilities to the AI system and promptly remediate them according to their patch management program. This lifecycle approach to risk analysis aims to ensure the confidentiality, integrity, and availability of ePHI as technology evolves.
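As a minimal sketch of what such an AI asset inventory and reassessment check might look like in practice, consider the following. All field names and the 365-day threshold are illustrative assumptions, not terms or requirements drawn from the NPRM:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAssetRecord:
    """One entry in a technology asset inventory for an AI system touching ePHI.

    Field names are illustrative assumptions, not NPRM-defined terms."""
    name: str
    vendor: str
    ephi_categories: list          # e.g., ["diagnoses", "prescriptions"]
    ephi_volume: str               # type and amount of data accessed
    uses_ephi_for_training: bool   # Security Rule covers ePHI in training data
    output_recipients: list        # who receives AI-generated outputs
    known_vulnerabilities: list = field(default_factory=list)
    last_risk_review: date = None

def needs_reassessment(asset: AIAssetRecord, today: date,
                       max_age_days: int = 365) -> bool:
    """Flag assets whose risk analysis is stale or has never been done."""
    if asset.last_risk_review is None:
        return True
    return (today - asset.last_risk_review).days > max_age_days
```

A record like this supports the lifecycle approach HHS describes: each entry captures what ePHI the system touches, who receives its outputs, and when its risk analysis was last refreshed.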
Integration of AI Developers into the Security Risk Analysis
More mature entities typically have built out third-party vendor risk management diligence. If finalized, the NPRM would require all regulated entities contracting with AI developers to formally incorporate Business Associate Agreement (BAA) risk assessments into their security risk analysis. Entities also would need to evaluate BAs based on written security verifications that the AI vendor has documented security controls. Regulated entities should collaborate with their AI vendors to review technology assets, including AI software that interacts with ePHI. This partnership will allow entities to identify and track reasonably anticipated threats and vulnerabilities, evaluate their likelihood and potential impact, and document security measures and risk management.
Getting Started with Current Requirements
Clinicians are increasingly integrating AI into clinical workflows to analyze health records, identify risk factors, assist in disease detection, and draft real-time patient summaries for review as the “human in the loop.” According to the most recent HIMSS cybersecurity survey, most health care organizations permit the use of generative AI with varied approaches to AI governance and risk management. Nearly half the organizations surveyed did not have an approval process for AI, and only 31% report that they are actively monitoring AI systems. As a result, the majority of respondents are concerned about data breaches and bias in AI systems.
The NPRM enhances specificity in the risk analysis process by incorporating informal HHS guidance, security assessment tools, and frameworks for more detailed specifications. Entities need to update their procurement process to confirm that their AI vendors align with the Security Rule and industry best practices, such as the NIST AI Risk Management Framework, for managing AI-related risks, including privacy, security, unfair bias, and ethical use of ePHI.
The proposed HHS requirements are not the only concerns clinicians must consider when evaluating AI vendors. HHS also has finalized a rule under Section 1557 of the Affordable Care Act requiring covered healthcare providers to identify and mitigate discrimination risks from patient care decision support tools. Regulated entities must mitigate AI-related security risks and strengthen vendor oversight in contracts involving AI software that processes ePHI to meet these new demands.
Thank you for tuning into this series analyzing the Security Rule updates. Please contact us if you have any questions or if we can assist with any next steps.
Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.
Industry Groups Urge Rescission of Proposed HIPAA Security Rule Updates
In February, a coalition of healthcare organizations sent a letter to President Donald J. Trump and the U.S. Department of Health and Human Services (HHS) (the Letter), urging the immediate rescission of a proposed update to the Security Rule under HIPAA. The update is aimed at strengthening safeguards for securing electronic protected health information.
According to The HIPAA Journal, the data breach trend in the healthcare industry over the past 14 years is up, not down. This is the case despite the HIPAA Security Rule having been in effect since 2005.
The HIPAA Journal goes on to provide some sobering statistics:
Between October 21, 2009, when OCR first started publishing summaries of data breach reports on its “Wall of Shame”, and December 31, 2023, 5,887 large healthcare data breaches were reported. On January 22, 2023, the breach portal listed 857 data breaches as still under investigation. This time last year there were 882 breaches listed as under investigation, which shows OCR has made little progress in clearing its backlog of investigations – something that is unlikely to change given the chronic lack of funding for the department.
There have been notable changes over the years in the main causes of breaches. The loss/theft of healthcare records and electronic protected health information dominated the breach reports between 2009 and 2015. The move to digital record keeping, more accurate tracking of electronic devices, and more widespread adoption of data encryption have been key in reducing these data breaches. There has also been a downward trend in improper disposal incidents and unauthorized access/disclosure incidents, but data breaches continue to increase due to a massive increase in hacking incidents and ransomware attacks. In 2023, OCR reported a 239% increase in hacking-related data breaches between January 1, 2018, and September 30, 2023, and a 278% increase in ransomware attacks over the same period. In 2019, hacking accounted for 49% of all reported breaches. In 2023, 79.7% of data breaches were due to hacking incidents.
The Letter, signed by numerous healthcare organizations, outlines several key concerns regarding the proposed HIPAA Security Rule update, including:
Financial and Operational Burdens: The Letter argues that the proposed regulation would impose significant financial and operational burdens on healthcare providers, particularly those in rural areas. The unfunded mandates associated with the new requirements could strain the resources of hospitals and healthcare systems, leading to higher healthcare costs for patients and reduced investment in other critical areas.
Conflict with Existing Law: The Letter points to an amendment to the Health Information Technology for Economic and Clinical Health (HITECH) Act, arguing the proposed enhancements to the Security Rule conflict with the HITECH Act amendment. However, the HITECH Act amendment sought to incentivize covered entities to adopt “recognized security practices” that might minimize (not necessarily eliminate) remedies for HIPAA Security Rule violations and the length and extent of audits and investigations.
Timeline and Feasibility: The Letter highlights concerns about the timeline for implementing the proposed requirements. The depth and breadth of the new mandates, combined with an unreasonable timeline, present significant challenges for healthcare providers.
No doubt, the Trump Administration is intent on reducing regulation on business. However, it will be interesting to see whether it softens or even eliminates the proposed rule in response to the Letter, despite the clear trend of more numerous and damaging data breaches in the healthcare sector, and an increasing threat landscape facing all U.S. businesses.
SEC Expands Confidential Review Process for Draft Registration Statements
Go-To Guide:
SEC expands confidential review process for draft registration statements, now available for all Securities Act and Exchange Act registrations.
New policy removes “initial filing” limitation, allowing both private and public companies to submit draft registration statements confidentially.
The policy clarifies the accommodation for de-SPAC transactions.
Underwriter details may now be omitted from initial draft submissions, but must be included in later drafts and public filings.
On March 3, 2025, the Securities and Exchange Commission’s Division of Corporation Finance issued new guidance expanding the availability of confidential (nonpublic) review of draft registration statements (DRS).
Background
A DRS is a confidential draft of a registration statement submitted to the SEC for review before a public filing is made, granting issuers flexibility to avoid alerting the public market of the planned offering and sharing sensitive information until a more advanced stage of the offering process, if at all.
The confidential submission process was originally available only to foreign private issuers, but in 2012 the Jumpstart Our Business Startups Act (JOBS Act) extended it to emerging growth companies (EGCs), allowing them to submit draft registration statements for nonpublic SEC review under Section 6(e) of the Securities Act of 1933, as amended (Securities Act), in order to encourage smaller companies to enter the public markets and streamline the initial public offering (IPO) process.
In 2017, the SEC extended this benefit to all companies—whether or not they qualified as EGCs—when filing:
an IPO registration statement under the Securities Act;
an initial registration statement under Section 12(b) of the Securities Exchange Act of 1934, as amended (Exchange Act), when seeking to list securities on a national securities exchange for the first time; or
an initial submission of a registration statement under the Securities Act during the twelve-month period following the effective date of the IPO registration statement or an issuer’s Exchange Act Section 12(b) registration statement.
The March 2025 guidance extends the benefits of non-public review to all issuers by removing the “initial filing” limitation. Now, both private and public companies can submit a DRS for confidential SEC review in connection with any Securities Act or Exchange Act registration—regardless of whether they are first-time registrants. Affected companies may now forestall market scrutiny of contemplated capital markets transactions triggered by a public SEC filing and, in some cases, during the pendency of the SEC review process, which may offer an advantage for planning and marketing the transaction.
Key Enhancements
1. Expanded Eligibility for Nonpublic Review
a. IPOs and Initial Exchange Act Registrations
The confidential review process now applies to initial Exchange Act registrations under both Section 12(b) (exchange listings) and Section 12(g) (required registration for companies exceeding $10 million in assets with a class of equity securities held by either 2,000 shareholders or 500 non-accredited investors), broadening access to the confidential review process.
Previously, companies filing on Forms 10, 20-F, or 40-F to go public outside the traditional IPO registration statement were not permitted to request non-public review.
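The Section 12(g) thresholds described above can be expressed as a simple predicate. This is a rough sketch of the statutory test for illustration only; the actual rule and related SEC provisions contain additional conditions and exemptions:

```python
def triggers_12g_registration(total_assets: float,
                              holders_of_record: int,
                              non_accredited_holders: int) -> bool:
    """Sketch of the Exchange Act Section 12(g) registration trigger:
    assets exceeding $10 million AND a class of equity securities held
    by either 2,000 shareholders or 500 non-accredited investors.
    (Simplified illustration, not legal advice.)"""
    asset_test = total_assets > 10_000_000
    holder_test = holders_of_record >= 2000 or non_accredited_holders >= 500
    return asset_test and holder_test
```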
b. Subsequent Offerings and Registrations
Previously, the SEC would only accept subsequent DRSs for nonpublic review if they were submitted within the 12-month period following the effective date of either (i) the issuer’s IPO registration statement under the Securities Act, or (ii) the issuer’s initial Exchange Act registration statement under Section 12(b).
Under the SEC’s new policy, issuers may now submit DRSs for confidential review in connection with any Securities Act offering or any registration of a class of securities under Section 12(b) or Section 12(g) of the Exchange Act, regardless of how much time has passed since the issuer became public.
2. Accommodation for de-SPAC Transactions
Under rules adopted in July 2024, target companies in de-SPAC transactions (where a SPAC, which is a public company, merges with a private company) must be co-registrants when the SPAC files the registration statement.
The SEC has now clarified that these registration statements—where the SPAC is the surviving entity—are eligible for confidential submission if the target company would itself qualify for non-public review under existing policies.
3. Foreign Private Issuers (FPIs)
FPIs may rely on this expanded non-public review process. Alternatively, if the FPI qualifies as an EGC, it can follow the EGC-specific DRS procedures. FPIs that do not qualify as EGCs may also continue to rely on the separate confidential submission policy the SEC outlined in its May 30, 2012, guidance for FPIs.
4. Omission of Underwriter Information
Issuers are now permitted to omit underwriter names from initial draft submissions, which is consistent with a practice that has developed when an issuer has not yet selected an underwriter. However, underwriter details must still be included in subsequent confidential draft submissions and in the publicly filed registration statement.
Submitting a Draft Registration Statement
The SEC expects a substantially complete submission—meaning the draft should be as close to final as possible. However, the SEC recognizes that some financial information may not be ready (for example, if a fiscal period has not yet ended), and commented that if the issuer reasonably expects that the missing information will not be required at the time of public filing (e.g., due to permitted reporting accommodations), the SEC will proceed with its review despite the omissions, an accommodation previously limited to EGCs.
Issuers can also continue to request relief under Rule 3-13 of Regulation S-X, which allows them to omit or modify certain financial statement requirements if the omitted information is immaterial and providing it would be unduly burdensome. The SEC will assess these requests based on the issuer’s particular facts and circumstances.
Public Availability and Timing
DRS submissions remain confidential until the issuer publicly files its registration statement. At that point, previously submitted DRS submissions, along with the SEC’s comment letters and the issuer’s responses, become publicly available via EDGAR.
For IPOs and initial Exchange Act registrations, the initial public filing must be made at least 15 days before any road show or, in the absence of a road show, at least 15 days prior to the registration statement’s requested effective date. This 15-day requirement is not new and mirrors the timeline previously applied to EGCs.
For subsequent public offerings and Exchange Act registrations (regardless of how much time has passed since the company became public), the initial public filing must be made at least two business days prior to the registration statement’s requested effective date. However, unlike the non-confidential registration process for IPOs and initial Exchange Act registrations, the SEC indicated that an issuer responding to staff comments on a DRS will need to do so on a public filing and not in a revised DRS.
Additionally, submissions of Exchange Act registration statements on Form 10, 20-F, or 40-F will need to be publicly filed with the SEC to ensure that the required 30-day or 60-day period runs before effectiveness, in accordance with existing rules.
Coordinating with the SEC
Issuers should consider communicating directly with SEC staff regarding their anticipated transaction timelines—particularly for filings tied to specific pricing windows or deal milestones. The SEC will consider reasonable requests for expedited review for both confidential and public filings.
The SEC staff indicated that, for subsequent public offerings and Exchange Act registrations, it may consider reasonable requests to expedite the two-business-day period during which the registration statement must be public.
Takeaways
This expanded confidential submission process provides greater flexibility, particularly for companies that were previously excluded from confidential review (such as seasoned issuers).
In particular, for seasoned issuers that are unable to access shelf registrations (due to, for example, baby-shelf limitations), the new guidelines allow issuers seeking to raise capital in a registered offering to file a DRS on Form S-1 or F-1 confidentially and, if there is no review, to quickly pivot to pricing the deal when market conditions are ripe.
By allowing more issuers to engage in a nonpublic review process with the SEC, the new policy may facilitate more capital formation while preserving key investor protections.
David Huberman also contributed to this article.
Data Breach Class Action Settlement Approval Affirmed by Ninth Circuit with Attorneys’ Fee Award Reversed and Remanded
Some data breach class actions settle quickly, with one of two settlement structures: (1) a “claims made” structure, in which the total amount paid to class members who submit valid claims is not capped, and attorneys’ fees are awarded by the court and paid separately by the defendant; or (2) a “common fund” structure, in which the defendant pays a lump sum that is used to pay class member claims, administration costs, and attorneys’ fees awarded by the court. A recent Ninth Circuit decision affirmed the district court’s approval of a “claims made” settlement but reversed and remanded the attorneys’ fee award. The decision highlights how approval of the settlement terms should be evaluated independently of the attorneys’ fees, although some courts seem to merge the two analyses.
In re California Pizza Kitchen Data Breach Litigation, – F.4th –, 2025 WL 583419 (9th Cir. Feb. 24, 2025) involved a ransomware attack that compromised data, including Social Security numbers, of the defendant’s current and former employees. After notification of the breach, five class action lawsuits were filed, four of which were consolidated and proceeded directly to mediation. A settlement was reached providing for reimbursement for expenses and lost time, actual identity theft, credit monitoring, and $100 statutory damages for a California subclass. The defendant agreed not to object to attorneys’ fees and costs for class counsel of up to $800,000. The plaintiffs estimated the total value of the settlement at $3.7 million.
The plaintiffs who had brought the fifth (non-consolidated) case objected to the settlement. The district court held an unusually extensive preliminary approval hearing, at which the mediator testified. The court preliminarily approved the settlement, deferring its decision on attorneys’ fees until the information regarding claims submitted by class members was available. At that point, the district court, after estimating the total value of the class claims at $1.16 million (the claim rate was 1.8%), awarded the full $800,000 of attorneys’ fees and costs requested, which was 36% of the total class benefit of $2.1 million (including the $1.16 million plus settlement administration costs and attorneys’ fees and costs).
On appeal, the Ninth Circuit majority concluded that the district court did not abuse its discretion in approving the settlement. Based on the mediator’s testimony, the district court reasonably concluded that the settlement was not collusive. The Ninth Circuit explained that “the settlement offers real benefits to class members,” “the class’s standing rested on questionable footing—there is no evidence that any CPK employee’s compromised data was misused,” and “courts do not have a duty to maximize settlement value for class members.”
The attorneys’ fee award, however, was reversed and remanded. The Ninth Circuit explained that the class claims were properly valued at $950,000 (due to a miscalculation by the district court), and the fee award was 45% of the settlement value, “a significant departure from our 25% benchmark.” In remanding, the Ninth Circuit noted that a “downward adjustment” would likely be warranted on remand.
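The percentage at issue can be roughly reproduced from the figures reported in the opinion; settlement administration costs, which would nudge the percentage slightly lower, are omitted from this sketch:

```python
fees = 800_000                   # attorneys' fees and costs awarded below
corrected_class_value = 950_000  # Ninth Circuit's corrected valuation of class claims

# Fee award as a share of total settlement value (class benefit plus fees):
share = fees / (corrected_class_value + fees)
print(f"{share:.1%}")  # 45.7%, well above the Ninth Circuit's 25% benchmark
```

Including administration costs in the denominator brings the figure in line with the 45% cited by the court.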
Judge Collins concurred in part and dissented in part. He would have reversed the approval of the settlement, concluding that the district court failed to adequately address the objections and the low claims rate, and citing “the disparity between the size of the settlement and the attorney’s fees.”
From a defendant’s perspective, this decision demonstrates how it can be important to convey to the court that the approval of the proposed settlement should be evaluated independently of the attorney’s fees application. If the court finds the proposed fee award too high, that should not warrant disapproval of the settlement if the proposed relief for the class members is fair and reasonable. This is true of both “claims made” and “common fund” settlement structures.
NYDFS Annual Compliance Submissions Due April 15, 2025 and New Compliance Requirements Effective on May 1, 2025
As we previously reported, in 2023 the New York State Department of Financial Services (NYDFS) amended its cybersecurity regulation, 23 NYCRR 500 (or Part 500). As of November 1, 2024, Class A Companies and Covered Entities were required to comply with numerous Part 500 compliance obligations outlined here.
April 15, 2025 Compliance Certification Deadline
Covered Entities have been required to submit annual compliance filings under Part 500 since the regulation’s adoption; however, since 2024, Covered Entities have had the option to submit either a Certification of Material Compliance (certifying they materially complied with the regulation requirements that applied to them in the prior year) or an Acknowledgement of Noncompliance (identifying all sections of the regulation with which they have not complied and providing a remediation timeline).
The deadline for Covered Entities to submit annual compliance notifications for the 2024 calendar year is April 15, 2025. Submissions can be made through the NYDFS Portal. Covered Entities that qualify for full exemptions from Part 500 do not have to submit annual compliance notifications. For more information on the April 15 compliance deadline, guidance on which form to file, and step-by-step instructions, see NYDFS’s Submit a Compliance Filing section in the Cybersecurity Resource Center or contact your Katten attorney.
May 1, 2025 Compliance Obligations
On May 1, 2025, Covered Entities are required to meet additional requirements under Part 500, including:
Access Privileges and Management
Implement enhanced requirements regarding limiting user access privileges, including privileged account access.
Review access privileges and remove or disable accounts and access that are no longer necessary.
Disable or securely configure all protocols that permit remote control of devices.
Promptly terminate access following personnel departures.
Implement a reasonable written password policy to the extent passwords are used.
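As an illustration of how the access-review items above might be operationalized, the sketch below flags accounts belonging to departed personnel or accounts idle beyond a review threshold. The record layout and the 90-day threshold are assumptions for illustration, not requirements of Part 500:

```python
from datetime import date

def stale_accounts(accounts, today, max_idle_days=90):
    """Return usernames that should be reviewed for removal: departed
    personnel, or accounts idle past the review threshold.
    Each account dict is assumed to carry 'user', 'departed', 'last_login'."""
    flagged = []
    for acct in accounts:
        idle = (today - acct["last_login"]).days > max_idle_days
        if acct["departed"] or idle:
            flagged.append(acct["user"])
    return flagged

accounts = [
    {"user": "jdoe",   "departed": True,  "last_login": date(2025, 3, 1)},
    {"user": "asmith", "departed": False, "last_login": date(2024, 10, 1)},
    {"user": "bkim",   "departed": False, "last_login": date(2025, 3, 10)},
]
print(stale_accounts(accounts, today=date(2025, 4, 1)))  # ['jdoe', 'asmith']
```

A periodic job of this shape gives the review of access privileges a concrete, auditable output.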
Covered Entities and Class A Companies must also address the below items:
Vulnerability Management: Conduct automated scans of information systems, and a manual review of systems not covered by such scans, to discover, analyze, and report vulnerabilities at a frequency determined by their risk assessment and promptly after any material system changes.
Malicious Code: Implement controls to protect against malicious code.
Class A Companies must further update their information security programs to include:
Monitoring and Training: Implement (1) an endpoint detection and response solution to monitor anomalous activity and (2) a centralized logging and security event alerting solution. CISOs can approve reasonably equivalent or more secure compensating controls, but approval must be in writing.
Virginia Poised to Become Second State to Enact Comprehensive AI Legislation
Go-To Guide:
Virginia’s HB 2094 applies to high-risk AI system developers and deployers and focuses on consumer protection.
The bill covers AI systems that autonomously make or significantly influence consequential decisions without meaningful human oversight.
Developers must document system limits, ensure transparency, and manage risks, while deployers must disclose AI usage and conduct impact assessments.
Generative AI outputs must be identifiable, with limited exceptions.
The attorney general would oversee enforcement, with penalties up to $10,000 per violation and a discretionary 45-day cure period.
HB 2094 is narrower than the Colorado AI Act (CAIA), with clearer transparency obligations and trade secret protections, and differs from the EU AI Act, which imposes stricter, risk-based compliance rules.
On Feb. 20, 2025, the Virginia General Assembly passed the High-Risk Artificial Intelligence (AI) Developer and Deployer Act (HB 2094). If signed by Gov. Glenn Youngkin, Virginia would become the second U.S. state to implement a broad framework regulating AI use, particularly in high-risk applications. The bill is closely modeled on the CAIA and would take effect on July 1, 2026.
This GT Alert covers to whom the bill applies, important definitions, key differences with the CAIA, and potential future implications.
To Whom Does HB 2094 Apply?
HB 2094 applies to any person doing business in Virginia that develops or deploys a high-risk AI system. “Developers” refers to organizations that offer, sell, lease, give, or otherwise make high-risk AI systems available to deployers in Virginia. The requirements HB 2094 imposes on developers would also apply to a person who intentionally and substantially modifies an existing high-risk AI system. “Deployers” refers to organizations that deploy or use high-risk AI systems to make consequential decisions about Virginians.
How Does HB 2094 Work?
Key Definitions
HB 2094 aims to protect Virginia residents acting in their individual capacities. It would not apply to Virginia residents who act in a commercial or employment context. Furthermore, HB 2094 defines “generative artificial intelligence systems” as AI systems that incorporate generative AI, which includes the capability of “producing and [being] used to produce synthetic content, including audio, images, text, and videos.”
HB 2094’s definition of “high-risk AI” would apply only to machine-learning-based systems that (i) serve as the principal basis for consequential decisions, meaning they operate without meaningful human oversight, and (ii) are explicitly intended to autonomously make or substantially influence such decisions.
High-risk applications include parole, probation, pardons, other forms of release from incarceration or court supervision, and determinations related to marital status. As the bill would not apply to government entities, it is not yet clear which private sector decisions might be in scope of these high-risk applications.
Requirements
HB 2094 places obligations on AI developers and deployers to mitigate risks associated with algorithmic discrimination and ensure transparency. It establishes a duty of care, disclosure, and risk management requirements for high-risk AI system developers, along with consumer disclosure obligations and impact assessments for deployers. Developers must document known or reasonably known limitations in AI systems. Generated or substantially modified synthetic content from generative AI high-risk systems must be made identifiable and detectable using industry-standard tools, comply with applicable accessibility requirements where feasible, and ensure the synthetic content is identified at the time of generation, with exceptions for low-risk or creative applications such that it “does not hinder the display or enjoyment of such work or program.” The bill references established AI risk frameworks such as NIST AI RMF and ISO/IEC 42001.
Exemptions
Certain exclusions apply under HB 2094, including AI use in response to a consumer request or to provide a requested service or product under a contract. There are also limited exceptions for financial services and broader exemptions for healthcare and insurance sectors.
Enforcement
The bill grants enforcement authority to the attorney general and establishes penalties for noncompliance. Violations may result in fines up to $1,000 per occurrence, with attorney fee shifting, while willful violations may carry fines up to $10,000 per occurrence. Each violation would be considered separately for penalty assessment. The attorney general must issue a civil investigative demand before initiating enforcement action, and a discretionary 45-day right to cure period is available to address violations. There is no private right of action under HB 2094.
Key Differences With the CAIA
While HB 2094 is closely modeled on the CAIA, it introduces notable differences. HB 2094 limits its definition of consumers to individual and household contexts, explicitly excluding commercial and employment contexts. It defines “high-risk AI” more narrowly, focusing only on systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions, while expanding the list of high-risk uses to include decisions related to parole, probation, pardons, and marital status. It also provides clearer guidelines on when a developer becomes a deployer, imposes more specific documentation and transparency obligations, and enhances trade secret protections. Unlike the CAIA, HB 2094 does not require reporting algorithmic discrimination to the attorney general and allows a discretionary 45-day right to cure violations.
While HB 2094 aligns with aspects of the CAIA, it differs from the broader and more stringent EU AI Act, which imposes risk-based AI classifications, stricter compliance obligations, and significant penalties for violations. HB 2094 also does not contain direct incident reporting requirements, public disclosure requirements, or a small business exception. Finally, HB 2094 upholds a higher threshold than CAIA for consumer rights when a high-risk AI makes a negative decision relating to a consumer, requiring that the AI system must have processed personal data beyond what the consumer directly provided.
Conclusion
If signed into law, HB 2094 would make Virginia the second U.S. state to implement comprehensive AI regulations, setting guidelines for high-risk AI systems while seeking to address concerns about transparency and algorithmic discrimination. With enforcement potentially beginning in 2026, businesses developing or deploying AI in Virginia should proactively assess their compliance obligations and prepare for the new regulatory framework, including where the organization is also subject to obligations under the CAIA.
1 See also GT’s blog post on the Colorado AI Act. Other states have regulated specific uses of AI or associated technologies, such as California and Utah, which, respectively, regulate interaction with bots and generative AI.
UK ICO Publishes 2025 Tech Horizons Report
On February 20, 2025, the UK Information Commissioner’s Office (“ICO”) published its annual Tech Horizons Report (the “Report”), which explores four key technologies expected to play a significant role in society over the next two to seven years. These technologies include connected transport, quantum sensing and imaging, digital diagnostics and therapeutics, and synthetic media. The Report also discusses the ongoing work of the ICO in addressing data protection and privacy concerns related to the emerging technologies featured in their previous Tech Horizons reports.
The Report provides an overview of how key innovations are seeking to reshape industries and everyday life, the privacy and data protection implications of such innovations, and the ICO’s proposed recommendations and next steps. Below are examples of some of the potential privacy and data protection implications identified by the ICO, along with certain recommendations:
Connected Transport
Connected vehicles collect extensive and wide-ranging personal data for various purposes in a “complex ecosystem” of controllers and processors. Those organizations with transparency obligations must ensure they provide clear, concise and accessible privacy notices to individuals (including passengers); however, the ICO acknowledges that providing privacy notices in the connected transport environment may be a challenge.
Organizations should identify the correct lawful bases for processing personal data and remember that, in addition to the UK General Data Protection Regulation (“UK GDPR”), the Privacy and Electronic Communications Regulations also may apply in the context of connected transport and may require consent for certain activities.
Biometric technology may be used in connected transport for purposes such as fingerprint scanners to unlock vehicles. This technology requires the processing of biometric data which must comply with the requirements to process special category data.
When vehicles are shared, privacy concerns arise regarding access to data from previous users, such as location or smartphone pairings.
The ICO recommends embedding privacy by design into hardware and services related to connected vehicles to demonstrate compliance with the UK GDPR and other data protection legislation.
Quantum Sensing and Imaging
The ICO acknowledges that in the case of novel quantum sensing and imaging for medical or research purposes, a key benefit is the extra detail and insights provided by the technology. This could be seen as conflicting with the principle of data minimization. The ICO states that the principle “does not prevent healthcare organisations processing more detailed information about people where necessary to support positive health outcomes,” but that organizations must have a justification for collecting and processing additional information, such as a clear research benefit.
The ICO states that it will continue to find opportunities to engage with industry in this area and to explore any potential data protection risks. The ICO also encourages embedding privacy by design and default when testing and deploying quantum technologies that involve processing personal information.
Digital Diagnostics and Therapeutics
Organizations working in health care are a target for cyber attacks for a number of reasons, including the nature of data held by such organizations. The adoption of digital diagnostics and therapeutics will only increase this risk. Organizations engaged in this space must comply with all applicable security obligations, including the obligation to ensure the confidentiality, security and integrity of the personal information they process in accordance with the UK GDPR.
According to the ICO, while the use of artificial intelligence (“AI”) and automated decision-making (“ADM”) “could improve productivity and patient outcomes,” there is a risk that their use to make decisions could “adversely affect some patients.” For example, bias is a key risk when considering AI and ADM. Organizations should use appropriate technical and organizational measures to prevent AI-driven discrimination. Another material risk is the lack of transparency regarding how AI tools process patient data. The ICO states that lack of transparency in a medical context could result in patient harm, and that the use of AI does not reduce an organization’s responsibility to comply with transparency obligations under the UK GDPR.
The ICO recommends providers implement privacy by design and ensure that any third parties they are engaged with have in place appropriate privacy measures and safeguards. In addition, providers should also ensure they follow guidance regarding fairness, bias and unlawful discrimination.
Synthetic Media
Data protection laws apply to personal data used in creating synthetic media, even if the final product does not contain identifiable information.
If automated moderation is used, the ICO confirms that organizations must comply with the ADM requirements of the UK GDPR.
The ICO intends to develop its understanding of synthetic media, including how personal data is processed in this context. The ICO also will work with other regulators and continue to engage with other stakeholders such as the public and interest groups.
Virginia Legislature Passes AI Bill
On February 20, 2025, the Virginia legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”).
The Act is a comprehensive bill that is focused on accountability and transparency in AI systems. The Act would apply to developers and deployers of “high-risk” AI systems that do business in Virginia. An AI system would be considered high-risk if it is intended to autonomously make, or be a substantial factor in making, a consequential decision. Under the Act, a consequential decision means a “decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer” of: (1) parole, probation, a pardon, or any other release from incarceration or court supervision; (2) education enrollment or an education opportunity; (3) access to employment; (4) a financial or lending service; (5) access to health care services; (6) housing; (7) insurance; (8) marital status; or (9) a legal service. The Act excludes a number of activities from what is considered a high-risk AI system, such as if the system is intended to perform a narrow procedural task or improve the result of a previously completed human activity.
The Act includes requirements that differ depending on whether the covered business is an AI system developer or deployer. The requirements are generally aimed at avoiding algorithmic discrimination, ensuring impact assessments, promoting AI risk management frameworks, and ensuring transparency and protection against adverse decisions.
The Virginia Attorney General has exclusive authority to enforce the Act. Violations of the Act are subject to a civil penalty of up to $1,000, plus reasonable attorney fees, expenses and costs. The penalty can be increased up to $10,000 for willful violations. Notably, the Act states that each violation is a separate violation. The Act also provides a 45-day cure period.
Virginia Governor Glenn Youngkin has until March 24, 2025 to sign, veto or return the bill with amendments. If enacted, the law would take effect July 1, 2026.
TRANSFERRED: Shelton Suit Against Freedom Forever Pulled from PA and Sent to California
Famous TCPA litigator James Shelton had home court advantage yanked away from him yesterday when a court ordered his TCPA suit against Freedom Forever, LLC transferred to California.
In Shelton v. Freedom Forever, 2025 WL 693249 (E.D. Pa. Mar. 4, 2025), the Court ordered the case transferred to California, where the bulk of the activity leading up to the calls at issue took place.
While Shelton claims to have received calls in PA, the calling parties and all applicable principals and policies were California based. Since the case was a class action, and not an individual suit, the court determined Shelton’s presence in one state was not important, as an entire nation’s worth of class members must be taken into account.
On balance, it made more sense to try the case in California, where the key defense witnesses were located, than in PA, where only Shelton resided.
Pretty straightforward and good ruling. TCPA defendants should consider transfer motions where a superior jurisdiction may exist that aligns with the interests of justice.
Generally California is not where one wants to litigate a case, but let’s assume Freedom Forever thought that through before filing its motion.