Regulation Round Up: February 2025

Welcome to the Regulation Round Up, a regular bulletin highlighting the latest developments in UK and EU financial services regulation.
Key developments in February 2025:
28 February
FCA Handbook Changes: The Financial Conduct Authority (“FCA”) published Handbook Notice 127, which sets out changes to the FCA Handbook made by the FCA board on 30 January and 27 February 2025.
27 February
Economic Growth / Consumer Duty: The FCA published a speech on, among other things, how the FCA is working to support growth initiatives in the economy and its approach to the Consumer Duty.
FCA Regulation Round‑up: The FCA published its regulation round‑up for February 2025. Among other things, it covers the launch of a new companion tool to the Financial Services Register and future changes to the pre‑application support services the FCA offers.
26 February
Reserved Investor Funds: The Alternative Investment Funds (Reserved Investor Fund) Regulations 2025 (SI 2025/216) were published, together with an explanatory memorandum. The Reserved Investor Fund is a new UK‑based unauthorised contractual scheme with lower costs and more flexibility than the existing authorised contractual scheme.
ESG: The European Commission proposed an Omnibus package on sustainability (here and here) to amend the sustainability due diligence and reporting requirements under the Corporate Sustainability Due Diligence Directive ((EU) 2024/1760) and the Corporate Sustainability Reporting Directive ((EU) 2022/2464). Please refer to our dedicated article on this topic here.
ESG: The European Commission published a call for evidence on a draft Delegated Regulation amending the Disclosures Delegated Act ((EU) 2021/2178) (Ares (2025) 1532453), the Taxonomy Climate Delegated Act (Commission Delegated Regulation (EU) 2021/2139) and the Taxonomy Environmental Delegated Act (Commission Delegated Regulation (EU) 2023/2486).
FCA Asset Management / Alternatives Supervision: The FCA published a portfolio letter explaining its supervision priorities for asset management and alternatives firms.
Cryptoassets: ESMA published the official translations of its guidelines (ESMA35‑1872330276‑2030) on situations in which a third‑country firm is deemed to solicit clients established or situated in the EU and the supervision practices to detect and prevent circumvention of the reverse solicitation exemption under the Markets in Crypto Assets Regulation (EU) 2023/1114 (“MiCA”).
24 February
Artificial Intelligence: The FCA published a research note on AI’s role in credit decisions.
Suitability Reviews / Ongoing Services: The FCA published a webpage and press release containing the findings of its multi‑firm review of suitability reviews and whether financial advisers are delivering the ongoing services that consumers have paid for.
21 February
Cryptoassets: The Financial Stability Board published summary terms of reference for its thematic peer review on its global regulatory framework for cryptoasset activities.
20 February
PRA Policy: The Prudential Regulation Authority (“PRA”) published a policy statement (PS3/25) on its approach to policy.
Digital Operational Resilience: Two Commission Regulations supplementing the Regulation on digital operational resilience for the financial sector ((EU) 2022/2554) (“DORA”) were published in the Official Journal of the European Union (here and here).
17 February
Cryptoassets: ESMA published a consultation paper (ESMA35‑1872330276‑2004) on guidelines for the criteria to assess knowledge and competence under MiCA.
14 February
ESG: The FCA updated its webpage on its consultation paper on extending the sustainability disclosure requirements (“SDR”) and investment labelling regime to portfolio managers. Please refer to our dedicated article on this topic here.
ESG: The City of London Law Society published its response to HM Treasury’s November 2024 consultation on the UK green taxonomy.
Authorised Funds: The FCA published a document setting out its expectations on authorised fund applications.
Financial Sanctions: The Office of Financial Sanctions Implementation published a threat assessment report covering financial services.
13 February
Financial Regulatory Forum: HM Treasury published a statement following the third meeting of the joint UK‑EU Financial Regulatory Forum on 12 February 2025.
12 February
EU Competitiveness: The European Commission adopted a Communication setting out its vision to simplify how the EU works by reducing unnecessary bureaucracy and improving how new EU rules are made and implemented to make the EU more competitive.
European Commission 2025 Work Programme: The European Commission published a communication outlining its work programme for 2025 (COM(2025) 45 final).
10 February
Artificial Intelligence: The European Commission published draft non‑binding guidelines to clarify the definition of an AI system under the EU AI Act.
5 February
ESG: The EU Platform on Sustainable Finance published a report setting out recommendations to simplify and improve the effectiveness of taxonomy reporting. Please refer to our dedicated article on this topic here.
3 February
Payments: The FCA published a portfolio letter sent to payments firms setting out its priorities for them and actions it expects them to take.
Artificial Intelligence: The House of Commons Treasury Committee launched an inquiry into AI in financial services and published a related call for evidence.
Sulaiman Malik and Michael Singh contributed to this article.

Céline Dion Calls Out AI-Generated Music Featuring Her Voice Without Permission

Céline Dion is making it clear that the AI-generated music circulating online, which claims to feature her voice, is unauthorized and “fake.” In a statement shared on her Instagram on March 7, 2025, the Grammy-winning singer spoke out against the growing trend of AI […]

Design-Code Laws: The Future of Children’s Privacy or White Noise?

In recent weeks, there has been significant buzz around the progression of legislation aimed at restricting minors’ use of social media. This trend has been ongoing for years but continues to face resistance, largely because of strong arguments that all-out bans on social media use not only infringe on a minor’s First Amendment rights but, in many cases, also create an environment that enables violations of that minor’s privacy.
Although companies subject to these laws must be wary of the potential ramifications and challenges if such legislation is enacted, these concerns should be integrated into product development rather than driving business decisions.
Design-Code Laws
A parallel trend emerging in children’s privacy is an influx of legislation aimed at requiring companies to proactively consider the best interests of minors as they design their websites (Design-Code Laws). These Design-Code Laws would require companies to implement and maintain controls to minimize the harms that minors could face when using their offerings.
At the federal level, although not exclusively a Design-Code Law, the Kids Online Safety Act (KOSA) included similar elements and, like those proposed bills, placed the responsibility on covered platforms to protect children from potential harms arising from their offerings. Specifically, KOSA introduced the concept of a “duty of care,” under which covered platforms would be required to act in the best interests of minors under 18 and protect them from online harms. Additionally, KOSA would require covered platforms to adhere to multiple design requirements, including enabling default safeguard settings for minors and providing parents with tools to manage and monitor their children’s online activity. Although the bill’s momentum has slowed as supporters try to account for prospective challenges in each subsequent draft, it remains active and has received renewed support from members of the current administration.
At the state level, there is more activity around Design-Code Laws, with both California and Maryland enacting legislation. California’s law, which was enacted in 2022, has yet to go into effect and continues to face opposition largely centered around the law’s alleged violation of the First Amendment. Similarly, Maryland’s 2024 law is currently being challenged. Nonetheless, seven other states (Illinois, Nebraska, New Mexico, Michigan, Minnesota, South Carolina and Vermont) have introduced similar Design-Code Laws, each taking into consideration challenges that other states have faced and attempting to further tailor the language to withstand those challenges while still addressing the core issue of protecting minors online.
Why Does This Matter?
While opponents of bright-line laws restricting minors’ social media use have had success challenging those bans, Design-Code Laws not only have stronger support but are also likely to continue evolving to withstand challenges over time. Although it is unclear exactly where the Design-Code Laws will end up (which states will enact them, which will withstand challenges, and what the core elements of the surviving laws will be), the following trends are clear:

There is a desire to regulate how companies collect data from, or target their offerings to, minors in order to protect this audience. The scope of the Design-Code Laws often does not stop at social media companies; rather, these laws are intended to regulate any company that provides an online offering likely to be accessed by children under the age of 18. Given the nature and accessibility of the web, many more companies will be within the scope of these laws than of the hotly contested laws banning social media use.
These laws bring the issue of conducting data privacy impact assessments (DPIAs) to the forefront. Already mandated by various state and international data protection laws, DPIA requirements compel companies to establish processes to proactively identify, assess, and mitigate risks associated with processing personal information. Companies handling minors’ data in these jurisdictions will need to do the following (a minimal illustrative sketch follows the list):

Create a DPIA process if they do not have one.
Build in additional time in their product development cycle to conduct a DPIA and address the findings.
Consider how to treat product roll-out in jurisdictions that do not have the same stringent requirements as those that have implemented Design-Code Laws.
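
The sketch below is a minimal, hypothetical Python illustration of one way a company might record a DPIA and gate a product launch on unmitigated high risks. The structure, field names, and the launch check are illustrative assumptions, not a format prescribed by any Design-Code Law.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    description: str           # e.g., "default public profiles for users under 18"
    level: RiskLevel
    mitigation: str = ""       # documented control; empty until addressed


@dataclass
class DPIA:
    product: str
    jurisdictions: list[str]   # markets where the offering will launch
    processes_minors_data: bool
    risks: list[Risk] = field(default_factory=list)

    def unmitigated_high_risks(self) -> list[Risk]:
        """High risks that still lack a documented mitigation."""
        return [r for r in self.risks if r.level is RiskLevel.HIGH and not r.mitigation]

    def ready_for_launch(self) -> bool:
        """Simple gate: hold the roll-out while high risks remain unmitigated."""
        return not self.unmitigated_high_risks()


if __name__ == "__main__":
    assessment = DPIA(
        product="ExampleTeenFeed",        # hypothetical product name
        jurisdictions=["CA", "MD"],
        processes_minors_data=True,
        risks=[Risk("default public profiles for minors", RiskLevel.HIGH)],
    )
    print(assessment.ready_for_launch())  # False: high risk not yet mitigated
    assessment.risks[0].mitigation = "profiles default to private for users under 18"
    print(assessment.ready_for_launch())  # True
```

Keeping this kind of record as part of the product development cycle also makes it easier to explain, jurisdiction by jurisdiction, why a feature rolled out differently where Design-Code Laws apply.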

As attention to children’s privacy continues to escalate, particularly at the state level, companies must remain vigilant and proactive in how they address these concerns. Although the enactment of these laws may seem far off given the continued challenges, the emerging trends are clear. Proactively creating processes will mitigate the effects these laws may have on existing offerings and will also allow a company to gradually build out processes that are both effective and minimally burdensome on the business.

The BR Privacy & Security Download: March 2025

STATE & LOCAL LAWS & REGULATIONS
Virginia Legislature Passes Bill Regulating High-risk AI: The Virginia legislature passed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”). Using a similar approach to the Colorado AI Act passed in 2024 and California’s proposed regulations for automated decision-making technology, the Act defines “high-risk AI systems” as AI systems that make consequential decisions, which are decisions that have material legal or similarly significant effects on a consumer’s ability to obtain things such as housing, healthcare services, financial services, access to employment, and education. The Act would require developers to use reasonable care to prevent algorithmic discrimination and to provide detailed documentation on an AI system’s purpose, limitations, and risk mitigation measures. Deployers of AI systems would be required to implement risk management policies, conduct impact assessments before deploying high-risk AI systems, disclose AI system use to consumers, and provide opportunities for correction and appeal. The bill is currently with Virginia Governor Glenn Youngkin, and it is unclear if he will sign it.
Connecticut Introduces AI Bill: After an effort to pass AI legislation stalled last year in the Connecticut House of Representatives, another AI bill was introduced in the Connecticut Senate in February. SB-2 would establish regulations for the development, integration, and deployment of high-risk AI systems designed to prevent algorithmic discrimination and promote transparency and accountability. SB-2 would specifically regulate high-risk AI systems, defined as AI systems making consequential decisions affecting areas like employment, education, and healthcare. The bill includes similar requirements as the Connecticut AI bill considered in 2024 and would require developers to use reasonable care to prevent algorithmic discrimination and provide documentation on an AI system’s purpose, limitations, and risk mitigation measures. Deployers of high-risk AI systems would be required to implement risk management policies, conduct impact assessments before deployment of high-risk AI systems, disclose AI system use to consumers, and provide opportunities for appeal and correction.
New York Governor Signs Several Privacy Bills: New York Governor Kathy Hochul signed a series of bills expanding compliance obligations for social media platforms, debt collectors who use social media platforms, and dating applications. Senate Bill 895B—effective 180 days after becoming law—requires social media platforms operating in New York to post terms of service explaining how users may flag content they believe violates the platform’s terms. Senate Bill 5703B—effective immediately—prohibits the use of social media platforms for debt collection purposes. Senate Bill 2376B—effective 90 days after becoming law—expands the scope of New York’s identity theft protection law by including in its scope the theft of medical and health insurance information. Finally, Senate Bill 1759B—effective 60 days after becoming law—requires online dating services to notify individuals who were contacted by members who were banned for using a false identity, providing them with specific information to help users prevent being defrauded. Importantly, the New York Health Information Privacy Act, which would significantly expand the obligations of businesses that may collect broadly defined “health information” through their websites, has not yet been signed.
California Reintroduces Bill Requiring Browser-Based Opt-Out Preference Signals: For the second year in a row, the California Legislature has introduced a bill requiring browsers and mobile operating systems to provide a setting that enables a consumer to send an opt-out preference signal to businesses with which the consumer interacts through the browser or mobile operating system. The California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”), provides California residents with the ability to opt out of the sale or sharing of their personal data, including through an opt-out preference signal. AB 566 would amend the CCPA to ensure that consumers have the ability to do so. AB 566 requires the opt-out preference signal setting to be easy for a reasonable person to locate and configure. The bill further gives the California Privacy Protection Agency (“CPPA”), the agency charged with enforcing the CCPA, the authority to adopt regulations to implement and administer the bill. The CPPA has sponsored AB 566.
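For illustration, the best-known opt-out preference signal today is the Global Privacy Control (“GPC”), which participating browsers send as a Sec-GPC: 1 request header and expose to page scripts as navigator.globalPrivacyControl. The minimal Python sketch below, assuming a hypothetical Flask endpoint, shows one way a business might detect and honor such a signal; AB 566 does not prescribe any particular implementation.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.get("/api/ad-settings")
def ad_settings():
    # Participating browsers send the Global Privacy Control signal
    # as the "Sec-GPC: 1" request header.
    opted_out = request.headers.get("Sec-GPC") == "1"

    # Treat the signal as a request to opt out of the sale or sharing
    # of this visitor's personal information.
    return jsonify({
        "personalized_ads": not opted_out,
        "sell_or_share_personal_info": not opted_out,
    })


if __name__ == "__main__":
    app.run(port=8000)
```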
Virginia Senate Passes Amendments to Virginia Consumer Data Protection Act: Virginia’s Senate Bill 1023 (“SB 1023”) amends the Virginia Consumer Data Protection Act by banning the sale of precise geolocation data. The bill defines precise location data as anything that can locate a person within 1,750 feet. Introduced by Democratic State Senator Russet Perry, the bill has garnered bipartisan support in the Virginia Senate, passing with a 35-5 vote on February 4, 2025. Perry stated that the type of data the bill intends to ban has been used to target people in domestic violence and stalking cases, as well as for scams.
Task Force Publishes Recommendations for Improvement of Colorado AI Act: The Colorado Artificial Intelligence Impact Task Force published its Report of Recommendations for Improvement of the Colorado AI Act. The Act, which was signed into law in May 2024, has faced significant pushback from a broad range of interest groups regarding ambiguity in its definitions, scope, and obligations. The Report is designed to help lawmakers identify and implement amendments to the Act prior to its February 1, 2026, effective date. The Report does not provide substantive recommendations regarding content but instead categorizes topics of potential changes based on how likely they are to receive consensus. The report identified four topics in which consensus “appears achievable with additional time,” four topics where “achieving consensus likely depends on whether and how to implement changes to multiple interconnected sections,” and seven topics facing “firm disagreement on approach where creativity will be needed.” These topics range from key definitions under the Act to the scope of its application and exemptions.
AI Legislation on Kids Privacy and Bias Introduced in California: California Assembly Member Bauer-Kahan introduced yet another California bill targeting Artificial Intelligence (“AI”). The Leading Ethical AI Development for Kids Act (“LEAD Act”) would establish the LEAD for Kids Standards Board in the Government Operations Agency. The Board would then be required to adopt regulations governing—among other things—the criteria for conducting risk assessments for “covered products.” Covered products include an artificial intelligence system that is intended to, or highly likely to, be used by children. The Act would also require covered developers to conduct and submit risk assessments to the board. Finally, the Act would authorize a private right of action for parents and guardians of children to recover actual damages resulting from breaches of the law.

FEDERAL LAWS & REGULATIONS
House Committee Working Group Organized to Discuss Federal Privacy Law: Congressman Brett Guthrie, Chairman of the House Committee on Energy and Commerce (the “Committee”), and Congressman John Joyce, M.D., Vice Chairman of the Committee, announced the establishment of a working group to explore comprehensive data privacy legislation. The working group is made up entirely of Republican members and is the first action in this new Congressional session on comprehensive data privacy legislation. 
Kids Off Social Media Act Advances to Senate Floor: The Senate Commerce Committee advanced the Kids Off Social Media Act. The Act would prohibit social media platforms from allowing children under 13 to create accounts, prohibit platforms from algorithmically recommending content to teens under 17, and require schools to limit social media use on their networks as a condition of receiving certain funding. The Act is facing significant pushback from digital rights groups, including the Electronic Frontier Foundation and the American Civil Liberties Union, which claim that the Act would violate the First Amendment.
Business Groups Oppose Proposed Updates to HIPAA Security Rule: As previously reported, the U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) issued a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information (“ePHI”). See Blank Rome’s Client Alert on the proposed rule. A coalition of business groups, including the College of Healthcare Information Management Executives, America’s Essential Hospitals, American Health Care Association, Association of American Medical Colleges, Federation of American Hospitals, Health Innovation Alliance, Medical Group Management Association and National Center for Assisted Living, have written to President Trump and HHS Secretary Robert F. Kennedy, Jr. opposing the proposed rule. The business groups argue that the proposed rule imposes great financial burdens on the healthcare sector, including on rural hospitals, which would divert attention and funds away from other critical areas. The business groups also argue that the proposed rule contradicts Public Law 116-321, which explicitly requires HHS to consider a regulated entity’s adoption of recognized security practices when enforcing the HIPAA Security Rule, by not addressing or incorporating this legal requirement.
National Artificial Intelligence Advisory Committee Adopts List of 10 AI Priorities: The National Artificial Intelligence Advisory Committee (“NAIC”), which was established under the 2020 National Artificial Intelligence Initiative Act, approved a draft report for the Trump administration with 10 recommendations to address AI policy issues. The recommendations cover AI issues in employment, AI awareness and literacy, and AI in education, science, health, government, and law enforcement, as well as recommendations for empowering small businesses and AI governance and supporting AI innovation in a way that would benefit Americans.
CFPB Acting Director Instructs Agency Staff to Stop Work: Consumer Financial Protection Bureau (“CFPB”) Acting Director Russell Vought instructed agency staff to “stand down” and refrain from doing any work. The communication to CFPB employees followed an instruction to suspend regulatory activities and halt CFPB rulemaking. Vought also suspended the CFPB’s supervision and examination activities. This freeze would impact the CFPB’s rule on its oversight of digital payment apps as well as the CFPB’s privacy rule that created a right of data portability for customers of financial institutions.

U.S. LITIGATION
First Washington My Health My Data Lawsuit Filed: Amazon is facing a class action lawsuit alleging violations of Washington’s My Health My Data Act (“MHMDA”), along with federal wiretap laws and state privacy laws. The suit is the first one brought under MHMDA’s private right of action and centers on Amazon’s software development kit (“SDK”) embedded in third-party mobile apps. The plaintiff’s complaint alleges Amazon collected location data of users without their consent for targeted advertising. The complaint also alleges that the SDK collected time-stamped location data, mobile advertising IDs, and other information that could reveal sensitive health details. According to the lawsuit, this data could expose insights into a user’s health status, such as visits to healthcare facilities or health behaviors, without users knowing Amazon was also obtaining and monetizing this data. The lawsuit seeks injunctive relief, damages, and disgorgement of profits related to the alleged unlawful behavior. The outcome could clarify how broadly courts interpret “consumer health data” under the MHMDA.
NetChoice Files Lawsuit to Challenge Maryland Age-Appropriate Design Code Act: NetChoice, a tech industry group, filed a complaint in federal court in Maryland challenging the Maryland Age-Appropriate Design Code Act as violating the First Amendment. The Act was signed into law in May 2024 and became effective in October 2024. It requires online services that are likely to be accessed by children under the age of 18 to provide enhanced safeguards for, and limit the collection of data from, minors. In its Complaint, NetChoice alleges that the Act will not meaningfully improve online safety and will burden online platforms with the “impossible choice” of either proactively censoring categories of constitutionally protected speech or implementing privacy-invasive age verification systems that create serious cybersecurity risks. NetChoice has been active in challenging similar laws across the country, including in California, where it has successfully delayed the implementation of the similarly named California Age-Appropriate Design Code Act.
Kochava Settles Privacy Class Action; Unable to Dismiss FTC Lawsuit: Kochava Inc. (“Kochava”), a mobile app analytics provider and data broker, has settled the class action lawsuits alleging Kochava collected and sold precise geolocation data of consumers that originated from mobile applications. The settlement requires Kochava to pay damages of up to $17,500 for the lead plaintiffs and attorneys’ fees of up to $1.5 million. Among other required changes to Kochava’s privacy practices, the settlement requires Kochava to implement a feature aimed at blocking the sharing or use of raw location data associated with health care facilities, schools, jails, and other sensitive venues. Relatedly, U.S. District Judge B. Lynn Winmill of the District of Idaho denied Kochava’s motion to dismiss the lawsuit brought by the Federal Trade Commission (“FTC”) for Kochava’s alleged violations of Section 5 of the FTC Act. The FTC alleges that Kochava’s data practices are unfair and deceptive under Section 5 of the FTC Act because Kochava sells the sensitive personal information collected through mobile advertising IDs (“MAIDs”) to its customers, providing customers a “360-degree perspective” on consumers’ behavior through subscriptions to its data feeds, without the consumer’s knowledge or consent. In the order denying Kochava’s motion to dismiss, Winmill rejected Kochava’s argument that Section 5 of the FTC Act is limited to tangible injuries and wrote that the “FTC has plausibly pled that Kochava’s practices are unfair within the meaning of the FTC Act.”
Texas District Court Blocks Enforcement of Texas SCOPE Act: The U.S. District Court for the Western District of Texas (“Texas District Court”) granted a preliminary injunction blocking enforcement of Texas’ Securing Children Online through Parental Empowerment Act (“SCOPE Act”). The SCOPE Act requires digital service providers to protect children under 18 from harmful content and data collection practices. In Students Engaged in Advancing Texas v. Paxton, plaintiffs sued the Texas Attorney General to block enforcement of the SCOPE Act, arguing the law is an unconstitutional restriction of free speech. The Texas District Court ruled that the SCOPE Act is a content-based statute subject to strict scrutiny and that, with respect to certain of the SCOPE Act’s content monitoring and filtering, targeted advertising, and age-verification requirements, the law’s restrictions on speech failed strict scrutiny and should be facially invalidated. Accordingly, the Texas District Court issued a preliminary injunction halting the enforcement of such provisions. The remaining provisions of the law remain in effect.
California Attorney General Agrees to Narrowing of Its Social Media Law: The California Attorney General has agreed to not enforce certain parts of AB 587, now codified in the Business & Professions Code, sections 22675-22681, which set forth content moderation requirements for social media platforms (the “Social Media Law”). X Corp. (“X”) filed suit against the California Attorney General, alleging that the Social Media Law was unconstitutional, censoring speech based on what the state sees as objectionable. While the U.S. District Court for the Eastern District of California (“California District Court”) initially denied X’s request for a preliminary injunction to block the California Attorney General from enforcing the Social Media Law, the Ninth Circuit overturned that decision, holding that certain provisions of the law regarding extreme content failed the strict-scrutiny test for content-based restrictions on speech, violating the First Amendment. X and the California Attorney General have asked the California District Court to enter a final judgment based on the Ninth Circuit decision. The California Attorney General has also agreed to pay $345,576 in attorney fees and costs.

U.S. ENFORCEMENT
Arkansas Attorney General Sues Automaker over Data Privacy Practices: Arkansas Attorney General Tim Griffin announced that his office filed a lawsuit against General Motors (“GM”) and its subsidiary OnStar for allegedly deceiving Arkansans and selling data collected through OnStar from more than 100,000 Arkansas drivers’ vehicles to third parties, who then sold the data to insurance companies that used the data to deny insurance coverage and increase rates. The lawsuit alleges that GM advertised OnStar as offering the benefits of better driving, safety, and operability of its vehicles, but violated the Arkansas Deceptive Trade Practices Act by misleading consumers about how driving data was used. The lawsuit was filed in the Circuit Court of Phillips County, Arkansas.
Healthcare Companies Settle FCA Claims over Cybersecurity Requirements: Health Net and its parent company, Centene Corp. (collectively, “Health Net”), have reached a settlement with the United States Department of Justice (“DOJ”) over allegations that Health Net falsely certified compliance with cybersecurity requirements under a U.S. Department of Defense contract. Health Net had contracted with the Defense Health Agency of the U.S. Department of Defense (“DHA”) to provide managed healthcare support services for DHA’s TRICARE health benefits program. The DOJ alleged that Health Net failed to comply with its contractual obligations to implement and maintain certain federal cybersecurity and privacy controls. The DOJ alleged that Health Net violated the False Claims Act by falsely stating its compliance in related annual certifications to the DHA. The DOJ further alleged that Health Net ignored reports from internal and third-party auditors about cybersecurity risks on its systems and networks. Under the settlement, Health Net must pay the DOJ and DHA $11.25 million.
Eyewear Provider Fined $1.5M for HIPAA Violations: The U.S. Department of Health and Human Services (“HHS”), Office for Civil Rights (“OCR”) imposed a $1,500,000 civil money penalty against Warby Parker for violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule. The penalty stems from OCR’s investigation of a 2018 cyberattack involving unauthorized access to customer accounts and affecting nearly 200,000 individuals. Between September 25, 2018, and November 30, 2018, third parties accessed customer accounts using usernames and passwords obtained from breaches of other websites, a method known as “credential stuffing.” The compromised data included names, addresses, email addresses, payment card information, and eyewear prescriptions. OCR found that Warby Parker failed to conduct an accurate risk analysis, implement sufficient security measures, and regularly review information system activity.
CPPA Finalizes Sixth Data Broker Registration Enforcement Action: The California Privacy Protection Agency announced that it is seeking a $46,000 penalty against Jerico Pictures, Inc., d/b/a National Public Data, a Florida-based data broker, for allegedly failing to register and pay an annual fee as required by the California Delete Act. The Delete Act requires data brokers to register and pay an annual fee that funds the California Data Broker Registry. This action comes following a 2024 data breach in which National Public Data reportedly exposed 2.9 billion records, including names and Social Security Numbers. This is the sixth action taken by the CPPA against data brokers, with the first five actions resulting in settlements.

INTERNATIONAL LAWS & REGULATIONS
First EU AI Act Provisions Become Effective; Guidelines on Prohibited AI Adopted: The first provisions of the EU AI Act (the “Act”) came into force on February 2, 2025. The Act’s provisions prohibiting certain types of AI systems deemed to pose an unacceptable risk, together with its rules on AI literacy, are now applicable in the EU. Prohibited AI systems are those that present unacceptable risks to the fundamental rights and freedoms of individuals; among other uses, they include social scoring for public and private purposes, exploitation of vulnerable individuals through subliminal techniques, biometric categorization of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation, and emotion recognition in the workplace and education institutions unless used for medical or safety reasons. The new AI literacy obligations will require organizations to put in place robust AI training programs to ensure a sufficient level of AI literacy among their staff and other persons working with AI systems. Certain obligations related to general-purpose AI models will become effective August 2, 2025. Most other obligations under the Act will become effective August 2, 2026.
UK Introduces AI Cyber Code of Practice: The UK government has introduced a voluntary Code of Practice to address cybersecurity risks in AI systems, with the aim of establishing a global standard via the European Telecommunications Standards Institute (“ETSI”). This code is deemed necessary due to the unique security risks associated with AI, such as data poisoning and prompt injection. It offers baseline security requirements for stakeholders in the AI supply chain, emphasizing secure design, development, deployment, maintenance, and end-of-life. The Code of Practice is intended as an addendum to the Software Code of Practice. It provides guidelines for developers, system operators, data custodians, end-users, and affected entities involved in AI systems. Principles within the code include raising awareness of AI security threats, designing AI systems for security, evaluating and managing risks, and enabling human responsibility for AI systems. The code also emphasizes the importance of documenting data, models, and prompts, as well as conducting appropriate testing and evaluation.
CJEU Issues Opinion on Pseudonymized Data: The Court of Justice of the European Union (“CJEU”) issued a decision in a case involving an appeal by the European Data Protection Supervisor (“EDPS”) against a General Court decision that annulled the EDPS’s decision regarding the processing of personal data by the Single Resolution Board (“SRB”) during the resolution of Banco Popular Español SA in insolvency proceedings. The case reviewed whether data transmitted by the SRB to Deloitte constituted personal data. The data at issue consisted of comments from parties interested in the proceedings that had been pseudonymized by assigning a random alphanumeric code, as well as aggregated and filtered so that individual comments could not be distinguished within specific commentary themes. Deloitte did not have access to the codes or the original database. The court held that the data was personal data in the hands of the SRB. However, the court ruled that the EDPS was incorrect in determining that the pseudonymized data was personal data to Deloitte without analyzing whether it was reasonably possible for Deloitte to identify individuals from the data. As a takeaway, the CJEU left open the possibility that pseudonymized data could be organized and protected in such a way as to remove any reasonable possibility of re-identification with respect to a particular party, resulting in the data not constituting personal data under the GDPR.
European Commission Withdraws AI Liability Directive from Consideration; European Parliament Committee Votes to Press On: The European Commission announced that it plans to withdraw the proposed EU AI Liability Directive, draft legislation for addressing harms caused by artificial intelligence. The decision was announced in the Commission’s 2025 work programme, which states that there is no foreseeable agreement on the legislation. However, the proposed legislation has not yet been officially withdrawn. Despite the announcement, members of the European Parliament on the body’s Internal Market and Consumer Protection Committee voted to keep working on liability rules for artificial intelligence products. It remains to be seen whether the European Parliament and the EU Council can make continued progress in negotiating the proposal in the coming year.
Additional Authors: Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Adam J. Landy, Amanda M. Noonan and Karen H. Shin.

Privacy Tip #434 – Use of GenAI Tools Escaping Corporate Policies

According to a new LayerX report, most users are logging into GenAI tools through personal accounts that are not supported or tracked by an organization’s single sign-on policy. These logins to AI SaaS applications are unknown to the organization and are “not subject to organizational privacy and data controls by the LLM tool.” This is because most GenAI users are “casual, and may not be fully aware of the risks of GenAI data exposure.” As a result, a small number of users can expose large volumes of data. LayerX concludes that “[a]pproximately 18% of users paste data to GenAI tools, and about 50% of that is company information.” LayerX also found that 77% of users use ChatGPT as their online LLM tool.
We have outlined on several occasions the risk of data leakage with GenAI tools, and this report confirms that risk.
In addition, the report notes that “most organizations do not have visibility as to which tools are used in their organizations, by whom, or where they need to place controls.” Further, “AI-enabled browser extensions often represent an overlooked ‘side door’ through which data can leak to GenAI tools without going through inspected web channels, and without the organization being aware of this data transfer.”
LayerX provides solid recommendations to CISOs, including:

Audit all GenAI activity by users in the organization
Proactively educate employees and alert them to the risks of GenAI tools
Apply risk-based restrictions “to enable employees to use AI securely”

Employees must do their part as well. CISOs can implement operational measures to attempt to mitigate the risk of data leakage, but employees should follow organizational policies around the use of GenAI tools, collaborate with employers on the appropriate and authorized use of GenAI tools within the organization, and take responsibility for securing company data.

Foley Automotive Update 06 March 2025

Foley is here to help you through all aspects of rethinking your long-term business strategies, investments, partnerships, and technology. Contact the authors, your Foley relationship partner, or our Automotive Team to discuss and learn more.
Special Update — Trump Administration and Tariff Policies

Foley & Lardner provided an update on the potential ramifications of steel and aluminum tariffs on multinational companies.
Foley & Lardner partner Gregory Husisian described sentiment among Chief Financial Officers on the Trump administration’s approach to trade policy in The Wall Street Journal article, “The Latest Dilemma Facing Finance Chiefs: What to Tell Investors About Tariffs.”
Key tariff announcements include:

USMCA-compliant automakers have a one-month exemption from the 25% tariffs on U.S. imports from Canada and Mexico that were announced on March 4. The Trump administration announced the decision on March 5, following discussions with Ford, GM, and Stellantis.
In a March 5 MEMA update regarding the temporary pause of auto tariffs on Canada and Mexico, President and CEO Bill Long stated “Conversations held today indicate positive results that USMCA-compliant parts are included, but we are awaiting official confirmation from the Administration.” In breaking news on March 6, Commerce Secretary Howard Lutnick stated to CNBC: “It’s likely that it will cover all USMCA compliant goods and services, so that which is part of President Trump’s deal with Canada and Mexico are likely to get an exemption from these tariffs. The reprieve is for one month.”
On March 4, U.S. duties on Chinese imports were doubled to 20%. China intends to implement new tariffs on U.S. imports on March 10, and the nation added over two dozen U.S. companies to export control and corporate blacklists. 
The Canadian government does not plan to repeal the 25% retaliatory tariffs on approximately C$30 billion worth of goods from U.S. exporters, announced on March 4. Canada could also implement a second round of 25% tariffs in three weeks on C$125 billion of products that include cars, trucks, steel, and aluminum. Mexico plans to announce tariffs on U.S. imports on March 9.
25% levies on U.S. imports of steel and aluminum could be implemented March 12.
Announcements could follow on April 2 regarding 25% sector-specific tariffs that would include automobile and semiconductor imports, along with broader “reciprocal tariffs” on countries that tax U.S. imports. Details have not been provided regarding the recent threat of 25% duties on European imports.
A February 25 executive order directed the government to consider possible tariffs on copper.

Automotive Key Developments

U.S. new light-vehicle sales are estimated to have reached a seasonally adjusted annual rate (SAAR) of between 16.1 million and 16.3 million units in February 2025, according to preliminary analysis from J.D. Power and Haver Analytics.
Annual U.S. auto sales could decline by between 500,000 and 2 million units if the Trump administration were to implement 25% tariffs on automotive imports from Mexico and Canada, according to automotive analysts featured in the Detroit Free Press and Bloomberg. In addition, a recession could begin “within a year” if certain tariffs “persist for any length of time.”
The Alliance for Automotive Innovation and Anderson Economic Group estimate tariffs on Mexican and Canadian imports could raise the cost of a new vehicle by up to 25%, or by a range of $4,000 to $12,000, depending on the model.
Crain’s Detroit reports product launch delays are impacting suppliers as automakers postpone investment decisions until there is more stability in areas that include “federal tariffs, regulatory policy and electric vehicle incentives.”
A number of large auto suppliers are taking steps to reduce expenses in order to support profitability amid market uncertainty, according to a report in Automotive News.
The Wall Street Journal provided overviews of the potential impact of tariffs on automakers and vehicle components, stating that “no sector is as exposed to possible Trump tariffs as the auto industry.”
The benchmark price for domestic steel has increased 25% this year to $900 a ton, ahead of a possible 25% import tariff on the metal. 
The Wall Street Journal reports the potential for tariffs on aluminum have already raised costs for buyers, as there are few U.S. suppliers capable of meeting supply needs after years of declining domestic production.
The National Highway Traffic Safety Administration laid off 4% of its staff as part of a government-wide reduction of federal employees. NHTSA had expanded its workforce by roughly 30% under the Biden administration, and it was estimated to have a staff of approximately 800 prior to the job cuts.
At the annual MEMA Original Equipment Suppliers event on February 27, the North American purchasing chief of Stellantis indicated the automaker will consider supplier requests for pricing relief. This represents a reversal of a “no more claims” policy announced in 2024.

OEMs/Suppliers

Stellantis reported a full-year 2024 net profit of $5.8 billion on net revenue of $156.9 billion, representing year-over-year declines of roughly 70% and 17%, respectively.
GM will temporarily halt production for a number of weeks at its Corvette plant in Bowling Green, Kentucky, for undisclosed reasons.
Mercedes plans to reduce capacity in Germany as part of an initiative to reduce expenses by 10% through 2027 amid heightened competition, uneven demand, and high material costs. The automaker may also reduce its sales and finance workforce in China, according to unidentified sources in Reuters. 
China’s top-selling automaker, BYD, could decide on a third plant location in Europe within the next two years. The automaker has plants underway in Szeged, Hungary, and Izmir, Türkiye.
Detroit Manufacturing Systems, LLC will acquire Android Industries, LLC and Avancez, LLC. The combined entity, Voltava LLC, will be headquartered in Auburn Hills, Michigan, and it is expected to reach over $1.5 billion in annual revenue.

Market Trends and Regulatory

J.D. Power estimates the average monthly payment for a new vehicle reached $738 in February, up 2.4% year-over-year. The analysis noted “vehicle affordability remains a challenge for the industry and is the primary reason why the sales pace, while strengthening, has not returned to pre-pandemic levels.”
The new vehicle average transaction price reached $48,118 in January 2025, according to analysis from Edmunds.
The International Longshoremen’s Association (ILA) ratified a six-year labor contract with the United States Maritime Alliance (USMX), ending months of uncertainty over the potential for a follow-up strike at U.S. East and Gulf Coast ports.
National “right to repair” legislation was introduced in Congress last month by a bipartisan group of lawmakers. The Right to Equitable and Professional Auto Industry Repair Act (H.R. 906) follows multiple recent attempts by Congress to pass similar legislation.
The 2026 Detroit Auto Show will take place January 14–25, 2026, at Huntington Place.
In response to concerns over the compliance costs associated with 2025 carbon dioxide emissions standards in the European Union, the European Commission announced automakers will now have a three-year window to meet emissions targets in the bloc.

Autonomous Technologies and Vehicle Software

Automotive News provided an update on the outlook for artificial intelligence (AI) adoption in certain automotive applications.
A number of automakers are pursuing software and AI-based technology to differentiate their vehicles’ self-driving features, according to a report in The Wall Street Journal.
Stellantis debuted a Level 3 automated driving system, STLA AutoDrive 1.0, that is expected to facilitate hands-free and eyes-off functionality at speeds of up to 37 mph. The automaker did not provide a launch date for the technology. The Society of Automotive Engineers (SAE) defines Level 3 as autonomous technology that can drive the vehicle under limited conditions without human supervision.
Mercedes is currently the only automaker with a Level 3 system approved for use in the U.S., and the automaker’s Drive Pilot is only available in Nevada and California. Honda plans to launch a Level 3 automated driving system in 2026 in the 0 Series in North America.
Uber began offering its customers driverless Waymo rides in Austin, Texas.

Electric Vehicles and Low Emissions Technology

China’s Xiaomi aims to deliver over 300,000 EVs in 2025, which would more than double its deliveries from last year. The consumer electronics giant sells nearly all its EVs within China.
China announced new export restrictions on tungsten and other specialty metals used in applications that include EV batteries.
TechCrunch analysis indicates there are currently 34 battery factories either planned, under construction, or operational in the U.S., up from two in 2019.
Stellantis’ Brampton Assembly plant in Ontario has been temporarily shut down as the automaker reevaluates plans for the next-generation electric Jeep Compass SUV that was scheduled to begin production in early 2026. This follows a decision by Ford to delay the launch of its next-generation gas and hybrid F-150 pickup trucks.
Canada’s zero-emission vehicle sales declined by nearly 30% in January 2025 from December 2024. This follows a halt in the federal rebate program, when funding was exhausted ahead of the original termination date of March 31, 2025.
The Trump administration directed federal buildings across the U.S. to shut off EV chargers, according to communications from the General Services Administration described by unidentified sources in Bloomberg.
Upstream’s 2025 Automotive and Smart Mobility Global Cybersecurity Report found that attacks involving EV chargers increased to 6% in 2024, from 4% in 2023. According to the report, 59% of the EV charging attacks in 2024 had the potential to impact millions of devices, including chargers, mobile apps, and vehicles.
Among the top 10 battery electric vehicle (BEV) models with the fewest reported problems in the J.D. Power 2025 U.S. Electric Vehicle Experience (EVX) Ownership Study, seven were in the mass market segment. BMW iX was rated highest overall and highest in the premium BEV segment, and the Hyundai IONIQ 6 ranked highest in the mass market BEV segment.
Consumer Reports’ Best Cars of the Year for 2025 includes six models with hybrid options and one fully electric model.
BEV sales in Europe increased 34% year-over-year in January 2025, while overall new-vehicle registrations fell by 2.5%, according to data from the European Automobile Manufacturers’ Association (ACEA). BEVs achieved a 15% market share in Europe, compared to 10.9% in January 2024.

Analysis by Julie Dautermann, Competitive Intelligence Analyst

AI Meets HIPAA Security: Understanding HHS’s Risk Strategies and Proposed Changes

In this final blog post in the Bradley series on the HIPAA Security Rule notice of proposed rulemaking (NPRM), we examine how the U.S. Department of Health and Human Services (HHS) Office for Civil Rights interprets the application of the HIPAA Security Rule to artificial intelligence (AI) and other emerging technologies. While the HIPAA Security Rule has traditionally been technology agnostic, HHS now explicitly addresses security measures for these emerging technologies. The NPRM provides guidance on incorporating AI considerations into compliance strategies and risk assessments.
AI Risk Assessments
In the NPRM, HHS would require a comprehensive, up-to-date inventory of all technology assets that identifies AI technologies interacting with ePHI. HHS clarifies that the Security Rule governs ePHI used both in AI training data and in the algorithms developed or used by regulated entities. As such, HHS emphasizes that regulated entities must incorporate AI into their risk analysis and management processes and regularly update their analyses to address changes in technology or operations. Entities must assess how an AI system interacts with ePHI, considering the type and amount of data accessed, how the AI uses or discloses ePHI, and who receives AI-generated outputs.
HHS expects entities to identify, track, and assess reasonably anticipated risks associated with AI models, including risks related to data access, processing, and output. Flowing from the proposed data mapping safeguards discussed in previous blog posts, regulated entities would document where and how the AI software interacts with or processes ePHI to support risk assessments. HHS would also require regulated entities to monitor authoritative sources for known vulnerabilities to the AI system and promptly remediate them according to their patch management program. This lifecycle approach to risk analysis aims to ensure the confidentiality, integrity, and availability of ePHI as technology evolves.
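As a rough illustration of what an inventory entry might capture, the Python sketch below records an AI asset’s ePHI touchpoints, output recipients, known vulnerabilities, and last risk review. The field names and the one-year review threshold are assumptions for illustration, not a format prescribed by the NPRM.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class AIAssetRecord:
    """One technology-asset inventory entry for an AI system that touches ePHI."""
    name: str
    vendor: str
    ephi_categories: list[str]            # types of ePHI the system accesses
    uses_ephi_for_training: bool
    output_recipients: list[str]          # who receives AI-generated outputs
    known_vulnerabilities: list[str] = field(default_factory=list)
    last_risk_review: Optional[date] = None

    def review_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag assets whose risk analysis has not been updated recently."""
        if self.last_risk_review is None:
            return True
        return (today - self.last_risk_review).days > max_age_days


if __name__ == "__main__":
    asset = AIAssetRecord(
        name="discharge-summary-assistant",   # hypothetical system
        vendor="ExampleAIVendor",             # hypothetical vendor
        ephi_categories=["clinical notes", "lab results"],
        uses_ephi_for_training=False,
        output_recipients=["attending clinicians"],
    )
    print(asset.review_overdue(today=date(2025, 3, 1)))   # True: no review recorded yet
```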
Integration of AI Developers into the Security Risk Analysis
More mature entities typically have built out third-party vendor risk management diligence. If finalized, the NPRM would require all regulated entities contracting with AI developers to formally incorporate Business Associate Agreement (BAA) risk assessments into their security risk analysis. Entities would also need to evaluate business associates based on written security verifications that the AI vendor has documented security controls. Regulated entities should collaborate with their AI vendors to review technology assets, including AI software that interacts with ePHI. This partnership will allow entities to identify and track reasonably anticipated threats and vulnerabilities, evaluate their likelihood and potential impact, and document security measures and risk management.
Getting Started with Current Requirements
Clinicians are increasingly integrating AI into clinical workflows to analyze health records, identify risk factors, assist in disease detection, and draft real-time patient summaries for clinician review as the “human in the loop.” According to the most recent HIMSS cybersecurity survey, most health care organizations permit the use of generative AI, though with varied approaches to AI governance and risk management. Nearly half the organizations surveyed did not have an approval process for AI, and only 31% report that they are actively monitoring AI systems. As a result, the majority of respondents are concerned about data breaches and bias in AI systems.
The NPRM enhances specificity in the risk analysis process by incorporating informal HHS guidance, security assessment tools, and frameworks for more detailed specifications. Entities need to update their procurement process to confirm that their AI vendors align with the Security Rule and industry best practices, such as the NIST AI Risk Management Framework, for managing AI-related risks, including privacy, security, unfair bias, and ethical use of ePHI.
The proposed HHS requirements are not the only concerns clinicians must consider when evaluating AI vendors. HHS also has finalized a rule under Section 1557 of the Affordable Care Act requiring covered healthcare providers to identify and mitigate discrimination risks from patient care decision support tools. Regulated entities must mitigate AI-related security risks and strengthen vendor oversight in contracts involving AI software that processes ePHI to meet these new demands.
Thank you for tuning into this series analyzing the Security Rule updates. Please contact us if you have any questions or if we can assist with any next steps.
Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.

Virginia Poised to Become Second State to Enact Comprehensive AI Legislation

Go-To Guide:

Virginia’s HB 2094 applies to high-risk AI system developers and deployers and focuses on consumer protection. 
The bill covers AI systems that autonomously make or significantly influence consequential decisions without meaningful human oversight. 
Developers must document system limits, ensure transparency, and manage risks, while deployers must disclose AI usage and conduct impact assessments. 
Generative AI outputs must be identifiable, with limited exceptions. 
The attorney general would oversee enforcement, with penalties up to $10,000 per violation and a discretionary 45-day cure period. 
HB 2094 is narrower than the Colorado AI Act (CAIA), with clearer transparency obligations and trade secret protections, and differs from the EU AI Act, which imposes stricter, risk-based compliance rules.

On Feb. 20, 2025, the Virginia General Assembly passed the High-Risk Artificial Intelligence (AI) Developer and Deployer Act (HB 2094). If signed by Gov. Glenn Youngkin, HB 2094 would make Virginia the second U.S. state to implement a broad framework regulating AI use, particularly in high-risk applications. The bill is closely modeled on the CAIA and would take effect on July 1, 2026.
This GT Alert covers to whom the bill applies, important definitions, key differences with the CAIA, and potential future implications.
To Whom Does HB 2094 Apply?
HB 2094 applies to any person doing business in Virginia that develops or deploys a high-risk AI system. “Developers” refer to organizations that offer, sell, lease, give, or otherwise make high-risk AI systems available to deployers in Virginia. The requirements HB 2094 imposes on developers would also apply to a person who intentionally and substantially modifies an existing high-risk AI system. “Deployers” refer to organizations that deploy or use high-risk AI systems to make consequential decisions about Virginians. 
How Does HB 2094 Work?
Key Definitions
HB 2094 aims to protect Virginia residents acting in their individual capacities. It would not apply to Virginia residents who act in a commercial or employment context. Furthermore, HB 2094 defines “generative artificial intelligence systems” as AI systems that incorporate generative AI, which includes the capability of “producing and [being] used to produce synthetic content, including audio, images, text, and videos.”
HB 2094’s definition of “high-risk AI” would apply only to machine-learning-based systems that (i) serve as the principal basis for consequential decisions, meaning they operate without human oversight, and (ii) are explicitly intended to autonomously make or substantially influence such decisions.
High-risk applications include parole, probation, pardons, other forms of release from incarceration or court supervision, and determinations related to marital status. As the bill would not apply to government entities, it is not yet clear which private sector decisions might be in scope of these high-risk applications.
Requirements
HB 2094 places obligations on AI developers and deployers to mitigate risks associated with algorithmic discrimination and to ensure transparency. It establishes a duty of care, disclosure, and risk management requirements for developers of high-risk AI systems, along with consumer disclosure obligations and impact assessments for deployers. Developers must document known or reasonably known limitations in AI systems. Synthetic content generated or substantially modified by high-risk generative AI systems must be made identifiable and detectable using industry-standard tools, must comply with applicable accessibility requirements where feasible, and must be identified at the time of generation, with exceptions for low-risk or creative applications so long as identification “does not hinder the display or enjoyment of such work or program.” The bill references established AI risk frameworks such as the NIST AI RMF and ISO/IEC 42001.
Exemptions
Certain exclusions apply under HB 2094, including AI use in response to a consumer request or to provide a requested service or product under a contract. There are also limited exceptions for financial services and broader exemptions for healthcare and insurance sectors.
Enforcement
The bill grants enforcement authority to the attorney general and establishes penalties for noncompliance. Violations may result in fines up to $1,000 per occurrence, with attorney fee shifting, while willful violations may carry fines up to $10,000 per occurrence. Each violation would be considered separately for penalty assessment. The attorney general must issue a civil investigative demand before initiating enforcement action, and a discretionary 45-day right to cure period is available to address violations. There is no private right of action under HB 2094.
Key Differences With the CAIA
While HB 2094 is closely modeled on the CAIA, it introduces notable differences. HB 2094 limits its definition of consumers to individual and household contexts and explicitly excludes commercial and employment contexts. It defines “high-risk AI” more narrowly, focusing only on systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions, while expanding the list of high-risk uses to include decisions related to parole, probation, pardons, and marital status. It also provides clearer guidelines on when a developer becomes a deployer, imposes more specific documentation and transparency obligations, and enhances trade secret protections. Unlike the CAIA, HB 2094 does not require reporting algorithmic discrimination to the attorney general and allows a discretionary 45-day right to cure violations.
While HB 2094 aligns with aspects of the CAIA, it differs from the broader and more stringent EU AI Act, which imposes risk-based AI classifications, stricter compliance obligations, and significant penalties for violations. HB 2094 also does not contain direct incident reporting requirements, public disclosure requirements, or a small business exception. Finally, HB 2094 upholds a higher threshold than CAIA for consumer rights when a high-risk AI makes a negative decision relating to a consumer, requiring that the AI system must have processed personal data beyond what the consumer directly provided.
Conclusion
If signed into law, HB 2094 would make Virginia the second U.S. state to implement comprehensive AI regulations, setting guidelines for high-risk AI systems while seeking to address concerns about transparency and algorithmic discrimination. With enforcement potentially beginning in 2026, businesses developing or deploying AI in Virginia should proactively assess their compliance obligations and prepare for the new regulatory framework, including where the organization is also subject to obligations under the CAIA.

1 See also GT’s blog post on the Colorado AI Act. Other states, such as California and Utah, have regulated specific uses of AI or associated technologies, respectively regulating interactions with bots and generative AI.

UK ICO Publishes 2025 Tech Horizons Report

On February 20, 2025, the UK Information Commissioner’s Office (“ICO”) published its annual Tech Horizons Report (the “Report”), which explores four key technologies expected to play a significant role in society over the next two to seven years: connected transport, quantum sensing and imaging, digital diagnostics and therapeutics, and synthetic media. The Report also discusses the ICO’s ongoing work in addressing data protection and privacy concerns related to the emerging technologies featured in its previous Tech Horizons reports.
The Report provides an overview of how key innovations are seeking to reshape industries and everyday life, the privacy and data protection implications of such innovations, and the ICO’s proposed recommendations and next steps. Below are examples of some of the potential privacy and data protection implications identified by the ICO, along with certain recommendations:
Connected Transport

Connected vehicles collect extensive and wide-ranging personal data for various purposes in a “complex ecosystem” of controllers and processors. Those organizations with transparency obligations must ensure they provide clear, concise and accessible privacy notices to individuals (including passengers); however, the ICO acknowledges that providing privacy notices in the connected transport environment may be a challenge.
Organizations should identify the correct lawful bases for processing personal data and remember that, in addition to the UK General Data Protection Regulation (“UK GDPR”), the Privacy and Electronic Communications Regulations also may apply in the context of connected transport and may require consent for certain activities.
Biometric technology may be used in connected transport for purposes such as fingerprint scanners to unlock vehicles. This technology requires the processing of biometric data, which must comply with the requirements for processing special category data.
When vehicles are shared, privacy concerns arise regarding access to data from previous users, such as location or smartphone pairings.

The ICO recommends embedding privacy by design into hardware and services related to connected vehicles to demonstrate compliance with the UK GDPR and other data protection legislation.
Quantum Sensing and Imaging
The ICO acknowledges that in the case of novel quantum sensing and imaging for medical or research purposes, a key benefit is the extra detail and insight the technology provides. This could be seen as conflicting with the principle of data minimization. The ICO states that the principle “does not prevent healthcare organisations processing more detailed information about people where necessary to support positive health outcomes,” but that organizations must have a justification for collecting and processing additional information, such as a clear research benefit.
The ICO states that it will continue to find opportunities to engage with industry in this area and to explore any potential data protection risks. The ICO also encourages embedding privacy by design and default when testing and deploying quantum technologies that involve processing personal information.
Digital Diagnostics and Therapeutics

Organizations working in health care are targets for cyber attacks for a number of reasons, including the nature of the data held by such organizations. The adoption of digital diagnostics and therapeutics will only increase this risk. Organizations engaged in this space must comply with all applicable security obligations, including the obligation to ensure the confidentiality, security and integrity of the personal information they process in accordance with the UK GDPR.
According to the ICO, while the use of artificial intelligence (“AI”) and automated decision-making (“ADM”) “could improve productivity and patient outcomes,” there is a risk that their use to make decisions could “adversely affect some patients.” For example, bias is a key risk when considering AI and ADM. Organizations should use appropriate technical and organizational measures to prevent AI-driven discrimination. Another material risk is the lack of transparency regarding how AI tools process patient data. The ICO states that lack of transparency in a medical context could result in patient harm, and that the use of AI does not reduce an organization’s responsibility to comply with transparency obligations under the UK GDPR.

The ICO recommends providers implement privacy by design and ensure that any third parties they are engaged with have in place appropriate privacy measures and safeguards. In addition, providers should also ensure they follow guidance regarding fairness, bias and unlawful discrimination.
Synthetic Media

Data protection laws apply to personal data used in creating synthetic media, even if the final product does not contain identifiable information.
If automated moderation is used, the ICO confirms that organizations must comply with the ADM requirements of the UK GDPR.

The ICO intends to develop its understanding of synthetic media, including how personal data is processed in this context. The ICO also will work with other regulators and continue to engage with other stakeholders such as the public and interest groups.

Virginia Legislature Passes AI Bill

On February 20, 2025, the Virginia legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”).
The Act is a comprehensive bill that is focused on accountability and transparency in AI systems. The Act would apply to developers and deployers of “high-risk” AI systems that do business in Virginia. An AI system would be considered high-risk if it is intended to autonomously make, or be a substantial factor in making, a consequential decision. Under the Act, a consequential decision means a “decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer” of: (1) parole, probation, a pardon, or any other release from incarceration or court supervision; (2) education enrollment or an education opportunity; (3) access to employment; (4) a financial or lending service; (5) access to health care services; (6) housing; (7) insurance; (8) marital status or (9) a legal service. The Act excludes a number of activities from what is considered a high-risk AI system, such as if the system is intended to perform a narrow procedural task or improve the result of a previously completed human activity.
The Act includes requirements that differ depending on whether the covered business is an AI system developer or deployer. The requirements are generally aimed at avoiding algorithmic discrimination, ensuring impact assessments, promoting AI risk management frameworks, and ensuring transparency and protection against adverse decisions. 
The Virginia Attorney General has exclusive authority to enforce the Act. Violations of the Act are subject to a civil penalty of up to $1,000, plus reasonable attorney fees, expenses and costs. The penalty can be increased up to $10,000 for willful violations. Notably, the Act states that each violation is a separate violation. The Act also provides a 45-day cure period. 
Virginia Governor Glenn Youngkin has until March 24, 2025 to sign, veto or return the bill with amendments. If enacted, the law would take effect July 1, 2026.

California’s AI Revolution: Proposed CPPA Regulations Target Automated Decision Making

On November 8, 2024, the California Privacy Protection Agency (the “Agency” or the “CPPA”) Board met to discuss and commence formal rulemaking on several regulatory subjects, including California Consumer Privacy Act (“CCPA”) updates (“CCPA Updates”) and Automated Decisionmaking Technology (ADMT).
Shortly thereafter, on November 22, 2024, the CPPA published several rulemaking documents for public review and comment; the comment period ended on February 19, 2025. If adopted, these proposed regulations would make California the next state to regulate AI at a broad and comprehensive scale, in line with Colorado’s SB 24-205, which contains similar sweeping consumer AI protections. After considering the comments received, the CPPA Board will decide whether to adopt or further modify the regulations at a future Board meeting. This post summarizes the proposed ADMT regulations, which businesses should review closely and be prepared to act on to ensure future compliance.
Article 11 of the proposed ADMT regulations outlines actions intended to increase transparency and consumers’ rights related to the application of ADMT. The proposed rules define ADMT as “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.” The regulations further define ADMT as a technology that includes software or programs, uses the output of technology as a key factor in a human’s decisionmaking (including scoring or ranking), and includes profiling. ADMT does not include technologies that do not execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking (this includes web hosting, domain registration, networking, caching, website-loading, data storage, firewalls, anti-virus, anti-malware, spam and robocall-filtering, spellchecking, calculators, databases, spreadsheets, or similar technologies). The proposed ADMT regulations will require businesses to notify consumers about their use of ADMT, along with their rationale for its implementation. Businesses also would have to provide explanations on ADMT output in addition to a process for consumers to request to opt-out from such ADMT use.
It is important to note that the CCPA Updates will apply to organizations that meet any of the thresholds in California Civil Code section 1798.140(d)(1)(A), (B), or (C), i.e., organizations that: (A) have more than $25,000,000 in gross annual revenues; (B) alone or in combination, annually buy, sell, or share the personal information of 100,000 or more consumers or households; or (C) derive 50% or more of their annual revenues from selling or sharing consumers’ personal information. While not exhaustive of the extensive rules and regulations described in the proposed CCPA Updates, the following are the notable changes and potential business obligations under the new ADMT regulations.
Scope of Use
Businesses that use ADMT for making significant decisions concerning consumers must comply with the requirements of Article 11. “Significant decisions” include decisions that affect financial or lending services, housing, insurance, education, employment, healthcare, essential goods or services, or independent contracting. “Significant decisions” may also include ADMT used for extensive profiling (including, among others, profiling in work, education, or for behavioral advertising) and for specifically training AI systems that might affect significant decisions or involve profiling.
Providing a Pre-Use Notice
Businesses that use ADMT must provide consumers with a pre-use notice that informs them about the use of ADMT, including its purpose, how the ADMT works, and their CCPA consumer rights. The notice must be easy to read, available in the languages in which the business customarily provides documentation to consumers, and accessible to those with disabilities. Businesses must also clearly present the notice to the consumer in the manner in which the business primarily interacts with the consumer, and they must do so before they use any ADMT to process the consumer’s personal information. Exceptions to these requirements will apply to ADMT used for security, fraud prevention, or safety, where businesses may omit certain details.
According to Section 7220 of the CCPA Updates, the pre-use notice must contain:

A plain language explanation of the business’s purpose for using ADMT.
A description of the consumer’s right to opt out of ADMT, as well as directions for submitting an opt-out request.
A description of the consumer’s right to access ADMT, including information on how the consumer can request access to the business’s ADMT.
A notice that the business may not retaliate against a consumer who exercises their rights under the CCPA.
Any additional information (via a hyperlink or other simple method), in plain language, that discusses how the ADMT works.

Consumer Opt-Out Rights
Consumers must be able to opt out of ADMT use for significant decisions, extensive profiling, or training purposes. Exceptions to opt-out rights include where businesses use ADMT for safety, security, or fraud prevention, or for admission, acceptance, or hiring decisions, so long as the use is necessary and its efficacy has been evaluated to ensure it works as intended. Businesses must provide consumers with at least two methods of opting out, one of which should reflect the way the business mainly interacts with consumers (e.g., email or an internet hyperlink). Any opt-out method must be easy to execute and should require minimal steps that do not involve creating accounts or providing unnecessary information. Businesses must process opt-out requests within 15 business days, and they may not retaliate against consumers for opting out. Businesses must wait at least 12 months before asking consumers who have opted out of ADMT to consent again to its use.
Providing Information on the ADMT’s Output
Consumers have the right to access information about the output of a business’s ADMT. The CPPA regulations do not define “output,” but the term likely includes outcomes produced by ADMT and the key factors influencing them.
When consumers request access to ADMT, businesses must provide information on how they use the output concerning the consumer and any key parameters affecting it. If they use the output to make significant decisions about the consumer, the business must disclose the role of the output and any human involvement. For profiling, businesses must explain the output’s role in the evaluation.
Output information includes predictions, content, recommendations, and aggregate statistics. Depending on the ADMT’s purpose, intended results, and the consumer’s request, the information provided can vary. Businesses must carefully consider these nuances to avoid over-disclosure.
Human Appeal Exception
The CPPA proposes a “human appeal exception,” by which consumers may appeal a decision to a human reviewer who has the authority to overturn the ADMT decision. Businesses can choose to offer a human appeal exception in lieu of providing the ability to opt out when using ADMT to make a significant decision concerning access to, denial, or provision of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or independent contracting opportunities or compensation, healthcare services, or essential goods or services.
To take advantage of the human appeal exception, the business must designate a human reviewer who is able to understand the significant decision the consumer is appealing and the effects of the decision on the consumer. The human reviewer must consider the relevant information provided by the consumer in their appeal and may also consider any other relevant source of information. The business must design a method of appeal that is easy for consumers to execute, requiring minimal steps, and that it clearly describes to the consumer. Communications and disclosures with appealing consumers must be easy to read and understand, written in the applicable language, and reasonably accessible.
Risk Assessments
Under the CPPA’s proposed rules, every business that processes consumer personal information must conduct a risk assessment before initiating that processing, especially if the business is using ADMT to make significant decisions concerning a consumer or for extensive profiling. Businesses must conduct risk assessments to determine whether the risks to consumers’ privacy outweigh the benefits to consumers, the business, and other stakeholders.
When conducting a risk assessment, businesses must identify and document: the categories of personal information to be processed and whether they include sensitive personal information; the operational elements of its ADMT processing (e.g., collection methods, length of collection, number of consumers affected, parties who can access this information, etc.); the benefits that this processing provides to the business, its consumers, other stakeholders, and the public at large; the negative impacts to consumers’ privacy; the safeguards that it plans to implement to address said negative impacts; information on the risk assessment itself and those who conducted it; and whether the business will initiate the use of ADMT despite the identified risks.
A business will have 24 months from the effective date of these new regulations to submit the results of the risk assessments it conducted from the effective date of the regulations to the date of submission. After completing its first submission, a business must submit subsequent risk assessments every calendar year. In addition, a business must review and update its risk assessments to ensure accuracy at least once every three years, and it should convey updates through the required annual submission. If there is any material change to a business’s processing activity, it must immediately conduct a risk assessment. A business should retain all information collected as part of its risk assessments for as long as the processing continues or for five years after completion of the assessment, whichever is later.
What Businesses Should Do Now
The CPPA’s proposed ADMT regulations under the CCPA emphasize the importance of transparency and consumer rights. By requiring businesses to disclose how they use ADMT outputs and the factors influencing the outputs, the regulations aim to ensure that consumers are well-informed, and safeguards exist to protect against discrimination. As businesses incorporate ADMT, including AI tools, for employment decision making, they should follow the proposed regulations’ directive to conduct adequate risk assessments. Regardless of the form in which these regulations go into effect, preparing a suitable AI governance program and risk assessment plan will protect the business’s interests and foster employee trust.
Please note that the information provided in the above summary covers only a portion of the rules and regulations proposed in the CCPA Updates. Now that the comment period has closed, the CPPA will deliberate and finalize the CCPA Updates within the year. If adopted, these proposed regulations will require additional action by businesses to remain compliant. While waiting for the CPPA’s finalized update, businesses should use this time to plan and prepare for these regulations in advance.

CARU Takes Privacy Action Against Buddy AI Children’s Learning Program

The Children’s Advertising Review Unit of BBB National Programs (CARU) announced a decision involving Buddy AI, a program that touts itself as an “Early learning AI teacher.” According to CARU, which encountered Buddy AI as part of its routine monitoring efforts, the app and website did not comply with the federal Children’s Online Privacy Protection Act (COPPA) or CARU’s children’s privacy guidelines.
Buddy AI, which is operated by AI Buddy, Inc., is a generative AI tutor for young children. The service offers voice-based lessons to help children learn English. CARU expressed concern that Buddy AI did not provide adequate notice of its information collection and use practices, make reasonable efforts to ensure that parents were directly notified of such practices, post a prominently labeled children’s privacy policy, or obtain necessary parental consent before enrolling children in the service. CARU credited Buddy AI with addressing CARU’s concerns and making changes to its app during the course of CARU’s investigation.