STATE & LOCAL LAWS & REGULATION

California Governor Signs Age Verification Law: California Governor Gavin Newsom signed the California Digital Age Assurance Act (the “Act”). Beginning January 1, 2027, the Act requires persons or entities that develop, license, or control the operating system software on devices to furnish an interface at account setup through which users can indicate their birth date, age, or both, and to provide applications with one of four age‑range signals (under 13; 13–15; 16–17; 18+). Additionally, the law prohibits operating system providers and app stores from using compliance data in an anti-competitive manner. Application developers must request and rely on the age-range signal to meet child privacy and safety obligations and must not use, share, or seek more information than is necessary. For devices with accounts set up before January 1, 2027, the operating system provider must comply with the Act by July 1, 2027. The Act also includes a good‑faith provision for errors attributable to technical limitations. The Act will be enforced by the California Attorney General, who may seek injunctions and civil penalties of up to $2,500 per affected child for negligent violations and up to $7,500 per affected child for intentional violations.

Massachusetts Senate Passes Massachusetts Data Privacy Act: The Massachusetts Senate voted unanimously to pass the Massachusetts Data Privacy Act (the “MDPA”). The MDPA applies to entities handling the data of at least 60,000 consumers annually, or 20,000 consumers if data sales comprise at least 20 percent of revenue, and to entities processing reproductive or sexual health data. Key provisions include consumer rights to access, correct, delete, and port personal data, as well as to opt out of targeted advertising, data sales, and certain profiling. The MDPA restricts the collection and use of sensitive data, requires clear privacy notices, and mandates data protection assessments for high-risk processing activities. Enforcement authority is vested exclusively in the Attorney General, who may seek injunctions, damages, and civil penalties of up to $5,000 per violation. If passed by the House, the MDPA would become effective January 1, 2027, with some provisions effective June 1, 2027. Like the Maryland Online Data Privacy Act on which it is based, the MDPA would prohibit the sale of sensitive data, which includes precise geolocation data. The MDPA now awaits review by the Massachusetts House of Representatives.

Pennsylvania House of Representatives Approves the Consumer Data Privacy Act: The Pennsylvania House of Representatives has approved House Bill 78, the Consumer Data Privacy Act (the “Act”). The Act provides individuals with rights to access, correct, and delete personal data, to data portability, and to opt out of targeted advertising, the sale of personal data, and certain profiling. Businesses with annual revenues over $10 million and data processors would be obligated to minimize data collection, ensure transparency and security, obtain consent for processing of sensitive data, honor opt‑out signals, and perform data protection assessments. The Act would be enforced exclusively by the Attorney General, with violations treated as unfair competition or unfair or deceptive acts and practices under the state’s Unfair Trade Practices and Consumer Protection Law.

NYDFS Issues Guidance on Managing Third-Party Service Provider Risk: The New York Department of Financial Services (“NYDFS”) issued guidance (the “Guidance”) on managing risks related to third-party service providers (“TPSPs”). NYDFS stated that the Guidance does not impose new requirements or obligations on Covered Entities. Rather, it is intended to clarify regulatory requirements and recommend industry best practices to mitigate common risks associated with TPSPs. The Guidance emphasizes that Covered Entities must adopt a proactive, risk-based approach to TPSP governance, with active oversight from senior governing bodies and officers. Key recommendations include: (1) conducting proactive due diligence in the selection of TPSPs, including assessing TPSPs based on access levels, data sensitivity, cybersecurity history, and compliance with standards such as those of the National Institute of Standards and Technology or the International Organization for Standardization; (2) including provisions for access controls, encryption, breach notification, data location restrictions, subcontractor disclosures, exit obligations, artificial intelligence (“AI”) usage, and data handling in TPSP contracts; (3) monitoring TPSPs on an ongoing basis through audits, penetration tests, and updates on vulnerability management; and (4) ensuring secure data return or destruction and conducting final risk reviews during the offboarding process at the end of the TPSP relationship. The Guidance underscores that compliance responsibility cannot be delegated to TPSPs and that NYDFS will consider third-party risk management in its examinations and enforcement actions.

Minnesota and New Hampshire Join Regulatory Enforcement Consortium: Minnesota and New Hampshire have joined the bipartisan Consortium of Privacy Regulators (the “Consortium”), expanding the group to 10 regulators and furthering cross‑jurisdictional enforcement of state privacy laws, including Minnesota’s Consumer Data Privacy Act and New Hampshire’s Data Privacy Act. The Consortium coordinates investigations of possible violations, shares resources and expertise, and organizes enforcement of common consumer protections that appear across member laws, such as rights over how businesses use consumers’ personal data, including rights to access, delete, correct, and opt out of certain data uses and to stop the sale of personal information, alongside corresponding business obligations. Consistent with this movement toward implementation, enforcement, and compliance, New Hampshire’s Attorney General established a Data Privacy Unit last year, and Minnesota’s Attorney General is currently expanding the Consumer Protection Division to enforce Minnesota’s law. Members of the Consortium now include the California Privacy Protection Agency and the state Attorneys General of California, Colorado, Connecticut, Delaware, Indiana, Minnesota, New Hampshire, New Jersey, and Oregon.


FEDERAL LAWS & REGULATION

Federal Cybersecurity Initiatives Lapse During Shutdown: Two U.S. cybersecurity initiatives—the Cybersecurity Information Sharing Act of 2015 (“CISA”) and the State and Local Cybersecurity Grant Program—expired due to congressional gridlock. CISA provides legal protections for organizations sharing cyber threat data, while the grant program, created during the pandemic, allocated $1 billion to help states and localities defend against cyberattacks. Congress failed to act on the reauthorization of the programs prior to the federal government shutdown.

FTC Do Not Call List and Other Consumer Protection Services Unavailable During Shutdown: Due to a lapse in government funding, the Federal Trade Commission (“FTC”) announced a shutdown of several consumer protection services starting at midnight on October 1, 2025. Platforms such as ReportFraud.ftc.gov, IdentityTheft.gov, and Econsumer.gov, which handle domestic and international fraud and identity theft complaints, are temporarily unavailable. The National Do Not Call Registry is also offline for both consumers and telemarketers. While some online services remain accessible for submissions, the FTC stated that no action will be taken until the government reopens. 

Joint Commission and Coalition for Health AI Issue Guidance on Responsible Use of AI in Healthcare: The Joint Commission and Coalition for Health AI (“CHAI”) have issued guidance to promote the responsible deployment of AI tools in healthcare. The guidance outlines seven core elements for responsible AI use: (1) AI policies and governance structures that establish formal oversight to manage AI implementation, risk, and compliance; (2) patient privacy and transparency to ensure patients are informed about AI’s role in their care; (3) data security and use protections to prevent misuse and breaches; (4) ongoing quality monitoring that continuously evaluates AI tools post-deployment to ensure safe, reliable performance and mitigate bias; (5) voluntary, blinded reporting of AI safety events to encourage confidential reporting of AI-related incidents; (6) risk and bias assessments that identify and address biases in AI tools; and (7) role-based education and training to promote AI literacy among healthcare staff to ensure safe and effective use.

Bipartisan Bill to Regulate Minor Use of Chatbots Introduced: A bipartisan group of U.S. senators has introduced the GUARD Act to regulate the use of AI chatbots and companions by minors. The bill aims to protect children from exploitative or harmful AI interactions by imposing strict requirements and prohibitions on companies that develop or distribute such technologies. If passed, the GUARD Act would require age verification when creating accounts and periodically thereafter, mandate that chatbot access be tied to a verified user account, prohibit harmful content, require that companies provide users with notice that AI chatbots are not human, and implement safeguards to protect user data. Violations could result in fines up to $100,000. 


U.S. LITIGATION

2nd VPPA Case Against NBA Tossed: Judge Jennifer L. Rochon of the Southern District of New York dismissed with prejudice a putative class action against the NBA under the Video Privacy Protection Act (“VPPA”). Plaintiff Michael Salazar alleged that the NBA disclosed his video-viewing information to Meta via the Meta Pixel on NBA.com, transmitting data such as Facebook ID and video titles. The Court held that under binding Second Circuit precedent (Solomon v. Flipps Media, Inc. and Hughes v. NFL), Pixel-based disclosures do not constitute “personally identifiable information” under the VPPA. The “ordinary person” standard requires that the disclosed information would allow an average recipient—not just a sophisticated technology company—to identify a consumer’s video-watching habits. Here, the Court found that an ordinary person could not use the transmitted Facebook ID or code to identify Salazar’s video activity. Arguments that tools like ChatGPT or Google could bridge this gap were rejected as insufficient. Accordingly, the NBA’s motion to dismiss was granted. This decision reinforces the Second Circuit’s narrow interpretation of VPPA liability for Pixel-based data sharing.

Court Dismisses Challenge to New York Algorithmic Pricing Transparency Law: The Southern District of New York dismissed the National Retail Federation’s challenge to New York’s Algorithmic Pricing Disclosure Act, which requires merchants to disclose when a published price is set by an algorithm using a consumer’s personal data. The plaintiff argued that this compelled disclosure violated the First Amendment. The Court applied the Zauderer standard, which governs compelled commercial disclosures of “purely factual and uncontroversial information.” Judge Rakoff found that the required statement, “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA,” is factual, accurate, and not misleading or controversial. The Court held that the law is reasonably related to New York’s legitimate interest in informing consumers and is neither unjustified nor unduly burdensome. Because the plaintiff failed to plausibly allege a First Amendment violation, the Court granted the motion to dismiss and denied the request for a preliminary injunction as moot.

New Jersey Supreme Court Agrees to Review Daniel’s Law: The New Jersey Supreme Court accepted a certified question from the U.S. Court of Appeals for the Third Circuit concerning Daniel’s Law (N.J.S.A. 56:8-166.1), which restricts the disclosure of certain personal information of judges, prosecutors, and law enforcement officers. The Supreme Court reformulated the question to focus on the mental state required to establish liability under Daniel’s Law. The Court ordered the parties to submit briefs addressing this specific issue, setting a schedule for filings and indicating that oral argument would follow. This decision is significant because it will clarify whether liability under Daniel’s Law requires proof of intent, knowledge, recklessness, or strict liability, a determination that will impact both enforcement and compliance for data brokers and other entities subject to the statute.


U.S. ENFORCEMENT

FTC Files Complaint Against Operator of Anonymous Messaging App: The Federal Trade Commission (“FTC”) has taken action against Iconic Hearts Holdings, Inc. (“Iconic Hearts”), the operator of the Sendit anonymous messaging app, and its CEO for violating the Children’s Online Privacy Protection Act Rule, the FTC Act, and the Restore Online Shoppers’ Confidence Act. In the complaint filed by the U.S. Department of Justice (“DOJ”) upon referral by the FTC, the DOJ alleged that Iconic Hearts knew that numerous Sendit users were under the age of 13 but failed to notify parents that it collected personal information from children, including their phone numbers, birthdates, photos, and usernames for Snapchat, Instagram, TikTok, and other accounts, and did not obtain parents’ verifiable consent to such data collection. The complaint also alleged that Iconic Hearts made misrepresentations and used fake messages to trick child and teen users into purchasing premium subscriptions and failed to clearly disclose the terms of its subscription plans.

Florida Attorney General Sues Streaming Device Company for Violations of Children’s Privacy: The Florida Attorney General, through its Office of Parental Rights, has filed a civil enforcement action against Roku, Inc. and its Florida subsidiary (collectively, “Roku”) for violations of the Florida Digital Bill of Rights (“FDBOR”) and the Florida Deceptive and Unfair Trade Practices Act (“FDUTPA”). The complaint alleges that Roku willfully disregarded the presence of children on its platform and collected and sold sensitive personal data from children, including precise geolocation, viewing habits, voice recordings, and other information, without providing effective notice or obtaining the necessary consents. The complaint also alleges that Roku enabled reidentification of deidentified data by providing that data to third parties (e.g., advertisers and data brokers) without contractually requiring those third parties not to reidentify it.

NYC Sues Major Social Media Platforms for Addictive Features: The City of New York (the “City”), the City School District of NY, and NYC Health and Hospitals Corporation filed a complaint in the Southern District of New York against Meta (Facebook/Instagram), Snap Inc. (Snapchat), TikTok/ByteDance, and Google/YouTube, alleging the companies intentionally designed and promoted addictive features targeting minors. The complaint alleges that features such as infinite scroll, auto‑play video, algorithm‑driven “For You” or recommendation feeds, bursts of “Likes” and notifications, and beauty/appearance filters, coupled with weak age‑verification and parental controls, promote addictive behavior and make it difficult to quit or limit use. The City argues these design choices exploit children’s developmental vulnerabilities and contribute to compulsive use and a broader youth mental health crisis, including anxiety, depression, eating disorders, sleep disruption, and classroom and school‑environment impacts that have forced the City to divert significant resources to counseling and crisis services. The complaint pleads public nuisance and negligence, and requests injunctive and equitable relief, including an order enjoining the companies’ future contributions to the alleged public nuisance and funding for prevention and treatment, as well as actual and compensatory damages and punitive damages.

OCR Settles with Healthcare Provider for Sharing Patient Stories in Violation of HIPAA: The U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) settled with five healthcare providers, collectively known as Cadia Healthcare Facilities (“Cadia”), for violations of the Health Insurance Portability and Accountability Act (“HIPAA”) Privacy and Breach Notification Rules. The settlement resolves OCR’s investigation of Cadia’s disclosure of a total of 150 patients’ names, photographs, and information pertaining to the patients’ conditions, treatment, and recovery through success stories posted to Cadia’s website without obtaining HIPAA authorizations from the patients. Under the settlement, Cadia must pay $182,000 to OCR and implement a corrective action plan that will be monitored by OCR for two years. Cadia must develop, maintain, and revise its HIPAA privacy and breach notification policies and procedures, train its entire workforce, including marketing staff, on those requirements, and issue breach notifications to all individuals whose protected health information was disclosed on facility websites, social media, or other marketing materials without proper authorization.

New York Attorney General Settles with Auto Insurers for Data Breach: The New York Attorney General has settled with eight car insurance companies for data breaches impacting more than 825,000 New Yorkers. The companies allowed consumers to obtain car insurance price quotes using an online tool. After entering limited personal information, the other fields on the tool were pre-populated with other personal information, such as an individual’s driver’s license number and similar information about other drivers in their household. The New York Attorney General’s investigation found that threat actors were able to exploit this “pre-fill” function, and some of the exposed data was later used to file unemployment claims during the COVID-19 pandemic. Under the settlement, the companies must pay a total of $14.2 million in penalties and adopt various security measures, including maintaining a comprehensive information security program, data inventory, reasonable authentication procedures, a logging and monitoring system, reasonable policies and procedures designed to detect suspicious activities, and stronger threat response procedures.  

New York Attorney General Settles with Accounting Firm for Data Breaches: The New York Attorney General has settled with Wojeski & Company (“Wojeski”), a public accounting firm, for two data breaches. Wojeski discovered a ransomware attack in July 2023 caused by a phishing email (the “2023 Incident”). Wojeski discovered another data breach in May 2024, when an employee of the firm engaged to investigate the 2023 Incident improperly accessed customer data located in the files that Wojeski had sent for review. Wojeski did not notify customers of either incident until November 2024. The incidents impacted 6,232 individuals in total and exposed names, dates of birth, Social Security numbers, drivers’ license numbers, email addresses, phone numbers, financial account numbers, medical benefits, and entitlement information. Under the settlement, Wojeski must pay $60,000 in penalties and take certain security measures, including encrypting personal information, providing cybersecurity training to all employees, and maintaining a comprehensive information security program, data inventory, reasonable account management, authentication, and incident response procedures.


INTERNATIONAL LAWS & REGULATION

New Zealand’s Privacy Amendment Act 2025 Signed Into Law: New Zealand’s Privacy Amendment Act 2025 (the “Act”), which amends the Privacy Act 2020, was signed into law and received Royal Assent on September 23, 2025. The Act introduces a new notification requirement for organizations collecting personal information indirectly, meaning from a source other than the data subject. Under Information Privacy Principle 3A, effective May 1, 2026, entities that collect personal information about an individual from sources other than the individual must take reasonable steps to notify the individual of the collection, its purpose, intended recipients, the entity’s identity, and the individual’s rights to access and correct their data. This obligation does not apply if the individual has already been made aware of these matters. The Act also clarifies exemptions for intelligence and security agencies and updates the Privacy Commissioner’s functions, including the ability to assess foreign privacy laws for adequacy.

EDPB and European Commission Issue Guidance on Interplay of GDPR and DMA: The European Data Protection Board (the “EDPB”) and the European Commission have adopted joint guidelines on the interplay between the Digital Markets Act (the “DMA”) and the General Data Protection Regulation (the “GDPR”). While the DMA targets unfair practices and the effects they have on business users, the GDPR covers the protection and processing of people’s personal data. The guidelines aim to ensure that the DMA and the GDPR are interpreted and applied in a compatible manner that achieves their respective objectives, in line with relevant case law. Though not exhaustive, the guidelines address issues of significant overlap, including gatekeepers’ compliance with requirements of end-user choice and consent; distribution of software application stores and applications; rights to data portability for users and authorized third parties; consent‑based business‑user access to end‑user data; sharing of anonymized search data; and the interoperability of number-independent interpersonal communication services. The guidelines also mention practical coordination and consultation between the European Commission and data protection authorities to deliver coherent and effective enforcement. A joint public consultation is open until December 4, 2025, during which stakeholders may provide comments and feedback on this first version of the guidelines.

European Commission Launches Two AI Strategic Initiatives: The European Commission announced two new strategic initiatives, the Apply AI Strategy and the AI in Science Strategy, to accelerate AI adoption across industry and science. The Apply AI Strategy aims to help integrate AI into strategic sectors such as healthcare, energy, manufacturing, and culture. It supports small and medium-sized enterprises, promotes AI-powered screening centers, and encourages the development of frontier models tailored to specific industries. The strategy also addresses workforce readiness, infrastructure, and data access, and introduces the Apply AI Alliance to coordinate efforts across sectors. Around €1 billion will be used to support these initiatives. The AI in Science Strategy is intended to position Europe as a leader in AI-driven research. Key components of the AI in Science Strategy include €58 million for talent development, €600 million for computing power via Horizon Europe, and a goal to double annual AI research funding to over €3 billion. The strategy also focuses on identifying and curating strategic datasets for scientific use. Both strategies build on the AI Continent Action Plan launched in April 2025.

Daniel R. Saeedi, Rachel L. Schaller, Ana Tagvoryan, Gabrielle N. Ganze, P. Gavin Eastgate, Timothy W. Dickens, Karen H. Shin, Amanda M. Noonan, and Sierra N. Lactaoen contributed to this article.
