Entrance to [Copyright] Paradise Halted by the Human-Authorship Requirement

In mid-March, a federal appeals court affirmed a ruling finding that artwork created solely by an artificial intelligence (AI) system is not entitled to copyright protection. Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. Mar. 18, 2025). This decision aligns with the position taken by the US Copyright Office in its recent report addressing the ongoing evolution, application, and litigation surrounding AI systems. U.S. Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (2025).
While this decision may appear straightforward, future developments could arise through a petition to the US Supreme Court or through cases addressing the extent of human involvement necessary for AI-generated works to qualify for copyright protection.
Key Takeaways

The Copyright Act of 1976 (Act) requires all eligible works to be authored by a human being.
The term “author” as used in the Act does not extend to machines.
The work-made-for-hire doctrine requires an existing copyright interest.
Thaler’s representation that the work was generated autonomously by a computer system weighed heavily against his challenges to the human-authorship requirement and the work-made-for-hire doctrine.
The Court rejected Dr. Thaler’s arguments that (1) the term “author” is not confined to human beings; (2) the work was made for hire; and (3) the human-authorship requirement prevents protection of works made with AI.
The Court affirmed the denial of copyright registration where the author of the work was listed as a machine.

Background
Dr. Stephen Thaler, a computer scientist, developed a generative AI system known as the “Creativity Machine,” which generated an artwork titled “A Recent Entrance to Paradise.” In his copyright registration, Dr. Thaler listed the Creativity Machine as the work’s sole author, while he claimed ownership of the work. The Copyright Office denied his application, citing its longstanding requirement that a work must be authored by a human to qualify for copyright protection. Dr. Thaler contested this decision by filing a lawsuit against the Copyright Office and its director, Shira Perlmutter, arguing that the human-authorship requirement was unconstitutional and unsupported by either statute or case law. The district court affirmed the Copyright Office’s decision, and Dr. Thaler appealed to the US Court of Appeals for the District of Columbia Circuit.
Overview of the Case
The appeals court unequivocally affirmed the district court’s decision, endorsing the Copyright Office’s longstanding requirement that authors must be human.
Dr. Thaler first argued that there is no requirement for an “author” to be a human. Although the Court acknowledged that the Act does not explicitly define “author,” it examined several statutory provisions and concluded that the Act, read as a whole, requires an author to be a human, not a machine. See generally 17 U.S.C. §§ 101, 102(a), 104(a), 108(c)(2), 109(b)(1)(B)(i), 117(d)(1), 201(a), 203(a)(2), 204(a), 302, and 401(a). The provisions considered included:

An author’s legal capacity to hold property;
The copyright duration, which extends to the author’s life plus seventy years;
Inheritance rights for a widow, widower, surviving children, or grandchildren;
The requirement for a signature to transfer copyright ownership;
Nationality or domicile;
The necessity of intention; and
Definitions of “computer program,” “machine,” “device,” and “process” in the Copyright Act.

The Court concluded that these provisions make sense only if the author is a human. Machines cannot own property, do not have lifespans measured in the same terms as a human life, lack surviving spouses or heirs, cannot provide authenticating signatures, and do not possess a domicile or national identity. Machines also lack the capacity for intention, and the definitions within the Act presuppose that machines have an owner who can maintain and repair them. 17 U.S.C. §§ 117(d)(1) and (2). Collectively, these provisions identify an “author” as a human being, and the Court therefore rejected Dr. Thaler’s argument.
Next, Dr. Thaler argued that the work-made-for-hire doctrine allows non-human entities to be considered “authors.” However, the Court noted that AI cannot be an employee under this doctrine, nor can it transfer a copyright it did not author. The Court further explained that if Congress intended an “author” to include non-human entities, it would have explicitly stated that those who hire creators are the “authors,” rather than saying they are “considered the author for purposes of this title.”
Finally, Dr. Thaler claimed that the human-authorship requirement prevents the protection of works made with AI. The Court dismissed this concern, clarifying that the requirement does not prohibit copyrighting works that are made with AI assistance; it simply requires the author to be human. The Court also noted that Dr. Thaler did not explain how prohibiting machines from being authors would reduce incentives to create original work, as machines do not respond to economic incentives. Ultimately, the Court emphasized that the human-authorship requirement is not new, and re-addressing it in light of new technology is a policy matter for Congress.
Looking Ahead
Because Dr. Thaler waived the argument that he is the work’s author (by virtue of creating and using the Creativity Machine), the significant question of how much human involvement is needed in the conception and creation of a work for creators to claim a copyright remains open. As AI technology rapidly advances and becomes more integrated into various industries, the Court’s reinforcement of the human-authorship requirement serves as a crucial reminder that AI-generated content may not qualify for copyright protection.
For companies investing in AI-driven innovation, it is essential to be mindful of these limitations. Ensuring human involvement in the creative process is not only a legal necessity, but can also be a strategic consideration for securing copyright protection. As businesses seek to protect AI-generated content, including software and creative works, they must carefully evaluate the extent of the human contribution to their works in order to meet the current legal standards. This awareness will be vital in navigating the complex intersection of AI and copyright law, as well as in fostering innovation that aligns with existing legal frameworks.

The Future for California’s Latest Generation of Privacy Regulations is Uncertain

As reported previously, the California Privacy Protection Agency (“CPPA”) closed the public comment period for its proposed cybersecurity audit, risk assessment, and automated decision-making technology (“ADMT”) regulations (the “Proposed Regulations”) in late February. In advance of its April 4 meeting, the CPPA released a new draft of the Proposed Regulations that made relatively minor substantive changes but pushed back the dates on which certain obligations would become effective. The Agency’s Board met on April 4, 2025, to discuss the new proposals and the comments received, as well as the potential for some very different alternatives, especially related to ADMT. Members of the CPPA Board debated the staff’s approach and ultimately sent the staff back to narrow the scope of the Proposed Regulations, to clarify what is in and out of scope with more examples, and to further consider how to reduce the costs and burdens on businesses. While it is unclear exactly what staff will come back with, the alternatives discussed provide some hints about what a more constrained approach may look like.
Likely revisions focus on six items discussed:

Definition of “automated decision-making technology” (ADMT)
Definition of “significant decision”
“Behavioral advertising” threshold
“Work or educational profiling” and “public profiling” threshold
“Training” thresholds
Risk Assessment Submissions

Definition of “Automated Decision-making Technology” (ADMT)
The first discussion item included three proposed alternatives to the current ADMT definition. All the alternatives narrow the definition from that of the current Proposed Regulations, some significantly:
Alternative 1: Would still cover use to assist or replace human decision making, but would provide more description on what processes apply, and add a material consumer impact requirement.
Alternative 2: Would limit the definition to where the processing substantially replaces human decision making.
Alternative 3: Would limit the definition to where the processing replaces human decision making for the purpose of “making a solely automated significant decision about a consumer.”
The Board did not reach a consensus on how to narrow the definition of ADMT, but expressed concern about the definition’s currently broad scope and a desire to see a staff alternative that addressed those concerns.
Definition of “Significant Decision”
The heart of the ADMT and profiling provisions regulates processing that can result in a “significant decision,” defined as access to, or the provision or denial of, certain listed types of goods and services. Board Member Alastair Mactaggart raised concerns that the phrase “access to” was overly broad and could include a wide array of information services, including maps apps and other tools used to route or otherwise direct a consumer to a covered service. He offered an example in which a consumer uses a maps app for directions to an emergency room or a bank. The staff’s presentation included replacing “access to” with “selection of a consumer for,” or deleting it altogether.
Other Board Members, including Drew Liebert, raised concerns that, in the employment context, “allocation or assignment of work,” as a form of significant decision, could include actions like selecting a specific delivery driver based on proximity. Staff’s proposed alternatives included deleting this type of decision, as well as others, including insurance and criminal justice, and narrowing the scope of “essential goods or services.”
The Board directed staff to return with more examples of use cases to demonstrate what is and is not within the scope of a significant decision and how various potential definition changes could affect those examples.
Behavioral Advertising
Proposed changes to this section of the draft regulations on “extensive profiling” stand to significantly alter the scope of the Proposed Regulations, which would expand the current concept of Cross-Context Behavioral Advertising to include first-party behavioral data-driven ad targeting. The Board spent less time discussing this issue and ultimately seemed to direct the staff to implement the proposed change, which completely deletes the behavioral advertising use case from the risk assessment and ADMT requirements.
Work or Educational Profiling and Public Profiling
Similar to the significant decision issue, the Board was concerned that, as written, the scope of the Proposed Regulations might encompass use cases that do not fall under the spirit of the regulations. Board Member Mactaggart specifically raised concerns that this section of the regulations changes the character of the law from a privacy law to an employment law. The staff did not present any specific alternatives to the Proposed Regulations as to these types of “extensive profiling.” The Board seemed to reach consensus on requesting that staff provide additional information, including use cases that might help inform the scope of the regulations.
AI and ADMT Training
Staff suggested potential changes to the AI and ADMT training thresholds in two forms. One would narrow the scope of the rule by limiting the requirements to situations where the business knows or should know that the technology will be used for the currently restricted purposes, as opposed to the current capability-of-use standard; the other would delete the training thresholds completely. The Board engaged in considerable discussion, including whether the language could be changed to require risk assessments only from entities that definitively used ADMT (based on a new, narrower definition). This stemmed from the same concern underlying the other issues: that, as written, the regulations could apply to entities not actually engaged in risky privacy practices. However, staff explained that for pre-use risk assessments to remain an element of the regulations, there must be some way to include potential uses.
The Board directed staff to follow the second recommendation, which would remove the artificial intelligence applicability to the training threshold. Staff was also directed to revise the requirements to apply only to businesses that are actively using or are planning to use ADMT.
Risk Assessment Submissions
The Board’s discussion of the risk assessments went beyond the staff’s issue slide regarding the summary submission process. Specifically, the Board contemplated changes that would totally revamp the required elements of risk assessments. Primarily motivated by concerns about the cost to businesses, members of the Board asked staff whether the regulations could better reflect other jurisdictions’ risk assessment frameworks (e.g., Colorado’s). Staff was directed to determine the feasibility of mirroring the risk assessment language of other jurisdictions, especially Colorado, so that businesses conducting risk assessments need not tailor them to each state and incur significant costs in the process.
Legal Challenge Concerns
Board Member Mactaggart also raised concerns about the legality of some of the Agency’s proposed regulations, including constitutional concerns like First Amendment rights with respect to risk assessments and whether the cybersecurity audit requirements exceed the Agency’s statutory authority. Privacy World’s Alan Friel and Glenn Brown (in their personal capacities) have previously addressed the First Amendment concerns raised by risk assessments. Board Member Mactaggart requested that Agency staff provide a report to the Board regarding these litigation risks. Other Board members expressed concern regarding the confidentiality of any such analysis. No firm plan for staff was reached in this regard.
Next Steps
A timeline was not set for developing revised Proposed Regulations and otherwise addressing Board concerns, but the potential for considering staff responses at a July Board meeting was discussed. It is unclear how extensive the changes will need to be to get a majority of the Board to vote a version of the Proposed Regulations forward. However, if the scope of changes is consistent with the direction at least some on the Board seem to be giving staff, a new 45-day public comment period would seem likely, even if a shorter 15-day period were applied to other proposed edits. The CPPA appears to have a long way to go and will need to construct narrower rules that are more closely aligned with those of other U.S. states. We will continue to monitor developments in this rulemaking process and other Agency actions.
 
Samuel Marticke contributed to this article.

CMS Confirms Relocation of Physician-Owned Hospital Does Not Jeopardize Stark Law Exception

CMS confirmed that a physician-owned hospital proposing to move eight miles away from its original site and add an emergency department would continue to meet the whole hospital exception, provided all other conditions remain met.
CMS emphasized that the hospital must remain the same legal and operational entity post-relocation, with no changes in ownership or Medicare provider agreement.
The decision reflects CMS’s continued scrutiny of, yet possibly softening stance towards, physician-owned hospitals and the structural safeguards in place to protect against self-referral risks.

The Centers for Medicare & Medicaid Services (CMS) recently released Advisory Opinion No. CMS-AO-2025-1, addressing whether a physician-owned hospital’s proposed full-site relocation and addition of an emergency department would jeopardize its ability to continue to rely on the Stark Law’s “whole hospital exception.” In the advisory opinion, CMS concluded that relocation, by itself, is not necessarily disqualifying — and that no single factor is dispositive. Instead, the agency took a holistic approach in assessing whether the hospital remained the same entity post-relocation for purposes of the exception.
Because the hospital would retain the same ownership, provider agreement, licensure, services, name, patient base, and bed count, CMS concluded that it would remain the “same hospital” under Stark requirements and continue to qualify under the “whole hospital exception,” enabling the hospital to retain its protection for physician referrals.
This Advisory Opinion — the first issued since 2021 — provides noteworthy guidance and important considerations for hospital administrators, compliance officers, and legal counsel of physician-owned hospitals that currently rely on the exception and are considering structural changes or expansions.
Background and Legal Analysis
The Stark Law “Whole Hospital Exception”
In 2010, the Affordable Care Act tightened Stark Law rules to prevent the creation of new physician-owned hospitals (with limited exceptions) and restrict the expansion of existing ones.
According to the CMS Advisory Opinion, the hospital at issue had met the Stark Law’s whole hospital exception before the 2010 cutoff by having physician ownership and a Medicare provider agreement in place. The hospital requested that CMS confirm it would still qualify as the “same hospital” and remain in compliance with the Stark Law exception, despite its plans to relocate eight miles away and to add an emergency department.
The Hospital’s Proposal: A Relocation Without Disruption
CMS took a holistic approach in its analysis and reviewed the hospital’s comprehensive certification of facts in light of factors previously outlined in its CY 2023 OPPS/ASC proposed rule and reaffirmed in the FY 2024 IPPS final rule, namely:

Continuity of state licensure and Medicare provider agreement;
Consistent use of Medicare provider number and tax ID;
Same services and patient base;
No changes to ownership or scope of services (with some flexibility, such as adding an emergency room);
Same state regulatory framework.

The hospital certified that it had maintained physician ownership and a Medicare provider agreement continuously since December 31, 2010; the aggregate number of operating rooms, procedure rooms, and beds had remained the same since March 23, 2010 (and would remain unchanged post-relocation); the hospital’s services and patient base would remain unchanged; the hospital would continue to operate under the same name, branding, and tax ID number; there would be no ownership or leadership changes; and the hospital would continue under the same Medicare provider agreement.
Additionally, the hospital certified that its state’s law did not require a certificate of need for new construction, but any structural changes required prior notice and approval from that state’s health department. The requesting hospital also affirmed that discussions with its state officials confirmed the facility could maintain its existing state licensure after relocation.
Based on the certifications and documentation provided by the hospital, CMS concluded that neither the relocation of the facility nor the addition of an emergency department would run afoul of the Stark Law’s referral and billing prohibitions. Specifically, the hospital would continue to meet the condition at 42 C.F.R. § 411.362(b)(1) as set forth in the Stark Law’s whole hospital exception.
Five Key Considerations for Hospital Leadership
One of the leading takeaways from the advisory opinion is CMS’s emphasis on a hospital’s continuity in legal identity, services, structure, and ownership when making a “whole hospital exception” determination. But beyond its specific facts, the opinion also serves as an important reminder for hospital administrators, compliance officers, and legal counsel of physician-owned hospitals that even operational changes—like relocation or new departments—can trigger significant legal and regulatory scrutiny.
Here are five strategic considerations hospital leadership should keep in mind:

Maintain Continuity: Ensure Medicare provider agreements, tax IDs, and licensure remain uninterrupted during transitions.
Document Everything: Detailed certifications and planning are crucial for regulatory assurance.
Avoid Ownership Changes: Even minor shifts in physician ownership could threaten compliance with the Whole Hospital Exception.
Engage Regulators Early: Involve CMS and state departments of health well in advance of any move or structural change.
Seek Advisory Opinions: Where doubt exists, requesting a formal CMS advisory opinion can offer clarity and protection.

Court Grants Interlocutory Appeal on AI Fair Use Issue

We previously reported on the groundbreaking AI fair use ruling in the Thomson Reuters v. Ross Intelligence case, where the court found that, on the facts of that case, fair use was not a defense. Ross Intelligence moved, pursuant to 28 U.S.C. § 1292(b), for certification of the Court’s order for interlocutory appeal and for a stay pending that appeal. The Court has now granted that request.
The Court noted: “Though I remain confident in my February 2025 summary judgment opinion, I recognize that there are substantial grounds for difference of opinion on controlling legal issues in this case. These issues have the potential to change the shape of the trial. I thus certify the following two questions to the Third Circuit: (1) whether the West headnotes and West Key Number System are original; and (2) whether Ross’s use of the headnotes was fair use. A short opinion further explaining my reasoning will follow.”
In its brief in support of its motion, Ross argued that this case presents “urgent questions” governing AI. It asserted that the legal theories in this case “understate the importance of originality and overstate the scope of copyright protection.” It further asserted that the evident and predictable result is a pronounced chill on AI innovation, as copyright law is used to stop “fair learning”[1] based on factual statements. On this basis, it concluded that appellate review of these questions cannot wait.
A district court may certify an order for interlocutory appeal when it finds “that such order involves a controlling question of law as to which there is substantial ground for difference of opinion and that an immediate appeal from the order may materially advance the ultimate termination of the litigation.” 28 U.S.C. § 1292(b).
ROSS requested certification on two questions: (1) whether the Westlaw headnotes fail the Copyright Act’s originality requirement because the notes lack “creative spark” and (2) whether ROSS’s use of 0.076% of Westlaw’s headnotes to help train an AI search engine is transformative or otherwise a fair use of the headnotes.
ROSS asserted that the issues presented are controlling questions of law and that there are substantial grounds for differences of opinion on each question. Interestingly, as evidence of substantial grounds for differences of opinion, ROSS cited the Court’s own two conflicting opinions in this case. See Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc., 694 F. Supp. 3d 467, 478, 487 (D. Del. 2023) (Thomson Reuters I) (where the Court concluded that originality and fair use were jury questions) and Thomson Reuters II, 2025 WL 458520 (where the Court had a “belated insight” that inspired a “change of heart,” and granted summary judgment to Thomson Reuters on both originality and fair use).

FOOTNOTES
[1] See Mark A. Lemley & Bryan Casey, Fair Learning, 99 Tex. L. Rev. 743 (2021).

Regulation Round Up: March 2025

Welcome to the Regulation Round Up, a regular bulletin highlighting the latest developments in UK and EU financial services regulation.
Key developments in March 2025:
31 March
Securitisation: The Joint Committee of the European Supervisory Authorities (“ESAs”) published a report on the implementation and functioning of the Securitisation Regulation (2017/2402).
27 March
FCA Handbook: The Financial Conduct Authority (“FCA”) published Handbook Notice 128, which sets out changes in respect of the following areas: application and periodic fees; client assets (auditors); the corporate governance code; the FSCS management expenses levy limit; digital securities depositories; and Handbook administration.
26 March
ESG: The Council of the European Union agreed its negotiating mandate on the European Commission Omnibus proposal for postponing application dates in relation to the Corporate Sustainability Due Diligence Directive ((EU) 2024/1760) and the Corporate Sustainability Reporting Directive ((EU) 2022/2464). Please refer to our dedicated article on this topic here.
ESG: The EU Platform on Sustainable Finance (“PSF”) published its response to the European Commission’s call for evidence on a draft delegated regulation amending the Taxonomy Delegated Acts. Please refer to our dedicated article on this topic here.
25 March
FCA Regulation Round‑Up: The FCA published its regulation round‑up for March 2025. Among other things, it covers the launch of “My FCA” and a new form relating to the financial information that wholesale firms need to submit with an application for authorisation.
Consumer Duty: The FCA published a feedback statement (FS25/2) on immediate areas for action and further plans for reviewing FCA retail conduct requirements following the introduction of the Consumer Duty.
FCA Strategy: The FCA published its strategy for 2025 to 2030, which sets out the FCA’s vision and priorities for the next five years and focuses on deepening trust, rebalancing risk, supporting growth and improving lives.
21 March
ESG: The PSF published a report on streamlining sustainable finance for SMEs in the light of the challenges faced by SMEs in accessing external financing for their sustainability projects.
18 March
UK MiFID: HM Treasury published a policy note on the Markets in Financial Instruments Directive (“MiFID”) Organisational Regulation (UK Commission Delegated Regulation (EU) 2017/565), together with a near‑final draft version of the Markets in Financial Instruments (Miscellaneous Amendments) Regulations 2025.
17 March
Capital Markets: The European Parliament’s Committee on Economic and Monetary Affairs (“ECON”) published a draft report on facilitating the financing of investments and reforms to boost European competitiveness and creating a Capital Markets Union (“CMU”) that considers recommendations made by the Draghi report on the future of European competitiveness.
Economic Growth: UK Finance published a Plan for Growth, which proposes reforms to help the UK financial services sector make an even stronger contribution to the government’s growth agenda while also delivering real benefits for consumers, businesses and society.
Economic Growth: HM Treasury published a policy paper that contains an action plan setting out the steps the government intends to take to reform the UK regulatory system to ensure regulators and regulation support growth.
14 March
Market Abuse: The FCA published Primary Market Bulletin No 54, which addresses strategic leaks and the unlawful disclosure of inside information.
12 March
Payments: HM Treasury published a press release announcing its decision to abolish the Payment Systems Regulator (“PSR”) as part of an efficiency drive.
Artificial Intelligence: The International Organization of Securities Commissions (“IOSCO”) published a consultation report, produced by its Fintech Task Force, on artificial intelligence (“AI”) use cases, risks and challenges in capital markets.
ESG: The FCA and the Prudential Regulation Authority (“PRA”) published respective letters confirming that they have decided not to move forward with proposed diversity and inclusion rules for financial firms. Please refer to our dedicated article on this topic here.
11 March
Motor Finance: The FCA published a statement on the next steps in its review of the motor finance discretionary commission arrangements.
ESG: The FCA published a statement confirming that its sustainability rules do not prevent investment in or finance for defence companies. Please refer to our dedicated article on this topic here.
10 March
Artificial Intelligence: The FCA published a letter, sent jointly with the Information Commissioner’s Office (“ICO”), to trade association chairs and firm CEOs on supporting AI, innovation and growth in financial services.
FCA: The FCA published its findings following a multi‑firm review of liquidity risk management at wholesale trading firms.
7 March
Consumer Duty: The FCA published its findings following a multi‑firm review into how firms approach providing consumer support under the Consumer Duty. It also published examples of good practice and areas for improvement.
FCA Quarterly Consultation: The FCA published its 47th quarterly consultation paper (CP25/4).
6 March
ESMA Priorities: The European Securities and Markets Authority (“ESMA”) published a letter from Verena Ross, ESMA Chair, to John Berrigan, Director General of the European Commission’s Directorate Financial Services, Financial Stability and Capital Markets Union, identifying a number of Commission deliverables for 2025 that could be deprioritised or postponed.
5 March
ESG: A European Commission notice (C/2025/1373) on the interpretation and implementation of certain legal provisions of the Taxonomy Environmental Delegated Act ((EU) 2023/2486), the Taxonomy Climate Delegated Act ((EU) 2021/2139) and the Taxonomy Disclosures Delegated Act ((EU) 2021/2178) was published in the Official Journal of the European Union.
Private Markets: The FCA published its findings following a multi‑firm review of valuation processes for private market assets. Please refer to our dedicated article on this topic here.
Additional Authors: Sulaiman Malik and Michael Singh

The BR Privacy & Security Download: April 2025

STATE & LOCAL LAWS & REGULATIONS
Virginia Governor Vetoes AI Bill: Virginia Governor Glenn Youngkin vetoed the Virginia High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”). The Act was similar to the Colorado AI Act and would have required developers to use reasonable care to prevent algorithmic discrimination and to provide detailed documentation on an AI system’s purpose, limitations, and risk mitigation measures. Deployers of AI systems would have been required to implement risk management policies, conduct impact assessments before deploying high-risk AI systems, disclose AI system use to consumers, and provide opportunities for correction and appeal. The governor stated that the Act’s “rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments” and that the Act “would harm the creation of new jobs, the attraction of new business investment, and the availability of innovative technology” in the state. The governor also noted that existing state laws “protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more” and that an executive order issued by the governor in 2024 established safeguards and oversight for AI use.
CPPA Advances Regulations for Data Broker Deletion Mechanism: The California Privacy Protection Agency (“CPPA”) advanced proposed California Delete Act regulations through the establishment of the Delete Request and Opt-Out Platform (“DROP”). These regulations would create an accessible mechanism for consumers to request the deletion of all their non-exempt personal information held by registered data brokers via a single request to the CPPA. The proposed rules also clarify the definition of a “direct relationship” with a consumer, specifying that simply collecting personal information directly from a consumer does not constitute a direct relationship unless the consumer intends to interact with the business. This revision could bring more businesses, such as third-party cookie providers, under the definition of data brokers. Consumers will likely be able to access DROP by January 1, 2026, and data brokers will be required to access it by August 1, 2026.
Virginia Enacts Reproductive Privacy Law: Virginia enacted amendments to the Virginia Consumer Data Protection Act to prohibit the collection, disclosure, sale, or dissemination of consumers’ reproductive or sexual health data without consent. “Reproductive or sexual health information” is defined under the law as “information relating to the past, present, or future reproductive or sexual health of an individual,” including: (1) efforts to research or obtain reproductive or sexual health information services or supplies, including location information that may indicate an attempt to acquire such services or supplies; (2) reproductive or sexual health conditions, status, diseases, or diagnoses, including pregnancy, menstruation, ovulation, ability to conceive a pregnancy, whether an individual is sexually active, and whether an individual is engaging in unprotected sex; (3) reproductive and sexual health-related surgeries and procedures, including termination of a pregnancy; (4) use or purchase of contraceptives, birth control, or other medication related to reproductive health, including abortifacients; (5) bodily functions, vital signs, measurements, or symptoms related to menstruation or pregnancy, including basal temperature, cramps, bodily discharge, or hormone levels; (6) any information about diagnoses or diagnostic testing, treatment, or medications, or the use of any product or service relating to the matters described in 1 through 5; and (7) any information described in 1 through 6 that is derived or extrapolated from non-health-related information such as proxy, derivative, inferred, emergent, or algorithmic data. “Reproductive or sexual health information” does not include protected health information as defined by HIPAA.
Oregon Attorney General Releases Enforcement Report on Oregon’s Consumer Privacy Act: The Oregon Attorney General released a six-month report on the enforcement of Oregon’s comprehensive privacy law, the Consumer Privacy Act (“OCPA”), which took effect on July 1, 2024. The report states that, as of the beginning of 2025, the Privacy Unit within the Civil Enforcement Division at Oregon’s Department of Justice (the “Privacy Unit”) had received 110 complaints. Most of these complaints were about online data brokers. In the last six months, the Privacy Unit initiated and closed 21 matters after sending cure notices (the OCPA provides for a 30-day cure period, which sunsets on January 1, 2026) and broader information requests. Some of the most common deficiencies identified were missing required disclosures or confusing privacy notices (e.g., not listing the OCPA rights or not naming Oregon in the “your state rights” section) and lacking or burdensome rights mechanisms (e.g., the lack of a webpage link for consumers to submit opt-out requests).
Utah Becomes First State to Enact Legislation Requiring App Stores to Verify Users’ Ages: Utah has enacted the App Store Accountability Act, which requires major app store providers to verify the age of every user in the state. For users under 18, the law requires verifiable parental consent before any app can be downloaded, including free apps, or any in-app purchases can be made. App stores must also confirm a user’s age category (adult, older teen (16-17), younger teen (13-15), or child (under 13)). When a minor creates an account, it must be linked to a parent’s account. App store providers are responsible for building systems to verify ages, obtain parental consent, and share this data with app developers. They must also provide sufficient disclosure to parents about app ratings and content and notify them of significant changes to apps their children use, requiring renewed consent. Violations of the law will be considered deceptive trade practices, and the act creates a private right of action for harmed minors or their parents. The core requirements for age verification and parental consent are set to take effect on May 6, 2026.
Michigan Legislative Committee Advances Judicial Privacy Bill: The Michigan Senate Committee on Civil Rights, Judiciary, and Public Safety provided a favorable recommendation for a judicial privacy bill that would allow state and federal judges to request the deletion of their personal information from public listings. The Michigan bill would create a private right of action with mandatory recovery of legal fees for any entity that fails to respond to a valid deletion request. The purpose of the bill is to protect against a significant uptick in threats against judicial officers and their families. The bill is based on New Jersey’s Daniel’s Law, which has sparked a wave of class action lawsuits against data brokers and online listing companies. If passed, businesses that receive a valid request from a member of the judiciary or their immediate family members under the proposed bill would have to remove from publication any covered information pertaining to the requestor.
Virginia Legislature Passes Consumer Data Protection Act Amendments Restricting Minors’ Use of Social Media; Governor Declines to Sign: The Virginia Legislature unanimously passed a bill to amend the Virginia Consumer Data Protection Act to limit minors’ use of social media to one hour per day. Specifically, the bill would require any social media platform operator to (1) use commercially reasonable methods, such as a neutral age screen mechanism, to determine whether a user is a minor younger than 16 years of age and (2) limit any such minor’s use of the social media platform to one hour per day, per service or application, while allowing a parent to give verifiable parental consent to increase or decrease the daily time limit. Virginia Governor Glenn Youngkin declined to sign the bill as passed, recommending several changes to strengthen it. These recommendations include raising the age of covered users from 16 to 18 and requiring social media platform operators to disable infinite scroll features and auto-playing videos unless the operator has obtained verifiable parental consent.

FEDERAL LAWS & REGULATIONS
Lawmakers Reintroduce COPPA 2.0 to Strengthen Children and Teens’ Online Privacy: U.S. Senators Bill Cassidy (R-LA) and Edward Markey (D-MA) have reintroduced the Children and Teens’ Online Privacy Protection Act (“COPPA 2.0”), aiming to update online data privacy rules to better protect children and teenagers. The bill seeks to address the youth mental health crisis by stopping data practices that contribute to it. COPPA 2.0 proposes several key measures, including a ban on targeted advertising to children and teens and the creation of an “Eraser Button,” allowing users to delete personal information. It also establishes data minimization rules to limit the excessive collection of young people’s data and revises the “actual knowledge” standard to prevent platforms from ignoring children on their sites. Furthermore, the legislation would require internet companies to obtain consent before collecting personal information from users aged 13 to 16. Previous versions of COPPA 2.0 have advanced in Congress, passing the Senate and a House committee in the past.
White House Seeks Stakeholder Input for Trump Administration’s AI Action Plan: The White House Office of Science and Technology Policy issued a Request for Information to gather public input on the administration’s AI Action Plan. This AI Action Plan intends to define priority policy actions to enhance America’s position as an AI powerhouse and prevent unnecessary regulations from hindering private sector innovation. The focus is on promoting U.S. competitiveness in AI, limiting regulatory burdens, and developing safeguards that support responsible AI advancement. Stakeholders, including academia, industry groups, and private sector organizations, were encouraged to share their policy ideas on topics such as model development, cybersecurity, data privacy, regulation, national security, innovation, and international collaboration. The submitted comments will be used to inform future regulatory proposals.
Congresswoman Issues RFI for Input on U.S. Privacy Act Reform: Congresswoman Lori Trahan (D-MA) announced her effort to reform the Privacy Act of 1974, aiming to protect Americans’ data from government abuse. The proposed reforms seek to address outdated provisions in the act and enhance privacy protections for individuals in the digital age. Trahan emphasized the importance of updating the act to reflect modern technological advancements and the increasing amount of personal data collected by government agencies. The initiative includes measures to ensure greater transparency, accountability, and oversight of data collection practices. Trahan highlighted the urgency of the issue, citing access by Department of Government Efficiency staff to personal data held by several agencies, and called for legislative action to protect citizens’ privacy rights and prevent government overreach.

U.S. LITIGATION
Court Blocks Enforcement of California Age-Appropriate Design Code: Industry group NetChoice scored yet another victory over the California Age-Appropriate Design Code Act, obtaining a second preliminary injunction temporarily blocking its enforcement. The act was passed unanimously by the California legislature in 2022 and—if enforced—would place extensive new requirements on websites and online services that are “likely to be accessed by children” under the age of 18. NetChoice won its first preliminary injunction in September 2023 on the grounds that the act would likely violate the First Amendment. In August 2024, the Ninth Circuit partially upheld this injunction, finding that NetChoice was likely to succeed in demonstrating that the act’s data protection impact assessment provisions violated the First Amendment. However, the Ninth Circuit remanded the case for determination of the constitutionality of the remaining provisions as well as whether any unconstitutional provisions could be severed from the remainder of the act. On remand, Judge Beth Labson Freeman again granted NetChoice’s motion for a preliminary injunction, finding that the act regulates protected speech, triggering strict scrutiny review. Judge Freeman concluded that although California has a compelling interest in protecting the privacy and well-being of children, this interest alone is not sufficient to satisfy a strict scrutiny standard. This ruling is likely to strengthen NetChoice’s opposition to similar acts, such as the Maryland Age-Appropriate Design Code Act.
Court Rejects Allegheny Health Network’s Attempt to Force Arbitration over Meta Pixel Tracking: The U.S. District Court for the Western District of Pennsylvania ruled that Allegheny Health Network (“AHN”) cannot compel arbitration in a class action lawsuit filed by a patient under a pseudonym. The patient alleged that AHN unlawfully collected and disclosed his confidential health information to Meta Platforms. AHN initially sought to compel arbitration based on an arbitration provision within its website’s Terms of Service. However, the court denied this motion, finding that the patient did not have actual or constructive notice of the arbitration agreement. The court found that the link to AHN’s Terms of Service, a “browsewrap” agreement, was not sufficiently conspicuous, as it was located at the bottom of the homepage among numerous other links and in a less visible footer on the “Find a Doctor” page. Additionally, the court found that AHN failed to prove the patient had seen the specific Terms of Service containing the arbitration provision that had been added to the website.
Supreme Court Declines Review of Sandhills Medical Data Breach Suit: The U.S. Supreme Court has declined to review a Fourth Circuit decision that ruled Sandhills Medical Foundation Inc. (“Sandhills Medical”), a federally funded health center, cannot use federal immunity to shield itself from a data breach lawsuit. The lawsuit was brought by Joann Ford following a data breach at Sandhills Medical. Sandhills Medical argued it was entitled to federal immunity under 42 U.S.C. § 233(a), which protects federally funded health centers from lawsuits related to the performance of medical, surgical, dental, or related functions. The Fourth Circuit, however, interpreted “related functions” narrowly, stating it did not cover data protection. Sandhills Medical, in its petition to the Supreme Court, contended that this ruling created a circuit split with the Ninth and Second Circuits, which have taken a broader view of the immunity. Sandhills Medical warned that the Fourth Circuit’s “unnaturally cramped” reading of the statute needed correction. Despite these arguments, the Supreme Court denied Sandhills Medical’s petition, meaning the health center will now face the lawsuit in South Carolina District Court.
Utah Attorney General Seeks Reinstatement of Utah Minor Protection in Social Media Act: Utah has asked a federal appeals court to reinstate a law that imposes restrictions on social media platforms. The Utah Minor Protection in Social Media Act (the “Act”), passed in 2024, was previously blocked by a lower court. The Act aims to protect minors from harmful content and requires social media companies to verify the age of users and obtain parental consent for minors. Utah’s Attorney General argues that the law is necessary to safeguard children from online dangers and prevent exploitation. Previously, tech industry group NetChoice successfully sued to block the law, arguing it infringes on First Amendment rights and imposes undue burdens on businesses.
Court Holds Sharing of IP Address Insufficient to Prove Harm in CIPA Case: Judge Edgardo Ramos of the Southern District of New York granted defendant Insider, Inc.’s (“Insider”) motion to dismiss claims that its use of Audiencerate’s website analytics tools constituted an unlawful “pen register” in violation of California’s Invasion of Privacy Act (“CIPA”). Plaintiffs argued that Insider invaded their privacy when it installed a tracker on their browsers, sending their IP addresses to a third party, Audiencerate, without their consent. However, Judge Ramos found that this collection and disclosure of IP addresses was insufficient to establish harm for purposes of Article III standing. He found that unlike a Facebook ID, which can be used to track or identify specific individuals, an IP address cannot be used to identify an individual and can only provide geographic information “as granular as a zip code.” Therefore, disclosure of an IP address would not be highly offensive to a reasonable person. Judge Ramos further emphasized that this “conclusion is consistent with the general understanding that in the Fourth Amendment context a person has no reasonable expectation of privacy in an IP address.” Despite this ruling, CIPA class actions and demands are likely to remain a constant threat to businesses with California-facing websites.
Periodical Publisher Unable to Dismiss VPPA Class Action: Judge Lewis J. Liman of the Southern District of New York denied defendant Springer Nature America’s (“Nature”) motion to dismiss claims that its use of Meta Pixel violated the Video Privacy Protection Act (“VPPA”). The VPPA prohibits videotape service providers from knowingly disclosing personally identifiable information about their renters, purchasers, or subscribers. Despite being drafted to address information collected through physical video stores, the VPPA has become a potent tool in the hands of the plaintiffs’ bar to challenge websites containing video content. Although Nature is primarily a research journal publication, Judge Liman found that it could qualify as a videotape service provider as defined under the VPPA in part because of the video content on its website and its subscription-based business model. Relying on the recent Second Circuit decision in Salazar v. National Basketball Association, Judge Liman also found that the plaintiff had alleged a concrete injury sufficient to confer standing because the disclosure of information about videos viewed was adequately similar to the public disclosure of private facts. This ruling should remind companies whose websites contain significant video content to carefully review their cookie usage and consent management capabilities.

U.S. ENFORCEMENT
CPPA Requires Data Broker to Shut Down: As part of its public investigative sweep of data broker registration compliance, the CPPA reached a settlement agreement with Background Alert, Inc. (“Background Alert”) for failing to register and pay an annual fee as required by California’s Delete Act. The Delete Act requires data brokers to register and pay an annual fee that funds the California Data Broker Registry. As part of the settlement, Background Alert must shut down its operations for three years for failing to register between February 1 and October 8, 2024. If Background Alert violates any term of the settlement, including the requirement to shut down its operations, it must pay a $50,000 fine to the CPPA.
New York Attorney General Settles with App Developer for Failure to Protect Students’ Privacy: The New York Attorney General settled with Saturn Technologies, the developer of the Saturn app, for failing to protect students’ privacy. Saturn allows high school students to create a personal calendar, interact with other users, share social media accounts, and know where other users are located based on their calendars. The New York Attorney General’s investigation found that, contrary to Saturn Technologies’ representations, the company failed to verify users’ school email addresses and ages to ensure that only high school students from the same school interacted. The investigation also found that Saturn Technologies used copies of users’ contact books even when the user changed their phone settings to deny Saturn’s access to their contact book. Under the settlement, Saturn Technologies must pay $650,000 in penalties, change its verification process, provide enhanced privacy options for students under 18, and prompt users under 18 to review their privacy settings every six months.
New York Attorney General Sues Insurance Companies for Back-to-Back Data Breaches: The New York Attorney General sued insurance companies National General and Allstate Insurance Company for back-to-back data breaches, which exposed the driver’s license numbers of more than 165,000 New Yorkers. In 2020, attackers took advantage of a flaw on two of National General’s auto insurance quoting websites, which displayed consumers’ full driver’s license numbers in plain text. The complaint alleges that National General failed to detect the breach for two months and failed to notify consumers and the appropriate state agencies. The complaint also alleges that National General continued to leave driver’s license numbers exposed on a different quoting website for independent insurance agents, resulting in another data breach in 2021. This action is the New York Attorney General’s latest effort to hold auto insurance companies accountable for failing to protect consumers’ personal information against an industry-wide campaign by attackers targeting online auto insurance quoting applications.
California Attorney General Announces Investigative Sweep of Location Data Industry: The California Attorney General announced an ongoing investigative sweep into the location data industry. The California Attorney General sent letters to advertising networks, mobile app providers, and data brokers that appear to be in violation of the California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”). The enforcement sweep is intended to ensure that businesses comply with their obligations under the CCPA with respect to consumers’ rights to opt out of the sale and sharing of personal information and limit the use of sensitive personal information, which includes precise geolocation data. The letters sent by the California Attorney General notify recipients of potential violations of the CCPA and request additional information regarding how the recipients offer and effectuate such CCPA rights. Location data has become an enforcement priority for the California Attorney General given the federal landscape affecting California’s immigrant communities and reproductive and gender-affirming healthcare.
CPPA Settles with Auto Manufacturer for CCPA Violations: The CPPA settled with American Honda Motor Co. (“Honda”) for its alleged CCPA violations. The CPPA alleged that Honda (1) required consumers to verify themselves and provide excessive personal information to exercise their rights to opt out and limit; (2) used an online privacy management tool that failed to offer consumers their CCPA rights in a symmetrical way; (3) made it difficult for consumers to authorize agents to exercise their CCPA rights on their behalf; and (4) shared personal information with ad tech companies without contracts containing CCPA-required language. As part of the settlement, Honda must pay $632,500, implement new and simpler methods for submitting CCPA requests, and consult a user experience designer to evaluate its methods, train its employees, and ensure the requisite contracts are in place with third parties with whom it shares personal information. This action is a part of the CPPA’s investigative sweep of connected vehicle manufacturers and related technologies.
OCR Settles with Healthcare Provider for HIPAA Violations: The U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) settled with Oregon Health & Science University (“OHSU”) over potential violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Privacy Rule’s right of access provisions. The HIPAA Privacy Rule requires covered entities to provide individuals or their personal representatives access to their protected health information within thirty days of a request (with the possibility of a 30-day extension) for a reasonable, cost-based fee. OCR initiated an investigation of OHSU after receiving a second complaint in January 2021 from the individual’s personal representative. OCR had resolved the first complaint in September 2020, when it notified OHSU of its potential noncompliance with the Privacy Rule for providing only part of the requested records. However, OHSU did not provide all of the requested records until August 2021. As part of the settlement, OHSU must pay $200,000 in penalties.
Democratic FTC Commissioners Fired by Trump Administration: The Trump administration fired the Federal Trade Commission’s (“FTC”) Democratic Commissioners Alvaro Bedoya and Rebecca Kelly Slaughter. Their removal leaves the FTC with no minority-party representation on the agency’s five-commissioner bench. Slaughter was originally nominated by Trump in 2018 and was serving her second term. Bedoya was in his first term as commissioner. Bedoya and Slaughter indicated in public statements that they would take legal action to challenge the firings. Among the potential privacy impacts of the firings is how the lack of minority-party representation may affect the enforcement of the EU-U.S. Data Privacy Framework (“DPF”), which is used by many businesses to legally transfer personal data from the EU to the United States. The DPF is intended to be an independent data transfer mechanism, and the removals may heighten concerns about the independence of agencies tasked with enforcing the DPF. The move at the FTC follows the prior removal of Democrats from the U.S. Privacy and Civil Liberties Oversight Board, which is charged with providing oversight of the redress mechanism for non-U.S. citizens under the DPF.
CFPB Drops Suit Against TransUnion: The Consumer Financial Protection Bureau (“CFPB”) voluntarily dismissed with prejudice its lawsuit against TransUnion in which it alleged that TransUnion engaged in deceptive marketing practices in violation of a 2017 consent order. The CFPB provided no explanation for its decision and each party agreed to bear its own litigation costs and attorneys’ fees.

INTERNATIONAL LAWS & REGULATIONS
CJEU Rules Data Subject Is Entitled to Explanation of Automated Decision Making: The Court of Justice of the European Union (“CJEU”) ruled that a controller must describe the procedure and principles applied in any automated decision-making technology in a way that the data subject can understand what personal data was used, and how it was used, in the automated decision making. The ruling stemmed from an Austrian case where a mobile telephone operator refused to allow a customer to conclude a contract on the ground that her credit standing was insufficient. The operator relied on an assessment of the customer’s credit standing carried out by automated means by Dun & Bradstreet Austria. The court also stated that the mere communication of an algorithm does not constitute a sufficiently concise and intelligible explanation. In order to meet the requirements of transparency and intelligibility, it may be appropriate to inform the data subject of the extent to which a variation in the personal data would have led to a different result. Companies will have to be creative in assessing what information is required to ensure the explainability of automated decision-making to data subjects.
European Parliament Publishes Report on Potential Conflicts Between GDPR and EU AI Act: The European Parliament published a report on the interplay of the EU AI Act with the EU General Data Protection Regulation (“GDPR”). One of the AI Act’s main objectives is to mitigate discrimination and bias in the development, deployment, and use of “high-risk AI systems.” To achieve this, the EU AI Act allows “special categories of personal data” to be processed, subject to a set of conditions (e.g., privacy-preserving measures) designed to identify and avoid discrimination that might occur when using such new technology. The report concludes that the GDPR’s limits on the processing of special categories of personal data might prove more restrictive than the conditions under which the EU AI Act permits such processing. The paper recommends that GDPR reforms or further guidelines on how the GDPR works with the EU AI Act would help address any conflicts.
Norwegian and Swedish Data Protection Authorities Release FAQs on Personal Data Transfers to United States: The Norwegian and Swedish data protection authorities issued FAQs on personal data transfers to the United States in response to the dismissal of several members of the U.S. Privacy and Civil Liberties Oversight Board (“PCLOB”). The PCLOB is responsible for providing oversight of the redress mechanism for non-U.S. citizens under the EU-U.S. Data Privacy Framework (“DPF”), which is one legal mechanism available for transferring EU personal data to the U.S. under the GDPR. Datatilsynet, the Norwegian data protection authority, stated that it understands that the intent is to appoint new PCLOB members in the future and that, even without a quorum, the PCLOB can perform some tasks related to the DPF. Accordingly, Datatilsynet stated that the removal of the PCLOB members would cast doubt on the adequacy decision underpinning the DPF only if the appointment of new members takes a long time. The Swedish data protection authority, Integritetsskyddsmyndigheten (“IMY”), also cited confusion among the European business community following the dismissals. The IMY stated that the Court of Justice of the European Union has the authority to annul the DPF adequacy decision but has not taken such action; as a result, according to the IMY, the DPF remains a valid mechanism for data transfers. Both data protection authorities indicated they would continue to monitor the situation in the U.S. to determine whether anything occurs that affects the DPF and its underlying adequacy decision.
OECD Releases Common Reporting Framework for AI Incidents: The Organization for Economic Co-operation and Development (“OECD”) released a paper titled “Towards a Common Reporting Framework for AI Incidents.” The paper outlines the need for a standardized approach to reporting AI-related incidents and emphasizes the importance of transparency and accountability in AI systems to ensure public trust and safety. The report proposes a framework that includes guidelines for identifying, documenting, and reporting incidents involving AI technologies, and it specifically identifies 88 potential criteria for a common AI incident reporting framework, organized across eight dimensions: (1) incident metadata, such as the date of occurrence, title, and description of the incident; (2) harm details, focusing on severity, type, and impact; (3) people and planet, describing impacted stakeholders and associated AI principles; (4) economic context, describing the economic sectors where the AI was deployed; (5) data and input, which includes a description of the inputs selected to train the AI system; (6) AI model, providing information related to the model type; (7) task and output, describing the AI system’s tasks, automation level, and outputs; and (8) other information about the incident, to capture any complementary information reported with respect to an incident.
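As a rough illustration of how those eight dimensions might map onto a structured incident record, consider the sketch below. The field names are our own hypothetical shorthand for the dimensions summarized above, not the OECD’s actual criteria labels.

# Illustrative sketch of an AI incident record organized around the OECD
# paper's eight dimensions. Field names are hypothetical shorthand, not
# the OECD's own criteria labels (the paper lists 88 potential criteria).
from dataclasses import dataclass, field

@dataclass
class AIIncidentReport:
    # (1) Incident metadata
    date_of_occurrence: str
    title: str
    description: str
    # (2) Harm details
    harm_severity: str
    harm_type: str
    harm_impact: str
    # (3) People and planet
    impacted_stakeholders: list[str] = field(default_factory=list)
    ai_principles_implicated: list[str] = field(default_factory=list)
    # (4) Economic context
    economic_sector: str = ""
    # (5) Data and input
    training_data_description: str = ""
    # (6) AI model
    model_type: str = ""
    # (7) Task and output
    task: str = ""
    automation_level: str = ""
    outputs: str = ""
    # (8) Other complementary information
    additional_information: str = ""

A common structure of this kind is what would allow incidents to be aggregated and compared across reporters and jurisdictions, which is the interoperability goal the paper emphasizes.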
China Issues Draft Measures for Financial Institutions to Report Cybersecurity Incidents and for Data Compliance Audits: The People’s Bank of China (“PBOC”) released draft administrative measures for reporting cybersecurity incidents in the financial sector (“Draft Measures”). The Draft Measures provide guidelines for identifying, reporting, and managing cybersecurity incidents by financial institutions regulated by the PBOC. Reporting requirements and timing vary according to the type of entity and the classification of the incident. Incidents would be classified into one of four categories: especially significant, significant, large, and average. Separately, the Cyberspace Administration of China (“CAC”) issued administrative measures on data protection audit requirements (“Data Protection Audit Measures”). The Data Protection Audit Measures address (1) the conditions under which an audit of a data handler’s compliance with relevant personal information protection legal requirements is required; (2) the selection of third-party compliance auditors; (3) the frequency of compliance audits; and (4) the obligations of data handlers and third-party auditors in conducting compliance audits. The Data Protection Audit Measures include guidelines setting forth the specific factors that data handlers must evaluate in an audit, including the legal basis for processing personal information, whether the data handler has complied with notice obligations, how personal information is transferred outside of China, and the technical security measures employed by the data handler to protect personal information, among other factors.
European Commission Releases Third Draft of General-Purpose AI Code of Practice: The European Commission announced the publication of the third draft of the EU General-Purpose AI Code of Practice (“Code”). The first two sections of the draft Code detail transparency and copyright obligations for all providers of general-purpose AI models, with notable exemptions from the transparency obligations for providers of certain open-source models in line with the AI Act. The third section of the Code is relevant only to the small number of providers of the most advanced general-purpose AI models that could pose systemic risks, in accordance with the classification criteria in Article 51 of the AI Act. In the third section, the Code outlines measures for systemic risk assessment and mitigation, including model evaluations, incident reporting, and cybersecurity obligations. A final version of the Code is due to be presented to the European Commission and published in May.
 
Additional Authors: Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Adam J. Landy, Amanda M. Noonan and Karen H. Shin.

‘Catch and Revoke’ Program Takes Off: State Department AI-Driven Visa Crackdown

The U.S. State Department’s “Catch and Revoke” program uses artificial intelligence (AI) to monitor foreign nationals, particularly student visa holders. The program aims to identify individuals who express support for Hamas, Hezbollah, or other U.S.-designated terrorist organizations through social media activity or participation in protests, and to revoke their visas. To date, approximately 300 foreign nationals have had their visas revoked under this initiative.
AI tools scan social media accounts, news reports, and other publicly available information to flag individuals on visas for further investigation. The U.S. government maintains the program is a national security measure to help identify foreign nationals who should have been denied visas based on support for designated terrorist organizations. Critics argue the AI-driven process may rely on basic keyword searches that are prone to errors, raising concerns about fairness and accuracy. Advocacy groups warn the initiative undermines First Amendment rights by specifically targeting political speech and activism.
Recent arrests by ICE have included doctoral students at several universities following the revocation of their visas. Students identified under the program have reported receiving online notifications stating that their visas are being canceled and advising them to “self-deport” using the CBP Home mobile app. Schools may also be notified through the Student and Exchange Visitor Program (SEVP) of a visa revocation on national security-related grounds, in which case the school’s designated school official (DSO) may be required to either cancel or terminate the I-20 record.
This initiative arises out of two executive orders that President Donald Trump issued shortly after taking office:
1. Executive Order No. 14161, “Protecting the United States from Foreign Terrorists and Other National Security and Public Safety Threats,” directing the secretary of state, in coordination with the attorney general, the secretary of homeland security, and the director of national intelligence, to promptly “vet and screen” all noncitizens who are already inside the United States “to the maximum degree possible.”
2. Executive Order No. 14188, “Additional Measures to Combat Anti-Semitism,” directing the secretary of state, the secretary of education, and the secretary of homeland security to provide “recommendations for familiarizing institutions of higher education with the grounds for inadmissibility under 8 U.S.C. § 1182(a)(3),” related to national security and support for terrorist organizations, so that schools can monitor and report activities in violation of the law.
On Mar. 25, 2025, the Knight First Amendment Institute filed a lawsuit seeking to block the Trump Administration’s policy of arresting, detaining, and deporting noncitizen students and faculty, including the “Catch and Revoke” program. American Association of University Professors, et al. v. Rubio, et al., No. 1:25-cv-10685 (D. Mass.).
The Catch and Revoke program reflects the Trump Administration’s heightened scrutiny of foreign nationals and highlights the tension between national security measures and civil liberties.

Cleo AI Agrees to $17 Million Settlement with FTC

Sometimes, deals are too good to be true. That was the case for Cleo AI, an online cash advance company that promised consumers fast, up-front cash payments. According to the Federal Trade Commission (FTC), Cleo AI offered consumers a mobile personal finance application that “promises consumers instant or same-day cash advances of hundreds of dollars.” When a consumer requests a cash advance, Cleo AI offers two subscription models, Cleo Plus and Cleo Builder. Once the consumer picks a subscription, they must provide a payment method that Cleo AI can use to collect repayment of the cash advance as well as subscription and other fees.
According to the FTC’s Complaint filed against Cleo AI, the company limits cash advances to amounts below those it advertises. In addition, Cleo AI “falsely promises that consumers can obtain cash advances ‘today’ or instantly,” when the advances actually take several days. Cleo AI required consumers to pay an extra fee to obtain a cash advance the same day or the next.
After much dissatisfaction, many consumers attempted to cancel their subscriptions. However, Cleo AI made it difficult for them to cancel and to stop the recurring fees.
The FTC alleges that Cleo AI violated Section 5 of the FTC Act because it made material misrepresentations or deceptive omissions of material fact to consumers that constitute deceptive acts or practices. It also alleges violations of the Restore Online Shoppers’ Confidence Act.
Cleo AI has agreed to pay $17 million to settle the allegations against it.
This settlement reinforces that the FTC will not tolerate companies making misrepresentations to consumers. It also teaches consumers to (a) beware of advertisements that seem too good to be true and (b) be wary of providing payment information for a subscription; once a company has your payment information, it can be difficult to end the subscription.

Tempus Fugit Ad Nevada

Three days after Delaware’s governor, Matt Meyer, signed into law controversial amendments to Delaware’s General Corporation Law, another publicly traded company filed preliminary proxy materials with the Securities and Exchange Commission seeking stockholder approval of a reincorporation in Nevada. 
“Fugit inreparabile tempus”*
Tempus AI, Inc. describes itself as “a healthcare technology company focused on bringing artificial intelligence and machine learning to healthcare in order to improve the care of patients across multiple diseases”.  Although its principal executive offices are in Chicago, Illinois, it was incorporated in Delaware.  Tempus’ proxy materials emphasize Nevada’s “statute focused” approach and its board’s belief “that Nevada can offer more predictability and certainty in decision-making because of its statute-focused legal environment”.  The company also faults the litigation environment in Delaware:
The Board also considered the increasingly litigious environment in Delaware, which has engendered less meritorious and costly litigation and has the potential to cause unnecessary distraction to the Company’s directors and management team and potential delay in the Company’s response to the evolving business environment. The Board believes that a more stable and predictable legal environment will better permit the Company to respond to emerging business trends and conditions as needed.

I expect that Tempus’ board was aware of the Delaware legislation, but the changes were not enough to convince it to remain in the Blue Hen state.  

* Time flies irretrievably.  Publius Vergilius Maro, Georgics, Liber III. 

AI Patent Law: Navigating the Future of Inventorship

As a patent attorney experienced in transformer-based AI architectures and large language models (LLMs), I want to share insights on the evolving landscape of AI-assisted inventions.  This is particularly relevant in view of the 2024 publication of the USPTO’s AI Inventorship Guidance (“Inventorship Guidance for AI-Assisted Inventions,” published February 13, 2024, 89 FR 10043, available at https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions), which provides guidance to inform practitioners and the public about inventorship for AI-assisted patent claims.
To paraphrase the AI Inventorship Guidance, every patent claim must reflect a significant contribution from a human inventor: each claim requires at least one natural person inventor who made a significant contribution to the claimed invention.  When AI systems are used in claim drafting, practitioners must be particularly vigilant if the AI introduces alternate embodiments not contemplated by the named inventors, as this requires careful reevaluation of inventorship to ensure proper attribution.  Additionally, if any claims are found to lack proper human inventorship (i.e., no natural person made a significant contribution), those claims must be either canceled or amended to reflect proper inventorship by a human.
Human Contribution to Invention
The USPTO requires that at least one human inventor demonstrate significant involvement in the invention process.  This contribution must extend beyond presenting a problem to the AI or merely recognizing the AI’s output.
Compliance with Pannu Factors
To qualify as an inventor, a person must meet the Pannu Factors:

Significant contribution to conception or reduction to practice of the invention.*
Contributions that are not insignificant relative to the entire invention.
Activities beyond explaining well-known concepts or reiterating prior art.

Substantial Contribution to the Claimed Invention
The human inventor’s input must be meaningful when evaluated against the complete scope of the claimed invention.  Examples of substantial contributions include:

Constructing specific AI prompts designed to solve targeted problems.
Expanding on AI-generated outputs to develop a patentable invention.
Designing or training AI systems tailored for specific problem-solving purposes.

What Constitutes Inventorship in AI-Assisted Innovations?
Several activities can establish inventorship in AI-assisted technologies:

Creating detailed prompts intended to generate targeted solutions from AI systems.
Contributing substantively beyond AI outputs to finalize the invention.
Conducting experiments based on AI results in unpredictable fields and recognizing the inventive outcomes.
Designing, building, or training AI systems to address specific challenges.

What Does Not Constitute Inventorship?
Certain activities fail to meet the threshold for inventorship, such as:

Recognizing a problem or presenting a general goal to the AI.
Providing only basic input without significant engagement in problem-solving.
Simply reducing AI-generated outputs to practice.
Claiming inventorship based solely on oversight or ownership of the AI system.

Practical Strategies for Patent Practitioners

Document Human Contributions: Maintain detailed records of human involvement in the invention process to establish inventorship.
Evaluate Claim Scope: Ensure each claimed element is supported by sufficient human input to meet the USPTO’s requirements.
Correct Inventorship Issues Promptly: Address discrepancies in inventorship to protect the patent’s validity and enforceability.

Drawing from my experience guiding AI-assisted innovations through the patent process, I have seen how vital these strategies are for robust IP protection.

* While the Pannu factors do mention reduction to practice, the Federal Register clarifies that “[t]he fact that a human performs a significant contribution to reduction to practice of an invention conceived by another is not enough to constitute inventorship” (https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions).  Reduction to practice without simultaneous conception (such as in unpredictable arts) is insufficient to demonstrate inventorship.  Inventorship continues to require human conception of the invention.

More States Ban Foreign AI Tools on Government Devices

Alabama and Oklahoma have become the latest states to ban from state-owned devices and networks certain AI tools with links to foreign governments.
In a memorandum issued to all state agencies on March 26, 2025, Alabama Governor Kay Ivey announced new policies banning from the state’s IT network and devices the AI platforms DeepSeek and Manus due to “their affiliation with the Chinese government and vast data-collection capabilities.” The Alabama memo also addressed a new framework for identifying and blocking “other harmful software programs and websites,” focusing on protecting state infrastructure from “foreign countr[ies] of concern,” including China (but not Taiwan), Iran, North Korea, and Russia.
Similarly, on March 21, 2025, Oklahoma Governor Kevin Stitt announced a policy banning DeepSeek on all state-owned devices due to concerns regarding security risks, regulatory compliance issues, susceptibility to adversarial manipulation, and lack of robust security safeguards.
These actions are part of a larger trend, with multiple states and agencies having announced similar policies banning or at least limiting the use of DeepSeek on state devices. In addition, 21 state attorneys general recently urged Congress to pass the “No DeepSeek on Government Devices Act.” 
As AI technologies continue to evolve, we can expect more government agencies at all levels to conduct further reviews, issue policies or guidance, and/or enact legislation regarding the use of such technologies with potentially harmful or risky affiliations. Likewise, private businesses should consider undertaking similar reviews of their own policies (particularly if they contract with any government agencies) to protect themselves from potential risks.

Uncertainty Means AI Training Can Continue

Over 30 lawsuits challenging the training of Generative AI on copyrighted materials are pending, most of them in federal courts across the country. The copyrighted materials range from news stories to photographs to music. The law is unsettled as to whether such training violates copyright law.
However, the uncertainty means that training can continue until we get final guidance from the courts (or the legislature), both of which take time. For example, Thomson Reuters, which provides Westlaw, sued a competitor for copyright infringement in May 2020. Last month, the Court partially granted summary judgment, finding that the Westlaw headnotes and numbering had been copied. The case is still pending trial.
Of course, it will be impossible to “untrain” the GenAI engines, which are and will be in use. This may leave the courts to grapple with the difficult question of what remedy is appropriate, if and when it is determined that such training does not constitute “fair use” or fall within an exception for data mining. 

In a decision issued Tuesday, Judge Eumi K. Lee ruled that it remained an “open question” whether using copyrighted materials to train AI is illegal – meaning UMG and other music companies could not show that they faced the kind of “irreparable harm” necessary to win such a drastic remedy.
www.billboard.com/…