CA Legislators Charge That Privacy Agency AI Rulemaking Is Beyond Its Authority
As we have covered, the public comment period closed on February 19th for the California Privacy Protection Agency (CPPA) draft regulations on automated decision-making technology, risk assessments and cybersecurity audits under the California Consumer Privacy Act (the “Draft Regulations”). One comment that has surfaced stands out in particular (the CPPA has yet to publish the comments): a letter penned by 14 Assembly Members and four Senators. These legislators essentially accused the CPPA of getting out over its skis, calling out “the Board’s incorrect interpretation that CPPA is somehow authorized to regulate AI.”
And these lawmakers did not stop there. They questioned the Draft Regulations based on their projected costs to businesses:
While we recognize CPPA’s role in the regulatory setting, the CPPA must avoid operating in a vacuum when developing regulations. You voted to move these regulations forward with the knowledge they will cost Californians $3.5 billion in first year implementation, with ongoing costs of $1.0 billion annually for the next 10 years, and 98,000 initial job losses in California. That is nothing to say of the adverse impact on future investment and jobs noted by the analysis that will get moved to other states, or the startups that will get developed elsewhere….
It is also important to note that California could face a $2 billion deficit in 2025, as recently reported by the Legislative Analyst Office. Your votes to move these regulations forward are unlikely to help California’s fiscal condition in 2025 and, in fact, stand to make the situation much worse. We urge you to take a broader view and redraft all of your regulations to minimize its costs to Californians. Moving forward, the CPPA must work responsibly with other branches of government to get these regulations right in order to avoid significant and irreversible consequences to California. [Emphasis added.]
All of this said, it should not be construed that Privacy World disapproves of robust risk assessments as a best practice and as practically necessary for sound information governance and to ensure legal compliance. However, there is a difference between guidance and compulsion, and it seems that this debate is happening at the highest level in Sacramento, which is timely given the CPPA Board is about to select a new Executive Director to lead the CPPA. There is no doubt that California has been, and remains, the leader in consumer privacy protection, and the CPPA Board and staff have the best interests of California and its citizens in mind. It is exactly these kinds of comments, however, that will help the CPPA achieve its goals and follow its mission.
Copyright Litigation Ruling Spotlights Applicability of Fair Use as a Defense When Training AI
Highlights
A federal court held in a copyright infringement case that the defendant could not maintain its “fair use” and other defenses
While curated and organized legal content is protected by copyright law, repurposing it to build a direct competitor, including by using it to train AI, pushes beyond fair use protections
Fair use is not a shield when AI training on copyrighted legal compilations directly undermines the original creator’s market and competitive edge
In a recent twist to a closely followed AI-related copyright infringement case between two legal research software providers, a federal court reversed its prior rulings and held the company accused of copyright infringement could not maintain its “fair use” and other defenses.
The court initially denied the parties’ cross-motions for summary judgment in September 2023, citing factual issues both as to whether the headnotes were sufficiently original to warrant copyright protection and as to elements of a fair use defense. Before the scheduled trial date in August 2024, the court continued the trial date and requested additional briefing from the parties.
After finding the defendant directly copied over 2,000 “headnotes,” or summaries of legal opinions, and that the headnotes were sufficiently original and copyrightable, the court spent most of its opinion focused on the defendant’s claim that the copying of data to train AI was excused under the fair use defense. The court considered the four factors that are weighed when assessing a claim of fair use: (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used, and (4) the effect of the use upon the potential market for or value of the copyrighted work.
While the court found that the second and third factors weighed in the defendant’s favor, because the headnotes were not as original or creative as fictional works and the defendant’s AI product did not directly reproduce the copied headnotes, the court found that the two most important factors weighed in the plaintiff’s favor. Specifically, the court found that the purpose of the challenged use was commercial, and that the defendant intended to compete directly with the incumbent plaintiff in the legal research market and to enter the market of providing AI-powered legal research tools.
The court also noted that it did not matter whether the plaintiff also intended to train its own AI tools on its headnotes, as the effect on the potential market was enough for the fourth factor to weigh in the plaintiff’s favor.
Importantly, the court took care to distinguish the AI tools in question from generative AI tools. The court found it important that the AI tools being challenged were trained on copyrighted works and only returned relevant judicial opinions in response to a user’s queries based on that training, rather than generating new output in response to prompts. The court explicitly left open the question of whether the fair use defense could succeed where generative AI tools are at issue and whether generative AI would change the analysis where a direct competitor creates content from an incumbent’s content.
Takeaways
This ruling reinforces copyright protections for curated legal research materials and signals stricter scrutiny for AI-driven data scraping in competitive industries. Additionally, it highlights a growing legal challenge for AI developers relying on proprietary datasets for training models.
Some Implications of the EU AI Act on Video Game Developers
This blog post provides a brief overview of the impact of Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonized rules on artificial intelligence (“AI Act”) on video game developers. More and more developers are integrating AI systems into their video games, including to generate backgrounds, non-player characters (NPCs), and the histories of objects to be found in the game. Some of these use cases are regulated under specific circumstances and create obligations under the AI Act.
The AI Act entered into force on 1st August 2024 and will gradually apply over the next two years. The application of the provisions of the AI Act depends predominantly on two factors: the role of the video game developer, and the AI risk level.
The role of the video game developer
Article 2 of the AI Act delimits the scope of the regulation, specifying who may be subject to the AI Act. Video game developers might specifically fall under two of these categories:
Providers of AI systems, who are developers of AI systems who place them on the EU market or put the AI system into service under their own name or trademark, whether for payment or free of charge (Article 3(3) AI Act).
Deployers of AI systems, who are users of AI systems in the course of a professional activity, provided they are established in the EU or have users of the AI system based in the EU (Article 3(4) AI Act).
Thus, video game developers will be considered (i) providers if they develop their own AI system and (ii) deployers if they integrate an existing AI system made by a third party into their video games.
The AI risk level and related obligations
The AI Act classifies AI systems into four categories based on the risk associated with them (Article 3(1) AI Act). Obligations on economic operators vary depending on the level of risk resulting from the AI systems used:
AI systems with unacceptable risks are prohibited (Article 5 AI Act). In the video game sector, the most relevant prohibitions are the provision or use of AI systems deploying manipulative techniques or exploiting people’s vulnerabilities, and thereby causing significant harm. As an example, it is prohibited to use AI-generated NPCs that would manipulate players towards increased spending in a game.
High-risk AI systems (Articles 6, 7 and Annex III AI Act) trigger strict obligations for providers and, to a lesser extent, for deployers (Sections 2 and 3 AI Act). The relevant high-risk AI systems used in video games are those which pose a significant risk of harm to the health, safety or fundamental rights of natural persons, given their intended purpose, in particular AI systems used for emotion recognition (Annex III(1)(c) AI Act). These could, for example, be used to make exchanges between players and NPCs more fluid and natural, resulting in strong emotions in players, who might feel genuine empathy, compassion, or even anger towards virtual characters.
The list of obligations for providers of high-risk AI systems includes implementing quality and risk management systems and appropriate data governance and management practices, preparing and retaining technical documentation, ensuring transparency and providing information to deployers, ensuring resilience against unauthorized alterations, and cooperating with competent authorities.
Deployers of high-risk AI systems must notably operate the system in accordance with the instructions given, ensure human oversight, monitor the operation of the high-risk AI system, and inform the provider and the relevant market surveillance authority of any incident or any risk to the health, safety, or fundamental rights of persons.
AI systems with specific transparency risk include chatbots, AI systems generating synthetic content or deep fakes, or emotion recognition systems. They trigger more limited obligations, listed in Article 50 AI Act.
Providers of chatbots must ensure that they are developed in such a way that players are informed that they are interacting with an AI system (unless this is obvious to a reasonably well-informed person). Providers of content-generating AI must ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated (a simple illustration appears at the end of this section).
Deployers of emotion recognition systems must inform players of the operation of the system, and process the personal data in accordance with Regulation 2016/679 (GDPR) which applies alongside the AI Act. Deployers of deep fakes-generating AI must disclose that the content has been artificially generated or manipulated.
AI systems with minimal risk are not regulated under the AI Act. This category includes all other AI systems that do not fall into the aforementioned categories.
The European Commission has stated that, in principle, AI-enabled video games face no obligations under the AI Act, but companies may voluntarily adopt additional codes of conduct (see AI Act | Shaping Europe’s digital future). It should be borne in mind, however, that in specific cases such as those described in this section, the AI Act will apply. Moreover, the AI literacy obligation applies regardless of the level of risk of the system, including minimal risk.
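For illustration only, the sketch below shows one simple way a provider might attach a machine-readable “AI-generated” marker to an image. It is a minimal sketch under stated assumptions: the PNG text chunk and the “ai_generated” key are hypothetical conventions chosen for the example, not formats mandated by the AI Act, and real deployments may rely on recognized provenance standards instead.

```python
# Illustrative only: attach a machine-readable marker to an AI-generated image
# by writing metadata into a PNG text chunk via Pillow. The key names used here
# ("ai_generated", "generator") are hypothetical conventions for this example.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    """Save the image as a PNG with metadata flagging it as artificially generated."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # machine-readable flag (hypothetical key)
    metadata.add_text("generator", generator)  # e.g., the name of the AI system used
    image.save(path, pnginfo=metadata)         # path should end in .png
```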
The AI literacy obligation
The AI literacy obligation applies from February 2025 (Article 113(a) AI Act) to both providers and deployers (Article 4 AI Act), regardless of the AI system’s risk level. AI literacy is defined as the skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness of the opportunities and risks of AI and the possible harm it can cause.
The ultimate purpose is to ensure that video game developers’ staff are able to take informed decisions in relation to AI, taking into account their technical knowledge, experience, education and training, the context in which the AI system is to be used, and the persons or groups of persons on whom the AI system is to be used.
The AI Act does not detail how providers and deployers should comply with the AI literacy obligation. In practice, various steps can be taken to achieve AI literacy:
Determining which employees currently use AI, and how, and which plan to use or develop AI in the near future;
Assessing employees’ current AI knowledge to identify gaps (e.g. through surveys or quiz sessions);
Providing training activities and materials to employees using AI, covering AI basics and, at a minimum, the concepts, rules and obligations that are relevant to them.
Conclusion
The regulation of AI systems in the EU potentially has a significant impact on video game developers, depending on the way AI systems are used within particular video games. It is early days for the AI Act, and we are carefully watching this space, particularly as the AI Act evolves to adapt to new technologies.
Introducing the EU AI Act
Recognizing the need for regulation to ensure the safe use of AI, the European Union (EU) has introduced the world’s most comprehensive AI legal framework, the EU AI Act, designed to impose strict requirements on AI systems operating within its jurisdiction. Its objectives are clear; however, its implementation and enforcement present challenges, and the debate around its impact on innovation continues to get louder.
The EU AI Act, which officially entered into force in August 2024, aims to regulate the development and use of AI systems, particularly those deemed “high-risk.” The primary focus is ensuring AI is safe and ethical and operates transparently within strict guidelines. Enforcement of the Act formally kicked off on February 2, 2025, when the first deadlines lapsed, including the prohibitions on certain AI systems and the requirement to ensure AI literacy for staff.
Noncompliance comes at a price. To ensure compliance, companies found in violation may be fined from €7.5 million ($7.8 million) to €35 million ($35.8 million), or 1% to 7% of their global annual revenue, a significant financial deterrent.
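For illustration only, the back-of-the-envelope sketch below shows how that exposure scales with company size, assuming the Act’s higher-of rule for undertakings (the applicable cap is the greater of the fixed amount and the percentage of global annual revenue); the tier figures mirror the ranges quoted above and are not legal advice.

```python
# Illustrative only: rough maximum fine exposure under the AI Act's penalty tiers,
# assuming the "whichever is higher" rule for undertakings. Not legal advice.

def max_fine_eur(fixed_cap_eur: float, revenue_pct: float, global_revenue_eur: float) -> float:
    """Return the higher of the fixed cap and the percentage of global annual revenue."""
    return max(fixed_cap_eur, revenue_pct * global_revenue_eur)

if __name__ == "__main__":
    revenue = 2_000_000_000  # hypothetical company with €2B global annual revenue
    # Top tier (prohibited practices): €35M or 7% of revenue
    print(f"Top tier: €{max_fine_eur(35_000_000, 0.07, revenue):,.0f}")
    # Lowest tier: €7.5M or 1% of revenue
    print(f"Lowest tier: €{max_fine_eur(7_500_000, 0.01, revenue):,.0f}")
```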
The risk classification system is a critical aspect of the AI Act. At the top of the scale, “prohibited AI practices,” such as biometric technologies that classify individuals based on race or sexual orientation, manipulative AI, and certain predictive policing applications, are banned outright. Meanwhile, “high-risk” AI systems are permitted but are subject to rigorous compliance measures, including comprehensive risk assessments, data governance requirements, and transparency obligations. AI systems with limited risk are subject to transparency obligations under Article 50 of the AI Act, which requires companies to inform users when they are interacting with an AI system. Finally, AI systems posing minimal to no risk are not regulated.
The EU AI Act is not without some voices in opposition. Other countries and big tech companies are pushing back on its implementation. Tech companies, for example, argue that stringent regulations will dampen innovation, which in turn will make it more difficult for European startups to compete globally. Critics also argue that by imposing heavy compliance burdens, the Act could push AI development out of Europe and into less regulated regions, hindering the continent’s technological competitiveness.
Feeling some overall pressure, the EU has rolled back some of its initial regulatory ambitions, such as withdrawing the proposed EU AI Liability Directive, which would have made it easier for consumers to sue AI providers. The EU must walk a fine line when it comes to protecting citizens’ rights while cultivating an environment that encourages technological advancement.
A Step in the Right Direction
It is yet to be seen whether the EU AI Act will serve as a model for other countries. Long and short, there will be a lot of growing pains, and the EU should expect to have to iterate on the legislation, but overall, it is good to have a framework to critique and refine. The current framework may not be perfect, but it is a necessary starting point for the global conversation on AI regulation.
The University of Colorado’s Master’s Program in AI Launches This Fall

CU Boulder Launches Revolutionary Master’s Program in Artificial Intelligence. The University of Colorado is paving the way for the future of Artificial Intelligence (AI) education with the launch of its new dedicated Master’s degree program in AI, starting this fall. As AI technologies continue to revolutionize industries across the globe, this innovative program offers students […]
Lawyers Sanctioned for Citing AI Generated Fake Cases
In another “hard lesson learned” case, on Monday, February 24, 2025, a federal district court sanctioned three lawyers from the national law firm Morgan & Morgan for citing artificial intelligence (AI)-generated fake cases in motions in limine. Of the nine cases cited in the motions, eight were non-existent.
Although two of the lawyers were not involved in drafting the motions, all three e-signed the motions before they were filed. The lawyer who drafted the motions admitted, after defense counsel raised issues with the court concerning the cited cases, that they had used MX2.law to add case law to the motions. MX2.law is “an in-house database launched by” Morgan & Morgan. The lawyer admitted to the court that it was their first time using AI in this way. Unfortunately, they failed to verify the accuracy of the AI platform’s output before filing the motions.
To Morgan & Morgan’s credit, they withdrew the motions, were forthcoming to the court, reimbursed the defendant for attorney’s fees, and implemented “policies, safeguards, and training to prevent another [such] occurrence in the future.”
The court sanctioned all three lawyers. The attorney who drafted the motions and failed to verify the output was sanctioned $3,000 and the other two who e-filed the motions were sanctioned $1,000 each. A hard lesson learned, although by now all attorneys should be aware of the risks of using generative AI tools for assistance with writing pleadings. This is not the first hard lesson learned by an attorney who cited fake cases in a court filing. Check the output of any AI-generated material, whether it is in a court filing or not. In the words of the sanctioning court: “As attorneys transition to the world of AI, the duty to check their sources and make a reasonable inquiry into existing law remains unchanged.”
President Trump’s “America First” Investment Policy Memorandum
On February 21, 2025, President Trump issued a National Security Presidential Memorandum titled “America First Investment Policy,” outlining several key strategies aimed at enhancing U.S. national and economic security through investment policy. This memorandum directs several agencies and executive departments, including the U.S. Department of the Treasury, the U.S. Department of Commerce, the Committee on Foreign Investment in the United States (“CFIUS”), the Federal Bureau of Investigation, and the Securities and Exchange Commission to take specific actions to encourage investment from allies and to protect America’s national security interests from foreign adversaries, with a particular focus on the People’s Republic of China (“PRC”).
The White House released an accompanying fact sheet outlining its reasons for issuing the memorandum.
While the memorandum does not implement any immediately effective regulatory changes, it establishes an important framework and plan of action that investors should anticipate eventually coming into effect.
Encouraging Allied Investment
The memorandum encourages foreign direct investment from allied nations by proposing a “fast-track” review process for investments from specified “allied and partner” countries. This is intended to facilitate investments in advanced technology and other strategic areas while ensuring these investors do not partner with U.S. adversaries. Along these lines, the memorandum provides that restrictions on foreign investors’ access to U.S. assets “will ease in proportion to their verifiable distance and independence from the predatory investment and technology-acquisition practices of the PRC” and other adversaries. The United States will also expedite environmental reviews for investments exceeding one billion dollars.
Restricting Inbound Investment Linked to Adversaries
The United States “will use all necessary legal instruments,” including CFIUS, to block PRC-affiliated investments in strategic sectors like technology, critical infrastructure, healthcare, agriculture, energy, and raw materials. This may result in CFIUS expanding its scrutiny of “covered transactions” with PRC links, potentially lowering thresholds for review and increasing mandatory filings for PRC-linked entities (although certain measures could require congressional action). The memorandum also provides that the Trump administration will consult with Congress regarding expansion of CFIUS review to cover “greenfield” and farmland investments, which are currently beyond CFIUS’s authority to review.
The memorandum also directs CFIUS to cease using mitigation agreements for U.S. investments from foreign adversaries, and describes these agreements as “overly bureaucratic, complex, and open-ended.” Any mitigation agreements “should consist of concrete actions that companies can complete within a specific time, rather than perpetual and expensive compliance obligations.” The memorandum emphasizes that the United States should direct administrative resources toward facilitating investments from key partner countries.
Restricting Outbound Investment Linked to Adversaries
The memorandum also mentions potential new restrictions on U.S. outbound investments to China in sensitive technologies like semiconductors, artificial intelligence (“AI”), biotechnology, quantum, hypersonics, aerospace, advanced manufacturing, and directed energy, and states that the United States will use all necessary legal instruments to further deter U.S. persons from investing in the PRC’s military-industrial sector. It also indicates that sanctions may be imposed under the International Emergency Economic Powers Act to address threats swiftly. The memorandum further states that the Trump administration will consider applying restrictions on various types of outbound investment, including private equity, venture capital, greenfield investments, corporate expansions, and investments in publicly traded securities, from sources such as pension funds, university endowments, and limited partner investors. Last, the memorandum notes that the Trump administration is reviewing Executive Order 14105 on outbound investment, issued by President Biden in August 2023, to assess whether it sufficiently addresses national security threats.
Passive Investments
The President’s memorandum emphasizes that the United States will continue to encourage “passive investments” from all foreign persons and entities, including non-controlling stakes and shares with no voting, board, or other governance rights and that do not confer any managerial influence, substantive decision-making, or access to sensitive technology or information.
Protecting U.S. Investors
Relevant agencies must review existing auditing standards for foreign companies on U.S. exchanges (e.g., under the Holding Foreign Companies Accountable Act), scrutinize variable interest entities often used by foreign adversary firms, and tighten fiduciary standards to exclude adversary-linked companies from pension plans.
Key Takeaways
The “America First Investment Policy” encourages the realignment and prioritization of investment flows between the United States and allied nations, provided that investors have “verifiable distance” from the PRC. As implementation unfolds, investors and businesses will need to navigate this evolving landscape with agility.
For U.S. companies, the memorandum could unlock significant opportunities and challenges. Firms in strategic sectors like semiconductors, AI, and biotechnology may benefit from increased allied investment and expedited project approvals, boosting domestic innovation and jobs. However, a broader range of transactions (such as greenfield transactions) may be subject to CFIUS review, and if a foreign investor has ties to the PRC that CFIUS considers concerning, it could face heightened scrutiny. (Notably, this already takes place, to an extent.)
For foreign investors, the impact hinges on their origin and affiliations. Investors based in allied countries (e.g., Japan, EU member states) without troubling PRC ties stand to gain from the fast-track process, potentially increasing their U.S. market presence if they comply with anti-adversary stipulations. Conversely, PRC-linked firms face heightened barriers. Investors interested in taking advantage of the fast-track process, once implemented, should consider how to best position themselves for fast-track treatment, including through any appropriate adjustments to operations and third-party relationships with China or other foreign adversaries.
WAS THE FCC HACKED?: Telnyx Response to FCC $4.5M NAL Over Scam Robocalls Hits Home
So Telnyx filed its response to the FCC’s $4.5MM NAL today and it is an incredibly interesting saga.
For those of you just catching up, Telnyx is a carrier that apparently allowed an outfit known as “MarioCop” onto its network.
MarioCop was able to target major players at the FCC–we’ll get to just how major in a second–with a robocall scheme pretending to be an FCC fraud detection service. Ultimately the scammers were apparently trying to convince FCC staffers to fall for a gift card scam.
If that sounds like a longshot, it is.
And Telnyx CEO David Casem has suggested his company was intentionally “swatted” by MarioCop who brought the FCC heat down on it.
But in the company’s NAL response–out today–Telnyx raises another issue that is just fascinating: how did MarioCop have the personal cell phone numbers of so many FCC staffers to begin with?
As the NAL response says:
Commission employees (current and past) and their families were the primary and intentional targets of the calls placed by MarioCop. The persons reached include the current Chairman of the Commission, the Chairman of the Commission during President Trump’s first term, one current commissioner, numerous chiefs of staff, legal and policy advisors in the offices of all of the current commissioners and the last two Commission chairs, members of the front offices of the Enforcement Bureau, the Office of General Counsel, the Wireline Competition Bureau, the Office of the Managing Director, and staff attorneys of such bureaus and divisions, family members of Commission personnel, and other government officials and industry participants in the telecom policy ecosystem.
Wow.
As the response points out, “personal cell phone numbers of Commission personnel are not made publicly available by the agency, and the identities and personal cell phone numbers of their family members are not, either.”
So how in the world did MarioCop get all those phone numbers?
Hmmmm.
The answer to that question is just one of many lurking behind the FCC’s actions against Telnyx. And while it is tempting to say Telnyx must have done something wrong because, ipso facto, when the FCC gets targeted with a robocall scam the carrier is to blame, there is more here than meets the eye.
Full response here: Telnyx Response
Press release here: Telnyx Press Release
M&A Playbook for Acquiring AI-Powered Companies
As artificial intelligence (AI) continues to transform the business world, acquirors need to prepare for a deep dive when evaluating companies that use AI to enable their businesses or create proprietary AI. Key considerations for buyers targeting AI-driven companies include understanding how AI is being used, assessing the risks associated with AI creation and use, being mindful of protecting proprietary AI technology, ensuring cybersecurity and data privacy, and complying with the regulatory landscape.
Risk Allocation
When acquiring a company that utilizes AI, it is vital to assess the potential risks associated with the AI technologies and their outputs. Buyers should review the target’s third-party contracts to understand how risks are allocated, including warranties, limitations of liability, and indemnification obligations. Buyers should also evaluate potential liabilities by considering where AI-generated content might infringe on copyrights or where AI malfunctions could lead to breaches of commitments or cause harm. Finally, buyers should analyze the target’s insurance coverage to ensure the company has adequate policies in place to cover potential third-party claims related to AI usage.
Protection of Proprietary AI Technology
For companies that have developed proprietary AI technologies, understanding how these assets are protected is essential. Buyers can take steps to mitigate liabilities associated with this area by reviewing the target’s intellectual property strategies. This can include a review of the target’s approach to protecting AI technologies, including patents, copyrights, and trade secrets. Additionally, the target’s security measures should be thoroughly analyzed so the buyer can confirm that reasonable measures are in place to maintain the secrecy of AI models, such as robust information security policies and nondisclosure agreements.
Cybersecurity and Data Privacy
If the target company uses personal or sensitive data in its AI technologies, buyers need to take a closer look at the target’s data protection practices. For example, buyers should assess the target’s compliance with applicable privacy laws and regulations, as well as conduct an evaluation of the target’s compliance measures with respect to data-transfer requirements in applicable jurisdictions. Further, buyers should confirm that the target’s third-party vendor contracts include appropriate obligations for data privacy and cybersecurity.
Compliance Support and Regulatory Landscape
Finally, a rapidly evolving regulatory environment around AI requires M&A buyers to ensure that target companies can adapt to new regulations. Buyers can examine the target’s systems for overseeing AI use and addressing regulatory challenges, such as minimizing bias and ensuring transparency. Organizational support is also essential, and the buyer should consider what resources the target company has in place to address compliance issues related to AI that may arise.
Implications for M&A Buyers
As AI continues to advance and integrate into various sectors, M&A buyers need to stay ahead of the game when it comes to the unique challenges of acquiring AI-driven companies. By conducting thorough due diligence in the areas addressed above, buyers can better assess potential liabilities and ensure a smoother integration process. By focusing on and understanding these key areas, buyers not only mitigate risk but also position themselves to capitalize on the strategic advantages of AI technologies. In turn, buyers can make informed decisions that protect their investments and leverage AI for future growth.
CNIL Publishes Recommendations on AI and GDPR
On February 7, 2025, the French Data Protection Authority (“CNIL”) released two recommendations aimed at guiding organizations in the responsible development and deployment of artificial intelligence (“AI”) systems in compliance with the EU General Data Protection Regulation (“GDPR”). The first recommendation is titled “AI: Informing Data Subjects” (the “Recommendation on Informing Individuals”) and the second recommendation is titled “AI: Complying and Facilitating Individuals’ Rights” (the “Recommendation on Individual Rights”). The recommendations build on the CNIL’s four-pillar AI action plan announced in 2023.
At a general level, the CNIL clarifies in its press release that:
The purpose limitation principle applies flexibly to general-purpose AI systems. Operators who cannot precisely define all future applications at the training stage may limit themselves to describing the type of system being developed and illustrating its potential key functionalities.
The data minimization principle does not prevent the use of large training datasets. In principle, the data used should be selected and cleaned to optimize algorithm training while avoiding the use of unnecessary personal data.
Training data may be retained for extended periods, if justified and appropriate security measures are implemented.
The reuse of databases, including those available online, is possible in many cases, subject to verifying that the data was not collected unlawfully and that its reuse is compatible with the original collection purpose.
We have summarized below key takeaways for each recommendation.
Recommendation on Informing Individuals
The CNIL emphasizes the importance of transparency in AI systems that process personal data. Organizations must provide clear, accessible, and intelligible information to data subjects about the processing of their data by an AI system. Specifically:
Timing of the information. The CNIL recommends providing information at the time of the data collection. If data is obtained indirectly, individuals should be informed as soon as possible and, at the latest, at the first point of contact with the individuals or the first sharing of the data with another recipient. In any event, individuals must be informed about the processing of their personal data no later than one month after the collection of their data.
How to provide information. The CNIL recommends providing concise, transparent and easily understandable information, using clear and simple language. The information should be easily accessible and distinguished from other unrelated content. To achieve those objectives, the CNIL recommends using a layered approach to provide essential information upfront while linking to more detailed explanations.
Derogations from providing information individually. The CNIL analyzes various use cases that allow for an exemption from the obligation to individually inform data subjects, for example, when the individuals already have the information, as provided under Article 14 of the GDPR. In all cases, organizations must ensure that these exemptions are applied judiciously and that individuals’ rights are upheld through alternative measures.
What information must be provided. When providing information to data subjects, the CNIL states that the details required by Articles 13 and 14 of the GDPR will generally need to be provided. If individual notification is exempt under the GDPR, organizations must still ensure transparency by publishing general privacy notices, for example, on a website that contains as much of the relevant information as would have been provided through individual notification. If the organization cannot identify individuals, it must explicitly state this in the notice. If possible, individuals should be informed of what additional details they can provide to help the organization verify their identity. Regarding data sources, the organization is generally required to provide specific details about these sources when the training datasets come from a small number of sources, unless an exception applies. However, if the data comes from numerous publicly available sources, a general disclosure is sufficient; this can include the categories and examples of key or typical sources. This aligns with Recital 61 of the GDPR, which allows for general information on data sources when multiple sources are used.
AI models subject to the GDPR. The CNIL looks at the applicability of the GDPR to AI models, emphasizing that not all AI systems are subject to its provisions. Some AI models are considered anonymous because they do not process personal data. In such cases, the GDPR does not apply. However, the CNIL highlights that certain AI models may memorize parts of their training data, leading to potential retention of personal data. If so, those models would fall under the scope of the GDPR and the transparency obligations apply. As a best practice, the CNIL advises AI providers to specify in their information notices the risks associated with data extraction from the model, such as the possibility of “regurgitation” of training data in generative AI, the mitigation measures implemented to reduce those risks, and the recourse mechanisms available to individuals in case one of those risks materializes (e.g., in the event of “regurgitation”).
Recommendation on Individual Rights
The CNIL’s guidelines aim to ensure that individuals’ rights are respected and facilitated when their personal data is used in developing AI systems or models.
General Principles. The CNIL emphasizes that individuals must be able to exercise their data protection rights both with respect to training datasets and AI models, unless the models are considered anonymous (as specified in the EDPB Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models). The CNIL flags that while the rights of access, rectification or erasure for training datasets present challenges similar to those faced with other large databases, exercising these rights directly with respect to the AI model (as opposed to the training dataset) raises unique and complex issues. To balance individual rights and AI innovation, the CNIL calls for realistic and proportionate solutions, and highlights that the GDPR provides flexibility to accommodate the specificities of AI models when handling data subject rights requests. For example, the complexity of responding to the request and costs to do so are relevant factors that can be taken into account when assessing how to respond to a request.
Exercising rights in AI model or system development.
According to the CNIL, how rights requests should be responded to depends on whether these requests concern training datasets or the AI model itself. In this respect, organizations should clearly inform individuals about how their request is interpreted, i.e., whether it relates to training data or the AI model, and explain how the request is handled. When rights requests relate to training datasets, organizations may face challenges in identifying individuals. In this respect, the CNIL highlights:
If an organization no longer needs to identify individuals in a training dataset and can prove it, it may indicate this in response to rights requests.
AI providers generally do not need to identify individuals in their training datasets.
Organizations are not required to retain identifiers solely to facilitate rights requests if data minimization principles justify their deletion.
If individuals provide additional information, the organization may use this to verify their identity and facilitate rights requests.
Individuals have the right to obtain copies of their personal data from training datasets, including annotations and metadata in an understandable format. Complying with this right of access must not infringe others’ rights, such as intellectual property and trade secrets. Further, when complying with the right of access, organizations must provide details on data recipients and sources. If the original source is known, this information must be disclosed. When multiple sources are used, organizations must provide all available information but are not required to retain URLs unless necessary for compliance. More generally, the CNIL highlights that a case-by-case analysis is necessary to determine the level of detail and content of information that must be reasonably and proportionately stored to respond to access requests.
With respect to the rectification, erasure and objection rights, the CNIL clarifies that, among others:
Individuals can request correction of inaccurate annotations in training datasets.
When processing is based on legitimate interest or public interest, individuals may object, if the circumstances justify it.
AI developers should explore technical solutions, such as opt-out mechanisms or exclusion lists, to facilitate rights requests in cases of web scraping.
Article 19 of the GDPR provides that a controller must notify each data recipient with whom it has shared personal data of a rectification, restriction or deletion request. Accordingly, when a dataset is shared, updates should be communicated to recipients via APIs or contractual obligations requiring those recipients to apply those updates.
Exercising rights on AI Models subject to GDPR. Certain AI models are trained on personal data but remain anonymous after training. In such cases, GDPR does not apply. If the model retains identifiable personal data, GDPR applies and individuals must be able to exercise their rights over the model:
Organizations must assess whether a model contains personal data. If the presence of personal data is uncertain, the organization must demonstrate that it is not able to identify individuals as part of its model.
Once a specific individual has been identified as part of a model, the organization must identify which of their data are included. If feasible, data subjects must be given the opportunity to provide additional information to help verify their identity and exercise their rights. If the organization still has access to the training data, it may be appropriate to first identify the individual within the dataset before verifying whether their data was memorized by the AI model and could be extracted. If training data is no longer available, the organization can rely on the data typology to determine the likelihood that specific categories of data were memorized. For generative AI models, the CNIL advises providers to establish an internal procedure to systematically query the model using a predefined set of prompts (a minimal sketch of such a procedure appears after this list).
The rights to rectification and erasure are not absolute and should be assessed in light of the sensitivity of the data and the impact on the organization, including the technical feasibility and cost of retraining the model. In some cases, retraining the model is not feasible and the request may be denied. That said, AI developers should monitor advances in AI compliance since evolving techniques may require previously denied requests to be honored in the future. When the organization is still in possession of the training data, retraining the model to remove or correct data should be envisaged. In any event, as current solutions do not always provide a satisfactory response in cases where an AI model is subject to the GDPR, the CNIL recommends that providers anonymize training data. If this is not feasible, they should ensure that the AI model itself remains anonymous after training.
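For illustration only, here is a minimal sketch of the kind of predefined-prompt procedure the CNIL describes for generative models. It is a sketch under stated assumptions: `query_model`, the probe prompts, and the simple substring matching are hypothetical stand-ins for whatever inference API and matching logic a provider actually uses, not a CNIL-prescribed method.

```python
# Illustrative only: query a generative model with a predefined set of prompts and
# flag any prompt whose output appears to contain personal data supplied by the
# requester. `query_model` is a hypothetical stand-in for the provider's inference API.

from typing import Callable, List

def regurgitation_check(
    query_model: Callable[[str], str],  # hypothetical model-inference function
    probe_prompts: List[str],           # predefined prompts, e.g., built around the request
    personal_identifiers: List[str],    # data points supplied by the data subject
) -> List[str]:
    """Return the prompts whose outputs contain any of the supplied identifiers."""
    flagged = []
    for prompt in probe_prompts:
        output = query_model(prompt)
        if any(identifier.lower() in output.lower() for identifier in personal_identifiers):
            flagged.append(prompt)
    return flagged
```

In practice a provider would run such a procedure as part of handling an access or erasure request, logging the results to document whether the model appears to have memorized the individual’s data.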
Exceptions to the exercise of rights. When relying on an exception to limit individuals’ rights as per the GDPR, the organization must inform individuals in advance that their rights may be restricted and explain the reasons for such restrictions.
Read the CNIL’s Press Release (available in English), Recommendation on Informing Individuals and Recommendation on Individual Rights (both only available in French).
U.S. Shifts AI Policy, Calls for AI Action Plan
Highlights
The U.S.’s cautious approach to AI policy and regulation is signaled by its decision not to join an international agreement and by the withdrawal of the previous regulatory framework
A new request for information seeks broad input from industry, academia, government, and other stakeholders
The U.S. has taken significant steps to reshape its artificial intelligence (AI) policy landscape. On Jan. 20, 2025, the administration issued an order revoking Executive Order 14110, originally signed on Oct. 30, 2023. This decision marks a substantial shift in AI governance and regulatory approaches. On Feb. 6, 2025, the government issued a request for information (RFI) from a wide variety of industries and stakeholders to solicit input on the development of a comprehensive AI Action Plan that will guide future AI policy.
As part of this initiative, the government is actively seeking input from academia, industry groups, private-sector organizations, and state, local, and tribal governments. These stakeholders are encouraged to share their insights on priority actions and policy directions that should be considered for the AI Action Plan. Interested parties must submit their responses by 11:59 p.m. ET on March 15, 2025.
Executive Order 14110 was designed to establish a broad regulatory framework for AI, emphasizing transparency, accountability, and risk mitigation. The revoked order required organizations engaged in AI development to adhere to specific reporting obligations and public disclosure mandates. The order affected a wide range of stakeholders, including technology companies, AI developers, and data center operators, all of whom had to align with the prescribed compliance measures. With the Jan. 23 Executive Order 14179, organizations must now reassess their compliance obligations and prepare for potential new frameworks that could take the place of the previous Executive Order 14110.
However, given the RFI, there is an opportunity to participate in the formation of new AI policies and regulations. The new order and the RFI seek input into AI policies and regulations directed towards maintaining U.S. prominence in AI development. Consequently, potentially burdensome requirements seem unlikely to emerge in the near term.
On the international front, the U.S. administration’s decision not to sign the AI Safety Declaration at the recent AI Action Summit in Paris further avoids potential international barriers to AI development in the U.S. This, together with the issuance of the RFI, seems to signal caution in development of an AI Action Plan that will drive policy through stakeholder engagement and regulatory adjustments.
The AI Action Plan is intended to establish strategic priorities and regulatory guidance for AI development and deployment. It aims to ensure AI safety, foster innovation, and address key security and privacy concerns. The scope of the plan is expected to be broad, covering topics such as AI hardware and chips, data centers, and energy efficiency.
Additional considerations will include AI model development, open-source collaboration, and application governance, as well as explainability, cybersecurity, and AI model assurance. Data privacy and security throughout the AI lifecycle will also be central to discussions, alongside considerations related to AI-related risks, regulatory governance, and national security. Other focal areas include research and development, workforce education, intellectual property protection, and competition policies.
Takeaways
Given these policy indications, organizations should take proactive steps to adapt to, and potentially contribute to, the evolving AI regulatory landscape. It is essential for businesses to remain aware of developing policies and to engage in the opportunities to help shape forthcoming AI policies. Furthermore, monitoring international AI governance trends will be crucial, as these developments may affect AI operations within the U.S.
FTC COPPA Updates Provide New Protections for Children
In the waning days of the Biden administration, the FTC published an update to its COPPA Rule. The status of this update, however, is unclear. The revisions to the rule were posted on the FTC website prior to the Trump administration, but had not yet been published in the Federal Register.
Trump’s Presidential Memorandum freezing pending federal regulations means that the update has not yet been published, and publication is the next step toward it going into effect. Relatedly, the current FTC chair (Ferguson) has expressed concerns about the rule. It is thus likely that it will not be published, at least as currently drafted. As we wait for next steps, here is a quick recap for those companies that offer websites directed to or appealing to children. First, the items that were not of concern to Ferguson (and thus likely to be implemented as is):
Website notice (privacy policy). Under the rule as revised, the website notice for those subject to COPPA will require new content. This includes the steps a site takes to make sure persistent identifiers used for operational purposes are not used for behavioral advertising. Additionally, for sites collecting audio files, the privacy policy must indicate how the files are used and deleted.
Verifiable parental consent. The revised rule provides for new methods of parental verification. These include comparing a parent’s authenticated government ID against their face (using a camera app, for example), as well as a “dynamic, multiple-choice” question approach, if the questions would be too hard for a child 12 or under to answer. The revision also permits texting for what has traditionally been known as the “email-plus” verification process, which can be used when children’s information is not disclosed. Also added is another “one-time use” exception to parental consent: namely, collecting and responding to a question submitted by a child through an audio file.
Security. The new rule will require sites to have a written information security program. This goes beyond the current obligation to have “reasonable measures” in place. The security obligations are detailed and mirror those that exist under various state data security laws.
Definitions. As revised, the rule will add “biometric identifiers” to the list of personally identifiable information. These are elements, like fingerprints or voiceprints, that can be used to identify someone; the definition also includes someone’s “gait.” The rule will also include a definition of a “mixed audience” site, a term currently used by the FTC in its COPPA FAQs.
Putting it into Practice: While we await the publication of the revised rules, whether in the format that they took before the new administration, or in a revised format, companies that operate websites subject to COPPA can keep in mind the parts of the new rule that were not of concern to Ferguson. These include new content in privacy policies.