Joint Cybersecurity Advisory Released on Ghost (Cring) Ransomware
The Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation, and the Multi-State Information Sharing and Analysis Center released an advisory on February 19, 2025, providing information on Ghost ransomware activity.
According to the advisory, “Ghost actors conduct these widespread attacks targeting and compromising organizations with outdated versions of software and firmware on their internet facing services.” They use publicly available code to exploit Common Vulnerabilities and Exposures (CVEs) that have not been patched. The CVEs used by Ghost include CVE-2018-13379, CVE-2010-2861, CVE-2009-3960, CVE-2021-34473, CVE-2021-34523, and CVE-2021-31207.
The advisory urges organizations to:
Maintain regular system backups that are stored separately from the source systems and cannot be altered or encrypted by potentially compromised network devices [CPG 2.R].
Patch known vulnerabilities by applying timely security updates to operating systems, software, and firmware within a risk-informed timeframe [CPG 1.E].
Segment networks to restrict lateral movement from initially infected devices to other devices in the same organization [CPG 2.F].
Require phishing-resistant multifactor authentication (MFA) for access to all privileged accounts and email service accounts.
The advisory details how Ghost (Cring) is gaining initial access, executing applications, escalating privileges, obtaining credentials, evading defenses, moving laterally, and exfiltrating data. It also provides indicators of compromise and email addresses used by the threat actors.
Patching remains fundamental blocking and tackling, and timely patching is critical for mitigating exploitation. Blocking known malicious email addresses is a proven tactic for limiting attacker access. Review the advisory to ensure the applicable patches have been applied and the email addresses associated with Ghost have been blocked.
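For teams operationalizing the advisory, the short sketch below cross-references a software inventory against the CVEs named above. It is a minimal illustration under assumed inputs: the inventory format and the flag_exposed_hosts helper are our own inventions rather than anything specified in the advisory, and a real program should rely on an authoritative vulnerability scanner, not a hand-rolled list.

```python
# Illustrative sketch only: check a (hypothetical) asset inventory against
# the CVEs enumerated in the joint advisory on Ghost (Cring) ransomware.

# CVEs named in the advisory, with the commonly affected products.
GHOST_CVES = {
    "CVE-2018-13379": "Fortinet FortiOS SSL VPN",
    "CVE-2010-2861": "Adobe ColdFusion",
    "CVE-2009-3960": "Adobe BlazeDS / ColdFusion",
    "CVE-2021-34473": "Microsoft Exchange (ProxyShell)",
    "CVE-2021-34523": "Microsoft Exchange (ProxyShell)",
    "CVE-2021-31207": "Microsoft Exchange (ProxyShell)",
}

def flag_exposed_hosts(inventory: list[dict]) -> list[str]:
    """Return a warning for each inventory entry with an advisory CVE unpatched.

    Each entry is assumed (hypothetically) to look like:
        {"host": "vpn01", "unpatched_cves": ["CVE-2018-13379"]}
    """
    warnings = []
    for entry in inventory:
        for cve in entry.get("unpatched_cves", []):
            if cve in GHOST_CVES:
                warnings.append(
                    f"{entry['host']}: {cve} ({GHOST_CVES[cve]}) is exploited "
                    "by Ghost actors; patch or isolate immediately."
                )
    return warnings

if __name__ == "__main__":
    sample = [{"host": "vpn01", "unpatched_cves": ["CVE-2018-13379"]}]
    for warning in flag_exposed_hosts(sample):
        print(warning)
```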
Clarifying the Copyrightability of AI-Assisted Works

The U.S. Copyright Office’s long-awaited second report assessing the issues raised by artificial intelligence (AI) makes clear that purely AI-generated works cannot be copyrighted, and that the copyrightability of AI-assisted works depends on the level of human creative authorship integrated into the work.
With the rise of mainstream generative AI platforms, creators, artists, producers, and technology companies have sought clarity on whether works created with AI may be entitled to copyright protection. In its most recent report, the Copyright Office concludes that existing copyright legislation and principles are well suited to the question of AI outputs’ copyrightability and suggests that AI may be used in the creation of copyrighted works as long as there is the requisite level of human creative expression. The Copyright Office’s report also makes clear that copyright protection will not extend to purely computer-generated works. Instead, copyrightability must be assessed on a case-by-case basis, analyzing whether a work has the necessary human creative expression and originality to be copyrightable. This fact-intensive analysis equips existing U.S. copyright law to adapt to works made with emerging technologies.
In the process of crafting the report, the Copyright Office considered input from over 10,000 stakeholders seeking clarity on the protection of works for licensing and infringement purposes. This report does not address issues relating to fair use in training AI systems or copyright liability associated with the use of AI systems; these topics are expected to be covered in separate publications.
Report on Copyrightability of AI Outputs
The Copyright Office’s report examines the threshold question of copyrightability, or whether a work can be protected and endowed with rights that are enforceable against subsequent copiers, which raises important policy questions on the incentives of copyright law and the history of emerging technologies. Overall, the Copyright Office makes clear that tangential use of AI technology will not disqualify any subsequent work of authorship from protection, but rather the level of protection hinges on the nature and extent of the human expression added to the work.
I. Scope of the Report
The Copyright Office sought to clarify several overarching questions on the copyrightability of AI outputs, including:
Whether the Copyright Clause of the Constitution protects AI-generated works.
Whether AI can be the author of a copyrighted work.
Whether additional protection for AI-generated works is recommended, and if so, what form it should take.
Whether revisions to the human authorship standard are necessary.
II. The Copyrightability Standard and Current AI Technology
Human Authorship – Only a low level of human creativity, or “authorship,” is needed to create copyright protection in a work, and the Copyright Office believes existing legal frameworks are relevant to the assessment of AI-generated outputs. Specifically, the Copyright Office believes that determining whether the authorship standard for copyrightability has been met depends on the level of human expressive intervention in the work.
For example, a photographer’s arrangement, lighting, timing, and post-production editing are all indications of the human expression required for copyright protection, even though, technologically, the camera “assists” to capture the photo.[1] On the other hand, photos taken by animals do not create authorship in the animal because of their non-human status.[2] Similarly, “divine messages” from alleged spirits do not contain the requisite human creativity to amount to authorship.[3] In the context of AI, like a photographer using a camera, the use of new technology does not default to a lack of authorship, but like a monkey taking a picture, non-human machines cannot be authors and therefore the expressions created solely by AI platforms cannot be copyrighted.
Assistive AI – The report further comments on the incorporation of AI into creative tasks, like aging actors on film, adding or removing objects to a scene, or finding errors in software code, and concludes that protection of works using such technology would depend on how the system is being used by a human author and whether a human’s expression is captured by the resulting work.
Protection of Prompts – The Copyright Office concludes that prompts alone do not form a basis for claiming copyright protection in AI-generated outputs (no matter how complex they may seem), unless the prompt itself involves a copyrightable work. At its core, copyright law does not protect ideas because copyright seeks to promote the free flow of ideas and thought. Rather, copyright law protects unique human expressions of the underlying ideas which are fixed in some tangible medium. The Copyright Office explains that prompts do not provide sufficient human control to make AI users authors. Instead, prompts function as instructions that reflect a user’s conception of the idea but do not control the expression of that idea. Primarily, gaps between prompts and resulting outputs demonstrate the lack of control a user has over the expression of those ideas.
Expressive Inputs – The Copyright Office uses two examples in its report to illustrate this point. The first prompt, detailing the subject matter and composition of a cat smoking a pipe, was considered uncopyrightable because the AI system fills in the gaps of a user’s prompt. Here, the prompt does not specify the breed or coloring of the cat, its size, the pose, or what clothes it should be wearing underneath the robe. Without these particular instructions in the user’s prompts, the AI system still generated an image based on its own internal algorithm to fill in the gaps, thus stripping away expressive control from the user.
In contrast, the second prompt, asking the AI system to generate a photorealistic graphic of a human-drawn sketch, was considered copyrightable because the original elements of the sketch were retained in the AI-generated output. In assessing copyrightability, the Copyright Office pointed to the copyright in the original elements of the sketch as evidence of authorship, and any output depicting identifiable elements of the sketch (directed by the human author) was viewed by the Office as a derivative work of the sketch’s copyright. The artist’s protection in the AI output would overlap with the protectable elements in the original sketch, and like other derivative rights, the AI output would require a license to the original sketch.
In sum, where a human inputs their own copyrightable work into an AI system, they will be the author of that portion of the work still perceptible in the output; the individual elements must be identifiable and traceable to the initial human expression.
The Copyright Office views the current use of prompts as largely containing unprotectable (or public domain) ideas but notes that extensive human expression could potentially make prompts protectable, just not with currently available technology. Additionally, the Copyright Office notes that current technology is unpredictable and inconsistent, often producing vastly different outputs from the same prompts, which in its view shows that prompts lack the requisite clear direction of expression to rise to the level of human authorship.
Arrangement and Modification of AI Works – The Copyright Office also concludes that human authorship can be shown by the additions to, or arrangement of, AI outputs, including the use of AI adaptive tools. For example, a comic book “illustrated” with AI but with added original text by a human author was granted protection in the arrangement and expression of the images in addition to any copyrightable text because the work is the product of creative human choices. The same reasoning applies to AI-based editing tools, which allow users to select and regenerate regions of an image with a modified prompt. Unlike prompts, the use of these tools enables users to control the expression of specific creative elements, but the Copyright Office clarifies that assessing the copyrightability of these modifications depends on a case-by-case determination.
III. International AI Copyright Decisions
In its review of international responses to AI copyright questions, the U.S. Copyright Office notes the general consensus of applying existing human authorship requirements to determine copyrightability of AI works.
Instructions from Japan’s Cultural Council underline the case-by-case basis necessary for assessing copyrightability and note examples of human input to AI that may rise to a copyrightable level. These include the number and type of prompts given, the number of attempts to generate the ideal work, selection by the user, and any later changes to the work.
A court in China found that over 150 prompts, along with retouches and modifications to the AI’s output, resulted in sufficient human expression to gain copyright protection.
In the European Union, most member states agree that current copyright policy is equipped to cover the use of AI, and similar to the U.S., most member states require significant human input into the creative process to qualify for copyright protection.
Canada and Australia have both expressed a lack of clarity on the issue of AI, but neither has taken steps to change legislation.
Unlike other countries, some Commonwealth jurisdictions like the United Kingdom, India, New Zealand, and Hong Kong enacted laws before modern generative AI that allow copyright protection for works created entirely by computers. With recent developments in technology, the United Kingdom has considered changing this law, but the other countries have yet to clarify whether their existing laws would apply to AI-generated works.
IV. Policy Implications for Additional Protection
Incentives – One of the key components of copyright policy, as written in the U.S. Constitution, is to “promote the progress of science and useful arts.” Comments to the Copyright Office varied on whether providing protection for AI-generated works would incentivize authorship; proponents of increasing copyright protection argued that it would promote emerging technologies, while opponents noted that the rapid expansion of these technologies shows incentivization is not necessary. The Copyright Office finds the current legal framework sufficiently balanced, stating that additional laws are not needed to incentivize AI creation because the existing threshold requirement of human creativity already protects and incentivizes the works of human authorship that copyright law seeks to promote.
Staying Internationally Competitive – Commentators argued that, without underlying copyright protection, weaker protection for AI-generated works would put U.S. creators at a disadvantage. The Copyright Office counters that similar protections are available worldwide and align with the U.S.’s standards of human authorship.
Clarity on AI-Generated Protection – Commentators petitioned Copyright Office officials for some legal certainty that works created with AI could be licensed to other parties and be registered with the Copyright Office. The Copyright Office’s report provides assurance that works made with assistance from AI platforms may be registered under existing copyright laws and notes the difficulty of any further clarity due to the case-by-case nature of copyright analysis.
Conclusion and Considerations[4]
The foundations of U.S. copyright law have been applied consistently to emerging technologies, and the Copyright Office believes those doctrines will apply equally well to AI technologies. With the Copyright Office’s affirmation that purely AI-generated works cannot be copyrighted, and that AI-assisted works must involve meaningful human authorship, businesses leveraging AI systems must consider several key legal and strategic factors:
Maintain detailed records of human prompts and modifications, such as arranging, adapting, or refining AI outputs.
Focus on enhancing human-made, copyrightable works with AI systems rather than generating works solely through uncopyrightable prompts.
For companies commissioning AI-assisted work, specify in contracts that employees or contractors provide sufficient human control, arrangement, or modification of AI works to ensure copyrightability.
For companies offering AI-assisted work as part of their services, consider mitigating risks by excluding AI generated works from standard IP representations/warranties, and further disclaiming any liability in relation to the use of such works.
Consider variations in international AI copyright laws to assess the impact on global IP strategies.
Given the unique analysis copyright cases require, and the existing precedent requiring human input for protection, copyright law is well prepared to face the challenges posed by AI platforms. Due to the unique facts of each case, creators are encouraged to check with an experienced copyright attorney who can help evaluate whether an individual AI-assisted work includes enough human intervention to be protectable.
[1] Burrow-Giles Litho. Co. v. Sarony, 111 U.S. 53, 55–57 (1884).
[2] Naruto v. Slater, No. 15-cv-04324, 2016 U.S. Dist. LEXIS 11041, at *10 (N.D. Cal. Jan. 28, 2016) (finding animals are not “authors” within the meaning of the Copyright Act).
[3] Urantia Found. v. Kristen Maaherra, 114 F.3d 955, 957–59 (9th Cir. 1997) (holding that copyright law does not intend to protect divine beings, and protects the arrangement of otherworldly messages, but not the messages’ content).
[4] As noted above, the Copyright Office’s report does not address issues relating to fair use in training AI systems or copyright liability associated with the use of AI systems; these topics are expected to be covered in a separate publication.
The ReAIlity of What an AI System Is – Unpacking the Commission’s New Guidelines
The European Commission has recently released its Guidelines on the Definition of an Artificial Intelligence System under the AI Act (Regulation (EU) 2024/1689). The guidelines were adopted in parallel with the Commission guidelines on prohibited AI practices (which also entered into application on February 2, 2025), with the goal of providing businesses, developers, and regulators with further clarification on the AI Act’s provisions.
Key Takeaways for Businesses and AI Developers
Not all AI systems are subject to strict regulatory scrutiny. Companies developing or using AI-driven solutions should assess their systems against the AI Act’s definition. With these guidelines (and those on prohibited practices), the European Commission is delivering on the need to clarify the core element of the act: what is an AI system?
The AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. The system, for explicit or implicit objectives, infers from input data how to generate outputs – such as predictions, content, recommendations or decisions – that can influence physical or virtual environments.
One of the most significant clarifications in the guidelines is the distinction between AI systems and “traditional software.”
AI systems go beyond rule-based automation and require inferencing capabilities.
Traditional statistical models and basic data processing software, such as spreadsheets, database systems and manually programmed scripts, do not qualify as AI systems.
Simple prediction models that use basic statistical techniques (e.g., forecasting based on historical averages) are also excluded from the definition.
This distinction ensures that compliance obligations under the AI Act apply only to AI-driven technologies, leaving other software solutions outside of its scope.
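The line between “manually programmed rules” and “inferencing” is easiest to see in code. Below is a deliberately simple, illustrative contrast (all data invented). Note that the guidelines would likely still treat a basic regression like this as a traditional statistical technique outside the definition; the point is only to show mechanically what it means to derive outputs from data rather than from hand-written rules.

```python
import numpy as np

# Invented historical data: monthly ad spend (in $k) vs. units sold.
spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
units = np.array([12.0, 19.0, 33.0, 41.0, 48.0])

def rule_based_forecast(next_spend: float) -> float:
    """Manually programmed rule: the multiplier is hard-coded by a developer.
    On the guidelines' reading, software like this does not 'infer' anything."""
    return 10.0 * next_spend

# Inferred model: the slope and intercept are derived from the data rather
# than written by hand. (The guidelines would likely still classify ordinary
# least squares as a basic statistical technique outside the AI Act, but the
# mechanics of learning parameters from data are the same idea that scales
# up to the machine learning systems the definition targets.)
slope, intercept = np.polyfit(spend, units, 1)

def inferred_forecast(next_spend: float) -> float:
    return slope * next_spend + intercept

print(rule_based_forecast(6.0))  # output dictated entirely by the coded rule
print(inferred_forecast(6.0))    # output produced by a model learned from data
```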
Below is a breakdown of what the guidelines bring for each of the seven components:
Machine-based systems – AI systems rely on computational processes involving hardware and software components. The term “machine-based” emphasizes that AI systems are developed with and operate on machines, encompassing physical elements such as processing units, memory, storage devices and networking units. These hardware components provide the necessary infrastructure for computation, while software components include computer code, operating systems and applications that direct how the hardware processes data and performs tasks. This combination enables functionalities like model training, data processing, predictive modeling, and large-scale automated decision-making. Even advanced quantum computing systems and biological or organic systems qualify as machine-based if they provide computational capacity.
Varying levels of autonomy – AI systems can function with some degree of independence from human intervention. This autonomy is linked to the system’s capacity to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. The AI Act clarifies that autonomy involves some independence of action, excluding systems that require full manual human involvement. Autonomy also spans a spectrum – from systems needing occasional human input to those operating fully autonomously. This flexibility allows AI systems to interact dynamically with their environment without human intervention at every step. The degree of autonomy is a key consideration for determining if a system qualifies as an AI system, impacting requirements for human oversight and risk-mitigation measures.
Potential adaptiveness – Some AI systems change their behavior after deployment through self-learning mechanisms, though this is not a mandatory criterion. This self-learning capability enables systems to automatically learn, discover new patterns or identify relationships in the data beyond what they were initially trained on.
Explicit or implicit objectives – The system operates with specific goals, whether predefined or emerging from its interactions. Explicit objectives are those directly encoded by developers, such as optimizing a cost function or maximizing cumulative rewards. Implicit objectives, however, emerge from the system’s behavior or underlying assumptions. The AI Act distinguishes between the internal objectives of the AI system (what the system aims to achieve technically) and the intended purpose (the external context and use-case scenario defined by the provider). This differentiation is crucial for regulatory compliance, as the intended purpose influences how the system should be deployed and managed.
Inferencing capability – AI systems must infer how to generate outputs rather than simply executing manually defined rules. Unlike traditional software systems that follow predefined rules, AI systems reason from inputs to produce outputs such as predictions, recommendations or decisions. This inferencing involves deriving models or algorithms from data, either during the building phase or in real-time usage. Techniques that enable inference include machine learning approaches (supervised, unsupervised, self-supervised and reinforcement learning) as well as logic- and knowledge-based approaches.
Types of outputs – AI systems generate predictions, content, recommendations or decisions that shape both their physical and virtual environments. Predictions estimate unknown values based on input data; content generation creates new materials like text or images; recommendations suggest actions or products; and decisions automate processes traditionally managed by human judgement. These outputs differ in the level of human involvement required, ranging from fully autonomous decisions to human-evaluated recommendations. By handling complex relationships and patterns in data, AI systems produce more nuanced and sophisticated outputs compared to traditional software, enhancing their impact across diverse domains.
Environmental influence – Outputs must have a tangible impact on the system’s physical or virtual surroundings, exposing the active role of AI systems in influencing the environment they operate within. This includes interactions with digital ecosystems, data flows and physical objects, such as autonomous robots or virtual assistants.
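Since the guidelines call for assessing each system on its own characteristics, some organizations may find a structured self-assessment useful. The sketch below simply mirrors the seven components above as a checklist; it is our own illustrative aid with invented field names, not an official or exhaustive test.

```python
from dataclasses import dataclass, fields

@dataclass
class AISystemAssessment:
    """Illustrative checklist mirroring the seven definitional components
    discussed in the Commission's guidelines. Not an official test."""
    machine_based: bool              # runs via hardware/software computation
    some_autonomy: bool              # some independence from human intervention
    adaptive_after_deployment: bool  # optional element: post-deployment self-learning
    has_objectives: bool             # explicit or implicit objectives
    infers_outputs: bool             # derives outputs rather than executing fixed rules
    produces_outputs: bool           # predictions, content, recommendations, decisions
    influences_environment: bool     # outputs affect physical or virtual surroundings

    def likely_in_scope(self) -> bool:
        # Adaptiveness is expressly optional ("may exhibit adaptiveness"),
        # so it is excluded from the required elements checked here.
        required = [f.name for f in fields(self)
                    if f.name != "adaptive_after_deployment"]
        return all(getattr(self, name) for name in required)

# Example: a manually programmed spreadsheet macro fails the inference element.
macro = AISystemAssessment(
    machine_based=True, some_autonomy=False, adaptive_after_deployment=False,
    has_objectives=True, infers_outputs=False, produces_outputs=True,
    influences_environment=True,
)
print(macro.likely_in_scope())  # False: likely outside the AI Act definition
```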
Why These Guidelines Matter
The AI Act introduces a harmonized regulatory framework for AI developed or used across the EU. Core to its scope of application is the definition of “AI system” (which then spills over onto the scope of regulatory obligations, including restrictions on prohibited AI practices and requirements for high-risk AI systems).
The new guidelines serve as an interpretation tool, helping providers and stakeholders identify whether their systems qualify as AI under the act. Among the key takeaways is the fact that the definition is not to be applied mechanically, but should consider the specific characteristics of each system.
AI systems are a reA(I)lity; if you have not started assessing the nature of the one you develop or the one you procure, now is the time to do so. While the EU AI Act might be considered by many as having missed its objective (a human-centric approach to AI that fosters innovation and sets a level playing field), it is here to stay (and its phased application is on track).
Google Removes Gemini AI Assistant from Main iOS App: What It Means for Users and the Future of AI

Alphabet Inc.’s Google has made a significant move by removing its AI assistant, Gemini, from the main app on iOS devices. Users were informed via email that Gemini is no longer available in the Google app, […]
Prosecutorial Reset: NLRB Acting General Counsel Rescinds Biden Guidance Memoranda En Masse
On February 14, 2025, the National Labor Relations Board’s Acting General Counsel, William B. Cowen, rescinded nearly all of the Biden administration General Counsel’s substantive prosecutorial guidance memos. He did not wait for the appointment of a new General Counsel following President Trump’s discharge of both the previous General Counsel and the then-Acting General Counsel, and he suggested that his motivation related to the agency’s workload.
While these memoranda do not have the weight of law or regulation, they do set out the agency’s priorities and key interpretations of the National Labor Relations Act. The rescission thus marks a (not unexpected) complete reversal of the prosecutorial focus the Office of the General Counsel maintained during General Counsel Abruzzo’s tenure.
There were generally two types of rescissions. Some of the memos were rescinded in full, while others were rescinded “pending further guidance” – suggesting those areas where the new administration will be placing its focus.
The Acting GC’s memorandum did not address the impact of the NLRB’s current lack of a quorum on the Acting GC’s prosecutorial agenda. President Trump’s unprecedented firing of former NLRB Chair Gwynne Wilcox, which deprived the NLRB of a quorum, is currently being litigated.
The list of key rescinded memoranda and their policy impact are summarized below.
Abruzzo GC Memoranda Rescinded in Full
Memorandum GC 21-01 – Offered guidance on mail-ballot elections; rescinded because “COVID-19 is no longer a Federal Public Health Emergency.”
Memorandum GC 21-02 – Rescinded prior memos, including those that provided guidance on employment handbook rules, decertification petitions, and duty of fair representation cases, among other things.
Memorandum GC 21-03 – Advocated greater enforcement of Section 7 rights regarding workplace health and safety in light of COVID-19.
Memorandum GC 21-08 – Endorsed prosecuting universities that did not classify student-athletes as employees under the NLRA.
Memorandum GC 22-06 – Offered an update on NLRB regional offices seeking broader remedies when prosecuting unfair labor practices (e.g., consequential damages, employer letters of apology).
Memorandum GC 23-02 – Advocated prosecuting employers that used AI and algorithms in ways that could chill employee Section 7 activity.
Memorandum GC 23-05 – Endorsed prosecuting employers that imposed broadly worded severance agreements with expansive non-disparagement and confidentiality clauses on employees.
Memorandum GC 23-08 – Advocated prosecuting employers that imposed noncompetition agreements on employees outside limited circumstances.
Memorandum GC 24-04 – Supported seeking full remedies (e.g., increased healthcare costs, lost pension contributions) for employees in unfair labor practice charge settlements with employers.
Memorandum GC 24-05 – Proposed continuing to seek Section 10(j) injunctive relief against employers despite the higher procedural bar set by the Supreme Court in Starbucks Corp. v. McKinney.
Memorandum GC 25-01 – Advocated prosecuting employers that imposed stay-or-pay provisions (e.g., training repayment agreement provisions, quit fees, sign-on bonuses) on employees.
Abruzzo Memoranda Rescinded – Pending Further Guidance
Memorandum GC 21-05 – Advocated that Board prosecutors seek Section 10(j) injunctive relief to protect Section 7 rights from “remedial failure due to the passage of time.”
Memorandum GC 21-06 – Endorsed NLRB regional offices seeking a “full panoply” of make-whole remedies, including “consequential damages to make employees whole for economic losses (apart from the loss of pay or benefits),” such as credit card late fees or higher healthcare costs, in unfair labor practice cases.
Memorandum GC 21-07 – Proposed that Board prosecutors seek expanded remedies in formal and informal settlements, including consequential damages, front pay, work authorization sponsorship for immigrant workers, and employer letters of apology, among others.
Memorandum GC 22-01 – Supported protecting immigrant workers’ Section 7 rights, including by NLRB regional offices pursuing deferred action, parole, and stays of removal, among other things, when immigrants allege they suffered unfair labor practices.
Memorandum GC 22-02 – Endorsed NLRB regional offices seeking Section 10(j) injunctive relief in response to employers allegedly committing unfair labor practices during union organizing campaigns.
Memorandum GC 24-01 – Offered guidance to Board prosecutors seeking a Cemex bargaining order against employers that allegedly fail to recognize and bargain with unions.
Memorandum GC 25-04 – Provided insight on the interaction between federal anti-discrimination and labor law, including in cases where an employee engages in Section 7 activity that may be discriminatory.
As always, we will continue to monitor developments related to the Board and provide updates as they develop.
Fair Warning: AI’s First Copyright Fair Use Ruling, Thomson Reuters v. ROSS
Early last week, Judge Stephanos Bibas issued the first substantive US ruling on fair use in AI-related copyright litigation, Thomson Reuters v. ROSS Intelligence, No. 1:20-cv-00613 (D. Del.). This landmark opinion marks a significant development in AI litigation, particularly concerning the use of copyrighted materials in training AI models.
However, while this decision focuses on the training of an AI model, it does not involve generative AI technology. Hence, it will be important for AI developers and deployers to continue to monitor future decisions that address whether the subsequent steps of generating and distributing AI-generated content are considered fair use of the original works.
Key Takeaways
The court held that ROSS’s copying of Thomson Reuters’ content to build a competing AI-based legal platform is not fair use under the US Copyright Act.
The court found actual copying and substantial similarity of 2,243 Westlaw headnotes.
The court rejected ROSS’s defenses of innocent infringement, copyright misuse, merger, and scenes à faire.
ROSS’s commercial use weighed heavily against its fair use defense.
The court vacated its previous denial of summary judgment on the issue of fair use.
Background
Thomson Reuters owns Westlaw, one of the largest legal research platforms in the US. Through a subscription, Westlaw users are able to access a wide range of resources, including case law, state and federal statutes, state and federal regulations, practical guides, news, law review articles, legislative histories, and trial transcripts. A key feature of Westlaw is its headnotes, which summarize the key points of legal opinions. Additionally, Westlaw includes the “Key Number System,” which organizes legal opinions.
ROSS Intelligence, a competitor, sought to license Westlaw’s content to develop its own legal AI-based tool. After Thomson Reuters refused, ROSS obtained “Bulk Memos,” created using Westlaw’s headnotes, through a third-party legal services vendor. Thomson Reuters discovered this and sued ROSS for copyright infringement based on its use of Westlaw content to train its AI model.
Overview of the Case
The court granted partial summary judgment to Thomson Reuters on direct copyright infringement, fair use, and other defenses, while denying ROSS’s summary judgment motions, after analyzing the fair use factors. Under the US Copyright Act (17 U.S.C. § 107), there are four factors:
(1) the purpose and character of use, including whether the use is of a commercial nature or for nonprofit educational purposes;
(2) the nature of the copyrighted work;
(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
(4) the likely effect of the use on the potential market for the copyrighted work.
Thomson Reuters prevailed on the first factor. In examining the purpose and character of ROSS’s use, the court focused on whether the use was commercial and transformative. ROSS acknowledged that its use was commercial but argued that it was transformative, as the headnotes in question were allegedly “transformed” into numerical data representing the relationship among legal words in its AI system. The court disagreed, noting that ROSS’s use did not have a further purpose or different character from Thomson Reuters’ use. ROSS also argued that its use was permissible under the doctrine of “intermediate copying,” but again the court disagreed, noting that the cited cases were inapt because they involved copying of computer code rather than written words and the code copying was necessary for competitors to innovate. By contrast, use of the headnotes was not necessary to achieve ROSS’s desired purpose.
The court resolved factor two in ROSS’s favor, finding that although Westlaw’s material has the minimal required originality, it is not highly creative. Further, while the headnotes involve some editorial creativity, the Key Number System is a factual compilation with limited creativity.
The court also ruled in favor of ROSS on factor three, despite the number of headnotes used, because the material available to the public did not include the Westlaw headnotes. According to the court, what matters is not the amount and substantiality of the portion used in making a copy, but rather the amount and substantiality of what is thereby made accessible to the public for which it may serve as a competing substitute. It determined there was no factual dispute, as ROSS’s output did not include Westlaw headnotes.
Finally, and most important in this case, the court emphasized that because ROSS could have developed its own product without infringing Thomson Reuters’ copyrights, the fourth factor weighs in favor of Thomson Reuters. The court examined the likely effect of ROSS’s copying on the market for Westlaw’s product and, while initially considering whether ROSS’s use served a different purpose by creating a new research platform, ultimately concluded that ROSS intended to compete with Westlaw and failed to prove otherwise. Courts have differed over the years on whether the first or the fourth fair use factor is the most important; here, the analysis of the fourth was crucial (but both favored Thomson Reuters).
Looking Ahead
The implications of this decision for AI copyright litigation and fair use arguments are significant.
First, many practitioners have been waiting for a decision on whether creating an AI model is considered transformative and fair use, particularly since AI models store their intelligence as numerical weights that are updated during the training process. But even with such advanced technology, the court in this case declined to hold that the use was transformative, based largely on its ultimate purpose of competing with the owner of the original works.
Also noteworthy is that the court declined to find fair use for an AI technology that is not generative AI. Even though the output from ROSS’s AI system was uncopyrighted verbatim quotes from court opinions (and not the original copyrighted headnotes), there was no fair use.
This could have broader consequences for large language models (LLMs) and generative AI technologies. When judges in other pending generative AI cases consider both the training step (as in ROSS) and the output generation step for a generative AI technology (e.g., an AI-generated image), it could be even less likely that fair use will apply. Here, the court emphasized that “factor four is undoubtedly the single most important element of fair use.” So, if AI-generated content, including that produced by LLMs, is substantially similar to an original work and has a detrimental effect on the market for the original work (e.g., puts an artist out of business), a finding of fair use may be less likely.
Our “fair warning” is this: AI developers and deployers should continue to monitor ongoing AI litigation, while considering the market implications of the use of copyrighted materials for training AI models or distributing AI-generated output. Because fair use is heavily dependent upon the facts, we anticipate different rulings from different courts, particularly where the commercial use of the original content is not as clear-cut as it was here.
The State of the Funding Market for AI Companies: A 2024 – 2025 Outlook
Artificial intelligence (AI) has emerged as an influential technology, driving notable investments across various industries in recent years. In 2024, venture capital (VC) funding for AI companies reached record levels, signaling ongoing interest and optimism in the sector’s potential. Looking ahead, 2025 is anticipated to bring continued innovation, with promising funding opportunities and a growing IPO market for AI-driven businesses.
VC Funding in 2024: A Year of Growth
Global VC investment in AI companies saw remarkable growth in 2024, as funding to AI-related companies exceeded $100 billion, an increase of over 80% from $55.6 billion in 2023. Nearly 33% of all global venture funding was directed to AI companies, making artificial intelligence the leading sector for investments. This marked the highest funding year for the AI sector in the past decade, surpassing even peak global funding levels in 2021. This growth also reflects the increasing adoption of AI technologies across diverse sectors, from healthcare to transportation and more, and the growing confidence of investors in AI’s transformative potential.
Industries Attracting Funding
The surge in global venture capital funding for AI companies in 2024 was driven by diverse industries adopting AI to innovate and solve complex problems. This section explores some of the industries that captured significant investments and highlights their transformative potential.
Generative AI. Generative AI, which includes technologies capable of creating text, code, images, and synthetic data, has experienced a remarkable surge in investment. In 2024, global venture capital funding for generative AI reached approximately $45 billion, nearly doubling from $24 billion in 2023. Late-stage VC deal sizes for GenAI companies have also skyrocketed from $48 million in 2023 to $327 million in 2024. The growing popularity of consumer-facing generative AI programs like Google’s Bard and OpenAI’s ChatGPT has further fueled market expansion, with Bloomberg Intelligence projecting the industry to grow from $40 billion in 2022 to $1.3 trillion over the next decade. As a result, venture capitalists are increasingly focusing on GenAI application companies—businesses that build specialized software using third-party foundation models for consumer or enterprise use. This new wave of AI identifies patterns in input data and generates realistic content that mimics the features of its training data. Models like ChatGPT generate coherent and contextually relevant text, while image-generation tools such as DALL-E create unique visuals from textual descriptions.
Healthcare and Biotechnology: The healthcare and biotechnology industries have seen a significant surge in AI integration, with startups harnessing the power of artificial intelligence for diagnostics, drug discovery, and personalized medicine. In 2024, these AI-driven companies captured a substantial share of venture capital funding. Overall, venture capital investment in healthcare rose to $23 billion, up from $20 billion in 2023, with nearly 30% of the 2024 funding directed toward AI-focused startups. Specifically, biotechnology AI attracted $5.6 billion in investment, underscoring the growing confidence in AI’s ability to revolutionize healthcare solutions. As AI continues to evolve, its impact on diagnostics and personalized treatments is expected to shape the future of patient care, driving innovation across the sector.
Financial Technology: Fintech, short for financial technology, refers to the use of innovative technologies to enhance and automate financial services. It includes areas like digital banking, payments, lending, and investment management, offering more efficient, accessible, and cost-effective solutions for consumers and businesses. In recent years, AI has become an important tool in fintech, helping to improve customer service through chatbots, enhance fraud detection with machine learning algorithms, automate trading, and personalize financial advice. While overall fintech investment in 2024 has dropped to around $118.2 billion, down from $229 billion in 2021, AI in fintech remains a high-growth area, valued at $17 billion in 2024 and projected to reach $70.1 billion by 2033. This reflects a strong and sustained interest in leveraging AI to revolutionize financial services despite broader investment slowdowns in the sector.
Trends for 2025
In 2025, VC investments in AI companies are continuing the momentum from previous years. Global venture funding totaled $26 billion in January 2025, of which AI-related companies garnered $5.7 billion, accounting for 22% of overall funding. However, despite the continued interest in AI investment, the investment strategies in 2025 may evolve from the approaches seen in 2024, as market dynamics shift and investors adapt to new challenges and opportunities.
In 2024, the investment strategy was heavily characterized by aggressive funding and rapid scaling. Investor focus appeared to be on capitalizing on the hype around AI technology, leading to substantial valuations and rapid deal cycles. VCs prioritized pure innovation, eagerly backing groundbreaking technologies regardless of immediate profitability, which drove significant investments in cutting-edge research and experimental applications. However, this approach often led to inflated valuations.
On the other hand, the investment landscape in 2025 is expected to shift with VCs adopting more disciplined and strategic investment approaches. The focus now appears to be on sustainable growth and profitability. Investors are predicted to become more selective, favoring companies with solid fundamentals and proven business models to navigate economic uncertainties.
Regulatory concerns are also playing an increasingly significant role in shaping VC investment strategies in AI. Governments worldwide are ramping up efforts to regulate AI technologies to address issues such as data privacy, algorithmic bias, and security risks. For instance, in the United States, regulatory scrutiny is also intensifying, with lawmakers proposing new frameworks to ensure transparency and accountability in AI algorithms. This includes discussions about mandating audits of high-risk AI systems and potentially introducing liability rules for AI-generated content. These evolving regulatory landscapes are contributing to market unpredictability, as startups may face heightened compliance burdens and legal uncertainties. As a result, while the enthusiasm for AI investments remains high, the 2025 strategy is marked by increased due diligence and a more calculated approach, reflecting a growing emphasis on navigating complex regulatory landscapes.
Resurgence of Initial Public Offerings (IPOs)
In 2025, the IPO market for AI companies is expected to be a significant area of focus, driven by a combination of strong growth in the sector and favorable market conditions. A major window for the IPO market could be opening. Analysts attribute this rebound to factors such as markets reaching new highs, stabilized interest rates, a strong economy, and a clearer understanding of the new administration’s plans following the recent election. The favorable market environment for these companies is supported by a solid U.S. economy, which is expected to grow by 2.3% in 2025.
Several major AI players are preparing to enter the public markets. One of the most anticipated IPOs is that of Databricks, an AI-driven data analytics platform that has raised nearly $14 billion in funding, most recently at a $62 billion valuation. The company has expressed intentions to go public in 2025, indicating a favorable outlook for the sector. Additionally, companies like CoreWeave, an AI cloud platform based in New Jersey, are expected to follow with their own IPOs later in the year, further fueling the optimism around AI investments. Crunchbase News highlights that there are at least 13 other AI startups with strong IPO potential in 2025. This IPO pipeline is a reflection of the broader momentum within the AI sector.
Despite these positive indicators, economic challenges such as trade tensions, inflationary pressures, and concerns over policy decisions add a layer of complexity to the market. For example, trade tensions could contribute to rising manufacturing costs, which could put pressure on companies that rely on global supply chains. Tariffs could contribute to inflationary pressures, which could dampen consumer spending and overall economic growth. These challenges highlight the need for companies to navigate an evolving landscape where trade policies and inflationary concerns could impact their growth trajectories.
Despite these hurdles, the IPO market remains buoyed by investor confidence, particularly in AI. As AI companies continue to develop new applications across industries, the appetite for public offerings remains strong. The favorable market environment for these companies suggests that AI will be a key focus for investors seeking sustainable growth opportunities in 2025.
Conclusion
The AI funding landscape in 2024 demonstrated the technology’s transformative potential across industries. As we move into 2025, investors and companies alike will need to navigate evolving market dynamics and regulatory landscapes. The IPO market, too, holds promise, provided companies are well-prepared to meet investor expectations surrounding sustainable growth and profitability.
AI and Blockchain – 1+1=3
Individually, AI and blockchain are among the hottest, most transformative technologies. Collectively, they are incredibly synergistic – hence the 1+1=3 concept in the title. We are seeing more examples of how the two will interact. Over time, the level of interaction will be extensive. Many projects are being developed that bring the power of AI to blockchain applications and vice versa. One of these projects that has garnered significant attention is the Virtuals Protocol. The project launched in October 2024 via integration with Base, an Ethereum layer-2 network. Just recently, the project announced that it is expanding to Solana.
The Virtuals Protocol is a decentralized platform for buying, trading, and creating AI agents. It transforms AI agents into tokenized, revenue-generating assets. By leveraging blockchain technology, Virtuals Protocol enables the creation, co-ownership, and interaction with AI agents, expanding their potential across various applications.
AI agents are software programs that can interact with their environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals. See “What are AI Agents?” for more information.
How Virtuals Protocol Works
The Virtuals Protocol integrates AI agents, blockchain infrastructure, and tokenization to create a scalable, decentralized ecosystem. Here’s a breakdown of how it operates, followed by a short illustrative code sketch:
Agent Tokenization: AI agents are minted as ERC-20 tokens with fixed supplies, paired with $VIRTUAL in locked liquidity pools. These tokens are deflationary through buyback-and-burn mechanisms.
G.A.M.E Framework: Agents utilize multimodal AI capabilities, such as text generation, speech synthesis, gesture animation, and blockchain interactions. This framework allows agents to adapt in real-time.
Revenue Routing: Agents earn revenue through inference fees, app integrations, or user interactions. The proceeds flow into their on-chain wallets for buybacks or treasury growth.
Memory Synchronization: Agents retain cross-platform memory through a Long-Term Memory Processor, ensuring user-specific, contextual continuity.
Decentralized Validation: Contributions and model updates are governed by a Delegated Proof of Stake (DPoS) system, ensuring agent performance aligns with community standards.
On-Chain Wallets: Each agent operates an ERC-6551 wallet, enabling autonomous transactions, asset management, and financial independence.
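To make the tokenization layer concrete, here is a minimal sketch that reads an agent token’s on-chain ERC-20 state with the web3.py library. The RPC URL and token address are placeholders to be replaced; nothing here is specific to Virtuals beyond the fact that agent tokens follow the standard ERC-20 interface. Tracking total supply over time is one way to observe the deflationary buyback-and-burn mechanics described above.

```python
from web3 import Web3

# Minimal ERC-20 ABI fragment covering only the read-only calls used below.
ERC20_ABI = [
    {"name": "totalSupply", "inputs": [], "outputs": [{"type": "uint256"}],
     "stateMutability": "view", "type": "function"},
    {"name": "decimals", "inputs": [], "outputs": [{"type": "uint8"}],
     "stateMutability": "view", "type": "function"},
]

def read_token_supply(rpc_url: str, token_address: str) -> float:
    """Return the human-readable total supply of any ERC-20 token.

    Because agent tokens are standard ERC-20 contracts with fixed supplies,
    the same call works for them; the caller supplies a real token address.
    """
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    token = w3.eth.contract(
        address=Web3.to_checksum_address(token_address), abi=ERC20_ABI
    )
    raw_supply = token.functions.totalSupply().call()
    decimals = token.functions.decimals().call()
    return raw_supply / 10**decimals

# Usage (both values are illustrative placeholders, not real endpoints/contracts):
#   read_token_supply("https://mainnet.base.org", "0x<agent-token-address>")
```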
What Virtuals Do
The Virtuals Protocol redefines digital engagement across gaming, entertainment, and decentralized economies. By simplifying AI adoption, rewarding contributors, and lowering barriers for non-experts, it creates a scalable ecosystem that delivers value for stakeholders. The platform’s agents collectively hold a valuation of over $850 million at the time of publishing, led by Mentigent and aidog_agent. Ownership of these two tokenized AI agents is fractionalized; each is held by more than 200 owners who receive a share of the revenue generated.
Sample Legal Issues Associated with Virtuals
As with any emerging technology, Virtuals Protocol faces several legal challenges:
Intellectual Property Rights: The creation and use of AI agents raise questions about the ownership and protection of intellectual property. Ensuring that creators and users have clear rights and protections is crucial;
Data Privacy: AI agents collect and process vast amounts of user data, raising concerns about data privacy and security. Robust safeguards are necessary to protect user information;
Liability and Safety Standards: Ensuring the safety and reliability of AI agents is essential. Legal frameworks must address potential liabilities and establish safety standards to protect users; and
Regulatory Compliance: As AI and blockchain technologies evolve, regulatory compliance becomes increasingly complex. Virtuals Protocol must navigate various legal requirements to ensure its operations remain lawful and ethical.
Securities Laws: The tokenization of AI agents as ERC-20 tokens and the fractionalized ownership of high-value AI agents may attract scrutiny under securities laws. If the SEC deems these tokens to be investment contracts under the Howey Test, the project could face enforcement actions, requiring registration. See here for our discussion on the SEC’s gameplan for crypto under Trump.
Consumer Finance Laws: The collection and processing of user data by AI agents could subject the project to data privacy and consumer protection regulations. Furthermore, if promotional efforts are perceived as deceptive or unfair to users or investors, this could lead to enforcement actions under federal or state consumer protection laws. To the extent revenue-sharing models are subject to consumer protection laws, this could trigger requirements for fair and clear disclosures to fractionalized owners.
AI-Specific Regulations: The Federal Trade Commission (FTC) has issued guidance emphasizing the importance of transparency and honesty in the use of AI, and cautioning against deceptive practices such as making misleading claims about AI capabilities or results. Overstating the capabilities or revenue generating potential of AI agents to attract users or investors could lead to increased regulatory scrutiny and enforcement. Proposed federal legislation, such as the Algorithmic Accountability Act, would require projects like Virtuals to assess the impacts of bias and discrimination on automated decision-making systems, including AI. AI agents may require audits for bias, transparency, and accountability, particularly given their use in user interactions and decision-making.
Despite the novel legal issues Virtuals Protocol presents, the project represents an exciting and significant advancement in the integration of AI and blockchain technologies. By transforming AI agents into tokenized assets, it creates new opportunities for digital engagement and revenue generation. However, addressing the associated legal issues is essential to ensure user trust and the platform’s sustainable growth.
New York Proposal to Protect Workers Displaced by Artificial Intelligence
On 14 January 2025, during her State of the State Address (the Address), New York Governor Kathy Hochul announced a new proposal aimed at supporting workers displaced by artificial intelligence (AI).1 This proposal would require employers to disclose whether AI tools played a role in mass layoffs or closings subject to New York’s Worker Adjustment and Retraining Notification Act (NY WARN Act). Governor Hochul announced that she is directing the New York State Department of Labor (DOL) to enact and enforce this requirement. The DOL does not have a timeline for implementing the new requirement, and Labor Commissioner Roberta Reardon acknowledged that “defining what counts as an AI-related layoff would be a challenge [to implementation].”2
In the Address, Governor Hochul acknowledged the benefits of AI, stating, “[innovations in AI] have the ability to change the way businesses operate, leading to greater efficiency, fewer business disruptions, and increased responsiveness to customer needs.” However, the implementation of AI tools in the workplace leads to increased automation, which may result in increased job loss, wage stagnation or loss, reduced hiring, lack of job satisfaction, and skill obsolescence—all of which are major concerns for US workers.3
The primary goals of imposing these employer disclosures are to: (i) aid transparency and gather data on the impact of AI technologies on employment and employees; and (ii) ensure the integration of AI tools into the workforce creates an environment where workers can thrive.
Implications for Employers
Disclosure Requirement
Employers in New York will need to disclose in their NY WARN Act notices whether layoffs are due to the implementation of AI tools replacing employees.
Scope
While specific details about the scope of the new disclosure requirement are not yet available, employers should prepare for this additional obligation as part of the existing complex notice requirements under the NY WARN Act.4
Compliance
Employers contemplating a NY WARN Act-triggering event should consult with legal counsel to ensure compliance with these disclosure requirements and expanded NY WARN Act obligations.
NY WARN Act
The Worker Adjustment and Retraining Notification Act (WARN Act) is a federal law that requires covered employers to provide employees with 60-day advance notice before closing a plant or conducting a mass layoff.5 The purpose of the WARN Act is to give workers and their families time to adjust to potential layoffs and to seek or train for new jobs.6 New York is one of 18 states with its own “mini-WARN Act.” The NY WARN Act imposes stricter requirements than the federal WARN Act. For example, the NY WARN Act applies to employers with 50 or more employees while the federal WARN Act applies to employers with 100 or more employees. The NY WARN Act also requires a 90-day advance notice, compared to the 60-day notice required under federal law. The early warning notices of closures and layoffs are provided to affected employees, their representatives, and the Department of Labor and local officials. If Governor Hochul’s proposal is enforced, NY WARN Act notices will also need to include the required AI disclosure.
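For a concrete feel of the difference, the sketch below encodes only the two headcount and notice-period thresholds described above (federal WARN: 100 or more employees and 60 days’ notice; NY WARN: 50 or more employees and 90 days’ notice). It deliberately ignores the many other coverage triggers and exceptions, so treat it as a teaching illustration, not compliance logic.

```python
def required_notice_days(employee_count: int, in_new_york: bool) -> int | None:
    """Return the advance-notice period implied by the headcount thresholds
    described above, or None if neither statute's employer-size test is met.

    Simplified: real WARN coverage also turns on layoff size, site, timing,
    and statutory exceptions, none of which are modeled here.
    """
    if in_new_york and employee_count >= 50:
        return 90   # NY mini-WARN: lower headcount threshold, longer notice
    if employee_count >= 100:
        return 60   # federal WARN baseline
    return None

# A 75-employee employer illustrates the gap between the two statutes:
print(required_notice_days(75, in_new_york=True))   # 90 (covered by NY WARN)
print(required_notice_days(75, in_new_york=False))  # None (below federal threshold)
```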
Takeaways for Employers
Employers should be well versed in how AI tools are being used and the impact they are having on workers, especially if such impacts may lead to mass layoffs. Specifically, legal and human resources leaders should understand how the business is automating certain processes through AI tools and the implications the tools have on headcount requirements, employee job satisfaction and morale.
Our Labor, Employment, and Workplace Safety lawyers regularly counsel clients on a wide variety of issues related to emerging issues in labor, employment, and workplace safety law, and are well-positioned to provide guidance and assistance to clients on AI developments.
Footnotes
1 https://www.governor.ny.gov/news/governor-hochul-announces-new-proposals-support-small-businesses-and-boost-economic-growth
2 https://news.bloomberglaw.com/product/blaw/bloomberglawnews/exp/eyJpZCI6IjAwMDAwMTk0LTcxNTYtZDIzYy1hYmZjLTc1ZmU5NDhiMDAwMSIsImN0eHQiOiJETE5XIiwidXVpZCI6ImhqMGRvcTNKdGdrSkpKckZyL01QaUE9PU9seW0rTExPbVdiODlZZ1N6aWtDZHc9PSIsInRpbWUiOiIxNzM3Mzc0NjI2MDg5Iiwic2lnIjoiYTZXMnkwZnczcGZ3SnVpdlFrclV0S3FERFlnPSIsInYiOiIxIn0=?source=newsletter&item=body-link&region=text-section&channel=daily-labor-report
3 https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity#:~:text=Roughly%20half%20the%20exposed%20jobs,of%20these%20jobs%20may%20disappear; https://cepr.org/voxeu/columns/workers-responses-threat-automation.
4 12 NYCRR Part 92
5 https://www.dol.gov/general/topic/termination/plantclosings
6 Id.
Are Employees Receiving Regular Data Protection Training? Are They AI Literate?
Employee security awareness training is a best practice and a “reasonable safeguard” for protecting the privacy and security of an organization’s sensitive data. The list of data privacy and cybersecurity laws mandating employee data protection training continues to grow and now includes the EU AI Act. The following list is a high-level sample of employee training obligations.
EU AI Act. Effective February 2, 2025, Article 4 of the Act requires all providers and deployers of AI models or systems to ensure their workforce is "AI literate." This means training workforce members to achieve a sufficient level of AI literacy, taking into account factors such as the intended use of the AI system. Training should incorporate privacy and security awareness given the potential risks. Notably, the Act applies broadly and has extraterritorial reach. As a result, this training obligation may apply to organizations including, but not limited to:
providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country (e.g., U.S.);
deployers of AI systems that have their place of establishment or are located within the Union; and
providers and deployers of AI systems that have their place of establishment or are located in a third country (e.g., U.S.), where the output produced by the AI system is used in the Union.
California Consumer Privacy Act, as amended (CCPA). Cal. Code Regs. tit. 11, sec. 7100 requires that all individuals responsible for the business's compliance with the CCPA, or involved in handling consumer inquiries about the business's information practices, be informed of all of the requirements of the CCPA, including how to direct consumers to exercise their rights under the CCPA. Under the CCPA, "consumer" means a California resident and includes employees, job applicants, and individuals whose personal data is collected in the business-to-business context.
HIPAA. Under HIPAA, a covered entity or business associate must provide HIPAA privacy training as well as security awareness training to all workforce members. Note that this training requirement may apply to employers in their role as a plan sponsor of a self-insured health plan.
Massachusetts WISP law (201 CMR 17.03). Organizations that own or license personal information about a resident of the Commonwealth are subject to a duty to protect that information. This duty includes implementing a written information security program that addresses ongoing employee training.
23 NYCRR 500. The New York Department of Financial Services' cybersecurity regulation for financial services companies requires covered entities to provide cybersecurity personnel with cybersecurity updates and training sufficient to address relevant cybersecurity risks.
Gramm-Leach-Bliley Act and the Safeguards Rule. The Safeguards Rule requires covered financial institutions to implement a written information security program to safeguard non-public information. The program must include employee security awareness training. In 2023, the FTC expanded the definition of financial institutions to include additional industries such as automotive dealerships and retailers that process financial transactions.
EU General Data Protection Regulation (“EU GDPR”). Under Art. 39 of the EU GDPR, the tasks of a Data Protection Officer include training staff involved in the organization’s data processing activities.
In addition to the above, express or implied security awareness training obligations appear in numerous other laws and regulations, applying to, among others, certain Department of Homeland Security contractors, licensees under state insurance laws modeled on the NAIC Insurance Data Security Model Law, and organizations that process credit card payments in accordance with PCI DSS.
Whether mandated by law or implemented as a best practice, ongoing employee privacy and security training plays a key role in safeguarding an organization’s sensitive data. Responsibility for protecting data is no longer the sole province of IT professionals. All workforce members with access to the organization’s sensitive data and information systems share that responsibility. And various stakeholders, including HR professionals, play a vital role in supporting that training.
Fair Use Falls Short: Judge Bibas Rejects AI Training Data Defense in Thomson Reuters v. ROSS
Fair use — a critical defense in copyright law that allows limited use of copyrighted material without permission — has emerged as a key battleground in the wave of artificial intelligence (AI) copyright litigation. In a significant revision of his earlier position, Judge Stephanos Bibas, sitting by designation in the United States District Court for the District of Delaware, has dealt a blow to AI companies by rejecting the defense in Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc.
Fair use serves as a safety valve in copyright law, permitting uses of copyrighted works for purposes such as criticism, commentary, news reporting, teaching, scholarship, or research. Courts evaluate fair use through four factors, with particular emphasis on whether the use transforms the original work and how it affects the market for the copyrighted work.
The case originated when Thomson Reuters sued ROSS Intelligence for using content from Westlaw — the legal research platform’s headnotes and proprietary Key Number System — to train an AI-powered legal research competitor. ROSS had initially sought to license Westlaw content, but when Thomson Reuters refused, ROSS turned to a third-party company, LegalEase Solutions. LegalEase created approximately 25,000 legal question-and-answer pairs, allegedly derived from Westlaw’s copyrighted content, which ROSS then used to train its AI system.
In September 2023, Bibas denied Thomson Reuters’ motion for summary judgment on fair use, finding that the question was heavily fact-dependent and required jury determination. He particularly emphasized factual disputes about whether ROSS’s use was transformative and how it affected potential markets.
However, in August 2024, Bibas took the unusual step of continuing the scheduled trial and inviting renewed summary judgment briefing. His revised opinion, issued in February 2025, represents a dramatic shift, explicitly acknowledging that his "prior opinion wrongly concluded that I had to send this factor to a jury." While noting that fair use involves mixed questions of law and fact, Bibas recognized that the ultimate determination in this case "primarily involves legal work."
Bibas’ unusual move to invite renewed briefing stemmed from his realization, upon deeper study of fair use doctrine, that his earlier ruling afforded too much weight to factual disputes and did not fully account for how courts should assess transformative use in AI-related cases. Rather than viewing transformation through the lens of whether ROSS created a novel product, Bibas recognized that the key question was whether ROSS’s use of Thomson Reuters’ content served substantially the same purpose as the original works.
He concluded that while fair use involves factual elements, the dispositive questions in the case were ultimately legal ones appropriate for resolution by the court. His key analytical shifts included rejecting the notion that ROSS’s use might be transformative merely because it created a “brand-new research platform.”
Critically, Bibas distinguished ROSS’s use from cases like Google v. Oracle where copying was necessary to access underlying functional elements. While Google needed to copy Java API code to enable interoperability between software programs, ROSS had no similar technical necessity to copy Westlaw’s headnotes and organizational system. ROSS could have developed its own legal summaries and classification scheme to train its AI — it simply found it more expedient to build upon Thomson Reuters’ existing work.
The concept of “transformative” use lies at the heart of the fair use analysis. This principle was recently examined by the Supreme Court in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, where the Court significantly narrowed the scope of transformative use under the first fair use factor. The Court held that when two works serve “substantially the same purpose,” the fact that the second work may add “new expression, meaning, or message” is not enough to tip the first factor in favor of fair use.
This framework for analyzing transformative use provides important context for understanding how courts may evaluate AI companies’ fair use defenses. AI companies have consistently argued that their use of copyrighted materials for training data is transformative because the AI systems learn patterns and relationships from the works rather than reproducing their expressive content. They contend that this process fundamentally transforms the original works’ purpose and character. However, Bibas’ ruling suggests courts may be increasingly skeptical of such arguments, particularly when the resulting AI products compete in similar markets to the original works.
While this ruling represents a setback for one of the key defenses believed to be available to AI companies in copyright litigation, fair use is only one of several defenses these companies are raising in more than 30 pending lawsuits. Other defenses include arguments about copyrightability, substantial similarity, and whether training data uses constitute copying at all. A weakening of the fair use defense, while significant, does not necessarily predict the ultimate outcome of these cases.
Additionally, this case involved a particularly direct form of market competition — an AI system trained on legal content to compete with the original legal research platform. Other cases involving different types of training data or AI applications that don’t directly compete with the source materials might be distinguished. For instance, an AI trained on literary works to generate news articles might present a more compelling case for transformative use since the end product serves a fundamentally different purpose than the training data.
Nevertheless, Bibas’ ruling may alter how AI companies approach training data acquisition. If other courts follow his lead in viewing fair use primarily as a legal rather than factual determination, these companies may explore licensing agreements with copyright holders — a process that some have already undertaken.
Clock is Ticking for Responses to UK Government Consultation on Copyright and Artificial Intelligence
Ever since the emergence of generative AI, a major concern for all participants has been the extent to which copyright works can and should be used in training AI models.
The application of UK copyright law for this purpose is disputed, leading to inevitable high-profile tension between, on one hand, rights holders keen to control and be paid for use of their work and, on the other, developers who argue that this legal uncertainty is undermining investment in, and the development of, AI in the UK.
Whilst cases are making their way through the courts in the UK and further afield (such as in Germany[1] and the US[2]) on this issue, there have been frequent calls for specific legislation (including by the UK government itself, which has publicly stated that the status quo cannot continue).
As a result, the UK government has launched a consultation[3] open until 25 February 2025 inviting interested parties to submit feedback on potential changes to UK copyright legislation in light of AI. The options set out in the consultation, and on which feedback is sought, range from doing nothing through to the introduction of broad data mining rights which would allow use of copyright works for AI training (including for commercial use), without rights holders’ permission and subject to few or no restrictions.
The Options
The consultation invites feedback on four potential options under consideration:
Do nothing and leave UK copyright and other related laws as they are – essentially, this would defer the matter to the courts to resolve on a piecemeal and ad hoc basis. Whilst feedback on this option is invited, the consultation makes clear that it is not looked upon favourably by the government, given that it would prolong the current legal uncertainty.
The opt-in model requiring licensing in all cases – this would strengthen copyright protection for rights holders by providing that AI models could only be trained on copyright works in the UK if an express licence to do so has been granted. This option is likely to be popular with rights holders, but it is at odds with the government's desire to turbocharge the AI economy in the UK.
A broad data mining exception – this would follow a similar approach already seen in Singapore[4] (and to some extent in the US under its “fair use” standards) and allow data mining on copyrighted works in the UK, including for AI training, without the rights holder’s permission. Under this approach, copyrighted material could be used for commercial purposes, and would be subject to few, if any, restrictions. Needless to say, this option is likely to be very popular with AI developers but is the least favoured by rights holders.
Allow data mining but permit copyright holders to reserve their rights, along with increased transparency measures – this is the middle ground between options 2 and 3 and would allow AI developers to train AI models using material to which they have lawful access, but only to the extent that rights holders have not expressly reserved their rights. Any such use would also be subject to robust transparency measures requiring developers to disclose what material their AI models have been trained on. For rights holders, this means an "opt-out" rather than an "opt-in" model, plus proactive monitoring to identify unauthorised use.
The Unanswered Questions
Option 4 broadly follows the approach already seen in the EU under the not uncontroversial text and data mining exception in the EU Directive on the Digital Single Market,[5] which the EU AI Act[6] subsequently declared applicable to general-purpose AI models. The government's view expressed in the consultation is that option 4 would best balance the rights of all participants, although the EU approach was rejected by the previous government as a threat to rights holders' interests.
However, at this stage it does not represent a “silver bullet” as many issues would still need to be resolved, including those set out below:
It is unclear how a "rights reserved" model would work in practice and how exactly copyright owners would be able to reserve their rights. The EU equivalent provision requires opt-outs to be machine readable (see the illustrative sketch after this list), but it is unclear how that works once multiple copies of a work are in circulation. There is also the question of what "machine-readable" means in the context of machines designed to read anything (including handwriting).
How would such a model apply to works that are already publicly available, and how does it address works that have already been used to train current AI models? Legislation rarely has retroactive effect, which leaves open what will happen to works mined before the effective date of any new legislation and could leave early entrants into the AI market with a huge advantage (or disadvantage), depending on the shape that future legislation and court cases take.
Does this apply to works in non-digital formats? The EU legislation on data mining specifically refers to an "automated analytical technique aimed at analysing text and data in digital form." But how does this apply to books that are scanned in (a process Google went through many years ago)?
What happens to AI models if all, or a significant proportion of, rights holders opt out? An AI opt-out could soon become ubiquitous, at which point developers could find themselves wading through a significant volume of claims, making the UK an unpopular location for AI development.
How will rights holders know that their material has been used? The consultation states that robust measures will be put in place to ensure that developers are transparent about the works their models are trained on, but what will the penalties be for failing to be transparent, and will there be robust enforcement against non-compliance?
To what extent can and should any new legislation have extraterritorial application? With many major AI players headquartered outside the UK, legislation limited to those based in the UK may have limited impact, and an increased legislative burden could make the UK a less attractive location for AI businesses.
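By way of illustration of the "machine-readable" question above: one mechanism that already exists is the W3C community-drafted Text and Data Mining Reservation Protocol (TDMRep), under which publishers can signal a rights reservation through a well-known JSON file or an HTTP response header. The minimal Python sketch below shows how a crawler might check for such a signal before mining a page. It is a simplified illustration only: it assumes the TDMRep conventions as we understand them (a /.well-known/tdmrep.json file and a tdm-reservation header), reduces the protocol's location matching to a plain prefix match, and uses a placeholder URL. The consultation itself does not mandate this or any other protocol.

```python
# Illustrative sketch only: check a page for a TDM rights reservation using
# conventions from the W3C TDM Reservation Protocol (TDMRep). This is one
# existing "machine-readable" opt-out mechanism, not anything prescribed by
# the UK consultation; example.com is a placeholder, not a real rights holder.
import json
import urllib.request
from urllib.parse import urlsplit


def tdm_reserved(page_url: str) -> bool:
    """Return True if the publisher appears to have reserved TDM rights."""
    # 1. Site-wide declaration: a JSON file at /.well-known/tdmrep.json
    #    listing locations with "tdm-reservation": 1 (reserved) or 0.
    #    (Simplified here to a prefix match on the path.)
    parts = urlsplit(page_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    try:
        with urllib.request.urlopen(origin + "/.well-known/tdmrep.json") as resp:
            for rule in json.load(resp):
                if parts.path.startswith(rule.get("location", "/")):
                    return rule.get("tdm-reservation") == 1
    except Exception:
        pass  # no readable site-wide file; fall through to per-resource check

    # 2. Per-resource declaration: a "tdm-reservation: 1" HTTP response header.
    try:
        req = urllib.request.Request(page_url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            if resp.headers.get("tdm-reservation") == "1":
                return True
    except Exception:
        pass

    return False  # no reservation signal found


if __name__ == "__main__":
    url = "https://example.com/articles/some-work.html"  # placeholder URL
    print("TDM rights reserved?", tdm_reserved(url))
```

Even this simple sketch surfaces the difficulty flagged in the list above: the reservation signal is attached to the publisher's own site, so it travels with none of the copies of the work scraped from elsewhere.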
Ultimately the outcome may be collective licensing deals between rights holders and AI developers as has already happened for a number of news outlets and websites. However, that will be reliant on collective will and action by rights holders and a willingness to embrace AI, which so far has not been forthcoming.
[1] Breaking News from Germany! Hamburg District Court breaks new ground with judgment on the use of copyrighted material as AI training data | Global IP & Technology Law Blog
[2] Copyright Office: Copyrighting AI-Generated Works Requires “Sufficient Human Control Over the Expressive Elements” – Prompts Are Not Enough | Global IP & Technology Law Blog
[3] Copyright and Artificial Intelligence – GOV.UK
[4] Artificial Intelligence and Intellectual Property Legal Frameworks in the Asia-Pacific Region | Global IP & Technology Law Blog
[5] EU Directive 2019/790.
[6] EU Regulation 2024/1689.
Sumaiyah Razzaq contributed to this article