Court Definitively Rejects Fair Use Defense in AI Training Case

In one of the most closely watched copyright cases this year, a Delaware court rejected defendant ROSS Intelligence’s (“ROSS”) fair use and other defenses, vacating its previous stance and granting summary judgment in favor of plaintiff Thomson Reuters (“Reuters”). The case stems from allegations that ROSS used copyrighted material from Reuters’ legal research platform, Westlaw, to train its artificial intelligence (“AI”)-driven legal research engine. While the decision will certainly be informative to the dozens of pending lawsuits against AI developers, it is important to note that the scope of the opinion is limited to the specific facts of this case. Specifically, generative AI was not involved, and the court’s analysis of the fair use factors was heavily focused on the fact that the parties are direct competitors.
The fair use analysis considered the four statutory factors, with factors one and four weighing most heavily.
Factor One—Purpose and Character of the Use: The court easily determined that ROSS’s use was commercial, as ROSS sought to profit from the copyrighted material without paying the customary price. Significantly, the court also found that the use was not transformative—ROSS used the copyrighted material to build a competing legal research tool, which closely resembled Westlaw’s functionality. Finally, the court rejected ROSS’s attempts to align its actions with a string of cases involving so-called “intermediate copying” of computer code, explaining that although the final commercial product ROSS presented to consumers was not a direct copy (the copying was an intermediate step), the copying was not necessary to achieve ROSS’s purpose. Accordingly, because the use lacked a different purpose or character, the court found this factor weighed against fair use.
Factor Two—Nature of the Copyrighted Work: While the court acknowledged that Reuters’ headnotes involved editorial judgment and the minimum amount of creativity required for copyright validity, they were not highly creative works. Rather, they were more akin to factual compilations, which receive less protection under copyright law. The court found that this factor weighed in favor of fair use, but noted that this factor rarely plays a significant role in fair-use determinations.
Factor Three—Amount and Substantiality of the Use: Although ROSS copied a significant number of Westlaw headnotes, its final product did not present them directly to users. Instead, ROSS used the headnotes to train its AI to help retrieve judicial opinions. Given that the public did not receive direct access to the copyrighted material, the court found that this factor weighed in favor of fair use.
Factor Four—Market Effect: The court explained that ROSS’s use created a market substitute for Reuters’ legal research platform, directly harming its core business. In addition, Reuters had a potential market for licensing its headnotes as AI training data, and ROSS’s use undermined that opportunity. The court emphasized that this “is undoubtedly the single most important element of fair use,” and found that it weighed decisively against ROSS and against a finding of fair use.
Putting it Into Practice: Some key takeaways from this decision and its potential impact on other AI cases:

this decision is fact-specific and does not address generative AI;
the decision signals the limits of fair use, particularly where copyrighted material is used for non-transformative purposes to develop a competing product; and
as AI litigation continues to evolve, companies developing or deploying AI should seek ongoing legal training and should maintain (and update) effective policies to manage the risks of both training AI models and using AI to generate content.

 

Legal AI Unfiltered: 16 Tech Leaders on AI Replacing Lawyers, the Billable Hour, and Hallucinations

With AI-powered tools promising efficiency gains and cost savings, AI is fundamentally changing the practice of law. But as AI adoption accelerates, major questions arise: Will AI replace lawyers? What does AI adoption mean for the billable hour? And can hallucinations ever be fully eliminated?
To explore these issues, we surveyed 16 tech leaders who are at the forefront of AI-driven transformation. They provided unfiltered insights on the biggest AI adoption challenges, AI’s effect on billing models, the potential for AI to replace lawyers, and the persistent problem of hallucinations in legal AI tools. Here’s what they had to say:
Why Are Law Firms Hesitant to Adopt AI Tools?
According to our survey of tech leaders, law firms’ hesitation in adopting AI is driven by several key factors, primarily concerns about accuracy, risk, and economic incentives. Many firms worry that AI tools can generate incorrect or misleading information while presenting it with unjustified confidence, making mistakes hard to detect. Additionally, larger firms that rely on the billable hour see efficiency-driven AI as a potential threat to their revenue models. Other firms lack a clear AI strategy, making AI adoption and integration difficult. Trust, data privacy, and liability remain major concerns.
More specifically, here’s what tech leaders had to say about law firm hesitancy in adopting AI:
Scott Stevenson, CEO, Spellbook:

Daniel Lewis, CEO, LegalOn Technologies: “Law firms are hesitant to adopt AI over risk and liability concerns — accuracy and client confidentiality matter most. They need professional-grade AI that is accurate and secure. Solve that, and firms will break through business and organizational barriers—unlocking immense value for themselves and their clients.” 
Kara Peterson, Co-Founder, descrybe.ai: “Because you can’t really count on it to be right—at least not yet. And unlike humans, when AI is unsure, it doesn’t admit it. In fact, it speaks with great authority and persuasiveness even when it is completely wrong. This means human lawyers must double-check everything because the errors are hard to spot. For many firms, this is simply too big a barrier to overcome. They are waiting for more reliable and error-free tools before jumping in.” 
Katon Luaces, President & CTO, PointOne: “The main reason law firms are hesitant to adopt AI is reliability.” 
Ted Theodoropoulos, CEO, Infodash: “A primary reason law firms hesitate to adopt AI is the absence of a comprehensive strategy. According to Thomson Reuters’ 2024 Generative AI in Professional Services survey, only 10% of law firms have a generative AI policy. Policies typically stem from well-defined strategies; without a clear strategy, formulating effective policies becomes challenging. Consequently, it’s likely that fewer than 10% of firms possess an AI strategy. Often, firms appoint C-suite or director-level AI/innovation roles without a pre-established strategy, expecting these individuals to develop one. However, strategic planning is most effective when initiated from the top down, and lacking this foundation can lead to unsuccessful AI integration.” 
Dorna Moini, CEO/Founder, Gavel: “Law firms are mainly cautious because they need to ensure that any new technology meets their high standards for accuracy and confidentiality. They have built a reputation on careful, detailed work and worry that premature adoption might compromise quality. However, as AI improves and its track record strengthens, it can support lawyers in routine tasks without sacrificing the meticulous approach that defines legal practice.” 
Troy Doucet, Founder, AI.Law: “Fear. Perfect is currently the enemy of the good, and as that subsides, lawyers will use it more.” 
Colin Levy, Director of Legal, Malbek: “The risk of hallucinating, where a tool produces inaccurate or misleading information, is a key reason. Secondary to this, the lack of transparency around AI tools and the data they use and rely on is another cause of concern and hesitancy for many law firms.” 
Gil Banyas, Co-Founder & COO, Chamelio: “The #1 reason law firms are hesitant to adopt AI is the lack of urgency due to their billable hour business model. Since firms generate revenue based on time spent, there’s no immediate financial incentive to implement efficiency-boosting AI tools that could reduce billable hours.” 
Arunim Samat, CEO, TrueLaw: “Impact to the billable hour.” 
Greg Siskind, Co-founder, Visalaw.ai: “Concerns regarding answer quality.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “The answer depends on a law firm’s familiarity with AI and its awareness of the current legal AI market. Attorneys with little to no understanding of AI tend to be hesitant, often due to concerns about security, accuracy, and reliability. Those with some knowledge of AI and legal technology are more skeptical about its practical applications and return on investment, particularly given that many legal AI solutions require lengthy, complex implementations and significant change management.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “Data privacy and hallucinations are the most common concerns we hear from law firms. Lawyers want to ensure the data they provide will not go towards training any models, and they want to know that the outputs of models are reliable and grounded in truth.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “Law firms need to be sure that any AI tool they adopt will not compromise the precision or confidentiality required in legal work.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “Trust. Lawyers need to have complete confidence in their tools, and AI can sometimes feel like a ‘black box’—making decisions without clear explanations. If they don’t fully understand how AI reaches conclusions, it’s hard to trust it with high-stakes legal work. But here’s the thing—AI has been used in law for over a decade. Tools like Technology-Assisted Review (TAR) have proven to be as reliable as human review when used correctly. The hesitation isn’t really about whether AI can be trustworthy; it’s about transparency and control. The good news? With the right safeguards, oversight, and clear explanations of AI-driven decisions, law firms can use AI confidently. It’s not about replacing legal judgment—it’s about supporting it with smarter, faster tools.”

Where Does AI Excel, and Where Is It Still Overhyped?
Legal AI tools have made significant strides in the past two years, particularly in automating routine tasks that involve large volumes of data and well-defined processes. However, AI still struggles with more nuanced legal work that requires contextual understanding and strategic reasoning. Most legal tech leaders identified clear areas where AI is proving effective, alongside areas where expectations may still outpace reality.
Currently, AI excels in contract review, where it can analyze and summarize contracts with high accuracy. It is also highly effective in document review and due diligence, flagging inconsistencies, and surfacing relevant documents. Additionally, AI has reliably streamlined e-discovery, significantly reducing the time spent reviewing electronic documents. Another strength is its ability to summarize and extract data from documents. 
However, AI remains less reliable in legal brief writing, as it struggles with complex legal arguments and strategic reasoning. Similarly, while it can return results for case law research, it often fails to grasp legal context, hierarchy, and nuances—though some legal tech leaders hold differing views on its efficacy in this area. 
Legal tech leaders shared their insights into this “jagged frontier” of AI’s capabilities:

Scott Stevenson, CEO, Spellbook: “Excelling: Contract review and drafting; Hype: Litigation brief writing.” 
Daniel Lewis, CEO, LegalOn Technologies: “There is a ‘jagged frontier’ between what AI handles well and where it can improve. It excels at repetitive and time-consuming tasks with clear guardrails, like contract review, drafting, and some types of Q&A, while less structured tasks like legal research carry a higher risk of hallucination. Contract review stands out for its defined standards, verifiable outputs, and clear objectives.” 
Kara Peterson, Co-Founder, descrybe.ai: “AI is incredible at generating and interpreting text. However, it is not yet good at producing error-free, multistep legal workflows—though it is getting close. I wouldn’t call agentic AI in law “hype,” but it is still some distance away from being fully reliable.” 
Katon Luaces, President & CTO, PointOne: “The great majority of legal tasks have yet to be mastered by AI. However, there are many tasks frequently done by lawyers that AI is genuinely excelling at. For example, AI is excellent at administrative work such as filling in billing codes and writing time entry descriptions—tasks that aren’t legal work but have historically been done by lawyers.” 
Ted Theodoropoulos, CEO, Infodash: “AI is excellent at acting as a sounding board during ideation and bringing additional perspective to the creative process. That said, it’s not good at generating novel ideas as it is limited to the confines of its training data. AI is good at summarization, extraction, and classification but still has a lot of room for improvement. For higher risk tasks it shouldn’t be relied upon solely. Currently, AI is not good at understanding context and nuance. As the infamous Stanford paper on legal research tools pointed out last year, these tools misunderstand holdings, struggle to distinguish between legal actors, and fail to grasp the order of authority.” 
Dorna Moini, CEO/Founder, Gavel: “Today, AI is particularly effective at tasks like document review, legal research, and contract analysis. It can process large volumes of information quickly and flag important details for further review. On the other hand, AI is still far from being able to handle complex legal strategy or provide the nuanced judgment that experienced lawyers offer.” 
Colin Levy, Director of Legal, Malbek: “AI is still not great at handling complexity or ambiguity, but some tools are getting better. Currently existing tools, however, are really good at conducting basic legal research and document review and analysis. AI tools are best at specific and well-defined tasks.” 
Gil Banyas, Co-Founder & COO, Chamelio: “Genuinely Excelling: Document review & due diligence (finding relevant clauses, inconsistencies across contracts), legal research (case law/statute search, surfacing relevant precedents), and contract analysis (template comparison, risk flagging). Current hype: Legal writing from scratch, strategy/counseling, negotiation, and settlement work.” 
Arunim Samat, CEO, TrueLaw: “AI excels in document review for eDiscovery; however, prompt engineering complicates the measurement of CAL review metrics. Caselaw research remains little more than an advanced search function in a database, as AI models are not yet capable of formulating case strategies while accurately citing relevant case law. Teaching LLMs to shepardize effectively remains a complex challenge.” 
Greg Siskind, Co-founder, Visalaw.ai: “Practice management advice, legal research (via curated libraries), summarization, legal drafting and analysis.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “E-discovery (and other procedural solutions) are likely the most advanced so far, as they are easier to develop and face fewer challenges related to issues like hallucinations and transparency. Meanwhile, some tools in more substantive areas, such as legal research, are marketed with great enthusiasm but may not fully meet expectations just yet. However, that doesn’t mean they won’t get there—it may simply take more time for them to be perfected.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “AI is excelling at tasks where it has access to a wealth of “grounding” data. Given the valid concerns around hallucinations, the most effective work an AI can do relies less on what the AI model intrinsically “knows” and more on analyzing data provided by the user or external sources (such as case law). For example, AI excels at document review because it’s processing and transforming existing data and at much faster rates than humans can.” 
Troy Doucet, Founder, AI.Law: “The hype/reality issue is more from the companies that say they do AI but don’t really have an offering. We find AI can do just about anything if you know how to work with it.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “Automating repetitive tasks such as document review, data extraction, and research. AI’s ability to aggregate information from multiple sources demonstrates that AI is already effective in these areas. AI still struggles with tasks that require nuanced judgment and strategic thinking.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “AI is making a real impact in legal work, especially when it comes to handling large volumes of documents. One area where it truly shines is summarizing lengthy legal materials—think trial packages, depositions, and case files. Plus, AI helps minimize human error by ensuring critical information isn’t overlooked—especially in claims and litigation.”

Will “Hallucinations” in Legal AI Tools Ever Be Eliminated?
AI hallucinations—when a model generates incorrect or fabricated information—remain one of the biggest concerns for lawyers when using AI-powered legal tools. While advancements in AI continue to mitigate these issues, experts largely agree that hallucinations will likely persist to some degree due to the probabilistic nature of LLMs. 
Nonetheless, some legal tech leaders believe that hallucinations can be eliminated completely.
Legal tech companies are taking different approaches to address the “hallucination challenge,” from refining training data to improving AI oversight and validation systems. Many companies focus on “grounding” AI models in authoritative legal content, ensuring they pull from verified sources rather than relying solely on predictive algorithms. Others are developing fact-checking layers and human-in-the-loop review processes to minimize errors before outputs reach end users.
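To illustrate the grounding and verification pattern described above, here is a minimal, hypothetical Python sketch. It is not any vendor’s actual implementation; the names (SourceDoc, retrieve_sources, generate_draft, citations_verified) and the sample citation are illustrative assumptions, and generate_draft merely stands in for whatever language model a product might call. The point is the control flow: retrieve verified sources, generate a draft from them, check citations, then route to a human reviewer.

```python
import re
from dataclasses import dataclass

@dataclass
class SourceDoc:
    citation: str  # e.g., "Smith v. Jones, 123 F.3d 456 (3d Cir. 1997)" (hypothetical)
    text: str      # verified text pulled from an authoritative library

def retrieve_sources(question: str, library: list[SourceDoc]) -> list[SourceDoc]:
    """Grounding step: return only verified documents whose text overlaps the question."""
    terms = set(question.lower().split())
    return [doc for doc in library if terms & set(doc.text.lower().split())]

def generate_draft(question: str, sources: list[SourceDoc]) -> str:
    """Stand-in for a model call; a real system would prompt an LLM with the
    retrieved sources and instruct it to cite only from that set."""
    cited = sources[0].citation if sources else "NO SOURCE"
    return f"Based on {cited}, the answer to '{question}' is ..."

def citations_verified(draft: str, sources: list[SourceDoc]) -> bool:
    """Fact-checking layer: every citation-like string in the draft must match a
    retrieved source; otherwise the draft is flagged for human review."""
    allowed = {doc.citation for doc in sources}
    found = re.findall(r"[A-Z][\w.]* v\. [A-Z][\w.]*,[^)]*\)", draft)
    return bool(found) and all(c in allowed for c in found)

if __name__ == "__main__":
    library = [SourceDoc("Smith v. Jones, 123 F.3d 456 (3d Cir. 1997)",
                         "A contract requires offer, acceptance, and consideration.")]
    question = "What does a contract require?"
    sources = retrieve_sources(question, library)
    draft = generate_draft(question, sources)
    status = "send to reviewer" if citations_verified(draft, sources) else "flag: unverified citation"
    print(draft, "->", status)
```

In a real product the retrieval step would query an authoritative legal database and the verification layer would be far more sophisticated, but the sequence (retrieve, generate, verify, then route to a human) is the pattern most of the leaders quoted below describe.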
Here’s what the heads of these companies had to say about hallucinations: 

Scott Stevenson, CEO, Spellbook: “If you force an AI tool to do something that is impossible, it will hallucinate. If you give it achievable tasks and supply it with correct information that can fit in its short-term memory, it generally will not. We no longer hear of customers complaining about hallucination at Spellbook.” 
Daniel Lewis, CEO, LegalOn Technologies: “Eliminating hallucinations entirely may be out of reach for now but substantially reducing them is achievable. At LegalOn, we do this by grounding AI in authoritative legal content built by our in-house lawyers, ensuring accuracy from the start. Thoughtful product design can also make a big difference in helping users quickly evaluate the reliability of an AI-generated answer.” 
Kara Peterson, Co-Founder, descrybe.ai: “Given how quickly AI has advanced, it’s hard to imagine that hallucinations won’t eventually be solved. I expect AI will develop self-monitoring capabilities, which could potentially eliminate this issue once and for all.” 
Katon Luaces, President & CTO, PointOne: “Some amount of ‘hallucination’ is done even by humans when reasoning and writing; we just frame it differently. These errors are fundamental to the pursuit of complex tasks and will never be eliminated completely. That said, for certain tasks, AI already has a lower error rate than the 75th percentile lawyer and will continue to improve.” 
Ted Theodoropoulos, CEO, Infodash: “Currently, no legal tech company has completely eradicated hallucinations in AI outputs. Some vendors claim to have solved this issue, but such assertions often don’t withstand thorough examination. Given the substantial investments in AI research, many of the brightest minds are dedicated to addressing this challenge, which suggests a positive outlook. However, as of now, hallucinations remain an inherent aspect of large language models, and ongoing efforts continue to mitigate this issue.” 
Nathan Walter, CEO, Briefpoint: “Hallucinations are a symptom of LLMs’ infancy – not a requisite part of their functionality. They can and will be solved through ‘trust but verify’ implementations wherein all generated citations can be quickly verified by the user.” 
Dorna Moini, CEO/Founder, Gavel: “LLMs sometimes generate errors or ‘hallucinations’ due to the way they predict text based on patterns in data. Developers are making progress with safeguards and improved models to reduce these occurrences. While it may not be possible to completely eliminate them, continuous improvements should help make AI more reliable for legal applications.” 
Colin Levy, Director of Legal, Malbek: “Unclear. This seems to be awfully dependent on a) how these models are designed and b) the amount (breadth + depth) of data used to train the models on. Currently, data set size is a major limitation of existing models.” 
Gil Banyas, Co-Founder & COO, Chamelio: “Hallucinations are an inherent feature of LLMs, but that’s okay. Leading legal tech companies are building comprehensive systems where LLMs are just one component. With the right checks and balances in place, hallucinations can be effectively contained.” 
Arunim Samat, CEO, TrueLaw: “LLMs, by their nature, are probabilistic generating machines, and with probability, nothing is certain. Hallucinations are highly use-case-specific. In document classification for eDiscovery, hallucinations are easier to measure using standard precision and recall metrics. In contrast, generative tasks present greater challenges, though the risk can be minimized—almost to zero—using grounding techniques. However, given the probabilistic nature of LLMs, there are no statistical guarantees. Eliminating hallucinations entirely would imply creating an “information black hole”—a system where infinite information can be stored within a finite model and retrieved with 100% accuracy. In its current form, I don’t believe this is possible.” 
Greg Siskind, Co-founder, Visalaw.ai: “I think we will have this problem for a couple of years but at a diminishing rate. I think after about five years or so the problem will have largely disappeared.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “Legal tech companies might not be able to eliminate hallucinations entirely, but they’re putting stronger guardrails in place to keep them in check. Engineers are using structured prompts, fine-tuning models, and building smarter architectures to reduce them. Plus, companies are rolling out advanced validation layers, fact-checking systems, and other safeguards to catch and correct errors. While hallucinations are likely to stick around as a natural part of LLMs, these improvements will go a long way in making legal AI more accurate and reliable.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “As with any technology, flaws like these will probably never be completely eliminated. However, there have been significant advances both in model technology and in data augmentation in just the last 6 months that have vastly improved accuracy. When paired with improved explainability and citation features, AI-generated responses are becoming much more verifiable and trustworthy.” 
Troy Doucet, Founder, AI.Law: “This actually isn’t hard to avoid from a programming standpoint for a company building on top of LLMs. The LLMs themselves will figure this out too once they make it a priority.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “It’s important to reference citations and rely on validated sources to manage the risk of inaccurate outputs. Some degree of error remains inherent in current AI models, necessitating human review.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “AI hallucinations are an inherent challenge of LLMs. While they may never fully disappear, legal tech companies can significantly reduce them with smarter AI design. In law, where accuracy is everything, even one AI-generated mistake—like the Canadian lawyer citing fake precedents—can be a serious liability. But fixing this issue isn’t just about human oversight—it starts with using AI built for the job. How do we cut down on hallucinations? Extractive AI. Instead of generating new interpretations, extractive AI pulls and organizes key details directly from source documents, keeping everything factually accurate.”

Will AI Change the Billable Hour Model?
AI’s increasing role in legal workflows may be putting pressure on the billable hour model. While some firms have already transitioned to flat-fee and subscription-based billing structures, others remain hesitant to abandon traditional hourly billing. 
Most legal tech leaders agree that AI will drive efficiency and encourage alternative pricing models, but the complete demise of the billable hour remains unlikely in the near future:

Scott Stevenson, CEO, Spellbook: “Yes. We see many boutique firms moving into flat fee billing, increasing their margins substantially.” 
Daniel Lewis, CEO, LegalOn Technologies: “Reports about the death of the billable hour continue to feel exaggerated. AI-driven efficiencies will push clients to put pressure on the amount of time billed, but rather than a dramatic overturning, we’ll see adaptation. Many clients still prefer the billable hour for certain work, and firms will evolve to deliver more value and perhaps different services — faster and in less time. The real competition will be in who can leverage AI to provide the best services, not just on billable hours and rates.” 
Kara Peterson, Co-Founder, descrybe.ai: “Absolutely. The billable hour likely won’t disappear entirely, but there will be significant pressure on this payment model, forcing it to evolve. I can envision a future where flat fees and even subscription-based models become far more common. While these changes may start at the lower end of the market, that’s not a given. Additionally, as AI makes time-consuming tasks more efficient, we may actually see hourly rates rise for human lawyers—especially for high-level legal expertise.” 
Katon Luaces, President & CTO, PointOne: “AI will continue to put pressure on law firms to adopt alternative fee arrangements and even make some tasks completely non-billable.” 
Ted Theodoropoulos, CEO, Infodash: “The billable hour has been the cornerstone of the law firm economic model for 50 years, but AI is increasingly challenging its dominance. AI-driven tools are significantly reducing time spent on tasks like document review, legal research, and contract drafting. As a result, clients are demanding value-based pricing, pushing firms toward alternative fee arrangements (AFAs) such as fixed fees and subscription models. ALSPs and Big Four firms like KPMG are already leveraging AI for scalable, cost-effective legal services. If traditional firms do not adapt, these entrants will capture the value-driven segment of the market.” 
Nathan Walter, CEO, Briefpoint: “AI is changing the billable hour – many firms using Briefpoint have switched to flat rate billing on the tasks Briefpoint automates (discovery response and request drafting). Elimination of the billable hour is another story, and I don’t think we’ll see that in the next five years – eliminating the billable hour would require a fundamental restructuring of firms’ business model, not to mention a revision of attorney fee-shifting statutes. Law firms will maintain their business model until it doesn’t work. For the business model to ‘not work,’ law firms must lose business because of the billable hour. While we’re seeing significant increases in in-house teams asking about AI usage, that’s a far cry from conditioning representation on flat-rate billing.” 
Dorna Moini, CEO/Founder, Gavel: “AI is already shifting the landscape away from the traditional billable hour by enabling alternative business models. These models can offer clients greater transparency on costs and outcomes while allowing lawyers to work more efficiently and profitably. This change benefits both sides by aligning pricing with value rather than time spent.” 
Colin Levy, Director of Legal, Malbek: “AI will allow law firms to more easily scale work, e.g. take on more work without increasing headcount. AI will also increase competitive pressures from ALSPs and alternative fee arrangements. The outright disappearance of the billable hour is unlikely given current economic and technical factors.” 
Gil Banyas, Co-Founder & COO, Chamelio: “AI won’t eliminate the billable hour in the short term, but it will force firms to evolve their pricing. As routine tasks become automated, firms will need to shift toward value-based pricing for complex work while offering fixed fees for AI-assisted tasks.” 
Arunim Samat, CEO, TrueLaw: “We believe that law firms investing in proprietary AI models will unlock new revenue streams by monetizing their expertise. By training AI on their unique knowledge and experience, firms can offer novel, AI-powered services that were previously impractical. Since these services have predictable costs, they lend themselves well to flat-fee arrangements, allowing firms to introduce new revenue models without immediately disrupting the traditional billable hour structure. We’re already seeing firms offer proactive litigation risk monitoring and other recurring AI-driven services to their clients.” 
Greg Siskind, Co-founder, Visalaw.ai: “I think that is inevitable. And in practice areas like immigration, which are largely flat-billed already, we’re seeing more rapid adoption of AI and more innovation in that space. I think the rest of the bar will follow.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “AI will definitely change the billable hour, but it won’t make it disappear entirely. As legal AI tools streamline workflows and improve efficiency, more firms will likely shift toward flat-fee or value-based pricing models, especially for routine work. However, billable hours will still play a role, particularly for complex matters that require deep legal expertise. That said, once firms fully leverage AI, the billable hour may no longer be the most lucrative model.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “AI will push the needle towards alternative fee arrangements at a rate faster than ever before. It is inspiring more innovation in how law firms bill their clients, especially for the work that AI is accelerating. However, AI is also increasing efficiency and allowing firms to take on more cases, which could offset this trend and even lead to increased profitability.” 
Troy Doucet, Founder, AI.Law: “In 10 years, we won’t have billable hours the way they exist today, if at all. The value of lawyers will be derived from broader engagement with their clients: things like strategy and risk management.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “Efficiency gains from using AI are reducing the time spent on routine work, leading to alternative billing models (such as flat fees or value-based pricing), while the billable hour might still be used as an internal performance metric. Ultimately, AI will likely change how legal work is billed without entirely eliminating traditional metrics.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “Absolutely—but not everyone is on board just yet. Some law firms are resistant to AI because it threatens the traditional billable hour model. If AI can handle document review, legal research, and analysis in a fraction of the time, that means fewer billable hours. But here’s the catch: as more firms embrace AI, clients will start expecting the same efficiency everywhere. Law firms that resist AI risk falling behind as clients demand faster, more cost-effective legal services. Instead of measuring value by hours worked, the industry will shift toward a more results-driven approach—where expertise, strategy, and outcomes matter more than time spent on tedious tasks. Billable hours won’t disappear overnight, but AI is already pushing the legal industry toward a future where efficiency and results take center stage.”

Will AI Replace Lawyers?
As AI continues to improve, the question of whether it will replace lawyers remains a topic of debate. While AI excels at automating routine legal tasks, legal tech leaders largely agree that it lacks the judgment, strategic thinking, and interpersonal skills necessary to fully replace attorneys. Instead, AI is largely expected to augment legal professionals, allowing them to focus on higher-value work while automating administrative and repetitive processes.
Here’s what legal tech leaders had to say about whether AI will replace lawyers: 

Scott Stevenson, CEO, Spellbook: “No. Even if you can automate legal work, clients can’t understand what the documents you produce mean, and they ultimately need to be able to trust a human’s judgment. Six years ago, we built a product that cut out lawyers, and it worked fairly well, but most users didn’t like it because they had “DIY anxiety” and ultimately needed a human to guide them through their matter.” 
Daniel Lewis, CEO, LegalOn Technologies: “No, AI will not replace lawyers, but it will change and elevate how they work. In contracts, for example, AI helps with line-by-line contract review, freeing lawyers to focus on judgment, context, and strategy—where expertise makes the biggest impact.” 
Kara Peterson, Co-Founder, descrybe.ai: “In a sense, yes—but not in the way many fear. AI will replace rote, mundane legal tasks, but not the entire legal process. If anything, AI will augment lawyers rather than fully replace them. In fact, the “human-in-the-loop” lawyer will become more critical than ever to handle nuance and complexity that AI simply can’t.” 
Katon Luaces, President & CTO, PointOne: “AI will certainly replace some of the work of lawyers just as lawyers no longer manually shepardize nor personally walk documents to the courthouse. That said, the practice of law is fundamental to government and commerce. While the work of a lawyer may become unrecognizable, there will be individuals who practice law.” 
Ted Theodoropoulos, CEO, Infodash: “AI will not replace lawyers in the immediate future, but it will fundamentally reshape their roles. While AI excels at automating routine tasks (e.g. contract analysis, eDiscovery, and brief drafting), it lacks the human judgment, emotional intelligence, and ethical reasoning required for complex legal matters. Over the next 2-3 years, we will see AI shift legal work toward advisory and strategic functions, but full lawyer replacement remains unlikely. The firms that embrace AI as a complement rather than a competitor will be the ones that thrive in the evolving legal landscape.” 
Nathan Walter, CEO, Briefpoint: “There are some components of the job that need human-to-human connection – I don’t think a jury will ever warm up to an AI trial attorney in the same way I lose interest in a piece of media once I find out it’s made by AI. The parts that don’t need a human touch? Those will be gone.” 
Dorna Moini, CEO/Founder, Gavel: “No, AI will not replace lawyers. There’s a large gap in legal services that needs to be filled, and while AI can assist with some routine functions, it can’t bridge that gap in legal services on its own. Lawyers will continue to be vital in offering the nuanced support and guidance that many clients need. Instead of replacing lawyers, AI can serve as a tool to help them better serve not just underserved communities, but the middle class and digitally-inclined clients as well.” 
Colin Levy, Director of Legal, Malbek: “We do not know how our own brains work, especially around self-awareness, so to somehow expect AI to do the same anytime soon is highly unlikely.” 
Gil Banyas, Co-Founder & COO, Chamelio: “AI won’t replace lawyers, but it will reduce the number needed as it automates routine legal work. While AI excels at tasks like document review, it can’t replicate lawyers’ judgment, strategic thinking, and emotional intelligence. The future lawyer will be more efficient and focused on high-value work, but firms will likely need fewer attorneys to handle the same workload.” 
Arunim Samat, CEO, TrueLaw: “We believe AI will create 10x lawyers—legal professionals who can accomplish 10 times more work in the same amount of time. While certain legal functions will inevitably be affected, this shift isn’t unique to the legal industry—it’s happening across every sector. To thrive in an AI-augmented world, professionals must reimagine their workflows and daily operations. The key is adaptability: those who embrace AI and rethink how they work will unlock unprecedented efficiency and value, while those who remain rigid and resistant to change will struggle to keep up in this evolving landscape.” 
Greg Siskind, Co-founder, Visalaw.ai: “No. But lawyers will lead legal teams that include paralegals, lawyers, and AI. With the rise of genAI, roles will evolve where a lot of the tasks performed by paralegals and lawyers will be performed by AI, and humans will increasingly play more of a ‘sherpa’ role managing the tech and personally guiding their clients through the legal process.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “No, AI won’t replace lawyers, but it will fundamentally change the practice of law. It can help struggling associates learn faster, adapt more quickly, and gain expertise in less time. AI will also reshape the business model of law firms—potentially leading to more hiring as firms take on an increased volume of work. Additionally, AI is creating new opportunities for attorneys in adjacent fields like legal operations and AI-driven legal tech, opening up career paths that didn’t exist before. In some areas of law, AI may reduce the need for as many attorneys by cutting down busy work, but overall, it’s more about transformation than replacement. Rather than making lawyers obsolete, AI is redefining how they work and where their skills are most valuable.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “AI won’t replace lawyers anytime soon. Lawyers don’t simply recite law and design legal strategies. They provide nuanced judgment, empathy, and advocacy—qualities that are crucial in client relationships and that AI still struggles with. The human aspect of attorney-client relationships cannot be overstated, and clients need a real person they can connect with to reassure them, especially in high-stakes matters.” 
Troy Doucet, Founder, AI.Law: “No. Lawyers will become managers of AI.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “The quote, ‘Artificial Intelligence won’t replace lawyers, but lawyers using it will’ still stands true.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “No, AI won’t replace lawyers—because the law isn’t just about processing information; it’s about judgment, strategy, and advocacy. AI can be a powerful tool for streamlining tasks like document review and legal research, but it can’t think critically, navigate ethical dilemmas, or argue a case in court. Plus, human oversight is essential to ensure fairness and catch biases in AI models. Rather than replacing lawyers, AI is helping them by handling tedious admin work, freeing up time for higher-level thinking and client advocacy. The future of law isn’t AI vs. lawyers—it’s AI empowering lawyers to work smarter and deliver better results.”

***
As legal tech companies continue to improve their offerings, the practice of law may continue to undergo fundamental changes—reshaping workflows, redefining the billable hour, and transforming the role of lawyers in ways we are only just beginning to understand. 

The AI Royal Flush: The Five Foundations of Artificial Intelligence, Part 2

This is Part 2 of a summary of Ward and Smith Certified AI Governance Professional and IP attorney Angela Doughty’s comprehensive overview of the potential impacts of the use of Artificial Intelligence (AI) for in-house counsel.
Generative AI Best Practices for In-House Counsel
Considering all of the risks and responsibilities outlined in Part 1 of this series, Doughty advises in-house counsel to mandate training, discuss control of approved tools with the management team, and organize a review of what data can be used with AI at each level of the company.
TRAINING
Training has the potential to bridge generational gaps and create a working environment where people are more comfortable sharing new ideas. “I am very proud of our firm for the way that we have adapted to the modern business landscape,” noted Doughty. “We have some folks who have been practicing for 30 to 40 years, and we have some right out of law school. With everyone evolving and learning at the same rate, we’re using training to build a more inclusive culture.”
HUMAN REVIEW
Having an actual human review key decisions is another best practice. Even where an experienced person’s work would not have required an additional layer of review, if AI is being used to streamline the process, prudent companies should conduct a secondary review of the AI-generated work.
MONITOR REGULATIONS
Similar to the technology, the regulatory environment is constantly in flux. Most of the potential regulations are currently just proposals that are not legally binding. For example:

The EU AI Act is a comprehensive legislative framework that aims to regulate AI technologies based on their level of risk. It’s designed to ensure that AI systems used in the EU are safe and respect fundamental rights and EU values.
The U.S. Blueprint for an AI Bill of Rights outlines principles to protect the public from the potential harms of AI and automated systems, focusing on civil rights and democratic values in the U.S.
The FTC enforces consumer protection laws relevant to AI, focusing on issues like fairness, transparency, bias, and data privacy, but it currently operates within a more reactive and general deceptive trade practices legal framework.

Like the future, the final rules are impossible to predict. Doughty expects that transparency, fairness, and explainability will be common themes.
Regulators will want to know how decisions were made, whether AI was involved, how data is processed, and how data is protected. “They will not hesitate to hold senior-level people accountable. This is partially why clear policies are an effective strategy for minimizing risk,” commented Doughty.
Different regions have different ideas regarding ethics and bias. This increases the challenge of navigating the evolving regulatory framework.
COMPLIANCE
“Compliance with all of the standards is practically impossible, which makes this very similar to data protection and privacy. One of my worst nightmares is when a client asks me to make them compliant,” added Doughty, “because, in most cases, it’s simply not feasible.”
Penalties are likely to vary in proportion to the risk to society. Companies should consider whether using AI is worth the reputational damage and harm it could cause to individuals.
Businesses operating in high-risk sectors may see additional regulations compared to other businesses. It is a patchwork of inconsistent, overlapping laws, and that is unlikely to change. “If there is a positive to this, it is that it will keep us in business for a long time,” joked Doughty.
Legal knowledge will continue to be vital for helping clients make decisions. Critical thinking skills and an understanding of jurisprudence will also continue to support job security for attorneys.
Remember, AI is a Tool, not a Replacement.
“As attorneys, we have empathy skills. People don’t want to sit in front of a computer and talk about really difficult, hard things. They want to look you in the eyes,” Doughty explained.
AI is just a tool, and fears over being replaced may be overblown. Doughty is using the technology on a daily basis. Along with using it to edit her presentation into bullet points for experienced in-house attorneys, she uses it to draft legal scenarios.
Doughty advises against using a person’s real name, for privacy reasons. “I also use it when I am frustrated with someone, so I draft how I really feel, then ask the AI to make it more professional,” noted Doughty.
AI can quickly write an article if provided with a topic, a target audience, and a few links. The speed and accuracy are astonishing, but many believe it is difficult, if not impossible, to determine whether the copy was plagiarized. This is likely to be the subject of ongoing litigation.
Audience Questions Answered
In the Q&A portion of the presentation, the audience questions came in quickly. Doughty attempted to address all she could in the time allotted and offered to take calls regarding questions after the program.
In response to a question centered on navigating ongoing regulations, Doughty advised following the National Institute of Standards and Technology Cybersecurity Framework.
Another audience member wondered, “Can AI be trusted?”
“No, it cannot be trusted at all,” said Doughty. “There is not a single tool that I would recommend using as the basis for legal decisions without substantial human oversight – same as you would with any other technology tool.”
“What technology or change, if any, compares to the effect generative AI is having on the legal system and profession?”
“I don’t think we’ve seen anything like this since Word, Excel, and Outlook came out in terms of changing the way that we practice law and prepare legal work products. I remember having to go from the book stacks to Westlaw; it was just a different way to do research, but I still had to do all of those things. This is even more revolutionary than what we saw at that point.”
“How do you mitigate the risk of harmful bias in a vendor agreement?”
“The short answer is to fully vet your vendors. Many vendors understand the risks and will include representations and warranties within their contracts, but this one is difficult. Understanding the training model and data used for training can be key, as it was with the earlier examples of AI hiring tools trained in male-dominated industries that preferred male applicants.”
“Any tips to bring up the topic of AI to organizational leadership?”
“Quantify the risk and discuss all of the penalties that could occur, along with the opportunity costs associated with ruining a deal.”
“Are there any legal-specific AI tools that you see as a good value?”
“In terms of legal research, writing, or counsel, I would not advise using AI for any of that right now – outside of the (very) expensive, but known, traditional legal vendors, such as Westlaw and Thomson Reuters. This is partially because most of the AI tools people are using are open AI tools. This means every question and answer – right or wrong – is being used to train the technology. This is also partially because to ethically use these tools, we must understand their strengths and weaknesses enough to provide sufficient oversight, and many attorneys are not there yet.”
“What about IP infringement?”
“If AI has been predominantly trained on existing content and you use it to create an article, does the writer have an infringement claim? This is to be determined, and it’s one of the biggest issues being litigated right now. There are a slew of artists that are suing Gen AI companies for this purpose.”
“What about the environmental impact of AI processing?”
“Generative AI significantly impacts the environment due to its high energy consumption, especially during the training and operation of large models, leading to substantial carbon emissions. The use of resource-intensive hardware and the cooling needs of data centers further exacerbate this impact.”
“Any suggestions for when the IT department believes their understanding of the risks supersedes the opinions of the legal department?”
“This is when the C-suite needs to come in because the legal risk and responsibility are already out there, and implementation is under a completely different department. It’s a business decision. I look at the IT department no different than the marketing or sales departments in determining the legal risk and making a recommendation.”
“Any recommendations for AI-based tools to stay on top of the regulatory tsunami?”
“Not yet, but I spend a lot of time listening to demos and participating in vendor training sessions. Signing up for trade association newsletters is another way to stay current. These are free resources for training and help with staying current on industry trends and proposed regulations.”
Conclusion
Doughty concluded the session by reminding the group of In-House Counsel that their ethical duties and responsibilities extend to governance, compliance, risk management, and an ongoing understanding of the ever-evolving landscape of Generative AI.

Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law

In what may turn out to be an influential decision, Judge Stephanos Bibas ruled as a matter of law in Thomson Reuters v. Ross Intelligence that creating short summaries of law to train Ross Intelligence’s artificial intelligence legal research application not only infringed Thomson Reuters’ copyrights but also was not fair use. Judge Bibas had previously ruled that infringement and fair use were issues for the jury but changed his mind: “A smart man knows when he is right; a wise man knows when he is wrong.”
At issue in the case was whether Ross Intelligence directly infringed Thomson Reuters’ copyrights in its case law headnotes, which are organized by Westlaw’s proprietary Key Number system. Thomson Reuters contended that Ross Intelligence’s contractor copied those headnotes to create “Bulk Memos.” Ross Intelligence used the Bulk Memos to train its competitive AI-powered legal research tool. Judge Bibas ruled that (i) the West headnotes were sufficiently original and creative to be copyrightable, and (ii) some of the Bulk Memos used by Ross were so similar that they infringed as a matter of law.
The court rejected Ross Intelligence’s merger and scènes à faire arguments. Though the headnotes were drawn directly from uncopyrightable judicial opinions, the court analogized them to the choices made by a sculptor in selecting what to remove from a slab of marble. Thus, even though the words or phrases used in the headnotes might be found in the underlying opinions, Thomson Reuters’ selection of which words and phrases to use was entitled to copyright protection. Interestingly, the court stated that “even a headnote taken verbatim from an opinion is a carefully chosen fraction of the whole,” which “expresses the editor’s idea about what the important point of law from the opinion is.” According to the court, that is enough of a “creative spark” to be copyrightable. In other words, even if a work is selected entirely from the public domain, the simple act of selection is enough to give rise to copyright protection.
Relying on testimony from Thomson Reuters’ expert, the court compared “one by one” how similar 2,830 Bulk Memos were to the West headnotes at issue. The court found that 2,243 of the 2,830 Bulk Memos were infringing as a matter of law. Although whether Ross Intelligence’s contractor had access to the headnotes was an open question, the court reasoned that a Bulk Memo that “looks more like a headnote than it does the underlying judicial opinion is strong circumstantial evidence of copying.” Questions of infringement are, of course, normally left for the fact finder, but the court found that a reasonable juror could not conclude that the Bulk Memos were not copied from the West headnotes.
The court then went on to rule as a matter of law that Ross Intelligence’s fair use defense failed – even though only two of the four fair use factors favored Thomson Reuters. The court specifically found that Ross Intelligence’s use was commercial in nature and non-transformative because the use did not have a “further purpose or character” apart from Thomson Reuters’ use. The court also found dispositive that Ross’ intended purpose was to compete with Thomson Reuters, and thus would impact the market for Thomson Reuters’ service. The court, on the other hand, found that the relative lack of creativity in the headnotes, and the fact that users of Ross’ systems would never see them, also favored Ross.
The court distinguished cases holding that intermediate copying of computer source code was fair use, reasoning that those courts held that the intermediate copying was necessary to “reverse engineer access to the unprotected functional elements within a program.” Here, copying Thomson Reuters’ protected expression was not needed to gain access to underlying ideas. How this reasoning will play out in other pending artificial intelligence cases where fair use will be hotly contested is anyone’s guess – in most of those cases, the defendants would argue that they are not competing with the rights owners and that, in fact, the underlying ideas (not the expression) are precisely what the copying is trying to access.
The court left many issues for trial (including whether Ross infringed the West Key Number system and thousands of other headnotes). Nonetheless, the opinion seems to be a striking victory for content owners in their fight against the AI onslaught. Although Judge Bibas has only been on the Third Circuit bench since 2017, he has gained a reputation for his thoughtful and scholarly approach to the law. Whether his ruling, issued while sitting by designation as a trial judge in the District of Delaware, can make it past his colleagues on the Third Circuit will be worth watching.
The case is Thomson Reuters Enterprise Centre GmbH et al v. ROSS Intelligence Inc., Docket No. 1:20-cv-00613 (D. Del. May 06, 2020).

A New Era: Trump 2.0 Highlights for Privacy and AI

Since the Trump 2.0 administration commenced, the U.S. federal government has experienced some major policy shifts. Several Biden-Harris administration-era regulations have been eliminated or placed on a 60-day hold while under review. States and other organizations have filed lawsuits to stay implementation of certain Trump 2.0 initiatives (e.g., the funding freezes, the deferred resignation offer, and birthright citizenship, among others).
Below is a summary of some of the federal ‘de-regulation’ related to privacy and AI that we are following: 
The January Freeze: COPPA Rule Amendments
Issued on inauguration day, January 20, 2025, the Executive Order titled “Regulatory Freeze Pending Review” (Regulatory Freeze EO) directed federal agencies to not propose or issue any new rule and to withdraw any rule sent to the Office of the Federal Register but not published as final in the Federal Register.
The Federal Trade Commission (FTC) finalized amendments to the Children’s Online Privacy Protection Rule (COPPA Rule Amendments) on January 16, 2025. The COPPA Rule Amendments were submitted to but not published in the Federal Register prior to January 20, 2025. Accordingly, while approved as final from the FTC’s perspective, the COPPA Rule Amendments remain a proposed rule with no effective date or compliance date. The Regulatory Freeze EO directs the FTC to “withdraw” the COPPA Rule Amendments until “a department or agency head appointed or designated by the President after noon on January 20, 2025, reviews and approves the rule.”
Also on January 20th, President Trump appointed FTC Commissioner Andrew Ferguson as FTC Chairman. While still in his role as a Commissioner, Chairman Ferguson voted in favor of the COPPA Rule Amendments but also cited “three major problems” in his concurring statement, which are:

Requiring operators to disclose and receive parental consent about the specific third parties to which the operators will disclose children’s personal information. Then-Commissioner Ferguson noted that not all additions or changes to the identities of third parties should require new parental consent. He suggested that the FTC “could have mitigated this issue” by clarifying that a “change is material for purposes of requiring new consent only when facts unique to the new third party, or the quantity of the new third parties, would make a reasonable parent believe that the privacy and security of their child’s data is being placed at materially greater risk.”
Prohibiting indefinite retention of children’s personal information. The COPPA Rule allows for retention of children’s personal information “as long as is reasonably necessary to fulfill the purpose for which the information was collected.” (§ 312.10). Then-Commissioner Ferguson criticized the addition of the prohibition on indefinite retention because it “is likely to generate outcomes hostile to users,” providing the example that “adults might be surprised to find their digital diary entries, photographs, and emails from their childhood erased from existence.” He wrote that, because the term indefinite is not defined, operators “can comply with the Final Rule by declaring that they will retain data for no longer than two hundred years […] And if ‘indefinite’ is not meant to be taken literally, then it is unclear how the requirement is any different than the existing requirement to keep the information no longer than necessary to fulfill the purpose for which it was collected.”
“Missed opportunity” to clarify that the Amended COPPA Rule is “not an obstacle to the use of children’s personal information solely for the purpose of age verification.” Then-Commissioner Ferguson noted that the COPPA Rule Amendments “should have added an exception for the collection of children’s personal information for the sole purpose of age verification, along with a requirement that such information be promptly deleted once that purpose is fulfilled.”

Other notable changes in the COPPA Rule Amendments that were not part of the concurring statement include:

An official definition for “mixed audience”. While the concept of a mixed audience online service is covered in the COPPA Rule (see the FTC’s COPPA FAQs, Section D, Question 4), the COPPA Rule Amendments add a defined term for “mixed audience website or online service”. It means an online service that is directed to children within the meaning of COPPA but “that does not target children as its primary audience, and does not collect personal information from any visitor, other than for the limited purposes set forth in § 312.5(c), prior to collecting age information or using another means that is reasonably calculated, in light of available technology, to determine whether the visitor is a child.”
Expanded Data Security Requirements. The COPPA Rule requires “reasonable procedures to protect the confidentiality, security, and integrity of personal information collected from children.” (§ 312.8) The COPPA Rule Amendments provide minimum requirements for this reasonableness standard, including a written information security program that contains many of the same safeguards required under state cybersecurity laws, e.g., an accountable person, risk assessments, testing and monitoring, and vendor due diligence.

Not-So-Final: Sensitive Personal Data Transfers and Negative Options
On December 30, 2024, the U.S. Department of Justice released a Final Rule titled “Preventing Access to U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons” (DOJ Rules). President Biden’s Executive Order 14117 (EO 14117, dated February 28, 2024) directed the DOJ to issue the DOJ Rules. The DOJ Rules were published in the Federal Register on January 8, 2025.
In brief, the DOJ Rules apply to “U.S. persons,” which means U.S. citizens, nationals or lawful permanent residents, qualified refugees, entities organized under U.S. law, or persons “in the U.S.” (§ 202.256). Subject to certain exemptions (§ 202.501 to § 202.511), U.S. persons are prohibited or restricted from knowingly engaging in a “covered data transaction,” which means a sale or licensing of “bulk sensitive personal data” or “United States Government-related data,” a vendor agreement, employment agreement, or investment agreement (§ 202.210), that involves access by a “country of concern” (§ 202.209) or “covered person” (§ 202.211). (Countries of concern are China, Cuba, Iran, North Korea, Russia and Venezuela (§ 202.209).)
The DOJ Rules are effective on April 8, 2025. But because they are a final rule published in the Federal Register prior to January 20th, the Regulatory Freeze EO asks federal agencies to “consider” postponing the effective date and opening a comment period for interested parties.
Even before the Regulatory Freeze EO was released, the DOJ had announced its intention to “continue to robustly engage with stakeholders to determine whether additional time for implementation is necessary and appropriate” during the 90 days between the DOJ Rules’ publication in the Federal Register and the effective date. Unlike many other Biden-era Executive Orders, EO 14117 was not rescinded on Inauguration Day. Whether the exclusion of EO 14117 means that the DOJ Rules will survive the regulatory freeze is unclear.
Another final rule subject to the regulatory freeze: FTC’s “Rule Concerning Recurring Subscriptions and Other Negative Option Programs” (Final Negative Option Rule), which was published in the Federal Register as final on November 15, 2024.
Parts of the Final Negative Option Rule were effective January 14, 2025, but businesses have until May 14, 2025, to comply with certain sections of the Final Negative Option Rule, i.e., § 425.4 (disclosures’ form, content and placement), § 425.5 (consent) and § 425.6 (simple cancellation mechanism).
Commissioner Holyoak wrote a dissent (89 FR 90540) to the Final Negative Option Rule, citing procedural issues and the failure to “define with specificity” the acts or practices that are unfair or deceptive and whether these practices are “prevalent.” Chairman Ferguson, then a Commissioner, joined the dissent, which may indicate the parts of the Final Negative Option Rule that the FTC will revisit or replace. (More about the Final Negative Option Rule is available here).
A third rule – the Personal Financial Data Rights Rule (PFDR Rule) – was published as final on November 8, 2024, and effective January 17, 2025 – three days before the Regulatory Freeze EO was issued. On February 3, 2025, the federal agency that issued the PFDR Rule – the Consumer Financial Protection Bureau (CFPB) – announced that Treasury Secretary Scott Bessent took over as acting head and ordered the CFPB to halt all activities. Subsequently, Democrats in Congress expressed concern in a February 7th letter to Acting Director Bessent. That same day, Russell Vought, the newly sworn-in Director of the Office of Management and Budget (OMB) and an architect of The Heritage Foundation’s Project 2025, reportedly replaced Secretary Bessent as acting head of the CFPB and echoed Secretary Bessent’s orders to the CFPB staff. In a social media post, Director Vought announced that the CFPB “will not be taking its next draw of unappropriated funding because it is not ‘reasonably necessary’ to carry out its duties. The Bureau’s current balance of $711.6 million is in fact excessive in the current fiscal environment.”
The CFPB website at https://www.consumerfinance.gov/ currently displays a “404: Page Not Found Error” and the CFPB offices were closed to CFPB staff and taken over by the Department of Government Efficiency (headed by Elon Musk) as of February 9, 2025.
The Congressional Review Act (CRA) (codified at 5 U.S.C. §§ 801-808) also is a consideration for these final rules. If a final rule is deemed a “major rule” (5 U.S.C. § 804) by the OMB, the CRA provides for a special congressional procedure to overturn the rule during a so-called look-back period. The OMB deemed each of the Final Negative Option Rule, the DOJ Rules and the PFDR Rule a major rule.
The Senate Parliamentarian has determined that the CRA’s look-back period began on August 16, 2024, for rules submitted in the second session of the 118th Congress, which ended on January 3, 2025. Republican lawmakers already have indicated that they intend to use the CRA procedure to target as many of the Biden-Harris administration rules as possible.
The Big Shift in Artificial Intelligence Policy
President Biden’s Executive Order 14110 of October 30, 2023, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and focused on “governing the development and use of AI safely and responsibly,” was rescinded by President Trump’s Executive Order 14148 (“Initial Rescissions of Harmful Executive Orders and Actions”) and replaced on January 23, 2025, by Executive Order 14179 (“Removing Barriers to American Leadership in Artificial Intelligence”) (Trump AI Executive Order).
The Biden administration focused broadly on eight overarching principles for AI development: safety and security; privacy; managing AI bias and civil rights; consumer, patient and student protection; worker support; innovation and competition; international AI leadership; and federal use of AI. (Read more here.) By contrast, the Trump AI Executive Order is centered on deregulation and the promotion of AI innovation as a means of maintaining U.S. global dominance. (Read more here.)
The January Shakeup: The Data Privacy Framework
In line with the CFPB and other U.S. federal government staffing changes, as well as the controversial Deferred Resignation Program, President Trump fired three of the four sitting members of the Privacy and Civil Liberties Oversight Board (PCLOB): Chair Sharon Bradford Franklin, who was three years into her six-year term; Professor Edward Felten; and Travis LeBlanc, who served in the Obama administration.
By statute, the PCLOB can have up to five members appointed by the President and confirmed by the Senate. Three members constitute a quorum, and no more than three members of the PCLOB may belong to the same political party. As of January 31, 2025, only one PCLOB member – Beth Williams, who served in the first Trump administration – remains at the PCLOB.
The PCLOB appointee removals are symbolically and practically significant to the future of the EU-U.S. Data Privacy Framework (DPF). The agreement between the European Commission and the U.S. that created the DPF (DPF Agreement) relies on a multi-layer mechanism for non-U.S. individuals to obtain review and redress of their allegations that their personal data collected through U.S. Signals Intelligence was unlawfully handled by the United States. As part of the negotiations for the DPF Agreement, President Biden issued the Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities (EO 14086), directing federal agencies to address concerns – including redress mechanisms – relating to bulk digital surveillance by U.S. law enforcement and intelligence agencies. These concerns underpinned objections from EU regulators to the DPF’s predecessors. (Learn more about the DPF generally here.)
The PCLOB, which was created in 2004 to advise the federal government on civil liberties matters in connection with U.S. anti-terrorism laws, advised on the creation of the DPF’s redress mechanism. Even though the DPF Agreement was not voted into law by Congress and EO 14086 could be overturned by another President, the redress mechanism in the DPF Agreement was pivotal in demonstrating to the European Commission that EU citizens could receive protection for their personal data that is essentially equivalent to EU data protection law.
While the U.S. federal government is in the midst of structural changes initiated by Trump 2.0, businesses face a difficult decision: continue compliance efforts under the final rules described above, or stand down until the dust settles in Washington. For example, should a DPF-certified business revisit other cross-border transfer mechanisms now in case the DPF does not survive legal challenges? Meanwhile, state legislatures continue to fill the void. So far this year, many states have already teed up new or amended privacy laws and new AI laws. Since neither a new federal AI law nor a new federal consumer privacy law seems to be top of mind for the Administration, businesses can, for now, continue with state law and federal sectoral law compliance efforts.
 
Krista Setera and Mary Aldrich contributed to this article.

Illinois Takes Aim at Artificial Intelligence in Employment

In a significant move to regulate artificial intelligence (AI) in the workplace, the Illinois Legislature amended the Illinois Human Rights Act (IHRA or “the Act”) to address the growing use of AI at various points throughout the employment process.
Under House Bill 3773, effective January 1, 2026, Illinois will protect prospective and current employees from discriminatory AI practices during recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, and tenure as well as the terms, privileges, and conditions of employment. The amendment also prohibits the use of zip codes as a proxy for protected classes.
As amended, the Act requires employers to notify employees of the use of AI at any of these touchpoints throughout the employment process. However, the amendment does not provide specific notice requirements or prescribe affirmative steps employers must take to prevent discriminatory outcomes from the use of AI. Rather, it simply states that the Illinois Department of Human Rights (IDHR) will adopt rules to implement and enforce these new standards at an undefined future date.
Illinois has remained at the forefront of workplace AI regulation since passing the Illinois Artificial Intelligence Video Interview Act in 2019, which requires employers to disclose and obtain consent for the use of AI in analyzing video interviews. Since then, a growing number of cities and states have joined Illinois in expanding regulatory frameworks governing the use of AI in the employment process. Alongside Illinois, Colorado passed a similar law requiring employers to use “reasonable care” to protect Colorado residents from known or foreseeable risks of “algorithmic discrimination.” New York City has also passed legislation requiring employers to conduct “bias audit[s]” within one year of the use of AI tools and provide certain notices to employees or prospective candidates.
Employers should comprehensively evaluate their employment process and algorithms to determine whether and how AI is used to evaluate prospective and current employee information at any point throughout the employment process.

Congress Revisits Stablecoins

After unsuccessful past efforts to enact federal legislation regulating stablecoins, Congress has again turned to the issue. While it is always difficult to predict whether any bill will pass, there seems to be growing support in the current Congress, with the Senate Banking Committee and House Financial Services Committee working closely together to adopt legislation.
In the Senate, a bipartisan bill entitled the Guiding and Establishing National Innovation for U.S. Stablecoins (GENIUS) Act is sponsored by Senators Bill Hagerty (R-TN), Tim Scott (R-SC), Cynthia Lummis (R-WY) and Kirsten Gillibrand (D-NY). The bill defines a payment stablecoin as a digital asset used for payment or settlement that is pegged to a fixed monetary value. It would permit both bank and certain nonbank entities to issue payment stablecoins, and provides for either federal or optional state regulation, depending on the total amount of stablecoins issued. The bill makes clear that payment stablecoins are not securities subject to SEC regulation, and instead provides for banking-like examination, supervision and enforcement.
In the House, House Financial Services Committee Chair French Hill (R-AR) and Digital Assets, Financial Technology, and Artificial Intelligence Subcommittee Chairman Bryan Steil (R-WI) announced a discussion draft of a bill entitled the Stablecoin Transparency and Accountability for a Better Ledger Economy (STABLE) Act. The bill is similar in many respects to the GENIUS Act in that it seeks to provide a path for the permitted issuance of payment stablecoins with regulation at either the federal or state level. A key difference between the GENIUS Act and STABLE Act is that while the GENIUS Act requires the Treasury Department to prepare a written study on “endogenously collateralized stablecoins,” also known as algorithmic stablecoins, the STABLE Act imposes a two-year moratorium on their issuance.

Context for the Five Pillars of EPA’s ‘Powering the Great American Comeback Initiative’

On February 4, new US Environmental Protection Agency (EPA) Administrator Lee Zeldin announced EPA’s “Powering the Great American Comeback Initiative,” which is intended to achieve EPA’s mission “while energizing the greatness of the American Economy.” The initiative has five “pillars” intended to “guide the EPA’s work over the first 100 days and beyond.” These are:

Pillar 1: Clean Air, Land, and Water for Every American.
Pillar 2: Restore American Energy Dominance.
Pillar 3: Permitting Reform, Cooperative Federalism, and Cross-Agency Partnership.
Pillar 4: Make the United States the Artificial Intelligence Capital of the World.
Pillar 5: Protecting and Bringing Back American Auto Jobs.

Below, we break down each of the five pillars and provide context on what these pillars may mean for the regulated community.
Pillar 1: “Clean Air, Land, and Water for Every American”
The first pillar is intended to emphasize the Trump Administration’s continued commitment to EPA’s traditional mission of protecting human health and the environment, including emergency response efforts. To underscore this focus, Administrator Zeldin, accompanied by Vice President JD Vance, made his first trip as EPA Administrator to East Palestine, Ohio, on the two-year anniversary of a train derailment. While there, Administrator Zeldin noted that the “administration will fight hard to make sure every American has access to clean air, land, and water. It was an honor to meet with local residents, and I leave this trip more motivated to this cause than ever before. I will make sure EPA continues to clean up East Palestine as quickly as possible.” After touring the site of the train derailment to survey the cleanup, Zeldin and Vance “participated in a meeting with local residents and community leaders to learn more” about how to expedite the cleanup.
Taken alone or in conjunction with Administrator Zeldin’s trip to an environmentally impacted site in Ohio, Pillar 1 appears consistent with past EPA practice.
Read in the context of the Trump Administration’s first-day executive orders (for more, see here) and related actions, such as memoranda from Attorney General Pam Bondi on “Eliminating Internal Discriminatory Practices” and “Rescinding ‘Environmental Justice’ Memoranda,” Pillar 1 should be construed as meaning that EPA no longer intends to proactively work to redress issues in “environmentally overburdened” communities. Consequently, programs under the Biden Administration that focused on environmental justice (EJ) and related equity issues have ended. (For more, see here.)
Pillar 2: Restoring American Energy Dominance
Pillar 2 focuses on “Restoring American Energy Dominance.” What this means in practice is little surprise given President Trump’s promises during his inauguration to “drill, baby, drill.” Two first-day Executive Orders provide further context to this pillar:

The Executive Order “Declaring a National Energy Emergency” declares a national energy emergency due to inadequate energy infrastructure and supply, exacerbated by previous policies. It emphasizes the need for a reliable, diversified, and affordable energy supply to support national security and economic prosperity. The order calls for immediate action to expand and secure the nation’s energy infrastructure to protect national and economic security.
The Executive Order on “Unleashing American Energy” seeks to encourage the domestic production of energy and rare earth minerals while reversing various Biden Administration actions that limited the export of liquefied natural gas (LNG), promoted electric vehicles and energy-efficient appliances and fixtures, and required accounting for the social cost of carbon. (For context on the social cost of carbon, see here and here.)

Pillar 3: Permitting Reform, Cooperative Federalism, and Cross-Agency Partnership
Pillar 3 focuses on government efficiency including permitting reform, cooperative federalism, and cross-agency partnerships. As with Pillar 1, two of these goals (cooperative federalism and cross-agency partnership) are generally consistent with typical agency practice across all administrations even if administrations approach them in different ways.
“Permitting reform” generally means streamlining the permitting process so that the time from permit submission to final decision is shorter.
Current events, most notably three court cases involving the National Environmental Policy Act (NEPA), require a deeper exploration of “permitting reform.” NEPA is a procedural environmental statute that requires federal agencies to evaluate the potential environmental impacts of major decisions before acting and provides the public with information about the environmental impacts of potential agency actions. The Council on Environmental Quality (CEQ), an agency within the Executive Office of the President, was created in 1969 to advise the president and develop policies on environmental issues, including ensuring that agencies comply with NEPA by conducting sufficiently rigorous environmental reviews.
Energy-related infrastructure ranging from transmission lines to ports needed to ship LNG often requires NEPA review. During his first term, President Trump sought to streamline NEPA reviews. As we previously discussed, in 2020, CEQ regulations were overhauled to exclude requirements to discuss cumulative effects of permitting and, among other things, to set time and page limits on NEPA environmental impact statements. During the Biden Administration, in one phase of revisions, CEQ reversed course to undo the Trump Administration’s changes, and, in a second phase, the Biden Administration required evaluation of EJ concerns, climate-related issues, and increased community engagement. (For more, see here.) Predictably, litigation followed these changes. Additionally, we are waiting on the US Supreme Court’s decision in Seven County Infrastructure Coalition v. Eagle County, Colorado, which addresses whether NEPA requires federal agencies to identify and disclose environmental effects of activities that are outside their regulatory purview.
Two recent decisions add to the ongoing debate about whether CEQ ever had the authority to issue regulations that have been relied upon for decades: the DC Circuit’s decision in Marin Audubon Society v. FAA (for more, see here) and a decision by a North Dakota trial court in Iowa v. Council on Environmental Quality.
Pillar 4: Make the United States the Artificial Intelligence Capital of the World
EPA’s Pillar 4 seeks to promote artificial intelligence (AI) so that America is the AI “Capital of the World.”
AI issues fall into EPA’s purview because development of AI technologies is highly dependent on electric generation, transmission, and distribution, and EPA plays a key role in overseeing permitting and compliance activities related to such facilities. As we have discussed, AI requires significant energy to power the data centers it needs to function; one study indicates that training a single AI natural language processing model produced emissions similar to 125 round-trip flights between New York and Beijing. And because data center developments tend to be clustered in specific regions, data centers account for more than 10% of electricity consumption in at least five states. (Report available here.)
Pillar 5: Protecting and Bringing Back American Auto Jobs
Pillar 5 focuses on supporting the American automobile industry, an aim that overlaps with the energy-related priorities discussed in relation to Pillar 2. Regarding this sector, EPA intends to “streamline and develop smart regulations that will allow for American workers to lead the great comeback of the auto industry.” Additionally, the US Office of Management and Budget released a memo on January 21, clarifying that provisions of the “Unleashing American Energy” Executive Order were intended to pause disbursement of Inflation Reduction Act funds, including those for electric vehicle charging stations.
While the particulars of this pillar are less clear than some others, we expect that EPA’s efforts in this area will involve some combination of permitting reform and rollbacks of prior EPA decisions related to vehicle emissions.

Elon Musk’s Exit from OpenAI: Why He Sold His Stake and Why He Wants Back In

Elon Musk’s relationship with OpenAI, the AI research organization he co-founded in 2015, has been complicated, marked by early enthusiasm, a departure, and a more recent attempt to regain influence. The journey from co-founder to adversary and, perhaps, back […]

Hangzhou Internet Court: Generative AI Output Infringes Copyright

On February 10, 2025, the Hangzhou Internet Court announced that an unnamed defendant’s generative artificial intelligence (AI) platform’s generation of images constituted contributory infringement of information network dissemination rights, and ordered the defendant to immediately stop the infringement and pay compensation of 30,000 RMB for economic losses and reasonable expenses.

[Images from the decision: LoRA model training with Ultraman; an infringing image generated with the model.]

The defendant operates an AI platform that provides Low-Rank Adaptation (LoRA) models and supports functions such as image generation and online model training. On the platform’s homepage, under “Recommendations” and “IP Works”, there are AI-generated pictures and LoRA models related to Ultraman, which can be applied, downloaded, published or shared. The Ultraman LoRA model was created by users uploading Ultraman pictures, selecting the platform’s base model, and adjusting parameters for training. Other users could then input prompts, select the base model, and overlay the Ultraman LoRA model to generate images that closely resembled the Ultraman character.
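To make the mechanics concrete, the sketch below shows what “overlaying” a character-specific LoRA on a base text-to-image model can look like in code, assuming a Hugging Face diffusers-style stack. The decision does not describe the platform’s actual implementation; the model ID, directory, and file name here are hypothetical placeholders.

```python
# Minimal sketch (not the platform's actual code): apply a user-trained,
# character-specific LoRA on top of a generic base model, then generate an
# image from a short prompt. Model ID and file names are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

# Load a generic base model (the "platform base model" a user would select).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # hypothetical base model choice
    torch_dtype=torch.float16,
).to("cuda")

# Overlay the user-trained LoRA weights. This small adapter, not the base
# model, is what lets a specific character's features be reproduced reliably.
pipe.load_lora_weights(
    "./user_uploads",                          # hypothetical directory
    weight_name="character_lora.safetensors",  # hypothetical LoRA file
)

# Another user supplies a prompt and generates an image with the LoRA applied.
image = pipe("a silver-and-red hero posing, studio lighting").images[0]
image.save("output.png")
```

The design point the court later seized on is visible in this sketch: once the LoRA adapter is loaded, the character’s features are output consistently rather than randomly, which is why the court treated the platform’s hosting and recommendation of such models as enhancing its ability to identify and intervene in infringing outputs.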
The unnamed plaintiff (presumably Tsuburaya Productions) alleged that the defendant infringed its information network dissemination rights by making infringing images and models available on the information network after training on the input images, and that the defendant’s use of generative AI technology to train the Ultraman LoRA model and generate infringing images constituted unfair competition. The plaintiff demanded that the defendant cease the infringement and compensate economic damages of 300,000 RMB.
The defendant countered that its AI platform calls third-party open-source model code and integrates and deploys those models according to platform needs, offering a generative AI service to users. The platform does not provide training data and only allows users to upload images to train models, which, the defendant argued, falls within the “safe harbor” rule for platforms and does not constitute infringement.
The Court reasoned:
On the one hand, if a generative AI platform directly performs acts covered by copyright, it may be liable for direct infringement. In this case, however, there is no evidence that the defendant and the user jointly provided the infringing works, and the defendant did not itself perform acts covered by the information network dissemination right.
On the other hand, where the user inputs infringing images and other training materials and decides whether to generate and publish the outputs, the defendant does not necessarily have an obligation to conduct prior review of the training images input by users or of the dissemination of the generated products. Only where the platform is at fault with respect to the specific infringing conduct can it be liable for aiding and abetting infringement.
Specifically, the court weighed the following aspects comprehensively:
First, the nature and profit model of generative AI services. The open-source ecosystem is an important part of the AI industry, and open-source models provide general, basic algorithms. As a service provider facing end users at the application layer, the defendant made targeted modifications and improvements to the open-source model for specific application scenarios and provided solutions and outputs that directly meet users’ needs. Compared with the provider of the open-source model, the defendant directly participates in commercial practice and benefits from the content generated. Given its service type, business logic and cost of prevention, it should maintain a sufficient understanding of the content in its specific application scenarios and bear a corresponding duty of care. In addition, the defendant earns income through users’ membership fees and offers incentives to encourage users to publish trained models, so the defendant can be considered to directly obtain economic benefits from the creative services provided on the platform.
Secondly, the popularity of the copyrighted work and the obviousness of the alleged infringement. The Ultraman works are well known. Multiple infringing pictures appear when browsing the platform homepage and specific categories, and the LoRA model covers or sample pictures directly display the infringing images, making the infringement relatively obvious.
Thirdly, the infringement consequences that generative AI may cause. Generally, the outputs of users’ generative AI activity are difficult to identify or intervene in, and the generated images are random. In this case, however, because the Ultraman LoRA model was used, the character’s features could be output consistently, which enhanced the platform’s ability to identify and intervene in the results of user behavior. Moreover, because of the convenience of the technology, the pictures and LoRA models generated and published by users can be reused repeatedly by other users, so the risk that the infringing consequences would spread was already quite obvious and the defendant should have foreseen the possibility of infringement.
Finally, whether reasonable measures were taken to prevent infringement. The defendant stated in the platform user service agreement that it would not review content uploaded and published by users. After receiving notice of the lawsuit, it took measures such as blocking the relevant content and conducting intellectual property review in the background, demonstrating that it had the ability to take, but had failed to take, necessary preventive measures consistent with the technical level at the time of the infringement.
In summary, the defendant should have known that network users were using its services to infringe the information network dissemination right but did not take necessary preventive measures. It failed to fulfill its duty of reasonable care and was subjectively at fault, constituting aiding and abetting infringement.
Because copyright infringement was established, the court did not need to consider the claims under the Anti-Unfair Competition Law.
The full text of the announcement is available here (Chinese only).

Key Insights on President Trump’s New AI Executive Order and Policy & Regulatory Implications

On January 23, 2025, President Trump issued a new Executive Order (EO) titled “Removing Barriers to American Leadership in Artificial Intelligence” (Trump EO). This EO replaces President Biden’s Executive Order 14110 of October 30, 2023, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (Biden EO), which was rescinded on January 20, 2025, by Executive Order 14148.
The Trump EO signals a significant shift away from the Biden administration’s emphasis on oversight, risk mitigation and equity toward a framework centered on deregulation and the promotion of AI innovation as a means of maintaining US global dominance.
Key Differences Between the Trump EO and Biden EO
The Trump EO explicitly frames AI development as a matter of national competitiveness and economic strength, prioritizing policies that remove perceived regulatory obstacles to innovation. It criticizes the influence of “engineered social agendas” in AI systems and seeks to ensure that AI technologies remain free from ideological bias. By contrast, the Biden EO focused on responsible AI development, placing significant emphasis on addressing risks such as bias, disinformation and national security vulnerabilities. The Biden EO sought to balance AI’s benefits with its potential harms by establishing safeguards, testing standards and ethical considerations in AI development and deployment.
Another significant shift in policy is the approach to regulation. The Trump EO mandates an immediate review and potential rescission of all policies, directives and regulations established under the Biden EO that could be seen as impediments to AI innovation. The Biden EO, however, introduced a structured oversight framework, including mandatory red-teaming for high-risk AI models, enhanced cybersecurity protocols and monitoring requirements for AI used in critical infrastructure. The Biden administration also directed federal agencies to collaborate in the development of best practices for AI safety and reliability, efforts that the Trump EO effectively halts.
The two EOs also diverge in their treatment of workforce development and education. The Biden EO dedicated resources to attracting and training AI talent, expanding visa pathways for skilled workers and promoting public-private partnerships for AI research and development. The Trump EO, however, does not include specific workforce-related provisions. Instead, the Trump EO seems to assume that reducing federal oversight will naturally allow for innovation and talent growth in the private sector.
Priorities for national security are also shifting. The Biden EO mandated extensive interagency cooperation to assess the risks AI poses to critical national security systems, cyberinfrastructure and biosecurity. It required agencies such as the Department of Energy and the Department of Defense to conduct detailed evaluations of potential AI threats, including the misuse of AI for chemical and biological weapon development. The Trump EO aims to streamline AI governance and reduce federal oversight, prioritizing a more flexible regulatory environment and maintaining US AI leadership for national security purposes.
The most pronounced ideological difference between the two executive orders is in their treatment of equity and civil rights. The Biden EO explicitly sought to address discrimination and bias in AI applications, recognizing the potential for AI systems to perpetuate existing inequalities. It incorporated principles of equity and civil rights protection throughout its framework, requiring rigorous oversight of AI’s impact in areas such as hiring, healthcare and law enforcement. Not surprisingly, the Trump EO did not focus on these concerns, reflecting a broader philosophical departure from government intervention in AI ethics and fairness – perhaps considering existing laws that prohibit unlawful discrimination, such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, as sufficient.
The two orders also take fundamentally different approaches to global AI leadership. The Biden EO emphasized the importance of international cooperation, encouraging US engagement with allies and global organizations to establish common AI safety standards and ethical frameworks. The Trump EO, in contrast, appears to adopt a more unilateral stance, asserting US leadership in AI without outlining specific commitments to international collaboration.
Implications for the EU’s AI Act, Global AI and State Legal Frameworks
The Trump administration’s deregulatory approach comes at a time when other jurisdictions, particularly the EU, are moving toward stricter regulatory frameworks for AI. The EU’s Artificial Intelligence Act (EU AI Act), which was adopted by the EU Parliament in March 2024, imposes comprehensive rules on the development and use of AI technologies, with a strong emphasis on safety, transparency, accountability and ethics. By categorizing AI systems based on risk levels, the EU AI Act imposes stringent requirements for high-risk AI systems, including mandatory third-party impact assessments, transparency standards and oversight mechanisms.
The Trump EO’s emphasis on reducing regulatory burdens stands in stark contrast to the EU’s approach, which reflects a precautionary principle that prioritizes societal safeguards over rapid innovation. This divergence could create friction between the US and EU regulatory environments, especially for multinational companies that must navigate both systems. Although the EU AI Act is being criticized as impeding innovation, the lack of explicit ethical safeguards and risk mitigation measures in the Trump EO also could weaken the ability of US companies to compete in European markets, where compliance with the EU AI Act’s rigorous standards is a legal prerequisite for EU market access.
Globally, jurisdictions such as Canada, Japan, the UK and Australia are advancing their own AI policies, many of which align more closely with the EU’s focus on accountability and ethical considerations than with the US’s pro-innovation stance under the Trump administration. For example, Canada’s Artificial Intelligence and Data Act emphasizes transparency and responsible development, while Japan’s AI guidelines promote trustworthy AI principles through multistakeholder engagement. And while the UK has taken a less regulated approach than the EU, it places a strong emphasis on safety through the AI Safety Institute.
The Trump administration’s decision to rescind the Biden EO and prioritize a “clean slate” for AI policy also may complicate efforts to establish global standards for AI governance. While the EU, the G7 and other multilateral organizations are working to align on key principles such as transparency, fairness and safety, the US’s unilateral focus on deregulation could limit its influence in shaping these global norms. Additionally, the Trump administration’s pivot toward deregulation risks creating a perception that the US prioritizes short-term innovation gains over long-term ethical considerations, potentially alienating allies and partners.
A final consideration is the potential for the Trump EO to widen the gap between federal and state AI regulatory regimes, inasmuch as it presages deregulation of AI at the federal level. Indeed, while the EO signals a federal shift toward prioritizing innovation by reducing regulatory constraints, the precise contours of the new administration’s approach to regulatory enforcement – including on issues like data privacy, competition and consumer protection – will become clearer as newly appointed federal agency leaders begin implementing their agendas. At the same time, states such as Colorado, California and Texas have already enacted AI laws with varying scope and degrees of oversight. As with state consumer privacy laws, increased state-level activity in AI also would likely lead to increased regulatory fragmentation, with states implementing their own rules to address concerns related to high-risk AI applications, transparency and sector-specific oversight.
Thus, in the absence of clear federal guidelines, businesses are left with a growing patchwork of state AI regulations that will complicate compliance across multiple jurisdictions. Moreover, if Congress enacts an AI law that prioritizes innovation over risk mitigation, stricter state regulations could face federal preemption. Until then, organizations must closely monitor both federal and state developments to navigate this evolving and increasingly fragmented AI regulatory landscape.
Ultimately, a key test for the Trump administration’s approach to AI is whether it preserves and enhances US leadership in AI or allows China to build a more powerful AI platform. The US approach will undoubtedly drive investment and innovation by US AI companies. But China may pursue collaborative engagement with international AI governance initiatives, which would position it strongly as an international leader in AI. Alternatively, is DeepSeek a flash in the pan, a stimulus for US competition or a portent of the future?
Conclusion
Overall, the Trump EO reflects a fundamental shift in US AI policy, prioritizing deregulation and free-market innovation while reducing oversight and ethical safeguards. However, this approach could create challenges for US companies operating in jurisdictions with stricter AI regulations, such as the EU, the UK, Canada and Japan – as well as some of those US states that have already enacted their own AI regulatory regimes. The divergence between the US federal government’s pro-innovation strategy and the precautionary regulatory model pursued by the EU and these US states underscores the need for companies operating across these jurisdictions to adopt flexible compliance strategies that account for varying regulatory standards.
Pablo Carrillo also contributed to this article.

A Look at U.S. Government’s Changed Approach to Artificial Intelligence Development and Investments

Highlights

In January 2025, the new administration took several steps related to AI technologies and infrastructure
Many previous executive orders were rescinded, but a prior executive order regarding using federal lands for data centers remains in place
The U.S. has also announced major private investments into state-of-the-art AI data centers 

Since the new administration took office, the U.S. has taken several steps to implement new strategies and priorities related to the development of, and investment in, artificial intelligence (AI) technologies.
On Jan. 20, 2025, Executive Order 14110, titled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, was rescinded. It had required companies developing AI to share information about their technologies with the federal government before their products could be made publicly available. All other previous executive orders pertaining to AI also were rescinded, except for an order related to using public lands for data centers.
On Jan. 21, 2025, several private companies announced from the White House a new private venture called the Stargate Project, which intends to invest $500 billion over the next four years building new AI infrastructure, including AI-focused data centers, in the U.S.
On Jan. 23, 2025, Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, was implemented. This new order states that to maintain U.S. leadership in AI innovation, “we must develop AI systems that are free from ideological bias or engineered social agendas.” It also “revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.”
The order further states that it is the “policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
To accomplish those objectives, the order requires:
1) Within 180 days, the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director), and the heads of such executive departments and agencies (agencies) as the APST and APNSA deem relevant, shall develop and submit to the President an action plan to achieve the policy set forth in section 2 of this order.
2) The APST, the Special Advisor for AI and Crypto, and the APNSA, in coordination with the heads of relevant agencies, shall (1) identify policies, directives, regulations and orders taken pursuant to EO 14110 and (2) suspend, revise, or rescind such actions if they are inconsistent with the order’s objectives.
3) Within 60 days, the OMB shall revise its published guidance on AI to align with the order.
Takeaways
The U.S. is taking strides to maintain and extend its edge in AI innovations amid competition from others. The new executive order is one of the steps being taken, and the AI regulatory landscape is continuing to rapidly evolve, making it important to monitor the steps the U.S. and others take in connection with AI.