European Commission Withdraws ePrivacy Regulation and AI Liability Directive Proposals

On February 11, 2025, the European Commission published its 2025 work program (the “Work Program”), which sets out the key strategies, action plans and legislative initiatives the Commission will pursue.
As part of the Work Program, the European Commission announced that it plans to withdraw its proposals for a new ePrivacy Regulation (aimed at replacing the current ePrivacy Directive) and an AI Liability Directive (aimed at complementing the new Product Liability Directive), citing a lack of consensus on their adoption. The withdrawal means that the current ePrivacy Directive and its national transposition laws will remain in force, and it postpones EU-level regulation of non-contractual liability for damages arising from the use of AI.
Read the Work Program.

State Regulators Eye AI Marketing Claims as Federal Priorities Shift

With the increase in AI-related litigation and regulatory action, it is critical for companies to monitor the AI technology landscape and think proactively about how to minimize risk. To help companies navigate these increasingly choppy waters, we’re pleased to present part two of our series, in which we turn our focus to regulators, where we’re seeing increased scrutiny at the state level amidst uncertainty at the federal level.
FTC Led the Charge but Unlikely to Continue AI “Enforcement Sweep”
As mentioned in part one of our series, last year regulators at the Federal Trade Commission (FTC) launched “Operation AI Comply,” which it described as a “new law enforcement sweep” related to using new AI technologies in misleading or deceptive ways.
In September 2024, the FTC announced five cases against AI technology providers for allegedly deceptive claims or unfair trade practices. While some of these cases involve traditional get-rich-quick schemes with an AI slant, others highlight the risks inherent in the rapid adoption of new AI technologies. Specifically, the complaints filed by the FTC involve:

An “AI lawyer” that supposedly could draft legal documents in the U.S. and automatically analyze a customer’s website for potential violations.
Marketing of a “risk-free” AI-powered business whose operators refused to honor money-back guarantees when the business began to fail.
A get-rich-quick scheme that attracted investors by claiming they could easily invest in online businesses “powered by artificial intelligence.”
Positioning a business opportunity supposedly powered by AI as a “surefire” investment and threatening people who attempted to share honest reviews.
An “AI writing assistant” that enabled users to quickly generate thousands of fake online reviews of their businesses.

Since these announcements, dramatic changes have occurred at the FTC (and across the federal government) as a result of the new administration. Last month, the Trump administration appointed FTC Commissioner Andrew N. Ferguson as the new FTC chair, and Mark Meador’s nomination to fill the FTC Commissioner seat left vacant by former chair Lina M. Khan appears on track for confirmation. These leadership and composition changes will likely impact whether and how the FTC pursues cases against AI technology providers.
For example, Commissioner Ferguson strongly dissented from the FTC’s complaint and consent agreement with the company that created the “AI writing assistant,” arguing that the FTC’s pursuit of the company exceeded its authority.
And in a separate opinion supporting the FTC’s action against the “AI lawyer” mentioned above, Commissioner Ferguson emphasized that the FTC does not have authority to regulate AI on a standalone basis, but only where AI technologies interact with its authority to prohibit unfair methods of competition and unfair or deceptive acts and practices.
While it is impossible to predict precisely how the FTC under the Trump administration will approach AI, Commissioner Ferguson’s prior writings, together with Chapter 30 of Project 2025 (drafted by Adam Candeub, who served in the first Trump administration) and its focus on protecting children online, offer insight into the FTC’s likely regulatory priorities for AI.
The impact of the new administration’s different approach to AI regulation is not limited to the FTC and likely will affect all federal regulatory and enforcement activity. This is due in part to one of President Trump’s first executive orders, “Removing Barriers to American Leadership in Artificial Intelligence,” which “revokes certain existing AI policies and directives that act as barriers to American AI innovation.”
That order repealed the Biden administration’s 2023 executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which established guidelines for the development and use of AI. An example of this broader impact is the SEC’s proposed rule on the use of AI by broker-dealers and registered investment advisors, which is likely to be withdrawn based on the recent executive order, especially given the acting chair’s public hostility toward the rule and the emphasis on reducing securities regulation outlined in Chapter 27 of Project 2025. 
The new administration has also been outspoken in international settings regarding its view that regulating AI will give advantages to authoritarian nations in the race to develop the powerful technology.
State Attorneys General Likely to Take on Role of AI Regulation and Enforcement
Given the dramatic shifts in direction and focus at the federal level, it is likely that short-term regulatory action will increasingly shift to the states.
In fact, state attorneys general of both parties have taken recent action to regulate AI and issue guidance. As discussed in a previous client alert, Massachusetts Attorney General Andrea Campbell has emphasized that AI development and use must conform with the Massachusetts Consumer Protection Act (Chapter 93A), which prohibits practices similar to those targeted by the FTC.
In particular, she has highlighted practices such as falsely advertising the quality or usability of AI systems or misrepresenting the safety or conditions of an AI system, including representations that the AI system is free from bias.
Attorney General Campbell also recently joined a coalition of 38 other attorneys general and the Department of Justice in arguing that Google engaged in unfair methods of competition by making its AI functionality mandatory for Android devices, and by requiring publishers to share data with Google for the purposes of training its AI.
Most recently, California Attorney General Rob Bonta issued two legal advisories emphasizing that developers and users of AI technologies must comply with existing California law, including new laws that went into effect on January 1, 2025. The scope of his focus on AI seems to extend beyond competition and consumer protection laws to include laws related to civil rights, publicity, data protection, and election misinformation.
Bonta’s second advisory emphasizes that the use of AI in health care poses increased risks of harm that necessitate enhanced testing, validation, and audit requirements, potentially signaling to the health care industry that its use of AI will be an area of focus for future regulatory action.
Finally, in a notable settlement that was the first of its kind, Texas Attorney General Ken Paxton resolved allegations that an AI health care technology company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of its AI products, including error and hallucination rates.
As AI technology continues to impact consumers, we expect other attorneys general to follow suit in bringing enforcement actions based on existing consumer protection laws and future AI legislation.
Moving Forward with Caution
Recent success by plaintiffs, combined with an active focus on AI by state regulators, should encourage businesses to be thoughtfully cautious when investing in new technology. Fortunately, as we covered in our chatbot alert, there are a wide range of measures businesses can take to reduce risk, both during the due diligence process and upon implementing new technologies, including AI technologies, notwithstanding the change in federal priorities. Other countries – particularly in Europe – may also continue their push to regulate AI. 
At a minimum, businesses should review their consumer-facing disclosures — usually posted on the company website — to ensure that any discussion of technology is clear, transparent, and aligned with how the business uses these technologies. Companies should expect the same transparency from their technology providers. Businesses should also be wary of so-called “AI washing,” which is the overstatement of AI capabilities and understatement of AI risks, and scrutinize representations to business partners, consumers, and investors.
Future alerts in this series will cover:

Risk mitigation steps companies can take when vetting and adopting new AI-based technologies, including chatbots, virtual assistants, speech analytics, and predictive analytics.
Strategies for companies that find themselves in court or in front of a regulator with respect to their use of AI-based technologies.

The BR International Trade Report: February 2025

Recent Developments
President Trump drives forward with “America First” trade policy. Shortly after taking office on January 20, President Trump issued a memorandum to various department heads outlining his “America First” trade policy. Notably, the memorandum paves the way for robust tariffs and calls for executive branch review of various elements of U.S. trade policy. Read our alert for additional analysis. 
United States delays tariffs on imports from Canada and Mexico but imposes 10 percent tariffs on imports from China. On February 1, President Trump, acting under the authority of the International Emergency Economic Powers Act (“IEEPA”), imposed a 25 percent tariff on imports from Canada and Mexico (excluding energy resources from Canada, which were subject to a tariff of 10 percent) and a 10 percent tariff on imports from China. After first threatening to respond in kind—with retaliatory tariffs or other measures—both Canada and Mexico negotiated a 30-day pause in exchange for increased enforcement measures at America’s borders. There was no similar agreement between the United States and China, which became subject to additional tariffs on February 4. Notably, the president initially eliminated the de minimis exemption for certain Chinese-origin imports of items valued under $800, but then later reinstated the exemption.
President Trump announces 25 percent tariff on all steel and aluminum imports entering the United States. On February 10, President Trump signed a proclamation imposing 25 percent tariffs on imports of steel and aluminum from all countries and cancelling previous tariff exemptions. Peter Navarro, a trade advisor to the president, remarked that “[t]he steel and aluminum tariffs 2.0 will put an end to foreign dumping, boost domestic production, and secure our steel and aluminum industries as the backbone and pillar industries of America’s economic and national security.” The new tariffs will take effect on March 12. 
President Trump announces reciprocal tariff regime. On February 13, the president paved the way for what he called “the big one,” reciprocal tariffs directed against countries that impose trade barriers on the United States. Under the new framework, the United States will impose tariffs on imports from countries that levy tariffs on imports of U.S. goods, maintain a value-added tax (“VAT”) system, issue certain subsidies, or implement “nonmonetary trade barriers” against the United States. The president stated that the U.S. Department of Commerce will conduct an assessment, expected to be completed by April 1, to determine the appropriate tariff level for each country.
President Trump sets tariff sights on European Union. President Trump has said he “absolutely” plans to impose tariffs on goods from the European Union to address what he considers “terrible” treatment on trade. In an effort to stave off such measures, the European Union reportedly has offered to lower tariffs on imports of U.S. automobiles. Experts suggest that, in the event of U.S. tariffs, the European Union may retaliate with countermeasures against U.S. technology services. 
Trump and Putin discuss commencing negotiations to end the war in Ukraine. President Trump stated on February 12 that he had a “lengthy and productive” phone call with Russian President Vladimir Putin in which the two leaders discussed “start[ing] negotiations immediately” and “visiting each other’s nations.” The president followed up with a call to Ukrainian President Volodymyr Zelensky, who reported that the call was “meaningful” and focused on “opportunities to achieve peace.” The dialogue comes as Russia and Belarus have released American detainees in recent days.
President Trump and Indian Prime Minister Narendra Modi meet to discuss deepening cooperation. On January 27, President Trump spoke with Indian Prime Minister Narendra Modi to discuss regional security issues, including in the Indo-Pacific, the Middle East, and Europe. Notably, following the phone call, India cut import duties on certain U.S.-origin motorcycles, potentially in an effort to blunt President Trump’s campaign-trail claims that India was a “very big abuser” of the U.S.-India trade relationship. Prime Minister Modi followed up the discussion with a meeting with President Trump at the White House on February 13.
Secretary of State Marco Rubio meets with “Quad” ministers on President Trump’s first full day in office. On January 21, foreign ministers of the “Quad”—a diplomatic partnership between the United States, India, Japan and Australia—convened in Washington, D.C. In a joint statement, the group expressed its opposition to “unilateral actions that seek to change the status quo [in the Indo-Pacific] by force or coercion.”
U.S. Secretary of State Marco Rubio meets with Panamanian President José Raúl Mulino. In early February, Secretary of State Marco Rubio traveled to Panama to meet with Panama’s President José Raúl Mulino and Foreign Minister Javier Martínez-Acha. During the meeting, Secretary Rubio criticized Chinese “influence and control” over the Panama Canal area. Notably, following the meeting with Secretary Rubio, Panama announced that it would let its involvement in China’s Belt and Road initiative expire.
DeepSeek launches an artificial intelligence app, prompting U.S. national security concerns. In January, DeepSeek—a Chinese artificial intelligence (“AI”) startup—released DeepSeek R1, an AI app reportedly less expensive to develop than rival apps. Reports indicate that the United States is investigating whether DeepSeek, in developing its platform, accessed AI chips subject to U.S. export controls in contravention of U.S. law. Commerce Secretary nominee Howard Lutnick echoed these concerns in his recent confirmation hearing.
President Trump issues memorandum launching “maximum pressure” campaign against Iran. On February 4, the president issued a National Security Presidential Memorandum (“NSPM”) restoring his prior administration’s “maximum pressure” policy towards Iran, with a focus on denying Iran a nuclear weapon and intercontinental ballistic missiles. The NSPM directs the U.S. Department of the Treasury and the U.S. Department of State to take various measures exerting such pressure, including imposing sanctions or pursuing enforcement against parties that have violated sanctions against Iran; reviewing all aspects of U.S. sanctions regulations and guidance that provide economic relief to Iran; issuing updated guidance to the shipping and insurance sectors and to port operators; modifying or rescinding sanctions waivers, including those related to Iran’s Chabahar port project (which India has developed at considerable expense); and “driv[ing] Iran’s export of oil to zero.” See the White House fact sheet.
President Trump signs executive order calling for establishment of a U.S. sovereign wealth fund. On February 3, the president issued an executive order directing the Secretary of the Treasury, the Secretary of Commerce, and the Assistant to the President for Economic Policy to develop a plan for the creation of a sovereign wealth fund. A corresponding fact sheet describes the White House’s goals for the fund, including “to invest in great national endeavors for the benefit of all of the American people.” Treasury Secretary Scott Bessent stated that he expects the fund to be operational within the next year.
Dispute between the United States and Colombia over deportation flights prompts brief tariff threat. On January 26, Colombian President Gustavo Petro barred “U.S. planes carrying Colombian migrants from entering [Colombia’s] territory” due to concerns over migrants’ treatment. President Trump responded by ordering 25 percent tariffs on Colombian goods (to be raised to 50 percent in one week), visa restrictions on Colombian government officials and their families, and the cancellation of visa applications. The standoff was resolved later that same day, signaling President Trump’s readiness to use tariffs as a key foreign policy tool.
Impeached South Korean President Yoon Suk Yeol officially charged with insurrection. On January 26, South Korean prosecutors formally charged impeached President Yoon Suk Yeol with insurrection. Yoon becomes the first president in South Korean history to be criminally charged while still in office. In addition to criminal charges, Yoon faces potential removal from office via impeachment. Should the Constitutional Court uphold the impeachment, as many experts anticipate, South Korea will have two months to hold a new election.

Global Data Protection Authorities Issue Joint Statement on Artificial Intelligence

On February 11, 2025, the data protection authorities (“DPAs”) of the UK, Ireland, France, South Korea and Australia issued a joint statement on building trustworthy data governance frameworks to encourage the development of innovative and privacy-protective artificial intelligence (“AI”) (the “Joint Statement”). In the Joint Statement, the DPAs recognize “the importance of supporting players in the AI ecosystem in their efforts to comply with data protection and privacy rules and help them reconcile innovation with respect for individuals’ rights.”
The Joint Statement refers to the “leading role” DPAs have in “shaping data governance” to address the evolving challenges of AI. Specifically, the Joint Statement indicates that the authorities will commit to:

Foster a shared understanding of lawful grounds for processing personal data in the context of AI training.
Exchange information and establish a shared understanding of proportionate safety measures, to be updated in line with evolving AI data processing activities.
Monitor technical and societal impacts of AI.
Reduce legal uncertainties and create opportunities for innovation where data processing is considered essential for AI.
Strengthen interactions with other authorities to enhance consistency between different regulatory frameworks for AI systems, tools and applications, including those responsible for competition, consumer protection and intellectual property.

Read the full Joint Statement.

Minnesota AG Publishes Report on the Negative Effects of AI and Social Media Use on Minors

On February 4, 2025, the Minnesota Attorney General published the second volume of a report outlining the negative effects that AI and social media use is having on minors in Minnesota (the “Report”). The Report examines the harms minors experience as a result of certain design features of these emerging technologies and advocates for legislation that would impose design requirements on such technologies.
Key findings from the Report include:

Minors are experiencing online harassment, bullying and unwanted contact as a result of their use of AI and social media.
Social media and AI platforms are enabling misuse of user information and images.
Lack of default privacy settings in these technologies is resulting in user manipulation and fraud.
Social media and AI platforms are designed to optimize user attention in ways that negatively impact minor users’ wellbeing.
Opt-out options generally have not been effective in addressing these harms.

In the final section of the Report, the Minnesota AG sets forth a number of recommendations to address the identified harms, including:

Develop policies that regulate technology design functions, rather than content published on such technologies.
Prohibit the use of dark patterns that compel certain user behavior (e.g., infinite scroll, auto-play, constant notifications).
Provide users with tools to limit deceptive design features.
Mandate a privacy by default approach for such technologies.
Limit engagement-based optimization algorithms designed to increase time spent on platforms.
Advocate for limited technology use in educational settings.

Thomson Reuters Wins Copyright Case Against Former AI Competitor

Thomson Reuters scored a major victory in one of the first cases addressing the legality of using copyrighted data to train artificial intelligence (AI) models. In 2020, Thomson Reuters sued the now-defunct AI start-up Ross Intelligence for allegedly improper use of Thomson Reuters materials, including the case headnotes in its Westlaw legal research platform, to train Ross’s new AI model.
A key issue before the court was whether Ross Intelligence’s use of the headnotes constituted fair use, which permits a person to use portions of another’s work in limited circumstances without infringing the copyright. Courts weigh four factors in deciding whether a fair use defense succeeds: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion copied in relation to the work as a whole; and (4) the effect of the use on the potential market for, or value of, the work.
In this case, federal judge Stephanos Bibas determined that each side had two factors in its favor. But the fourth factor, which supported Thomson Reuters, weighed most heavily in his finding that the fair use defense was inapplicable because Ross Intelligence sought to develop a competing product. Lawsuits against other companies, like OpenAI and Microsoft, are currently pending in courts throughout the country, and decisions in those cases may involve similar questions about the fair use defense. However, Judge Bibas noted that Ross Intelligence’s AI model was not generative and that his decision addressed only Ross’s non-generative AI model. The distinction between the training data and resulting outputs of generative and non-generative AI will likely be key to deciding future cases.

Three States Ban DeepSeek Use on State Devices and Networks

New York, Texas, and Virginia are the first states to ban DeepSeek, the Chinese-owned generative artificial intelligence (AI) application, on state-owned devices and networks.
Texas was first to tackle the problem when it banned state employees from using both DeepSeek and RedNote on January 31, 2025. The Texas ban includes other apps affiliated with the People’s Republic of China, including “Webull, Tiger Brokers, Moomoo[,] and Lemon8.”
According to the Texas Governor’s press release:
“Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. To achieve that mission, I ordered Texas state agencies to ban Chinese government-based AI and social media apps from all state-issued devices. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.” 

New York soon followed on February 10, 2025, banning DeepSeek from being downloaded on any devices managed by the New York State Office of Information Technology. According to the New York Governor’s release, “DeepSeek is an AI start-up founded and owned by High-Flyer, a stock trading firm based in the People’s Republic of China. Serious concerns have been raised concerning DeepSeek AI’s connection to foreign government surveillance and censorship, including how DeepSeek can be used to harvest user data and steal technology secrets.” The release further states: “The decision by Governor Hochul to prevent downloads of DeepSeek is consistent with the State’s Acceptable Use of Artificial Intelligence Technologies policy that was established at her direction over a year ago to responsibly evaluate AI systems, better serve New Yorkers, and ensure agencies remain vigilant about protecting against unwanted outcomes.”
The Virginia Governor signed Executive Order 26 on February 11, 2025, “banning the use of China’s DeepSeek AI on state devices and state-run networks.” According to the Governor’s press release, “China’s DeepSeek AI poses a threat to the security and safety of the citizens of the Commonwealth of Virginia…We must continue to take steps to safeguard our operations and information from the Chinese Communist Party. This executive order is an important part of that undertaking.”
The ban “directs that no employee of any agency of the Commonwealth of Virginia shall download or use the DeepSeek AI application on any government-issued devices, including state-issued cell phones, laptops, or other devices capable of connecting to the internet. The Order further prohibits downloading or accessing the DeepSeek AI app on Commonwealth networks.”
These three states determined that the Chinese-owned applications DeepSeek and RedNote pose threats by granting a foreign adversary access to critical infrastructure data. These proactive bans will no doubt be followed by other states, much as state-level TikTok bans preceded the bipartisan federal law banning TikTok nationwide. President Trump has paused that ban, despite the well-documented national security threats posed by the social media platform. Hopefully, more states will follow suit in banning DeepSeek and RedNote. Consumers and employers can take matters into their own hands by not downloading either app and banning them from the workplace. Get ahead of the curve, learn from the TikTok experience, and avoid DeepSeek and RedNote now.

USCO Releases Part 2 of its Report on Copyright and Artificial Intelligence

On January 29, 2025, the Copyright Office (USCO) issued the second of its three-part report, Copyright and Artificial Intelligence, relating to its study of how copyright law and policy should respond to the development and use of artificial intelligence. The report grows out of the USCO’s Notice of Inquiry issued in August of 2023, which garnered over 10,000 responses from the public, including from businesses, trade associations, academics, non-profits, artists, and computer scientists. Part 2 deals with the copyrightability of AI-generated materials. Part 1, issued on December 16, 2024, discussed “digital replicas” (digital depictions of an individual) and proposed a new federal law to fill the gap in coverage of existing copyright law and other aligned areas such as rights of publicity. Part 3, expected in early 2025, will discuss the use of copyrighted works for training AI models.
In Part 2, the USCO concludes that existing law on copyrightability adequately accommodates AI-generated output and accordingly makes no recommendation for legislative action. The Copyright Act grants authors limited monopolies on their creations to assure that they have sufficient incentive to create and thereby to enrich culture (i.e., in the language of Article I, Section 8, “[t]o promote the Progress of Science and useful Arts”). As interpreted, authors must be humans. Thaler v. Perlmutter, 687 F. Supp. 3d 140, 149–50 (D.D.C. 2023), Notice of Appeal, No. 23-5233 (D.C. Cir. October 18, 2023) (argued September 19, 2024).
Although the requirement of human authorship would appear, at present, to preclude AI-generated work from receiving copyright protection under the Copyright Act, the USCO considered the pros and cons of creating a new sui generis (and perhaps more limited) protection targeted at AI-generated works. In support of such protection, a robust repository of work, even if created by non-human actors, would arguably further the goal of promoting science and the arts. Such protection might also encourage many people (e.g., non-professionals) to participate in the excitement of creation and to express themselves, and even monetize those expressions, in ways they never dreamed possible.
The USCO was ultimately not persuaded by these arguments. Focusing initially on the AI technology itself, the USCO noted that, unlike humans, machines, software, and algorithms need no incentive to create and therefore need no protection. The USCO was also cautious about increasing incentives to rely on AI-generated works and the concomitant creation of a synthetically diluted culture at the expense of a robust and perpetually renewing repository of human creativity. See Part 2, at 36-37. Citing as evidence the recent challenges faced by writers and musicians, the USCO was also sympathetic to concerns that such reliance might dampen the ability of human creators to monetize their works and thereby degrade the societal incentive to create. The USCO determined that the case of persons with disabilities did not require a different result. Noting its strong support for the empowerment of all creators, the USCO observed that AI is used as an assistive technology to facilitate human creation, and copyright protection remains available so long as AI is used as a tool to recast, transform, or adapt an author’s expression and not to generate that expression. The USCO provided the example of singer Randy Travis, who received a copyright registration for a song he created after suffering a stroke, under circumstances where he used AI to recreate his voice and help realize the musical sounds that he and his musical team desired. Part 2, at 37-38.
Even granting that a work must be “authored” by a human to receive protection, humans can play a variety of roles in creating AI-generated works. When does human activity rise to authorship? The USCO devoted a large part of its discussion, perhaps the most interesting part, to this question, the answer to which, as the USCO explained, is rooted in copyright law’s distinction between ideas and expression. Only the latter is protected under copyright law (17 U.S.C. § 102(b)).
AI output is typically generated by user inputs (“prompts”), often in the form of text (e.g., “draw an image of dolphins at a birthday party”). Typical prompts, according to the USCO, contain merely unprotectible ideas that are then translated by AI into expression. These translation processes are largely random (“black box”) processes that are not predictable or understood by the user. The USCO noted that frequently the output includes many items that were not specified and excludes items that were, and the same prompt often can yield very different results. Viewed in this light, the AI process is akin to the novel writer who translates the high-level suggestions of his or her editor, or the scriptwriter who follows the general ideas and suggestions of a movie treatment. The suggestions of the editor do not make him or her an author of the novel, and the treatment does not make its creator an author of the script for the movie. Community for Creative Non-Violence v. Reid, 490 U.S. 730, 737 (1989) (“person who translates an idea into a fixed, tangible expression entitled to copyright protection”) (emphasis added), cited in Part 2 at 9; cf. Andrien v. Southern Ocean County Chamber of Commerce, 927 F.2d 132, 135 (3d Cir. 1991), cited in Part 2 at 9 (“[A] party can be considered an author when his or her expression of an idea is transposed by mechanical or rote transcription into tangible form under the authority of the party”) (emphasis added); Milano v. NBC Universal, Inc., 584 F. Supp. 2d 1288, 1294 (C.D. Cal. 2008), citing and quoting from Berkic v. Crichton, 761 F.2d 1289, 1293 (9th Cir. 1985) (distinguishing a television treatment from “the actual concrete elements that make up the total sequence of events and the relationships between the major characters”).
Importantly, even a detailed string of prompts, involving a selection and adoption process leading iteratively to the final output, does not, according to the USCO, confer the requisite control for authorship. As stated in Part 2 (at 20):
“Repeatedly revising prompts does not change this analysis or provide a sufficient basis for claiming copyright in the output. . . . By revising and submitting prompts multiple times, the user is ‘re-rolling’ the dice, causing the system to generate more outputs from which to select, but not altering the degree of control over the process. No matter how many times a prompt is revised and resubmitted, the final output reflects the user’s acceptance of the AI system’s interpretation, rather than authorship of the expression it contains.”
This conclusion may be hard to swallow for some. The iterative process more or less reflects the way many artists work, adopting small random acts on the page or canvas into integral parts of their work. The action painting of Jackson Pollock (referenced in Part 2), involving the dripping of paint onto a canvas, is only a more extreme example of this regular phenomenon. While acknowledging a certain randomness in both prompting and the artistic process, the USCO distinguished the latter by the control that the artist, unlike the AI user, exerts physically over a process that is transparent and understood. “Jackson Pollock’s process of creation,” said the USCO, “did not end with his vision of a work. He controlled the choice of colors, number of layers, depth of texture, placement of each addition to the overall composition – and used his own body movements to execute each of these choices.” Part 2 at 20.
The approach of the USCO differs markedly from that taken in a highly publicized case in China, cited by the USCO at 28-29, which accorded copyright protection to an image created using Stable Diffusion and recognized the person using the AI tool as the author. In addition to making subsequent adjustments and modifications, the “author” used over 150 prompts to refine the image. In contrast to the USCO’s position on iterative prompts, the court in that case considered the use of prompts as evidence of the control and creativity exerted by the author. The spirit of the Chinese court’s ruling stands in striking contrast to the USCO’s:
“Therefore, when people use an AI model to generate pictures, there is no question about who is the creator. In essence, it is a process of man using tools to create, that is, it is man who does intellectual investment throughout the creation process, [not the] AI model. The core purpose of the copyright system is to encourage creation. And creation and AI technology can only prosper by properly applying the copyright system and using the legal means to encourage more people to use the latest tools to create. Under such context, as long as the AI-generated images can reflect people’s original intellectual investment, they should be recognized as works and protected by the Copyright Law.”
Li v. Liu, Dispute over Copyright Infringement of the Right of Attribution and Right of Information Network Distribution of works (as translated), at 13 (Beijing Internet Ct. November 27, 2023).
Various statutes in the UK, Hong Kong, New Zealand, and India, though enacted before the proliferation of AI over the last couple of years, allow computer-generated works to be copyrighted in the name of the person causing the creation of the work. The USCO noted that it remains to be seen how foreign countries interpret and apply their laws, how they harmonize their laws among themselves, and how content creators and information technology developers respond to these emerging laws. Part 2, at 29. Finally, with respect to its own conclusions on prompts, the USCO left open the possibility that advances in technology giving humans more control over generated content might merit a reevaluation of the conclusions in the report. Part 2, at 21.
Some of the USCO’s other conclusions were less nuanced, and perhaps less debatable, than its conclusions with respect to prompts. For example, even if a particular output might not be protectible, the creativity embodied in a human’s modification of an AI output (the derivative aspects) may well be. Likewise, if certain AI-generated components are themselves not protectible, a human compilation of those components may well be (e.g., organizing/compiling the AI-generated frames of a graphic novel or comic book). Additionally, the USCO noted that creative prompts (e.g., a drawing combining flowers and a human head) may result in a protectible AI image to the extent that the image transcribes the prompt. Finally, the USCO saw no need to enhance protections for AI technologies themselves, noting that first-mover incentives along with existing copyright, patent, and trade secret protections for such technologies are sufficient.
For entities in the U.S. wanting to assure ownership of their creative works, the USCO’s position on AI-generated content might induce a bit of caution in using AI in the creative process, and it may offer some relief, at least at the margins, for people whose livelihood depends on their human creativity. Of course, ultimately it will be Congress and the courts that decide the direction of U.S. law.

Court Definitively Rejects Fair Use Defense in AI Training Case

In one of the most closely watched copyright cases this year, a Delaware court rejected defendant ROSS Intelligence’s (“ROSS”) fair use and other defenses, vacating its previous stance and granting summary judgment in favor of plaintiff Thomson Reuters (“Reuters”). The case stems from allegations that ROSS used copyrighted material from Reuters’ legal research platform, Westlaw, to train its artificial intelligence (“AI”)-driven legal research engine. While the decision will certainly be informative to the dozens of pending lawsuits against AI developers, it is important to note that the scope of the opinion is limited to the specific facts of this case. Specifically, generative AI was not involved, and the court’s analysis of the fair use factors was heavily focused on the fact that the parties are direct competitors.
The fair use analysis considered the four statutory factors, with factors one and four weighing most heavily.
Factor One—Purpose and Character of the Use: The court easily determined that ROSS’s use was commercial, as ROSS sought to profit from the copyrighted material without paying the customary price. Significantly, the court also found that the use was not transformative: ROSS used the copyrighted material to build a competing legal research tool that closely resembled Westlaw’s functionality. Finally, the court rejected ROSS’s attempts to align its actions with a string of cases involving so-called “intermediate copying” of computer code, explaining that although the final commercial product ROSS presented to consumers was not a direct copy (the copying occurred at an intermediate step), ROSS’s copying was not necessary. Accordingly, because the use lacked a different purpose or character, the court found this factor weighed against fair use.
Factor Two—Nature of the Copyrighted Work: While the court acknowledged that Reuters’ headnotes involved editorial judgment and the minimum amount of creativity required for copyright validity, they were not highly creative works. Rather, they were more akin to factual compilations, which receive less protection under copyright law. The court found that this factor weighed in favor of fair use, but noted that it rarely plays a significant role in fair-use determinations.
Factor Three—Amount and Substantiality of the Use: Although ROSS copied a significant number of Westlaw headnotes, its final product did not present them directly to users. Instead, ROSS used the headnotes to train its AI to help retrieve judicial opinions. Given that the public did not receive direct access to the copyrighted material, the court found that this factor weighed in favor of fair use.
Factor Four—Market Effect: The court explained that ROSS’s use created a market substitute for Reuters’ legal research platform, directly harming its core business. In addition, Reuters had a potential market for licensing its headnotes as AI training data, and ROSS’s use undermined that opportunity. The court emphasized that this “is undoubtedly the single most important element of fair use,” and found that it weighed decisively against ROSS and against a finding of fair use.
Putting it Into Practice: Some key takeaways from this decision and its potential impact on other AI cases:

this decision is fact-specific and does not address generative AI;
the decision signals the limits of fair use, particularly where copyrighted material is used for a non-transformative purpose to develop a competing product; and
as AI litigation continues to evolve, companies developing or deploying AI should invest in ongoing legal training and ensure they have (and keep updated) effective policies to manage AI risks, both when training AI and when using AI to generate content.


Legal AI Unfiltered: 16 Tech Leaders on AI Replacing Lawyers, the Billable Hour, and Hallucinations

With AI-powered tools promising efficiency gains and cost savings, AI is fundamentally changing the practice of law. But as AI adoption accelerates, major questions arise: Will AI replace lawyers? What does AI adoption mean for the billable hour? And can hallucinations ever be fully eliminated?
To explore these issues, we surveyed 16 tech leaders who are at the forefront of AI-driven transformation. They provided unfiltered insights on the biggest AI adoption challenges, AI’s effect on billing models, the potential for AI to replace lawyers, and the persistent problem of hallucinations in legal AI tools. Here’s what they had to say:
Why Are Law Firms Hesitant to Adopt AI Tools?
According to our survey of tech leaders, law firms’ hesitation in adopting AI is driven by several key factors, primarily concerns about accuracy, risk, and economic incentives. Many firms worry that AI tools can generate incorrect or misleading information while presenting it with unjustified confidence, making mistakes hard to detect. Additionally, larger firms that rely on the billable hour see efficiency-driven AI as a potential threat to their revenue models. Other firms lack a clear AI strategy, making AI adoption and integration difficult. Trust, data privacy, and liability remain major concerns.
More specifically, here’s what tech leaders had to say about law firm hesitancy in adopting AI:

Daniel Lewis, CEO, LegalOn Technologies: “Law firms are hesitant to adopt AI over risk and liability concerns — accuracy and client confidentiality matter most. They need professional-grade AI that is accurate and secure. Solve that, and firms will break through business and organizational barriers—unlocking immense value for themselves and their clients.” 
Kara Peterson, Co-Founder, descrybe.ai: “Because you can’t really count on it to be right—at least not yet. And unlike humans, when AI is unsure, it doesn’t admit it. In fact, it speaks with great authority and persuasiveness even when it is completely wrong. This means human lawyers must double-check everything because the errors are hard to spot. For many firms, this is simply too big a barrier to overcome. They are waiting for more reliable and error-free tools before jumping in.” 
Katon Luaces, President & CTO, PointOne: “The main reason law firms are hesitant to adopt AI is reliability.” 
Ted Theodoropoulos, CEO, Infodash: “A primary reason law firms hesitate to adopt AI is the absence of a comprehensive strategy. According to Thomson Reuters’ 2024 Generative AI in Professional Services survey, only 10% of law firms have a generative AI policy. Policies typically stem from well-defined strategies; without a clear strategy, formulating effective policies becomes challenging. Consequently, it’s likely that fewer than 10% of firms possess an AI strategy. Often, firms appoint C-suite or director-level AI/innovation roles without a pre-established strategy, expecting these individuals to develop one. However, strategic planning is most effective when initiated from the top down, and lacking this foundation can lead to unsuccessful AI integration.” 
Dorna Moini, CEO/Founder, Gavel: “Law firms are mainly cautious because they need to ensure that any new technology meets their high standards for accuracy and confidentiality. They have built a reputation on careful, detailed work and worry that premature adoption might compromise quality. However, as AI improves and its track record strengthens, it can support lawyers in routine tasks without sacrificing the meticulous approach that defines legal practice.” 
Troy Doucet, Founder, AI.Law: “Fear. Perfect is currently the enemy of the good, and as that subsides, lawyers will use it more.” 
Colin Levy, Director of Legal, Malbek: “The risk of hallucinating, where a tool produces inaccurate or misleading information, is a key reason. Secondarily to this, the lack of transparency around AI tools and the data they use/rely on is another cause of concern and hesitancy for many law firms.” 
Gil Banyas, Co-Founder & COO, Chamelio: “The #1 reason law firms are hesitant to adopt AI is the lack of urgency due to their billable hour business model. Since firms generate revenue based on time spent, there’s no immediate financial incentive to implement efficiency-boosting AI tools that could reduce billable hours.” 
Arunim Samat, CEO, TrueLaw: “Impact to the billable hour.” 
Greg Siskind, Co-founder, Visalaw.ai: “Concerns regarding answer quality.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “The answer depends on a law firm’s familiarity with AI and its awareness of the current legal AI market. Attorneys with little to no understanding of AI tend to be hesitant, often due to concerns about security, accuracy, and reliability. Those with some knowledge of AI and legal technology are more skeptical about its practical applications and return on investment, particularly given that many legal AI solutions require lengthy, complex implementations and significant change management.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “Data privacy and hallucinations are the most common concerns we hear from law firms. Lawyers want to ensure the data they provide will not go towards training any models, and they want to know that the outputs of models are reliable and grounded in truth.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “Law firms need to be sure that any AI tool they adopt will not compromise the precision or confidentiality required in legal work.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “Trust. Lawyers need to have complete confidence in their tools, and AI can sometimes feel like a ‘black box’—making decisions without clear explanations. If they don’t fully understand how AI reaches conclusions, it’s hard to trust it with high-stakes legal work. But here’s the thing—AI has been used in law for over a decade. Tools like Technology-Assisted Review (TAR) have proven to be as reliable as human review when used correctly. The hesitation isn’t really about whether AI can be trustworthy; it’s about transparency and control. The good news? With the right safeguards, oversight, and clear explanations of AI-driven decisions, law firms can use AI confidently. It’s not about replacing legal judgment—it’s about supporting it with smarter, faster tools.”

Where Does AI Excel, and Where Is It Still Overhyped?
Legal AI tools have made significant strides in the past two years, particularly in automating routine tasks that involve large volumes of data and well-defined processes. However, AI still struggles with more nuanced legal work that requires contextual understanding and strategic reasoning. Most legal tech leaders identified clear areas where AI is proving effective, alongside areas where expectations may still outpace reality.
Currently, AI excels in contract review, where it can analyze and summarize contracts with high accuracy. It is also highly effective in document review and due diligence, flagging inconsistencies and surfacing relevant documents. Additionally, AI has reliably streamlined e-discovery, significantly reducing the time spent reviewing electronic documents. Another strength is its ability to summarize and extract data from documents. 
However, AI remains less reliable in legal brief writing, as it struggles with complex legal arguments and strategic reasoning. Similarly, while it can return results for case law research, it often fails to grasp legal context, hierarchy, and nuances—though some legal tech leaders hold differing views on its efficacy in this area. 
Legal tech leaders shared their insights into this “jagged frontier” of AI’s capabilities:

Scott Stevenson, CEO, Spellbook: “Excelling: Contract review and drafting; Hype: Litigation brief writing.” 
Daniel Lewis, CEO, LegalOn Technologies: “There is a ‘jagged frontier’ between what AI handles well and where it can improve. It excels at repetitive and time-consuming tasks with clear guardrails, like contract review, drafting, and some types of Q&A, while less structured tasks like legal research carry a higher risk of hallucination. Contract review stands out for its defined standards, verifiable outputs, and clear objectives.” 
Kara Peterson, Co-Founder, descrybe.ai: “AI is incredible at generating and interpreting text. However, it is not yet good at producing error-free, multistep legal workflows—though it is getting close. I wouldn’t call agentic AI in law ‘hype,’ but it is still some distance away from being fully reliable.” 
Katon Luaces, President & CTO, PointOne: “The great majority of legal tasks have yet to be mastered by AI. However, there are many tasks frequently done by lawyers that AI is genuinely excelling at. For example, AI is excellent at administrative work such as filling in billing codes and writing time entry descriptions—tasks that aren’t legal tasks historically done by lawyers.” 
Ted Theodoropoulos, CEO, Infodash: “AI is excellent at acting as a sounding board during ideation and bringing additional perspective to the creative process. That said, it’s not good at generating novel ideas as it is limited to the confines of its training data. AI is good at summarization, extraction, and classification but still has a lot of room for improvement. For higher risk tasks it shouldn’t be relied upon solely. Currently, AI is not good at understanding context and nuance. As the infamous Stanford paper on legal research tools pointed out last year, these tools misunderstand holdings, struggle to distinguish between legal actors, and fail to grasp the order of authority.” 
Dorna Moini, CEO/Founder, Gavel: “Today, AI is particularly effective at tasks like document review, legal research, and contract analysis. It can process large volumes of information quickly and flag important details for further review. On the other hand, AI is still far from being able to handle complex legal strategy or provide the nuanced judgment that experienced lawyers offer.” 
Colin Levy, Director of Legal, Malbek: “AI is still not great at handling complexity or ambiguity, but some tools are getting better. Currently existing tools, however, are really good at conducting basic legal research and document review and analysis. AI tools are best at specific and well-defined tasks.” 
Gil Banyas, Co-Founder & COO, Chamelio: “Genuinely Excelling: Document review & due diligence (finding relevant clauses, inconsistencies across contracts), legal research (case law/statute search, surfacing relevant precedents), and contract analysis (template comparison, risk flagging). Current hype: Legal writing from scratch, strategy/counseling, negotiation, and settlement work.” 
Arunim Samat, CEO, TrueLaw: “AI excels in document review for eDiscovery; however, prompt engineering complicates the measurement of CAL review metrics. Caselaw research remains little more than an advanced search function in a database, as AI models are not yet capable of formulating case strategies while accurately citing relevant case law. Teaching LLMs to shepardize effectively remains a complex challenge.” 
Greg Siskind, Co-founder, Visalaw.ai: “Practice management advice, legal research (via curated libraries), summarization, legal drafting and analysis.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “E-discovery (and other procedural solutions) are likely the most advanced so far, as they are easier to develop and face fewer challenges related to issues like hallucinations and transparency. Meanwhile, some tools in more substantive areas, such as legal research, are marketed with great enthusiasm but may not fully meet expectations just yet. However, that doesn’t mean they won’t get there—it may simply take more time for them to be perfected.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “AI is excelling at tasks where it has access to a wealth of “grounding” data. Given the valid concerns around hallucinations, the most effective work an AI can do relies less on what the AI model intrinsically “knows” and more on analyzing data provided by the user or external sources (such as case law). For example, AI excels at document review because it’s processing and transforming existing data and at much faster rates than humans can.” 
Troy Doucet, Founder, AI.Law: “The hype/reality issue is more from the companies that say they do AI but don’t really have an offering. We find AI can do just about anything if you know how to work with it.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “Automating repetitive tasks such as document review, data extraction, and research. AI’s ability to aggregate information from multiple sources demonstrates that AI is already effective in these areas. AI still struggles with tasks that require nuanced judgment and strategic thinking.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “AI is making a real impact in legal work, especially when it comes to handling large volumes of documents. One area where it truly shines is summarizing lengthy legal materials—think trial packages, depositions, and case files. Plus, AI helps minimize human error by ensuring critical information isn’t overlooked—especially in claims and litigation.”

Will “Hallucinations” in Legal AI Tools Ever Be Eliminated?
AI hallucinations—when a model generates incorrect or fabricated information—remain one of the biggest concerns for lawyers when using AI-powered legal tools. While advancements in AI continue to mitigate these issues, experts largely agree that hallucinations will likely persist to some degree due to the probabilistic nature of LLMs. 
Nonetheless, some legal tech leaders believe that hallucinations can be eliminated completely.
Legal tech companies are taking different approaches to address the “hallucination challenge,” from refining training data to improving AI oversight and validation systems. Many companies focus on “grounding” AI models in authoritative legal content, ensuring they pull from verified sources rather than relying solely on predictive algorithms. Others are developing fact-checking layers and human-in-the-loop review processes to minimize errors before outputs reach end users.
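To make these mitigation patterns concrete, here is a minimal, hypothetical sketch of a grounding-plus-validation pipeline. It is not any vendor’s actual implementation: the corpus, retrieval function, and model call below are invented stand-ins, but the overall shape (retrieve from verified sources, constrain the model to them, and check citations before anything reaches the user) mirrors the approaches described above.

```python
import re

# Stand-in "verified corpus" keyed by passage ID. In a real product this
# would be a vetted legal database, not a hard-coded dict.
CORPUS = {
    "P1": "A contract generally requires offer, acceptance, and consideration.",
    "P2": "Consideration must be bargained for and have legal value.",
}

def search_verified_corpus(question, k=5):
    """Naive keyword retrieval standing in for a real search index."""
    words = set(question.lower().split())
    hits = [(pid, text) for pid, text in CORPUS.items()
            if words & set(text.lower().split())]
    return hits[:k]

def llm_complete(prompt):
    """Placeholder for a real model call; returns a canned, cited answer."""
    return "A contract generally requires offer, acceptance, and consideration [P1]."

def grounded_answer(question):
    passages = search_verified_corpus(question)
    context = "\n".join(f"[{pid}] {text}" for pid, text in passages)
    prompt = ("Answer ONLY from the passages below and cite passage IDs.\n\n"
              f"{context}\n\nQuestion: {question}")
    draft = llm_complete(prompt)
    # Validation layer: every cited ID must be one we actually retrieved;
    # otherwise the draft is routed to human review instead of the user.
    valid_ids = {pid for pid, _ in passages}
    cited_ids = set(re.findall(r"\[(\w+)\]", draft))
    if not cited_ids or not cited_ids <= valid_ids:
        return "NEEDS HUMAN REVIEW:\n" + draft
    return draft

print(grounded_answer("What does a contract require?"))
```

Running the sketch prints the cited draft; swap in a citation to an unknown ID and the output is flagged for human review instead of being shipped, which is the “human-in-the-loop” fallback in miniature.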
Here’s what the heads of these companies had to say about hallucinations: 

Scott Stevenson, CEO, Spellbook: “If you force an AI tool to do something that is impossible, it will hallucinate. If you give it achievable tasks and supply it with correct information that can fit in its short-term memory, it generally will not. We no longer hear of customers complaining about hallucination at Spellbook.” 
Daniel Lewis, CEO, LegalOn Technologies: “Eliminating hallucinations entirely may be out of reach for now but substantially reducing them is achievable. At LegalOn, we do this by grounding AI in authoritative legal content built by our in-house lawyers, ensuring accuracy from the start. Thoughtful product design can also make a big difference in helping users quickly evaluate the reliability of an AI-generated answer.” 
Kara Peterson, Co-Founder, descrybe.ai: “Given how quickly AI has advanced, it’s hard to imagine that hallucinations won’t eventually be solved. I expect AI will develop self-monitoring capabilities, which could potentially eliminate this issue once and for all.” 
Katon Luaces, President & CTO, PointOne: “Some amount of ‘hallucination’ is done even by humans when reasoning and writing; we just frame them differently. These errors are fundamental to the pursuit of complex tasks and will never be eliminated completely. That said, for certain tasks, AI already has a lower error rate than the 75th percentile lawyer and will continue to improve.” 
Ted Theodoropoulos, CEO, Infodash: “Currently, no legal tech company has completely eradicated hallucinations in AI outputs. Some vendors claim to have solved this issue, but such assertions often don’t withstand thorough examination. Given the substantial investments in AI research, many of the brightest minds are dedicated to addressing this challenge, which suggests a positive outlook. However, as of now, hallucinations remain an inherent aspect of large language models, and ongoing efforts continue to mitigate this issue.” 
Nathan Walter, CEO, Briefpoint: “Hallucinations are a symptom of LLMs’ infancy – not a requisite part of their functionality. They can and will be solved through ‘trust but verify’ implementations wherein all generated citations can be quickly verified by the user.” 
Dorna Moini, CEO/Founder, Gavel: “LLMs sometimes generate errors or ‘hallucinations’ due to the way they predict text based on patterns in data. Developers are making progress with safeguards and improved models to reduce these occurrences. While it may not be possible to completely eliminate them, continuous improvements should help make AI more reliable for legal applications.” 
Colin Levy, Director of Legal, Malbek: “Unclear. This seems to be awfully dependent on a) how these models are designed and b) the amount (breadth + depth) of data used to train the models. Currently, data set size is a major limitation of existing models.” 
Gil Banyas, Co-Founder & COO, Chamelio: “Hallucinations are an inherent feature of LLMs, but that’s okay. Leading legal tech companies are building comprehensive systems where LLMs are just one component. With the right checks and balances in place, hallucinations can be effectively contained.” 
Arunim Samat, CEO, TrueLaw: “LLMs, by their nature, are probabilistic generating machines, and with probability, nothing is certain. Hallucinations are highly use-case-specific. In document classification for eDiscovery, hallucinations are easier to measure using standard precision and recall metrics. In contrast, generative tasks present greater challenges, though the risk can be minimized—almost to zero—using grounding techniques. However, given the probabilistic nature of LLMs, there are no statistical guarantees. Eliminating hallucinations entirely would imply creating an “information black hole”—a system where infinite information can be stored within a finite model and retrieved with 100% accuracy. In its current form, I don’t believe this is possible.” 
Greg Siskind, Co-founder, Visalaw.ai: “I think we will have this problem for a couple of years but at a diminishing rate. I think after about five years or so the problem will have largely disappeared.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “Legal tech companies might not be able to eliminate hallucinations entirely, but they’re putting stronger guardrails in place to keep them in check. Engineers are using structured prompts, fine-tuning models, and building smarter architectures to reduce them. Plus, companies are rolling out advanced validation layers, fact-checking systems, and other safeguards to catch and correct errors. While hallucinations are likely to stick around as a natural part of LLMs, these improvements will go a long way in making legal AI more accurate and reliable.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “As with any technology, flaws like these will probably never be completely eliminated. However, there have been significant advances both in model technology and in data augmentation in just the last 6 months that have vastly improved accuracy. When paired with improved explainability and citation features, AI-generated responses are becoming much more verifiable and trustworthy.” 
Troy Doucet, Founder, AI.Law: “This actually isn’t hard to avoid from a programming standpoint for a company building on top of LLMs. The LLMs themselves will figure this out too once they make it a priority.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “It’s important to reference citations and rely on validated sources to manage the risk of inaccurate outputs. Some degree of error remains inherent in current AI models, necessitating human review.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “AI hallucinations are an inherent challenge of LLMs. While they may never fully disappear, legal tech companies can significantly reduce them with smarter AI design. In law, where accuracy is everything, even one AI-generated mistake—like the Canadian lawyer citing fake precedents—can be a serious liability. But fixing this issue isn’t just about human oversight—it starts with using AI built for the job. How do we cut down on hallucinations? Extractive AI. Instead of generating new interpretations, extractive AI pulls and organizes key details directly from source documents, keeping everything factually accurate.”
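Samat’s observation that hallucination in classification-style tasks such as eDiscovery review can be measured with standard precision and recall is straightforward to make concrete. The following minimal Python sketch (illustrative only; the model calls and attorney labels are invented) scores a model’s responsiveness decisions against an attorney’s review:

    # Illustrative only: scoring an eDiscovery-style document classifier
    # with standard precision and recall. The labels below are invented.
    def precision_recall(predicted, actual):
        tp = sum(p and a for p, a in zip(predicted, actual))      # true positives
        fp = sum(p and not a for p, a in zip(predicted, actual))  # false positives
        fn = sum(not p and a for p, a in zip(predicted, actual))  # false negatives
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # True = the document was marked responsive.
    model_calls     = [True, True, False, True, False]
    attorney_review = [True, False, False, True, True]

    p, r = precision_recall(model_calls, attorney_review)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67

As the quotes above note, generative tasks are harder to score this way, which is why grounding and citation-verification techniques carry more of the load there.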
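Earnshaw’s distinction between generative and extractive approaches can be illustrated the same way. In this toy sketch (a simple regex extractor; real products use far more robust parsing), key fields are pulled verbatim from the source document rather than restated by a model, so every output can be traced back to the record:

    import re

    # Toy extractive pipeline: rather than generating new text (which can
    # hallucinate), pull key fields verbatim from the source document.
    document = (
        "Claimant was examined on 03/14/2023 and again on 06/02/2023. "
        "Total billed charges were $4,250.00."
    )

    dates = re.findall(r"\b\d{2}/\d{2}/\d{4}\b", document)
    amounts = re.findall(r"\$[\d,]+\.\d{2}", document)

    # Every extracted value is a literal substring of the source, so each
    # one can be verified against the underlying document.
    print("Dates:", dates)      # ['03/14/2023', '06/02/2023']
    print("Amounts:", amounts)  # ['$4,250.00']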

Will AI Change the Billable Hour Model?
AI’s increasing role in legal workflows may be putting pressure on the billable hour model. While some firms have already transitioned to flat-fee and subscription-based billing structures, others remain hesitant to abandon traditional hourly billing. 
Most legal tech leaders agree that AI will drive efficiency and encourage alternative pricing models, but the complete demise of the billable hour remains unlikely in the near future:

Scott Stevenson, CEO, Spellbook: “Yes. We see many boutique firms moving into flat fee billing, increasing their margins substantially.” 
Daniel Lewis, CEO, LegalOn Technologies: “Reports about the death of the billable hour continue to feel exaggerated. AI-driven efficiencies will push clients to put pressure on the amount of time billed, but rather than a dramatic overturning, we’ll see adaptation. Many clients still prefer the billable hour for certain work, and firms will evolve to deliver more value and perhaps different services — faster and in less time. The real competition will be in who can leverage AI to provide the best services, not just on billable hours and rates.” 
Kara Peterson, Co-Founder, descrybe.ai: “Absolutely. The billable hour likely won’t disappear entirely, but there will be significant pressure on this payment model, forcing it to evolve. I can envision a future where flat fees and even subscription-based models become far more common. While these changes may start at the lower end of the market, that’s not a given. Additionally, as AI makes time-consuming tasks more efficient, we may actually see hourly rates rise for human lawyers—especially for high-level legal expertise.” 
Katon Luaces, President & CTO, PointOne: “AI will continue to put pressure on law firms to adopt alternative fee arrangements and even make some tasks completely non-billable.” 
Ted Theodoropoulos, CEO, Infodash: “The billable hour has been the cornerstone of the law firm economic model for 50 years, but AI is increasingly challenging its dominance. AI-driven tools are significantly reducing time spent on tasks like document review, legal research, and contract drafting. As a result, clients are demanding value-based pricing, pushing firms toward alternative fee arrangements (AFAs) such as fixed fees and subscription models. ALSPs and Big Four firms like KPMG are already leveraging AI for scalable, cost-effective legal services. If traditional firms do not adapt, these entrants will capture the value-driven segment of the market.” 
Nathan Walter, CEO, Briefpoint: “AI is changing the billable hour – many firms using Briefpoint have switched to flat rate billing on the tasks Briefpoint automates (discovery response and request drafting). Elimination of the billable hour is another story, and I don’t think we’ll see that in the next five years – eliminating the billable hour would require a fundamental restructuring of firms’ business model, not to mention a revision of attorney fee-shifting statutes. Law firms will maintain their business model until it doesn’t work. For the business model to ‘not work,’ law firms must lose business because of the billable hour. While we’re seeing significant increases in in-house teams asking about AI usage, that’s a far cry from conditioning representation on flat-rate billing.” 
Dorna Moini, CEO/Founder, Gavel: “AI is already shifting the landscape away from the traditional billable hour by enabling alternative business models. These models can offer clients greater transparency on costs and outcomes while allowing lawyers to work more efficiently and profitably. This change benefits both sides by aligning pricing with value rather than time spent.” 
Colin Levy, Director of Legal, Malbek: “AI will allow law firms to more easily scale work, e.g. take on more work without increasing headcount. AI will also increase competitive pressures from ALSPs and alternative fee arrangements. The outright disappearance of the billable hour is unlikely given current economic and technical factors.” 
Gil Banyas, Co-Founder & COO, Chamelio: “AI won’t eliminate the billable hour in the short term, but it will force firms to evolve their pricing. As routine tasks become automated, firms will need to shift toward value-based pricing for complex work while offering fixed fees for AI-assisted tasks.” 
Arunim Samat, CEO, TrueLaw: “We believe that law firms investing in proprietary AI models will unlock new revenue streams by monetizing their expertise. By training AI on their unique knowledge and experience, firms can offer novel, AI-powered services that were previously impractical. Since these services have predictable costs, they lend themselves well to flat-fee arrangements, allowing firms to introduce new revenue models without immediately disrupting the traditional billable hour structure. We’re already seeing firms offer proactive litigation risk monitoring and other recurring AI-driven services to their clients.” 
Greg Siskind, Co-founder, Visalaw.ai: “I think that is inevitable. And in practice areas like immigration, which are largely flat-billed already, we’re seeing more rapid adoption of AI and more innovation in that space. I think the rest of the bar will follow.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “AI will definitely change the billable hour, but it won’t make it disappear entirely. As legal AI tools streamline workflows and improve efficiency, more firms will likely shift toward flat-fee or value-based pricing models, especially for routine work. However, billable hours will still play a role, particularly for complex matters that require deep legal expertise. That said, once firms fully leverage AI, the billable hour may no longer be the most lucrative model.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “AI will push the needle towards alternative fee arrangements at a rate faster than ever before. It is inspiring more innovation in how law firms bill their clients, especially for the work that AI is accelerating. However, AI is also increasing efficiency and allowing firms to take on more cases, which could offset this trend and even lead to increased profitability.” 
Troy Doucet, Founder, AI.Law: “In 10 years, we won’t have billable hours the way they exist today, if at all. The value of lawyers will be derived from broader engagement with their clients: things like strategy and risk management.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “Efficiency gains from using AI are reducing the time spent on routine work, leading to alternative billing models (such as flat fees or value-based pricing), while the billable hour might still be used as an internal performance metric. Ultimately, AI will likely change how legal work is billed without entirely eliminating traditional metrics.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “Absolutely—but not everyone is on board just yet. Some law firms are resistant to AI because it threatens the traditional billable hour model. If AI can handle document review, legal research, and analysis in a fraction of the time, that means fewer billable hours. But here’s the catch: as more firms embrace AI, clients will start expecting the same efficiency everywhere. Law firms that resist AI risk falling behind as clients demand faster, more cost-effective legal services. Instead of measuring value by hours worked, the industry will shift toward a more results-driven approach—where expertise, strategy, and outcomes matter more than time spent on tedious tasks. Billable hours won’t disappear overnight, but AI is already pushing the legal industry toward a future where efficiency and results take center stage.”

Will AI Replace Lawyers?
As AI continues to improve, the question of whether it will replace lawyers remains a topic of debate. While AI excels at automating routine legal tasks, legal tech leaders largely agree that it lacks the judgment, strategic thinking, and interpersonal skills necessary to fully replace attorneys. Instead, AI is expected to augment legal professionals, allowing them to focus on higher-value work while automating administrative and repetitive processes.
Here’s what legal tech leaders had to say about whether AI will replace lawyers: 

Scott Stevenson, CEO, Spellbook: “No. Even if you can automate legal work, clients can’t understand what the documents you produce mean, and they ultimately need to be able to trust a human’s judgment. Six years ago, we built a product that cut out lawyers and worked fairly well, but most users didn’t like it because they had ‘DIY anxiety’ and ultimately needed a human to guide them through their matter.” 
Daniel Lewis, CEO, LegalOn Technologies: “No, AI will not replace lawyers, but it will change and elevate how they work. In contracts, for example, AI helps with line-by-line contract review, freeing lawyers to focus on judgment, context, and strategy—where expertise makes the biggest impact.” 
Kara Peterson, Co-Founder, descrybe.ai: “In a sense, yes—but not in the way many fear. AI will replace rote, mundane legal tasks, but not the entire legal process. If anything, AI will augment lawyers rather than fully replace them. In fact, the “human-in-the-loop” lawyer will become more critical than ever to handle nuance and complexity that AI simply can’t.” 
Katon Luaces, President & CTO, PointOne: “AI will certainly replace some of the work of lawyers just as lawyers no longer manually shepardize nor personally walk documents to the courthouse. That said, the practice of law is fundamental to government and commerce. While the work of a lawyer may become unrecognizable, there will be individuals who practice law.” 
Ted Theodoropoulos, CEO, Infodash: “AI will not replace lawyers in the immediate future, but it will fundamentally reshape their roles. While AI excels at automating routine tasks (e.g. contract analysis, eDiscovery, and brief drafting), it lacks the human judgment, emotional intelligence, and ethical reasoning required for complex legal matters. Over the next 2-3 years, we will see AI shift legal work toward advisory and strategic functions, but full lawyer replacement remains unlikely. The firms that embrace AI as a complement rather than a competitor will be the ones that thrive in the evolving legal landscape.” 
Nathan Walter, CEO, Briefpoint: “There are some components of the job that need human-to-human connection – I don’t think a jury will ever warm up to an AI trial attorney in the same way I lose interest in a piece of media once I find out it’s made by AI. The parts that don’t need a human touch? Those will be gone.” 
Dorna Moini, CEO/Founder, Gavel: “No, AI will not replace lawyers. There’s a large gap in legal services that needs to be filled, and while AI can assist with some routine functions, it can’t bridge that gap in legal services on its own. Lawyers will continue to be vital in offering the nuanced support and guidance that many clients need. Instead of replacing lawyers, AI can serve as a tool to help them better serve not just underserved communities, but the middle class and digitally-inclined clients as well.” 
Colin Levy, Director of Legal, Malbek: “We do not know how our own brains work, especially around self-awareness, so to expect AI to do the same anytime soon is highly unrealistic.” 
Gil Banyas. Co-Founder & COO, Chamelio: “AI won’t replace lawyers, but it will reduce the number needed as it automates routine legal work. While AI excels at tasks like document review, it can’t replicate lawyers’ judgment, strategic thinking, and emotional intelligence. The future lawyer will be more efficient and focused on high-value work, but firms will likely need fewer attorneys to handle the same workload.” 
Arunim Samat, CEO, TrueLaw: “We believe AI will create 10x lawyers—legal professionals who can accomplish 10 times more work in the same amount of time. While certain legal functions will inevitably be affected, this shift isn’t unique to the legal industry—it’s happening across every sector. To thrive in an AI-augmented world, professionals must reimagine their workflows and daily operations. The key is adaptability: those who embrace AI and rethink how they work will unlock unprecedented efficiency and value, while those who remain rigid and resistant to change will struggle to keep up in this evolving landscape.” 
Greg Siskind, Co-founder, Visalaw.ai: “No. But lawyers will lead legal teams that include paralegals, lawyers, and AI. With the rise of genAI, roles will evolve where a lot of the tasks performed by paralegals and lawyers will be performed by AI, and humans will increasingly play more of a ‘sherpa’ role managing the tech and personally guiding their clients through the legal process.” 
Charein Faraj, Legal Operations Manager, LexCheck Inc.: “No, AI won’t replace lawyers, but it will fundamentally change the practice of law. It can help struggling associates learn faster, adapt more quickly, and gain expertise in less time. AI will also reshape the business model of law firms—potentially leading to more hiring as firms take on an increased volume of work. Additionally, AI is creating new opportunities for attorneys in adjacent fields like legal operations and AI-driven legal tech, opening up career paths that didn’t exist before. In some areas of law, AI may reduce the need for as many attorneys by cutting down busy work, but overall, it’s more about transformation than replacement. Rather than making lawyers obsolete, AI is redefining how they work and where their skills are most valuable.” 
Mitchell Kossoris, Co-Founder and CEO, Deposely: “AI won’t replace lawyers anytime soon. Lawyers don’t simply recite law and design legal strategies. They provide nuanced judgment, empathy, and advocacy—qualities that are crucial in client relationships and that AI still struggles with. The human aspect of attorney-client relationships cannot be overstated, and clients need a real person they can connect with to reassure them, especially in high-stakes matters.” 
Troy Doucet, Founder, AI.Law: “No. Lawyers will become managers of AI.” 
Chris Williams, Head of Strategic Partnerships & Community, Leya: “The quote, ‘Artificial Intelligence won’t replace lawyers, but lawyers using it will’ still stands true.” 
Jenna Earnshaw, Co-founder & COO, Wisedocs: “No, AI won’t replace lawyers—because the law isn’t just about processing information; it’s about judgment, strategy, and advocacy. AI can be a powerful tool for streamlining tasks like document review and legal research, but it can’t think critically, navigate ethical dilemmas, or argue a case in court. Plus, human oversight is essential to ensure fairness and catch biases in AI models. Rather than replacing lawyers, AI is helping them by handling tedious admin work, freeing up time for higher-level thinking and client advocacy. The future of law isn’t AI vs. lawyers—it’s AI empowering lawyers to work smarter and deliver better results.”

***
As legal tech companies continue to improve their offerings, the practice of law may continue to undergo fundamental change, reshaping workflows, redefining the billable hour, and transforming the role of lawyers in ways we are only beginning to understand. 

The AI Royal Flush: The Five Foundations of Artificial Intelligence, Part 2

This is Part 2 of a summary of Ward and Smith Certified AI Governance Professional and IP attorney Angela Doughty’s comprehensive overview of the potential impacts of the use of Artificial Intelligence (AI) for in-house counsel.
Generative AI Best Practices for In-House Counsel
Considering all of the risks and responsibilities outlined in Part 1 of this series, Doughty advises in-house counsel to mandate training, to work with the management team to control which tools are approved, and to organize a review of what data can be used with AI at each level of the company.
TRAINING
Training has the potential to bridge generational gaps and create a working environment where people are more comfortable sharing new ideas. “I am very proud of our firm for the way that we have adapted to the modern business landscape,” noted Doughty. “We have some folks who have been practicing for 30 to 40 years, and we have some right out of law school. With everyone evolving and learning at the same rate, we’re using training to build a more inclusive culture.”
HUMAN REVIEW
Having an actual human review key decisions is another best practice. Even where an experienced person’s work would not ordinarily require an additional layer of review, prudent companies should conduct a secondary review of AI-generated work when AI is used to streamline the process.
MONITOR REGULATIONS
Similar to the technology, the regulatory environment is constantly in flux, and many of the relevant measures are still proposals or frameworks that are not directly binding. For example:

The EU AI Act is a comprehensive legislative framework that aims to regulate AI technologies based on their level of risk. It’s designed to ensure that AI systems used in the EU are safe and respect fundamental rights and EU values.
The U.S. Blueprint for an AI Bill of Rights outlines principles to protect the public from the potential harms of AI and automated systems, focusing on civil rights and democratic values in the U.S.
The FTC enforces consumer protection laws relevant to AI, focusing on issues like fairness, transparency, bias, and data privacy, but it currently operates within a more reactive and general deceptive trade practices legal framework.

Like the future, the final rules are impossible to predict. Doughty expects that transparency, fairness, and explainability will be common themes.
Regulators will want to know how decisions were made, whether AI was involved, how data is processed, and how data is protected. “They will not hesitate to hold senior-level people accountable. This is partially why clear policies are an effective strategy for minimizing risk,” commented Doughty.
Different regions have different ideas regarding ethics and bias. This increases the challenge of navigating the evolving regulatory framework.
COMPLIANCE
“Compliance with all of the standards is practically impossible, which makes this very similar to data protection and privacy. One of my worst nightmares is when a client asks me to make them compliant,” added Doughty, “because, in most cases, it’s simply not feasible.”
Penalties are likely to vary in proportion to the risk to society. Companies should weigh whether a given use of AI is worth the potential reputational damage and the harm it could cause to individuals.
Businesses operating in high-risk sectors may face additional regulation compared to other businesses. The result is a patchwork of inconsistent, overlapping laws, and that is unlikely to change. “If there is a positive to this, it is that it will keep us in business for a long time,” joked Doughty.
Legal knowledge will continue to be vital for helping clients make decisions. Critical thinking skills and an understanding of jurisprudence will also continue to support job security for attorneys.
Remember, AI is a Tool, not a Replacement
“As attorneys, we have empathy skills. People don’t want to sit in front of a computer and talk about really difficult, hard things. They want to look you in the eyes,” Doughty explained.
AI is just a tool, and fears over being replaced may be overblown. Doughty is using the technology on a daily basis. Along with using it to edit her presentation into bullet points for experienced in-house attorneys, she uses it to draft legal scenarios.
Doughty advises against using a person’s real name, for privacy reasons. “I also use it when I am frustrated with someone, so I draft how I really feel, then ask the AI to make it more professional,” noted Doughty.
AI can quickly write an article if provided with a topic, a target audience, and a few links. The speed and accuracy are astonishing, but many believe it is difficult, if not impossible, to determine whether the copy was plagiarized. This is likely to be the subject of ongoing litigation.
Audience Questions Answered
In the Q&A portion of the presentation, the audience questions came in quickly. Doughty attempted to address all she could in the time allotted and offered to take calls regarding questions after the program.
In response to a question centered on navigating ongoing regulations, Doughty advised following the National Institute of Standards and Technology Cybersecurity Framework.
Another audience member wondered, “Can AI be trusted?”
“No, it cannot be trusted at all,” said Doughty. “There is not a single tool that I would recommend using as the basis for legal decisions without substantial human oversight – same as you would with any other technology tool.”
“What technology or change, if any, compares to the effect generative AI is having on the legal system and profession?”
“I don’t think we’ve seen anything like this since Word, Excel, and Outlook came out in terms of changing the way that we practice law and prepare legal work products. I remember having to go from the book stacks to Westlaw; it was just a different way to do research, but I still had to do all of those things. This is even more revolutionary than what we saw at that point.”
“How do you mitigate the risk of harmful bias in a vendor agreement?”
“The short answer is to fully vet your vendors. Many vendors understand the risks and will include representations and warranties within their contracts, but this one is difficult. Understanding the training model and data used for training can be key, as it was with the earlier examples of AI hiring tools trained in male-dominated industries that preferred male applicants.”
“Any tips to bring up the topic of AI to organizational leadership?”
“Quantify the risk and discuss all of the penalties that could occur, along with the opportunity costs associated with ruining a deal.”
“Are there any legal-specific AI tools that you see as a good value?”
“In terms of legal research, writing, or counsel, I would not advise using AI for any of that right now – outside of the (very) expensive, but known, traditional legal vendors, such as Westlaw and Thomson Reuters. This is partially because most of the AI tools people are using are open AI tools. This means every question and answer – right or wrong – is being used to train the technology. This is also partially because to ethically use these tools, we must understand their strengths and weaknesses enough to provide sufficient oversight, and many attorneys are not there yet.”
“What about IP infringement?”
“If AI has been predominantly trained on existing content and you use it to create an article, does the writer have an infringement claim? This is to be determined, and it’s one of the biggest issues being litigated right now. A slew of artists are suing Gen AI companies on exactly this basis.”
“What about the environmental impact of AI processing?”
“Generative AI significantly impacts the environment due to its high energy consumption, especially during the training and operation of large models, leading to substantial carbon emissions. The use of resource-intensive hardware and the cooling needs of data centers further exacerbate this impact.”
“Any suggestions for when the IT department believes their understanding of the risks supersedes the opinions of the legal department?”
“This is when the C-suite needs to come in because the legal risk and responsibility are already out there, and implementation is under a completely different department. It’s a business decision. I look at the IT department no differently than the marketing or sales departments in determining the legal risk and making a recommendation.”
“Any recommendations for AI-based tools to stay on top of the regulatory tsunami?”
“Not yet, but I spend a lot of time listening to demos and participating in vendor training sessions. Signing up for trade association newsletters is another way to stay current. These are free resources for training and help with staying current on industry trends and proposed regulations.”
Conclusion
Doughty concluded the session by reminding the group of In-House Counsel that their ethical duties and responsibilities extend to governance, compliance, risk management, and an ongoing understanding of the ever-evolving landscape of Generative AI.

Court: Training an AI Model on Copyrighted Data Is Not Fair Use as a Matter of Law

In what may turn out to be an influential decision, Judge Stephanos Bibas ruled as a matter of law in Thomson Reuters v. Ross Intelligence that creating short summaries of law to train Ross Intelligence’s artificial intelligence legal research application not only infringed Thomson Reuters’ copyrights but also was not fair use. Judge Bibas had previously ruled that infringement and fair use were issues for the jury but changed his mind: “A smart man knows when he is right; a wise man knows when he is wrong.”
At issue in the case was whether Ross Intelligence directly infringed Thomson Reuters’ copyrights in its case law headnotes, which are organized by Westlaw’s proprietary Key Number system. Thomson Reuters contended that Ross Intelligence’s contractor copied those headnotes to create “Bulk Memos.” Ross Intelligence used the Bulk Memos to train its competing AI-powered legal research tool. Judge Bibas ruled that (i) the West headnotes were sufficiently original and creative to be copyrightable, and (ii) some of the Bulk Memos used by Ross were so similar that they infringed as a matter of law.
The court rejected Ross Intelligence’s merger and scènes à faire arguments. Though the headnotes were drawn directly from uncopyrightable judicial opinions, the court analogized them to the choices made by a sculptor in selecting what to remove from a slab of marble. Thus, even though the words or phrases used in the headnotes might be found in the underlying opinions, Thomson Reuters’ selection of which words and phrases to use was entitled to copyright protection. Interestingly, the court stated that “even a headnote taken verbatim from an opinion is a carefully chosen fraction of the whole,” which “expresses the editor’s idea about what the important point of law from the opinion is.” According to the court, that is enough of a “creative spark” to be copyrightable. In other words, even if a work is selected entirely from the public domain, the simple act of selection is enough to give rise to copyright protection.
Relying on testimony from Thomson Reuters’ expert, the court compared “one by one” how similar 2,830 Bulk Memos were to the West headnotes at issue. The court found that 2,243 of the 2,830 Bulk Memos infringed as a matter of law. Although whether Ross Intelligence’s contractor had access to the headnotes was an open question, the court reasoned that a Bulk Memo that “looks more like a headnote than it does the underlying judicial opinion” is strong circumstantial evidence of copying. Questions of infringement are, of course, normally left for the fact finder, but the court concluded that no reasonable juror could find that the Bulk Memos were not copied from the West headnotes.
The court then went on to rule as a matter of law that Ross Intelligence’s fair use defense failed – even though only two of the four fair use factors favored Thomson Reuters. The court specifically found that Ross Intelligence’s use was commercial in nature and non-transformative because the use did not have a “further purpose or character” apart from Thomson Reuters’ use. The court also found it dispositive that Ross’ intended purpose was to compete with Thomson Reuters, and thus would impact the market for Thomson Reuters’ service. The court, on the other hand, found that the relative lack of creativity in the headnotes, and the fact that users of Ross’ system would never see them, favored Ross.
The court distinguished cases holding that intermediate copying of computer source code was fair use, reasoning that those courts held that the intermediate copying was necessary to “reverse engineer access to the unprotected functional elements within a program.” Here, copying Thomson Reuters’ protected expression was not needed to gain access to the underlying ideas. How this reasoning will play out in other pending artificial intelligence cases, where fair use will be hotly contested, is anyone’s guess – in most of those cases, the defendants would argue that they are not competing with the rights owners and that, in fact, the underlying ideas (not the expression) are precisely what the copying is trying to access.
The court left many issues for trial (including whether Ross infringed the West Key Number system and thousands of other headnotes). Nonetheless, the opinion seems to be a striking victory for content owners in their fight against the AI onslaught. Although Judge Bibas has only been on the Third Circuit bench since 2017, he has gained a reputation for his thoughtful and scholarly approach to the law. Whether his ruling sitting by designation as a trial judge in the District of Delaware can make it past his colleagues on the Third Circuit will be worth watching.
The case is Thomson Reuters Enterprise Centre GmbH et al v. ROSS Intelligence Inc., Docket No. 1:20-cv-00613 (D. Del. May 06, 2020).