Healthcare Industry Leaders Predict Four Areas to Watch After the U.S. Election: Takeaways from the Business of Health Care Conference Hosted by the Miami Herbert Business School
The recent U.S. election has had profound implications for the healthcare industry, prompting industry leaders to reexamine their strategies and day-to-day operations. At the Miami Herbert Business School’s annual “The Business of Health Care” conference on January 24, 2025, a pivotal forum brought together stakeholders across key sectors—home care, hospital systems, payors, and others—to assess the election’s impact and chart a path forward. The conference highlighted the need for collaboration, innovative solutions, and strategic leadership in addressing the challenges ahead.
The speakers emphasized the need for collaboration across the healthcare spectrum to harness technological advancements, ensure sustainable healthcare financing, and address systemic challenges like provider shortages and public trust, with four key areas for potential change:
Deploying Artificial Intelligence Solutions: The administration has made funding and supporting AI technology development a priority, and AI has the potential to achieve significant administrative efficiencies and cost-savings for payors, health systems and providers. It can also contribute to developing novel drug therapies, support public health surveillance, and drive new clinical treatment modalities to support physicians. It may also help to reduce discrimination in claims processing, ensuring fairer results for patients regardless of race. However, the conference also underscored the challenges of integrating AI into health care. Valid data and clinician oversight are essential to ensure that AI systems enhance human judgment rather than replace it, particularly in pre-authorization decisions. The speakers stressed the importance of balancing technological innovation with the preservation of medical expertise to ensure equitable and effective outcomes.
Ensuring Sustainability for Medicare Advantage and ACA Policies. The oldest Baby Boomers are starting to reach 80 years of age, and the U.S. population will continue to age at a rapid pace in the coming decades. Our country's healthcare spending is unsustainable at its current pace, and value-based care solutions will be necessary to provide patients with timely access to needed care. In addition, dealing with obesity and other co-morbid conditions (e.g., hypertension, diabetes mellitus, heart failure, arthritis) requires a multi-faceted approach and collaboration among payors, providers, and the government. In strategizing about potential solutions, the panelists identified the following critical action items: controlling pharmacy costs (including via pharmacy benefit manager regulation), providing access to preventative care and early-stage interventions, making healthcare payments consistent across sites of service, and expanding the use of and access to hospice for patients at the end of their lives. Early-stage interventions and preventative care were emphasized as cost-effective strategies to improve outcomes while managing resources effectively. These value-based initiatives represent a necessary shift to meet the demands of an aging and increasingly complex patient population.
Solving for Provider Shortages, Particularly in Rural Areas. Along with an aging population, the U.S. is grappling with widespread provider shortages, particularly in rural parts of the country, and with burnout among physicians, nurses, and other clinicians across the industry. The conference highlighted several strategies to address this issue, including financial incentives to attract providers to underserved areas and workplace violence prevention programs to improve working conditions for nursing staff. The panelists also highlighted that the number of medical school graduates working on the administrative side of health care has increased significantly in recent years, leaving fewer practicing clinicians available to render care. In terms of potential solutions, the panelists noted that AI support may help alleviate some of the stressors, as would efforts to reduce provider burnout, and they stressed the need to make clinical work – particularly in rural areas – financially attractive for physicians. Finally, they identified the adoption of team-based care strategies, alone or in conjunction with value-based care solutions, as a potential way to address the access and timeliness challenges caused by the shortages. By fostering collaboration among healthcare professionals and alleviating pressure on individual clinicians, team-based models can improve the efficiency of care delivery, help bridge gaps in access, and address the growing demand for services.
Bolstering Public Trust in Science and Institutions. The erosion of public trust in science and healthcare institutions, exacerbated by the COVID-19 pandemic and political polarization, was another pressing issue discussed. Patients' genuine concerns about cost, access, and claims challenges have fueled frustration, making it critical for healthcare leaders to foster transparency and combat misinformation. In addition, fragmented information on social media may negatively influence patients' opinions and attitudes toward providers. The biggest concern is that, in response to another pandemic or a major issue like widespread antibiotic resistance, there may be insufficient support and funding for science-based solutions. While that is a significant concern, many panelists noted that they are actively engaging with and educating the administration on potential policy initiatives and their outcomes, with the goal of supporting the strength of those institutions and their ability to respond in a crisis. The panelists noted the importance of actively engaging with communities and policymakers to address these issues and rebuild confidence in the healthcare system. They stressed that public trust is essential not only for managing future crises, but also for advancing systemic reforms that benefit all stakeholders.
The Miami Herbert Business School conference underscored the importance of strategic collaboration and adaptive leadership in addressing the healthcare industry’s most pressing challenges. As highlighted throughout the discussions, sectors such as home care, hospital systems, and payors must work together to harness AI, implement value-based care, and address workforce shortages while fostering public trust.
By prioritizing innovation, equity, and transparency, healthcare leaders can navigate these challenges and build a more efficient, sustainable, and resilient healthcare system for the future. The lessons and insights from this pivotal forum offer a roadmap for turning challenges into opportunities and delivering meaningful progress for patients, providers and payors alike.
The Double-Edged Sword of AI Disclosures: Insurance & AI Risk Mitigation
Artificial intelligence (AI) is reshaping the corporate landscape, offering transformative potential and fostering innovation across industries. But as AI becomes more deeply integrated into business operations, it introduces complex challenges, particularly around transparency and the disclosure of AI-related risks. A recent lawsuit filed in the US District Court for the Southern District of New York—Sarria v. Telus International (Cda) Inc. et al., No. 1:25-cv-00889 (S.D.N.Y. Jan 30, 2025)—highlights the dual risks associated with AI-related disclosures: the dangers posed by action and inaction alike. The Telus lawsuit underscores not only the importance of legally compliant corporate disclosures, but also the dangers that can accompany corporate transparency. Maintaining a carefully tailored insurance program can help to mitigate those dangers.
Background
On January 30, 2025, a class action was brought against Telus International (CDA) Inc., a Canadian company, along with its former and current corporate leaders. Known for its digital solutions enhancing customer experience, including AI services, cloud solutions and user interface design, Telus faces allegations of failing to disclose crucial information about its AI initiatives.
The lawsuit claims that Telus failed to inform stakeholders that its AI offerings required the cannibalization of higher-margin products, that profitability declines could result from its AI development and that the shift toward AI could exert greater pressure on company margins than had been disclosed. When these risks became reality, Telus’ stock dropped precipitously and the lawsuit followed. According to the complaint, the omissions allegedly constitute violations of Sections 10(b) and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5.
Implications for Corporate Risk Profiles
As we have explained previously, businesses face AI-related disclosure risks for affirmative misstatements. Telus highlights another important part of this conversation in the form of potential liability for the failure to make AI-related risk disclosures. Put differently, companies can face securities claims for both understating and overstating AI-related risks (the latter often being referred to as “AI washing”).
These risks are growing. Indeed, according to Cornerstone's recent securities class action report, the pace of AI-related securities litigation has increased, with 15 filings in 2024 after only 7 such filings in 2023. Moreover, every cohort of AI-related securities filings has been dismissed at a lower rate than other core federal filings.
Insurance as a Risk Management Tool
Considering the potential for AI-related disclosure lawsuits, businesses may wish to strategically consider insurance as a risk mitigation tool. Key considerations include:
Audit Business-Specific AI Risk: As we have explained before, AI risks are inherently unique to each business, heavily influenced by how AI is integrated and the jurisdictions in which a business operates. Companies may want to conduct thorough audits to identify these risks, especially as they navigate an increasingly complex regulatory landscape shaped by a patchwork of state and federal policies.
Involve Relevant Stakeholders: Effective risk assessments should involve relevant stakeholders, including various business units, third-party vendors and AI providers. This comprehensive approach ensures that all facets of a company's AI risk profile are thoroughly evaluated and addressed.
Consider AI Training and Educational Initiatives: Given the rapidly developing nature of AI and its corresponding risks, businesses may wish to consider education and training initiatives for employees, officers and board members alike. After all, developing effective strategies for mitigating AI risks can turn in the first instance on a familiarity with AI technologies themselves and the risks they pose.
Evaluate Insurance Needs Holistically: Following business-specific AI audits, companies may wish to meticulously review their insurance programs to identify potential coverage gaps that could lead to uninsured liabilities. Directors and officers (D&O) programs can be particularly important, as they can serve as a critical line of defense against lawsuits similar to the Telus class action. As we explained in a recent blog post, there are several key features of a successful D&O insurance review that can help increase the likelihood that insurance picks up the tab for potential settlements or judgments.
Consider AI-Specific Policy Language: As insurers adapt to the evolving AI landscape, companies should be vigilant about reviewing their policies for AI exclusions and limitations. In cases where traditional insurance products fall short, businesses might consider AI-specific policies or endorsements, such as Munich Re’s aiSure, to facilitate comprehensive coverage that aligns with their specific risk profiles.
Conclusion
The integration of AI into business operations presents both a promising opportunity and a multifaceted challenge. Companies may wish to navigate these complexities with care, ensuring transparency in their AI-related disclosures while leveraging insurance and stakeholder involvement to safeguard against potential liabilities.
Regulation Round Up: January 2025
Welcome to the Regulation Round Up, a regular bulletin highlighting the latest developments in UK and EU financial services regulation.
Key developments in January 2025:
31 January
UK Listing Rules: The FCA published a consultation paper (CP25/2) on further changes to the public offers and admissions to trading regime and to the UK Listing Rules.
Cryptoassets: The European Securities and Markets Authority (“ESMA”) published a supervisory briefing on best practices relating to the authorisation of cryptoasset service providers under the Regulation on markets in cryptoassets ((EU) 2023/1114) (“MiCA”).
FCA Handbook: The Financial Conduct Authority (“FCA”) published Handbook Notice 126, which sets out changes to the FCA Handbook made by the FCA board on 30 January 2025.
Public Offer Platforms: The FCA published a consultation paper on further proposals for firms operating public offer platforms (CP25/3).
30 January
FCA Regulation Round-Up: The FCA published its regulation round-up for January 2025, which covers, among other things, the launch of “My FCA” in spring 2025 and changes to FCA data collection.
29 January
EU Competitiveness: The European Commission published a communication on a Competitiveness Compass for the EU (COM(2025) 30). Please refer to our dedicated article on this topic here.
EMIR 3: ESMA published a speech given by Klaus Löber, Chair of the ESMA CCP Supervisory Committee, that sets out ESMA’s approach to the mandates assigned to it by Regulation (EU) 2024/2987 (“EMIR 3”).
28 January
EMIR 3: The European Systemic Risk Board published its response to ESMA’s consultation paper on the conditions of the active account requirement under EMIR 3.
ESG: The FCA published its adaptation report, which provides an overview of the climate change adaptation challenges faced by financial services firms.
27 January
Artificial Intelligence: The Global Financial Innovation Network published a report setting out key insights on the use of consumer-facing AI in global financial services and the implications for global financial innovation.
DORA: The Joint Committee of the European Supervisory Authorities (“ESAs”) published the terms of reference for the EU-SCICF Forum established under the Regulation on digital operational resilience for the financial sector ((EU) 2022/2554) (“DORA”).
24 January
Cryptoassets: ESMA published an opinion on draft regulatory technical standards specifying certain requirements in relation to conflicts of interest for cryptoasset service providers under MiCA.
MiFIR: The European Commission adopted a Delegated Regulation (C(2025) 417 final) (here) supplementing the Markets in Financial Instruments Regulation (600/2014) (“MiFIR”) as regards OTC derivatives identifying reference data to be used for the purposes of the transparency requirements laid down in Articles 8a(2), 10 and 21.
ESG: The EU Platform on Sustainable Finance published a report providing advice to the European Commission on the development and assessment of corporate transition plans.
23 January
Financial Stability Board: The Financial Stability Board published its work programme for 2025.
20 January
Motor Finance: The FCA published its proposed summary grounds of intervention in support of its application under Rule 26 of the Supreme Court Rules 2009 to intervene in the Supreme Court motor finance appeals.
Motor Finance: The FCA published its response to a letter from the House of Lords Financial Services Regulation Committee relating to the Court of Appeal judgment on motor finance commissions.
Cryptoassets: ESMA published a statement on the provision of certain cryptoasset services in relation to asset-referenced tokens and electronic money tokens that are non-compliant under MiCA.
17 January
DORA: The ESAs published a joint report (JC 2024 108) on the feasibility of further centralisation of reporting of major ICT-related incidents by financial entities, as required by Article 21 of DORA.
Basel 3.1: The Prudential Regulation Authority published a press release announcing that, in consultation with HM Treasury, it delayed the UK implementation of the Basel 3.1 reforms to 1 January 2027.
16 January
Cryptoassets: The European Banking Authority and ESMA published a joint report (EBA/Rep/2025/01 / ESMA75-453128700-1391) on recent developments in cryptoassets under MiCA.
14 January
FMSB’s Workplan: The Financial Markets Standards Board (“FMSB”) published its workplan for 2025.
FSMA: The Financial Services and Markets Act 2000 (Designated Activities) (Supervision and Enforcement) Regulations 2025 (SI 2025/22) were published, together with an explanatory memorandum. The amendments allow the FCA to supervise, investigate and enforce the requirements of the designated activities regime.
Sanctions: HM Treasury and the Office of Financial Sanctions Implementation published a memorandum of understanding with the US Office of Foreign Assets Control.
13 January
BMR: The European Parliament published the provisionally agreed text (PE767.863v01-00) of the proposed Regulation amending the Benchmarks Regulation ((EU) 2016/1011) (“BMR”) as regards the scope of the rules for benchmarks, the use in the Union of benchmarks provided by an administrator located in a third country and certain reporting requirements (2023/0379(COD)).
10 January
Artificial Intelligence: The UK Government published its response to the House of Commons Science, Innovation and Technology Committee report on the governance of AI.
9 January
Collective Investment Schemes: The Financial Services and Markets Act 2000 (Collective Investment Schemes) (Amendment) Order 2025 (SI 2025/17) was published, together with an explanatory memorandum. The amendments clarify that arrangements for qualifying cryptoasset staking do not amount to a collective investment scheme.
8 January
EU Taxonomy: The EU Platform on Sustainable Finance published a draft report and a call for feedback on activities and technical screening criteria to be updated or included in the EU taxonomy. Please refer to our dedicated article on this topic here.
3 January
Consolidated Tape: ESMA published a press release launching the first selection for the consolidated tape provider for bonds.
Sulaiman Malik & Michael Singh contributed to this article.
Copyright Office Says AI-Generated Works Based on Text Prompts Are Not Protected
Highlights
The latest report from the U.S. Copyright Office clarifies that the use of AI to assist human creativity does not necessarily preclude copyright protection for the resulting work
The key distinction lies in whether AI is merely a tool aiding human creativity or whether it serves as a substitute for human authorship
The Office reassures creators that using AI for tasks such as outlining a book or generating song ideas does not affect the copyrightability of the final work, provided the author is “referencing, but not incorporating, the output”
The U.S. Copyright Office released its January 2025 report to address the legal and policy issues related to artificial intelligence (AI) and copyright, as outlined in the Office’s August 2023 Notice of Inquiry. This report has clarified that outputs generated by AI based solely on text prompts – regardless of their complexity – are not protected under current copyright law.
According to the Office, while generative AI represents an evolving technology, existing copyright principles remain applicable without requiring changes to the law. These principles, however, provide limited protection for many AI-generated works.
The Office’s report states that AI-generated outputs lack the necessary human control to confer authorship on users, as AI systems themselves cannot hold copyrights. The Office emphasized that whether a prompt is simple or highly detailed, it does not establish the user as the author of the resulting work. The Office argues that even when users refine and resubmit prompts multiple times, the final output ultimately reflects the AI system’s interpretation rather than the user’s original authorship.
Exemplifying the distinction between AI-generated works and human authorship, the Office contrasts the AI-generation process with Jackson Pollock’s painting technique. While Pollock did not precisely control the placement of each paint splatter, he exercised creative authority over key artistic choices, such as color selection, layering, texture, and composition. His physical movements were integral to executing these choices, demonstrating a level of human control the Office says is absent in AI-generated content.
However, the Office says some degree of protection may also apply when artists modify their own work using AI. For instance, an artist who enhances an illustration with AI-generated 3D effects may retain copyright protection, provided the original work remains recognizable. While AI-generated elements themselves remain uncopyrightable, the “perceptible human expression” in the modified work could still qualify for protection.
Similarly, the Office notes that works that incorporate AI-generated content may be eligible for copyright if they involve significant human creative input. A comic book featuring AI-generated images could receive protection if a human arranges the images and pairs them with original text, though the AI-generated images alone would not be covered. Likewise, a film with AI-generated special effects or background artwork remains copyrightable, even if the individual AI-generated elements are not.
The Office notes that, on a case-by-case basis, even AI-generated images prompted by users could receive protection if a human selects, modifies, and remixes specific portions, drawing an analogy to derivative works of human-created art – except without an original human author.
The U.S. Copyright Office identifies three primary scenarios in which AI-generated material may qualify for copyright registration and receive an official certificate of copyright:
When AI-generated output includes human-authored content
When a human significantly modifies, arranges, or edits the AI-generated material
When the human contribution demonstrates a sufficient degree of creativity and originality
The report also addresses whether AI-generated text prompts themselves can be copyrighted. Generally, the Office likens prompts to “instructions” that convey uncopyrightable ideas. However, the Office acknowledges that particularly creative prompts may contain “expressive elements,” though this does not extend copyright protection to the outputs they generate.
This guidance forms part of the Copyright Office’s broader initiative to address AI-related legal and policy issues. It follows a July 2024 report advocating for new deepfake regulations, and the Office plans to release a final report examining “the legal implications of training AI models on copyrighted works.”
Takeaways
The Copyright Office does not rule out the possibility that this legal landscape could evolve alongside AI technology. It notes that, in theory, AI systems could eventually allow users to exert such a high degree of control over the output that the system’s role becomes purely mechanical. However, under current conditions, prompts do not “adequately determine the expressive elements produced, or control how the system translates them into an output.”
Ultimately, the Copyright Office emphasizes that the critical issue is not the predictability of the outcome but the degree of human control over the creative process.
Copyright Office: Copyrighting AI-Generated Works Requires “Sufficient Human Control Over the Expressive Elements” – Prompts Are Not Enough
On January 29, 2025, the Copyright Office (the "Office") released its second report in a three-part series on artificial intelligence and copyright. Part 1 was released in July 2024 and addressed digital replicas. Part 2 focuses on the copyrightability of AI-generated work – that is, it provides greater detail on the level of human involvement required for a work containing AI-generated material to rise to the level of copyrightability. The report includes eight conclusions to guide copyright applicants and concludes that existing law is sufficient to address copyrighting AI-generated works.
In short, the report finds that protection of AI-generated works requires “sufficient human control over the expressive elements [of a work]” (emphasis added). Thus, not surprisingly, the report finds that prompts alone do not meet this threshold because they are merely unprotectable ideas. Despite this bright-line rule on prompts, the Office seemingly makes an exception for when humans input their own original works as prompts, such as uploading an original drawing into an AI-art generator. If that human-authored work is “perceptible in the output,” copyright protection is at least available for that portion of the AI-generated work.
The Office distinguishes between using AI tools to “assist” with creation and using the AI as a “stand in” for the human’s creativity. Assistance should not impact the copyrightability of the overall work, but copyright protection is less likely once the generative AI stands in as the creative. The Office did not expand upon when “assisting” becomes “standing in,” but noted that using AI to “brainstorm” is likely not a bar to copyrighting the completed work so long as the AI output is not “incorporated” in the finished product.
While it is now clear that prompts alone are insufficient to “control” the expressive elements, it is less clear what will reach this “sufficient” threshold to garner copyright protection, as the Office will make these determinations on a case-by-case (and examiner-by-examiner) basis. For works including AI-generated content, applicants should continue to provide statements detailing their human contributions.
Importantly, post-Loper Bright, the Copyright Office’s report, while it can be influential for courts and academics, does not have the final say on the matter. Specifically, assuming the degree of human input necessary for copyright protection to exist in works created using AI is found to be ambiguous, Loper Bright holds that only courts can determine what AI-generated outputs are protectible, and ultimately SCOTUS if any case goes that far. For more on this shift of regulatory power from agencies to courts, see our Loper Bright blog post.
Takeaways from the Report
• Copyrightability of AI will be addressed with the existing law, which includes the “human authorship” requirement.
• No copyright protection for works purely generated by AI or where there was “insufficient human control over the expressive elements.”
• Prompts alone – even if extremely detailed – do not exert “sufficient control” over the output to make it copyrightable, but “where a human inputs their own copyrightable work and that work is perceptible in the output, they will be the author of at least that portion of the output.”
• Using AI tools to assist with the creation of the work should not interfere with the copyrightability of the overall work. If, however, the AI “stands in for human creativity,” copyright may not be available. Using AI to “brainstorm” (e.g., for song ideation or creating a preliminary outline for writing) should not affect copyrightability if the user is “prompting” the AI and “referencing, but not incorporating, the output in the development of her own work of authorship.” (Emphasis added.)
• Original expression by a human author is still copyrightable, “even if the work also includes AI-generated material.” For example, adding AI special effects to a human-authored film would not destroy the copyrightability of the film itself (though the AI special effects would not have protection and should be disclaimed when filing an application).
• Copyright protection is still available for “the creative selection, coordination, or arrangement of material” in the AI-generated outputs, or “creative modifications of the outputs.” Applications for such works will be analyzed by the Office on a case-by-case basis, so applicants should include a detailed statement of their human contributions.
The Copyright Office’s highly anticipated third report is expected to address “the training of AI models on copyrighted works, licensing considerations, and allocation of any liability.” We expect this report to be the most impactful on the AI market and its future.
Trump’s Executive Order Reshapes U.S. AI Policy; Italian Regulator Blocks DeepSeek’s Processing of Personal Data
On January 23, 2025, President Donald Trump issued an executive order aimed at reinforcing American leadership in artificial intelligence (AI) by eliminating regulatory barriers and revoking prior policies perceived as restrictive. This order follows an initial January 20, 2025, executive action that rescinded more than 50 prior executive orders, including Executive Order 14110 (2023), which established a framework for the responsible development and use of AI. Wilson Elser covered the prior Executive Order in our January 22, 2025, Insight.
The January 23, 2025, Executive Order adopts the definition of “artificial intelligence” from 15 U.S.C. 9401(3), which provides as follows:
The term “artificial intelligence” means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to:
A. Perceive real and virtual environments
B. Abstract such perceptions into models through analysis in an automated manner
C. Use model inference to formulate options for information or action.
The adoption of this definition is significant: many definitions of artificial intelligence exist, and adopting one clarifies which systems are encompassed.
Parameters of the New Executive Order
The new executive order prioritizes AI systems free from ideological bias, emphasizing economic competitiveness, national security, and human flourishing. It mandates the creation of an AI Action Plan within 180 days, led by top White House advisers, and the immediate review of prior AI regulations to ensure alignment with the new strategy. Additionally, the Office of Management and Budget will revise key memoranda to reflect this policy shift.
The revocation of Executive Order 14110 signals a shift away from prior AI governance principles, which emphasized safety, privacy, and international collaboration, toward a deregulatory approach focused on innovation and economic growth. The move aligns with the 2024 Republican Party platform, which criticized regulatory constraints on AI and advocated for a development model rooted in free speech and human flourishing. Notably, CEOs of major tech companies, including Amazon, Meta, Google, and Tesla, attended the recent inauguration, suggesting industry interest in the administration’s AI policy direction.
With the administration signaling that these revocations are only the beginning of broader regulatory reforms, it remains to be seen how future federal actions will shape AI governance in 2025 and beyond.
The DeepSeek Effect
Coinciding with the administration's strides in revising U.S. AI governance policy, a major shake-up in the AI landscape occurred this week when DeepSeek, a Chinese startup, unveiled an advanced AI model on January 20, leading to a significant market downturn, with U.S. chipmakers including Nvidia suffering market losses. DeepSeek's cost-efficient approach to building large language models has raised concerns about the demand for high-end AI chips and the power required for AI-centric data centers. The revelation challenges existing investment assumptions in AI infrastructure, with DeepSeek claiming it developed its model for under $6 million.
Industry experts suggested that this development could disrupt the dominance of U.S. firms such as OpenAI, forcing them to adopt similar cost-cutting strategies. Experts also commented that greater AI efficiency could lead to even more widespread adoption. Analysts predict a potential shift in AI investment strategies, with capital moving away from a chip-heavy infrastructure toward AI applications and services. Geopolitical implications also are at play, with venture capitalist Marc Andreessen calling DeepSeek’s R1 model “AI’s Sputnik Moment.”
The U.S. Response
The U.S. response to DeepSeek's emergence has been swift. President Trump's newly announced "Stargate" initiative, revealed on January 21, 2025, aims to counter China's AI advances by investing up to $500 billion in AI infrastructure. Meanwhile, export controls on Nvidia chips remain a contentious issue, with speculation that DeepSeek may have acquired advanced AI hardware through third-party sources.
A Global Response
Italy's Data Protection Authority has urgently ordered a restriction on DeepSeek's processing of Italian users' data, citing unsatisfactory responses from the Chinese companies behind the chatbot. DeepSeek has gained millions of downloads globally but claimed it does not operate in Italy and is not subject to European data laws, a stance the regulator rejected. An investigation has been opened.
Italy is not the only country that is concerned about DeepSeek. News outlets have reported that the Data Protection Commissions of Ireland, South Korea, Australia, and France have made requests to DeepSeek about its data practices. This raises the question: will the United States do the same as part of its strategy to ensure U.S. primacy in AI development? On a related note, what will the United States do to protect its citizens on a national level with respect to the privacy of their data?
Summary
The competition between the United States and China now extends to global AI investments, particularly in the Middle East and Asia, where both nations seek partners to build energy-intensive AI data centers. Some analysts argue that cooperation on AI governance still may be possible, drawing parallels to past U.S.-China agreements on nuclear safety.
While DeepSeek's breakthrough introduces volatility in AI markets and prompted swift action from the Italian regulator, it also accelerates the adoption of AI technology worldwide and is likely to spur further regulatory responses from the Trump Administration. The United States has not enacted any comprehensive data privacy law at the national level, though there have been several proposals. State governments – 19 of which have enacted data privacy laws – may also move to address perceived gaps in privacy protection. Thus, it remains to be seen what will be considered in 2025.
AI at Work: Design Use Mismatches [Podcast]
In the final installment of our AI at Work series, partner Guy Brenner and senior counsel Jonathan Slowik tackle a critical issue: mismatches between how artificial intelligence (or AI) tools are designed and how they are actually used in practice. Many AI developers emphasize their rigorous efforts to eliminate bias, reassuring employers that their tools are fair and objective, but a system designed to be bias-free can still produce biased outcomes if used improperly. Tune in as we explore real-world examples of these risks and what employers can do to ensure they are leveraging AI responsibly.
Guy Brenner: Welcome to The Proskauer Brief: Hot Topics in Labor and Employment Law. I'm Guy Brenner, a partner in Proskauer's Employment Litigation & Counseling group, based in Washington, D.C. I'm joined by my colleague, Jonathan Slowik, a special employment law counsel in the practice group, based in Los Angeles. This is the final installment of our initial multi-part series detailing what employers need to know about the use of artificial intelligence, or AI, when it comes to employment decisions, such as hiring and promotions. Jonathan, thank you for joining me today.
Jonathan Slowik: It’s great to be here, Guy.
Guy Brenner: So if our listeners haven't heard the earlier installments of the series, we encourage you to go back and listen to them. In part one, we go through what we hope is a useful background about what AI is and the solutions it offers to employers. In part two, we talk about issues with training data and how that can lead to biased or otherwise problematic outputs with AI tools. In part three, we discussed so-called black box issues – in other words, issues that arise due to the fact that it may be difficult to understand the inner workings of many advanced AI systems. Today's episode is about mismatches between the design of an AI tool and how the tool is used in practice. Jonathan, for background, AI developers generally put a lot of effort into eliminating bias from their products, isn't that right?
Jonathan Slowik: Yes, that's right. And that's a major selling point for a lot of these developers. Employers obviously have a great interest in ensuring that they're deploying a tool that's not going to create bias in an unintended way. And so, if you go to just about any of these developers' websites, you can find statements or even full pages about the efforts and lengths they're going through to ensure that they're putting out products that are bias free. And this should provide some measure of comfort for employers. It's clearly something that the developers are competing on. But even if a product is truly bias free, it could still produce biased results if it's deployed in a way that the developer didn't intend. To make this concrete, I want to go through a few examples. So first, suppose an employer instructs their resume scanner to screen out applicants that are more than a certain distance from the workplace, perhaps on the theory that these people are less likely to be serious candidates for the position. And if you remember from part one of this series, hiring managers are overwhelmed with applications these days, given the ability to submit resumes at scale on platforms like LinkedIn or Indeed. Guy, do you see any problem with this particular screening criterion?
Guy Brenner: Well, Jonathan, I can see the attractiveness of it. And I can also see how AI can make something like this, which hiring managers may have thought of in the past, possible when it otherwise would have been impossible, just by virtue of the speed and efficiency of AI and its ability to do things, you know, in a matter of seconds. And it sounds unbiased and objective, and it's a rational basis for trying to cull through the numerous resumes that employers are inundated with whenever they're trying to fill a position. But the fact is that many of the places in which we live are highly segregated by race and ethnicity. So depending on where the workplace is located, this kind of approach might disproportionately screen out legitimate candidates of certain races, even though that may not be the intent.
Jonathan Slowik: Right. And even though this is something that you could do manually – a hiring manager could just decide to toss out all the resumes from a certain zip code – doing this with technology increases the risk. So again, a hiring manager doing this manually might start to notice a pattern at some point and realize that this screening criterion was creating an unrepresentative pool. The difference with using software to do this kind of thing is that it can be done at scale very quickly, and only show you the output. And so, the same hiring manager doing this with technology might screen out mostly racial minorities and have no idea that that was even the case. All right. Next hypothetical. What if an employer uses a tool that tries to verify candidates' backgrounds by cross-referencing social media, and then boosts candidates whose backgrounds are verifiable in that way? Any issues with that one?
Guy Brenner: Well, the one that comes to mind is, I mean, I don’t think this is a controversial proposition that, generally speaking, younger applicants are more active on social media than older applicants. And I think that’s exacerbated depending on which platform we’re talking about.
Jonathan Slowik: So we actually have data on that, so it's not a stereotype. Pew Research has issued data confirming what I think all of us suspect.
Guy Brenner: Right. And so it’s not hard to imagine an enterprising plaintiff’s lawyer arguing that a screening tool like this may have a disparate impact on older applicants. I would also be concerned if the scoring takes into account other information on social media pages that could be used as proxy for discriminatory decisions.
Jonathan Slowik: Okay, one more hypothetical. Suppose an employer trying to fill positions for a call center uses a test that tries to predict whether the applicant would be adept at handling distractions under typical working conditions – and suppose this call center includes a lot of background noise. So this is clearly a screening mechanism that's testing something job related: the employer wants to see how this person is going to perform under the conditions we expect them to be placed in when we actually put them in the job. Is there any problem with this kind of test?
Guy Brenner: Well, first, like any other test, you’d want to know if the test itself has any disparate impact on any particular group, you would want to have it validated. But I also want to know if the company had considered whether some applicants would be entitled to a reasonable accommodation. For example, you can imagine someone who’s neurodiverse performing poorly on this type of simulation, but doing just fine if they were provided with some noise canceling headphones.
Jonathan Slowik: For sure. And this is something the EEOC has issued guidance about. Many of these types of job skills simulations are designed to test an applicant's ability to perform tasks assuming typical working conditions, as the employer did in this example. But what the EEOC has made clear is that many employees with disabilities don't work under typical working conditions, because they work with reasonable accommodations. So for that reason, overreliance on the test without considering the impact on people with disabilities and whether the test should allow for accommodations is potentially problematic.
Guy Brenner: Well, thanks, Jonathan, and to those listening, thank you for joining us on The Proskauer Brief today. We hope you found this series informative. And please note that as developments warrant, we will be recording new podcasts to help you stay on top of this fascinating and ever-changing area of the law and technology.
Thinking Like a Lawyer: Agentic AI and the New Legal Playbook
In the 20th century, mastering “thinking like a lawyer” meant developing a rigorous, precedent-driven mindset. Today, we find ourselves on the cusp of yet another evolution in legal thinking—one driven by agentic AI models that can plan, deliberate, and solve problems in ways that rival and complement human expertise.
In this article, we’ll explore how agentic reasoning powers cutting-edge AI like OpenAI’s o1 and o3, as well as DeepSeek’s R1 model. We’ll also look at a technical approach, the Mixture of Experts (MoE) architecture, that makes these models adept at “thinking” through complex legal questions. Finally, we’ll connect the dots for practicing attorneys, showing how embracing agentic AI can boost profitability, improve efficiency, and elevate legal practice in an ever-competitive marketplace.
The Business of Law Meets Agentic Reasoning
Legal practice is as much about economics as it is about jurisprudence. When Richard Susskind speaks of technology forcing lawyers to reconsider traditional business models, or when Ethan Mollick highlights the way AI can empower us with a co-intelligence, they're tapping into the same reality: law firms are businesses first and foremost. Profit margins and client satisfaction matter, and integrating agentic AI is quickly becoming a competitive imperative.
Still, many lawyers hesitate, fearing automation will erode billable hours or overshadow human expertise. The key is to realize that agentic AI tools, which can autonomously plan, analyze, and even execute tasks, don't aim to replace lawyers. Instead, they empower lawyers to practice at a higher level. By offloading rote tasks to AI, legal professionals gain the freedom to focus on nuanced advocacy, strategic thinking, and relationship-building.
A Quick Tour: o1, o3, and DeepSeek R1
OpenAI’s o1: Laying the Agentic Foundation
Introduced in September 2024, o1 marked a significant leap forward in AI’s reasoning capabilities. Its defining feature is its “private chain of thought,” an internal deliberation process that allows it to tackle problems step by step before generating a final output. This approach is akin to an associate who silently sketches out arguments on a legal pad before presenting a polished brief to the partner.
This internal “thinking” has proven especially useful in scientific, mathematical, and legal reasoning tasks, where superficial pattern-matching often falls short. The trade-off? Increased computational demands and slightly slower response times. But for most law firms, especially those dealing with complex litigation or regulatory analysis, accuracy often trumps speed.
OpenAI’s o3: Pushing Boundaries
Building on o1, o3 arrived in December 2024 with even stronger agentic capabilities. Designed to dedicate more deliberation time to each query, o3 consistently outperforms o1 in coding, mathematics, and scientific benchmarks. For lawyers, this improvement translates to more thorough statutory analysis, contract drafting, and fewer oversights in due diligence.
One highlight is o3’s performance on the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). It scores nearly three times higher than o1, underscoring the leap in its ability to handle abstract reasoning, akin to spotting hidden legal issues or anticipating an opponent’s argument.
DeepSeek R1: The Open-Source Challenger
January 2025 saw the release of DeepSeek R1, an open-source model from a Chinese AI startup. With performance on key benchmarks (like the American Invitational Mathematics Examination and Codeforces) exceeding o1 but just shy of o3, DeepSeek R1 has quickly attracted viral attention. Perhaps its biggest draw is cost-effectiveness: it’s reportedly 90-95% cheaper than o1. That kind of pricing is hard to ignore, especially for smaller firms or legal tech startups that need powerful AI without breaking the bank. DeepSeek R1’s open-source license also opens the door to customization: imagine a specialized “legal edition” any firm can adapt.
The market impact has been swift: DeepSeek R1’s launch catapulted its associated app to the top of the Apple App Store and triggered a sell-off in AI tech stocks. This frenzy underscores a critical lesson: the world of AI is volatile, competitive, and global. Law firms shouldn’t pin their entire strategy on a single vendor or model; instead, they should stay agile, ready to explore whichever AI solution best fits their needs.
How Agentic Reasoning Actually Works
All these models—o1, o3, and DeepSeek R1—share a common thread: agentic reasoning. They’re built to do more than just respond; they deliberate. Picture an AI “intern” that doesn’t just copy-and-paste from a template but weighs the merits of different statutes, checks your prior briefs, and even flags contradictory language before you finalize a contract.
But how do they manage this level of autonomy under the hood? Enter the Mixture of Experts (MoE) architecture.
Mixture of Experts (MoE) Architecture
Experts: Think of each expert as a specialized “mini-model” focusing on a single domain—perhaps case law parsing, contract drafting, or statutory interpretation.
Gating Mechanism: This is the brains of the operation. Upon receiving an input (e.g., “Draft a motion to compel in a federal product liability case”), the gating system selects the subset of experts most capable of handling that task.
The process is akin to sending your question to the right department in a law firm: corporate experts for an M&A agreement, litigation experts for a discovery motion. By activating only the relevant experts for a given task, the AI remains computationally efficient, scaling easily without ballooning resource needs. This sparse activation mirrors an attorney’s own approach to problem-solving; you don’t bring in your tax partner for a maritime dispute, and you don’t put your entire legal team on every single project.
For agentic reasoning, MoE models shine because they allow the AI to break down multi-faceted tasks into manageable chunks, using the best “sub-models” for each piece. In other words, the AI can autonomously plan which mini-experts to consult, deliberate internally on their advice, and then execute a cohesive final output, much like a senior partner synthesizing input from various practice groups into one winning brief.
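To make the routing idea concrete, here is a minimal, illustrative PyTorch sketch of a sparse MoE layer – a toy example for explanation only, not the actual architecture of o1, o3, or DeepSeek R1. It shows the two ingredients described above: a pool of expert sub-networks and a gating layer that scores them and activates only the top few for each input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy sparse Mixture of Experts layer: route each input to its top-k experts."""

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each "expert" is a small feed-forward sub-network (a stand-in for a
        # specialized mini-model, e.g. one tuned to a particular kind of task).
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        # The gating mechanism scores how relevant each expert is to the input.
        self.gate = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) input representations
        scores = self.gate(x)                               # (batch, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)                # normalize their contributions
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                routed = indices[:, slot] == e              # inputs routed to expert e in this slot
                if routed.any():
                    out[routed] += weights[routed, slot].unsqueeze(-1) * expert(x[routed])
        return out


# Example: 4 inputs of width 64, each handled by only its 2 best-matching experts.
layer = SparseMoELayer(dim=64)
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```

Because only two of the eight experts run for any given input, the layer's cost grows far more slowly than its total parameter count – the property that lets MoE models keep many specialized "departments" on staff without paying for all of them on every query.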
Practical Impacts on Legal Workflows
Research and Drafting
Lawyers spend countless hours researching regulations and precedents. With agentic AI, that time shrinks dramatically. For instance, an MoE-based system could route textual queries to the “case law expert” while simultaneously consulting a “regulatory expert.” The gating mechanism ensures each question goes to the sub-model best suited to answer it. That means more accurate, tailored research in less time.
Document Review and Due Diligence
High-stakes M&A deals or massive litigation cases involve reviewing thousands of pages of documents. Agentic AI can quickly triage which documents to flag for deeper human review, finding hidden clauses or issues that might otherwise take an associate weeks to spot. The result? Faster, cheaper due diligence that can be billed in alternative ways: flat fees, success fees, or other value-based structures, enhancing client satisfaction and firm profitability.
Strategic Advisory
Perhaps the most exciting application is strategic planning. By running different hypothetical arguments or settlement options through an agentic model, attorneys can gain insights into possible outcomes. Imagine a “simulation-expert” sub-model that compares potential trial outcomes based on past jury verdicts, local court rules, and judge profiles. While final decisions rest with the lawyer (and client), AI offers a data-driven edge in deciding whether to settle, proceed, or counter-offer.
Profitability: Beyond the Billable Hour
One of the biggest hurdles to adopting AI is the fear that automated tasks will reduce billable hours. But consider how value-based billing or flat-fee arrangements can transform the equation. If AI cuts a 10-hour research task down to 2, you can offer clients a predictable cost and still maintain or even improve your margin. Clients often prefer certainty, and they value speed if it means resolving matters sooner.
Additionally, adopting agentic AI can allow your firm to take on more cases or offer new services, like real-time compliance monitoring or rapid contract generation. Scaling your practice to handle more volume without expanding headcount can be a powerful revenue driver.
The Human Element: Lawyers as Conductors
Agentic AI models are not a substitute for the judgment, empathy, and moral responsibility that define great lawyering. Rather, think of AI as your personal ensemble of experts, each playing a specialized instrument. You remain the conductor, guiding the orchestra to create a harmonious legal argument or transaction.
If anything, the lawyer’s role becomes more vital in an AI-driven world. Your expertise ensures the AI’s recommendations make sense in the real world of courts, regulations, and human relationships. Your ethical obligations and professional standards guarantee that client confidentiality is safeguarded, conflicts of interest are managed, and justice is served.
Closing Thoughts
The real paradigm shift here comes from recognizing how AI agents, powered by a Mixture of Experts architecture, can function like a fully staffed legal team, all contained within a single system. Picture a virtual army of associates, each specialized in key practice areas, orchestrated to dynamically route tasks to the right “expert.” The result? A law firm that can harness collective knowledge at scale, ensuring top-notch work product and drastically reducing turnaround times.
Rather than replacing human talent, this approach enhances it. Lawyers can channel their energy into strategic thinking, client relationships, and creative advocacy, those tasks that define the very essence of the profession. Meanwhile, agentic AI handles heavy lifting in research, analysis, and repetitive drafting, enabling teams to serve more clients, tackle more complex matters, and ultimately become more impactful and profitable than ever before.
Far from an existential threat, these AI advancements offer us the freedom to practice law at its best, delivering deeper insights with greater efficiency. In embracing these technologies, we build a future where legal professionals can make more meaningful contributions to both their firms and the broader society they serve.
UK ICO Sets Out Proposals to Promote Sustainable Economic Growth
On January 24, 2025, the UK Information Commissioner’s Office (“ICO”) published the letter it sent to the UK Prime Minister, Chancellor of the Exchequer, and Secretary of State for Business and Trade, in response to their request for proposals to boost business confidence, improve the investment climate, and foster sustainable economic growth in the UK. In the letter, the ICO sets out its proposals for doing so, including:
New rules for AI: The ICO recognizes that regulatory uncertainty can be a barrier to innovation, so it proposes a single set of rules for those developing or deploying AI products, supporting the UK government in legislating for such rules.
New guidance on other emerging technologies: The ICO will support businesses and “innovators” by publishing innovation focused guidance in areas such as neurotech, cloud computing and Internet of Things devices.
Reducing costs for small and medium-sized companies (“SMEs”): Focusing on the administrative burden that SMEs face when complying with a complex regulatory framework, the ICO commits to simplifying existing requirements and easing the burden of compliance, including by launching a Data Essentials training and assurance programme for SMEs during 2025/26.
Sandboxes: The ICO will expand on its previous sandbox services by launching an “experimentation program” where companies will get a “time-limited derogation” from specific legal requirements, under the strict control of the ICO, to test new ideas. The ICO would support legislation from UK government in this area.
Privacy-preserving digital advertising: The ICO recognizes the financial and societal benefits provided by the digital advertising economy but notes there are aspects of the regulatory landscape that businesses find difficult to navigate. The ICO wishes to help reduce the burdens for both businesses and customers related to digital advertising. To do so, the ICO, amongst other things, referred to its approach to regulating digital advertising as detailed in the 2025 Online Tracking Strategy (as discussed here).
International transfers: Recognizing the importance of international transfers to the UK economy, the ICO will, amongst other things, publish new guidance to enable quicker and easier transfers of data, and work through international fora, such as G7, to build international agreement on increasing data transfer mechanisms.
Promote information sharing between regulators: The ICO acknowledges that engaging with multiple regulators can be resource intensive, especially for SMEs. The ICO will work with the Digital Regulation Cooperation Forum to simplify this process, and would encourage legislation to simplify information sharing between regulators.
Read the letter from the ICO.
Rules on AI Literacy and Prohibited Systems Under the EU AI Act Become Applicable
On February 2, 2025, the EU AI Act’s rules on AI literacy, along with the prohibition of certain types of AI system, became applicable in the EU.
Under the new AI literacy obligations, providers and deployers will be required to ensure a sufficient level of AI literacy for their staff and other persons working with AI systems on their behalf. For this purpose, organizations should put in place robust AI training programs.
Additionally, under the new rules, the placing on the market, the putting into service, or the use of AI systems that present unacceptable risks to the fundamental rights of individuals is prohibited in the EU. AI systems covered by the new prohibition include AI used for:
social scoring for public and private purposes;
exploitation of vulnerabilities of persons through the use of subliminal techniques;
real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions;
biometric categorization of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation;
individual predictive policing;
emotion recognition in the workplace and education institutions, unless for medical or safety reasons; and
untargeted scraping of the Internet or CCTV footage for facial images to build up or expand facial recognition databases.
The AI literacy requirements and the prohibitions on the abovementioned AI systems are the first obligations under the AI Act to become applicable. The remaining obligations will apply to in-scope entities in stages following a transition period, the length of which will vary depending on the type of AI system or model.
Specific obligations applicable to general-purpose AI models will become applicable on August 2, 2025.
Most obligations under the AI Act, including the rules applicable to high-risk AI systems under Annex III and to systems subject to specific transparency requirements, will become applicable on August 2, 2026.
Obligations related to high-risk systems included in Annex I of the AI Act will become applicable on August 2, 2027.
Certain AI systems and models already on the market may be exempt or have longer compliance deadlines.
Read the AI Act.
The Copyright Office’s Latest Guidance on AI and Copyrightability
The U.S. Copyright Office’s January 2025 report on AI and copyrightability reaffirms the longstanding principle that copyright protection is reserved for works of human authorship. Outputs created entirely by generative artificial intelligence (AI), with no human creative input, are not eligible for copyright protection. The Office offers a framework for assessing human authorship for works involving AI, outlining three scenarios: (1) using AI as an assistive tool rather than a replacement for human creativity, (2) incorporating human-created elements into AI-generated output, and (3) creatively arranging or modifying AI-generated elements.
A key takeaway from the report is that text prompts alone – even detailed ones – cannot currently provide sufficient human control over their execution to attribute authorship rights in the resulting output to the user. While highly detailed, iterative prompts may describe the user’s desired expressive elements, the same prompt can generate widely different results each time, demonstrating a lack of meaningful human control over the AI’s internal “black box” processes, at least as the technology stands now. The Office acknowledged that this conclusion could change if future AI tools offer users a higher degree of direct control over the expressive aspects of the final output.
To illustrate its stance, the Office points to Jackson Pollock’s paintings as examples of copyrightable works even where the output of the creative process appears random or unpredictable. While the final arrangement of paint in a Pollock piece may not have been fully predictable, Pollock himself chose the colors, the number of layers, and the texture, and used his own body movements to physically execute those choices. By contrast, AI-generated works often involve an algorithmic process largely outside the user’s direct control. Pollock’s work, the Office notes, came from tangible and deliberate human decisions, rather than from a system where the user simply prompts and then observes. The key issue is the degree of human control over the creative process, rather than the predictability of the result.
The Office distinguishes between simple text prompts and other forms of creative input provided by a user to an AI system (such as a photograph or drawing). If the user’s creative input is perceptible in the AI-generated output, or the user makes creative modifications to AI-generated material, those portions of the work may be eligible for copyright protection. Similarly, where AI is used to augment or enhance human-created works (like films using AI visual effects), the overall work may remain copyrightable as a whole, even if the AI-generated components would not be individually protectable.
Many AI platforms now allow users to select, edit, and re-arrange individual elements of AI-generated content, offering a greater level of human engagement than text prompts alone. The Office reiterates that a case-by-case analysis of the creation process is necessary to determine whether the human contributions are sufficient for copyright protection, leaving it to the courts to provide further guidance on the extent of human authorship required in specific contexts.
The Office believes the existing legal frameworks are flexible enough to address emerging AI-related copyright issues, and that enacting new regulations would not provide the desired clarity given the inherently fact-specific nature of the analysis and AI’s wide and evolving role in creative processes. The Office also raises policy concerns that extending blanket copyright protection to AI-generated works could flood the market with AI-generated content, potentially devaluing or disincentivizing human-created works.
In light of this guidance, it is essential for creators and businesses to document their creative process, including human-created elements, modifications, and arrangements of AI-generated outputs, and how the AI tool is being used—whether as an assistive technology or as the principal decision-maker in shaping the creative elements of the final output. This documentation may be required or helpful during the copyright application process and in potential ownership or infringement disputes.
While AI continues to transform creative industries, the Office’s guidance is a reminder of the fundamental role of human creativity in works seeking U.S. copyright protection. Notably, the Office’s position largely aligns with emerging international consensus—jurisdictions including Korea, Japan, and EU member states have similarly reaffirmed that meaningful human creative contribution is a prerequisite for copyright protection. Looking ahead, Part 3 of the report will address the training of AI models on copyrighted works, licensing considerations, fair use, and allocation of liability. These topics are among the most complex and closely watched issues at the intersection of AI and copyright.