UK Publishes AI Cyber Security Code of Practice and Implementation Guide

On January 31, 2025, the UK government published the Code of Practice for the Cyber Security of AI (the “Code”) and the Implementation Guide for the Code (the “Guide”). The purpose of the Code is to provide cyber security requirements for the lifecycle of AI. Compliance with the Code is voluntary. The purpose of the Guide is to provide guidance to stakeholders on how to meet the cyber security requirements outlined in the Code, including by providing examples of compliance. The Code and the Guide will also be submitted to the European Telecommunications Standards Institute (“ETSI”) where they will be used as the basis for a new global standard (TS 104 223) and accompanying implementation guide (TR 104 128).
The Code defines each of the stakeholders that form part of the AI supply chain, such as developers (any business across any sector, as well as individuals, responsible for creating or adapting an AI model and/or system), system operators (any business across any sector that has responsibility for embedding or deploying an AI model and system within its infrastructure) and end-users (any employee within a business, and UK consumers, who use an AI model and/or system for any purpose, including to support their work and day-to-day activities). The Code is broken down into 13 principles, each of which contains provisions whose implementation is designated as required, recommended, or a possibility. While the Code is voluntary, a business that chooses to comply must adhere to those provisions designated as required. The principles are:

Principle 1: Raise awareness of AI security threats and risks.
Principle 2: Design your AI system for security as well as functionality and performance.
Principle 3: Evaluate the threats and manage the risks to your AI system.
Principle 4: Enable human responsibility for AI systems.
Principle 5: Identify, track and protect your assets.
Principle 6: Secure your infrastructure.
Principle 7: Secure your supply chain.
Principle 8: Document your data, models and prompts.
Principle 9: Conduct appropriate testing and evaluation.
Principle 10: Communication and processes associated with End-users and Affected Entities.
Principle 11: Maintain regular security updates, patches and mitigations.
Principle 12: Monitor your system’s behavior.
Principle 13: Ensure proper data and model disposal.

The Guide breaks down each principle by its provisions, detailing the risks and threats associated with each provision and providing example measures and controls that could be implemented to comply with it.
Read the press release, the Code, and the Guide.

Workplace AI – Presidential Change and Unknown Expectations for Retail Employers

The use of Artificial Intelligence (“AI”) in the workplace has spread rapidly since President Trump left the White House in early 2021. In recent years, retail employers have started using AI technology in a variety of ways, from automating tasks to implementing data-driven decision making to enhancing the customer experience. Though the Biden administration started to grapple with the use of AI in the workplace, the second Trump administration could mark a dramatic shift in the federal government’s response to these issues.
The Biden administration took a somewhat cautious approach to the proliferation of AI in the workplace. In response to criticism, including concerns that AI technology may exhibit implicit biases in hiring decisions, President Biden issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which established parameters for AI usage and directed federal agencies to take steps to protect workers and consumers from the potential harms of AI.
President Trump revoked the Biden executive order and, on January 23, 2025, replaced it with his own executive order, though the administration has not yet articulated a detailed AI policy. The Trump Executive Order directs the Assistant to the President for Science and Technology and other administration officials to develop an “Artificial Intelligence Action Plan” within 180 days of the order to advance the administration’s policy to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The specifics of the “Artificial Intelligence Action Plan” remain unclear. President Trump signed an executive order regarding AI during his first term in 2019 that encouraged AI research and implementation; however, the technology has developed rapidly since then. Given the Executive Order’s statement that previous government action constituted “barriers to American AI innovation,” it is likely the “Artificial Intelligence Action Plan” will promote the development and use of AI rather than create new red tape for employers.
In the wake of the Trump Executive Order, federal agencies have taken down the limited guidance on the use of AI in the workplace that they had released during the Biden administration. The Equal Employment Opportunity Commission (“EEOC”), for example, had released guidance documents outlining the ways in which AI tools in the workplace could violate the Americans with Disabilities Act (“ADA”) or Title VII of the Civil Rights Act, particularly with respect to hiring. The Department of Labor had also issued guidance addressing wage and hour issues related to AI and laying out best practices for implementing these tools to ensure transparency in AI use and support workers who are impacted by AI. Both of these documents have been pulled from their respective agencies’ websites.
President Trump’s decision to appoint David Sacks as an “AI & Crypto Czar” also signals what retail employers can expect from the administration moving forward. Sacks is an entrepreneur and venture capitalist who has espoused pro-industry stances on his podcast, “All-In.” He also has a personal stake in employers’ adoption of AI as the owner of “Glue,” a software program that integrates AI into workplace chats and competes with platforms like Slack and Teams.
If the federal government does not regulate AI’s use in the workplace, states may attempt to fill this regulatory vacuum with legislation addressing emerging issues or counteracting the Trump administration’s actions. This could lead to a patchwork of different compliance standards for employers from state to state. New York City’s Local Law 144, for example, creates obligations for employers, including conducting bias audits where automated tools play a predominant role in hiring decisions. Illinois has prohibited employers from using AI in a manner that causes a discriminatory effect. Other states may further complicate this landscape in attempts to correct perceived issues with the use of AI in the workplace.
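By way of illustration, the bias audits contemplated by Local Law 144 generally turn on comparing selection rates across demographic categories and computing impact ratios. The following is a minimal, hypothetical sketch of that calculation in Python; the data, category names, and the four-fifths benchmark shown here are illustrative assumptions, not the law’s text or any particular vendor’s methodology.

# Minimal, hypothetical sketch of an impact-ratio calculation of the kind
# used in bias audits of automated employment decision tools.
from collections import Counter

# Hypothetical tool outputs: (demographic_category, selected_by_tool)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(category for category, _ in outcomes)
selected = Counter(category for category, was_selected in outcomes if was_selected)

# Selection rate for each category
rates = {category: selected[category] / totals[category] for category in totals}

# Impact ratio: each category's selection rate divided by the highest rate
highest_rate = max(rates.values())
impact_ratios = {category: rate / highest_rate for category, rate in rates.items()}

for category in sorted(impact_ratios):
    print(f"{category}: selection rate {rates[category]:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")

# A low impact ratio for a category (for example, well below 0.8, the
# traditional "four-fifths" benchmark) signals that the tool's outputs
# warrant closer scrutiny; it is not by itself a legal conclusion.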
While President Trump’s stance encourages the use of AI, retail employers should remember that existing anti-discrimination statutes may still provide a vehicle to challenge employers’ use of AI. For example, if AI used in hiring disadvantages a certain race, the employer could still face liability under Title VII. Retail employers should be on the look-out for further actions from the Trump administration and developments regarding AI in the coming year.

Europe – The AI Revolution Is Underway but Not Quite Yet in HR?

A couple of weeks ago we asked readers of this blog to answer a couple of questions on their organisation’s use of (generative) artificial intelligence, and we promised to circle back with the results. So, drum roll, the results are now in.
1. In our first question, we wanted to know whether your organisation allows its employees to use generative AI, such as ChatGPT, Claude or DALL-E.
While a modest majority allows it, almost 28% of respondents have indicated that use of genAI is still forbidden, and another 17% allow it only for certain positions or departments.

This first question was the logical build-up to the second: 
2. If the use of genAI is allowed to any extent, does that mean the organisation has a clear set of rules around such use?

A solid 50% of respondents have in fact introduced guidelines in this respect. A further 22% are working on it. And that is indeed the sensible approach. It is important that employees know the organisation’s position on (gen)AI – whether they can use it and for what, or why they cannot. They should understand the risks of using genAI inappropriately and what the sanctions may be if they use it without complying with company rules.
Transparency is essential to these rules of play. Management should have a good understanding of the areas within the organisation where genAI is being used. In particular, when genAI is used for research purposes or in areas where IP infringement may be a concern, it is essential that employees are transparent about the help they have had from their algorithmic co-worker. The risk of “hallucinations” in genAI is still very real, and knowing that a work product has come about with the help of genAI should make a manager look at it with different and more attentive eyes.
Please also note in this respect that under the EU AI Act, as from last weekend, providers and deployers of AI systems must ensure that their employees and contractors using AI have an adequate degree of AI literacy, for example by implementing training. The required level of AI literacy is determined “taking into account [the employees’] technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used”.
Since we had anticipated there would be quite a number of organisations that still prohibit the use of genAI, we had also asked:
3. What was the main driver for companies’ prohibition of the use of genAI in the workplace?

The response was not surprising. Organisations are mostly concerned about the risk of errors in AI’s responses and of its inadvertently leaking their confidential information.
While this fear of leaks is justified for free applications, such as the free version of ChatGPT and popular search engines such as Bing and Google that are increasingly powered by Large Language Models (LLMs), it is largely unjustified for the paid versions. The vendors’ business model depends on trust, and they guarantee that data submitted to the paid versions of their LLMs will not be reused for training purposes. To our knowledge, there have been no incidents, nor even the slightest indication, that the large vendors have disregarded their promises in this regard.
This leads to the somewhat ironic conclusion that prohibiting the use of genAI by your employees may be more likely to realise the risks that the company fears, as employees may then be tempted to use a free, less safe version of the application on their personal devices instead.
4. In which areas of HR do our respondents use AI?

Where respondents indicated other areas of use in HR, they mentioned intelligence gathering, improvement of communication and specific areas of recruitment, such as writing job descriptions and skills testing.
5. Does your organisation plan to increase its use of AI in the next twelve months?
A narrow majority responded that this would not be the case:

Those respondents who anticipated increased use of AI generally expected it across all areas, with specific predictions for increased use in areas such as HR bots for benefits enquiries and forecasting.
6. If your organisation does not currently employ AI in HR, why not?

The response to this question is probably the most surprising: a majority of organisations which are not yet using AI in HR are not reluctant for philosophical, technical or employment relations reasons, but have simply not yet got round to it. The next 12-18 months are expected to see a significant increase in usage overall, which should lead to a similar uptick in the HR sector.
We ended our survey with perhaps the most delicate question:
7. Do you expect that in the next 12 to 24 months, there will be redundancies within your organisation due to increased use of AI?
For the large majority of organisations, this is not the case.

To this same question, ChatGPT itself responded the following:
The use of AI in businesses can indeed lead to job loss in certain sectors, especially in roles that rely heavily on routine, repetitive tasks. For example, administrative roles, customer service, or even certain manufacturing and warehouse jobs could be replaced by AI, as it can often perform these tasks more efficiently and cost-effectively. On the other hand, AI can also create new jobs, especially in fields like data analysis, machine learning, AI development, and management. Businesses will likely focus more on roles that require creativity and strategy, areas where human input is essential, like decision-making and improving customer relationships. The key will be how companies combine the use of AI with upskilling their workforce, enabling employees to adapt to the changing job landscape.

As is often – though certainly not always – the case, ChatGPT is not wrong about this. We didn’t ask it specifically about its impact on staffing levels in HR, but we think that considerable comfort can be taken from its reference to the continued sanctity of roles where “human input is essential”. It is a very far-off future where many of the more sensitive and difficult aspects of HR management will be accepted as adequately discharged by an algorithm.

The Opening Act: Significant Developments in Trump’s First Two Weeks

During its first two weeks in office, President Donald Trump’s administration issued numerous policies affecting employers in areas such as immigration, labor, and workplace safety, and reshaped federal regulatory and enforcement policies on artificial intelligence (AI) and unlawful employment discrimination and harassment.
Here is a roundup summarizing the key provisions of the executive orders and other policies from the first two weeks of the new administration. 
Quick Hits

Changes to immigration policy included stopping entry of refugees and restricting birthright citizenship.
The federal government now recognizes only two genders, male and female. This policy included removing previous guidance that protected LGBTQ workers from discrimination and harassment.

Immigration Policy
On January 20, 2025, President Trump issued an executive order (EO 14160) limiting birthright citizenship. The executive order asserts that children born in the United States on or after February 19, 2025, who do not have at least one lawful permanent resident or U.S. citizen parent, will not have a claim to birthright citizenship.
On January 23, 2025, a federal judge in Seattle, WA, blocked enforcement of this executive order in response to four states (Washington, Illinois, Arizona, and Oregon) seeking a temporary restraining order. Two weeks later, on February 5, a Maryland federal judge issued a nationwide preliminary injunction blocking the executive order in response to a request by five pregnant undocumented women who argued that the order is unconstitutional and violates several federal laws.
A different executive order revisits and reviews the United States-Mexico-Canada Agreement (USMCA) and other U.S. trade agreements. The United States’ participation in the USMCA makes the TN professional work visa available for citizens of Canada and Mexico.
A separate executive order aims to utilize in-depth vetting and screening of all individuals seeking admission to the United States, including obtaining information to confirm any claims made by those individuals and assess public safety threats.
Another executive order suspended the entry of refugees into the United States under the United States Refugee Admissions Program (USRAP). That order took effect on January 27, 2025.
A separate executive order tightens enforcement of border policies. That includes:

detaining undocumented people “apprehended on suspicion of violating federal or state law,” and removing them promptly;
pursuing criminal charges against undocumented people and “those who facilitate their unlawful presence in the United States”;
terminating parole programs for Cubans, Haitians, Nicaraguans, and Venezuelans; and
utilizing advanced vetting techniques to determine familial relationships and biometrics scanning for all individuals encountered or apprehended by the U.S. Department of Homeland Security (DHS).

LGBTQ+ Employees
On January 20, 2025, President Trump issued EO 14168, which states that the federal government recognizes only two genders: male and female. The federal government will no longer use nonbinary gender categories in compliance and enforcement actions.
On January 28, 2025, U.S. Equal Employment Opportunity Commission (EEOC) Acting Chair Andrea R. Lucas rolled back much of the EEOC’s Biden-era guidance on antidiscrimination and antiharassment protections for LGBTQ+ employees.
On January 27, 2025, President Trump removed Democratic EEOC commissioners Charlotte A. Burrows and Jocelyn Samuels and discharged EEOC general counsel Karla Gilbride.
Labor
President Trump also took the unprecedented move of removing National Labor Relations Board (NLRB) Member Gwynne Wilcox, a Democratic appointee whose term was not set to end until August 2028. The president also discharged NLRB general counsel Jennifer Abruzzo before the end of her term and later tapped William Cowen, who was serving as the regional director for the NLRB’s Los Angeles Region Office (Region 21), as the new acting general counsel.
The discharge of the general counsel was expected after former President Biden discharged the general counsel who served during President Trump’s first term, which was upheld in the courts. However, the removal of a sitting NLRB member was surprising and leaves the Board without a quorum to hear cases. Former Member Wilcox has filed a lawsuit challenging her removal, which is likely to lead to a lengthy court case that could ultimately land before the Supreme Court of the United States.
Workplace Safety
The Occupational Safety and Health Administration’s (OSHA) proposed Biden-era rules on “Heat Injury and Illness Prevention in Outdoor and Indoor Work Settings” and the “Emergency Response Standard” appear to be on the chopping block following President Trump’s “Regulatory Freeze Pending Review” memorandum issued on January 20, 2025. The presidential memorandum directs agencies to refrain from issuing or proposing any new rules until a department or agency head designated by the president has reviewed and approved them.
Higher Education and Title IX
On January 31, 2025, the U.S. Department of Education announced that it would not enforce Title IX of the Education Amendments of 1972 in accordance with a 2024 Biden-era rule that had expanded the definition of “on the basis of sex” to include gender identity, sex stereotypes, sex characteristics, and sexual orientation, and mandated that schools allow students and employees to access facilities, programs, and activities consistent with their self-identified gender.
Instead, the department said it will enforce the protections under the prior 2020 Title IX rule. The change aligns the department with EO 14168 and follows federal court decisions that have vacated or enjoined the 2024 Title IX final rule, finding that it violated the plain text and original meaning of Title IX.
Artificial Intelligence (AI)
President Trump is also reshaping federal policy on artificial intelligence, moving away from the Biden administration’s focus on mitigating potential negative impacts on workers and consumers.
On January 23, 2025, President Trump signed EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” The order states, “[i]t is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
The EO came after President Trump, on his first day in office on January 20, 2025, rescinded President Biden’s EO 14110, which was signed in October 2023 and had sought to implement safeguards for the “responsible development and use of AI.”
Next Steps
President Trump’s executive orders and other actions over his first two weeks in office have disrupted labor and employment law and created uncertainty for employers, at least in the near term. It remains to be seen what the lasting effects will be, particularly as the administration appears to have more changes in store. Some of the executive orders and other actions are being challenged, or are expected to be challenged, in the courts, which could resolve questions about the constitutional authority of the president and about the statutes creating federal agencies; how those cases will turn out is not yet clear.

Healthcare Industry Leaders Predict Four Areas to Watch After the U.S. Election: Takeaways from the Business of Health Care Conference Hosted by the Miami Herbert Business School

The recent U.S. election has had profound implications for the healthcare industry, prompting industry leaders to reexamine their strategies and day-to-day operations. At the Miami Herbert Business School’s annual “The Business of Health Care” conference on January 24, 2025, a pivotal forum brought together stakeholders across key sectors—home care, hospital systems, payors, and others—to assess the election’s impact and chart a path forward. The conference highlighted the need for collaboration, innovative solutions, and strategic leadership in addressing the challenges ahead.
The speakers emphasized the need for collaboration across the healthcare spectrum to harness technological advancements, ensure sustainable healthcare financing, and address systemic challenges like provider shortages and public trust, with four key areas for potential change:

Deploying Artificial Intelligence Solutions: The administration has made funding and supporting AI technology development a priority, and AI has the potential to achieve significant administrative efficiencies and cost-savings for payors, health systems and providers. It can also contribute to developing novel drug therapies, support public health surveillance, and drive new clinical treatment modalities to support physicians. It may also help to reduce discrimination in claims processing, ensuring fairer results for patients regardless of race. However, the conference also underscored the challenges of integrating AI into health care. Valid data and clinician oversight are essential to ensure that AI systems enhance human judgment rather than replace it, particularly in pre-authorization decisions. The speakers stressed the importance of balancing technological innovation with the preservation of medical expertise to ensure equitable and effective outcomes.
Ensuring Sustainability for Medicare Advantage and ACA Policies. The oldest Baby Boomers are starting to reach 80 years of age, and the U.S. population will continue to age at a rapid pace in the coming decades. Our country’s healthcare spending is unsustainable at its current pace, and value-based care solutions will be necessary to provide patients with timely access to needed care. In addition, dealing with obesity and other co-morbid conditions (e.g., hypertension, diabetes mellitus, heart failure, arthritis) requires a multi-faceted approach and collaboration among payors, providers, and the government. In strategizing about potential solutions, the panelists identified the following critical action items: controlling pharmacy costs (including via pharmacy benefit manager regulation), providing access to preventative care and early-stage interventions, making healthcare payments consistent across sites of service, and expanding the use of and access to hospice for patients at the end of their lives. Early-stage interventions and preventative care were emphasized as cost-effective strategies to improve outcomes while managing resources effectively. These value-based initiatives represent a necessary shift to meet the demands of an aging and increasingly complex patient population.
Solving for Provider Shortages, Particularly in Rural Areas. Along with an aging population, the U.S. is also grappling with widespread provider shortages, particularly in rural parts of the country, and with burnout among physicians, nurses, and other clinicians across the industry. The conference highlighted several strategies to address this issue, including financial incentives to attract providers to underserved areas and workplace violence prevention programs to improve working conditions for nursing staff. The panelists also noted that the number of medical school graduates working on the administrative side of health care has increased significantly in recent years, leaving fewer practicing clinicians available to render care. In terms of potential solutions, the panelists suggested that AI support may help alleviate some of the stressors, as would efforts to reduce provider burnout, and they stressed the need to make clinical work – particularly in rural areas – financially attractive for physicians. Finally, they identified the adoption of team-based care strategies, alone or in conjunction with value-based care solutions, as a potential way to address the access and timeliness challenges caused by the shortages. By fostering collaboration among healthcare professionals, team-based models can alleviate pressure on individual clinicians, improve the efficiency of care delivery, and help bridge gaps in access amid growing demand for services.
Bolstering Public Trust in Science and Institutions. The erosion of public trust in science and healthcare institutions, exacerbated by the COVID-19 pandemic and political polarization, was another pressing issue discussed. Patients’ genuine concerns about cost, access, and claims challenges have fueled frustration, making it critical for healthcare leaders to foster transparency and combat misinformation. In addition, the fragmentation of information on social media may negatively influence patients’ opinions of and attitudes toward providers. The biggest concern is that, in response to another pandemic or a major issue such as widespread antibiotic resistance, there may be insufficient support and funding for science-based solutions. While that is a significant concern, the panelists noted that many of them are actively engaging with and educating the administration on potential policy initiatives and their outcomes, in the hope of supporting the strength of those institutions and their ability to respond in a crisis. They also emphasized the importance of actively engaging with communities and policymakers to address these issues and rebuild confidence in the healthcare system, stressing that public trust is essential not only for managing future crises, but also for advancing systemic reforms that benefit all stakeholders.

The Miami Herbert Business School conference underscored the importance of strategic collaboration and adaptive leadership in addressing the healthcare industry’s most pressing challenges. As highlighted throughout the discussions, sectors such as home care, hospital systems, and payors must work together to harness AI, implement value-based care, and address workforce shortages while fostering public trust.
By prioritizing innovation, equity, and transparency, healthcare leaders can navigate these challenges and build a more efficient, sustainable, and resilient healthcare system for the future. The lessons and insights from this pivotal forum offer a roadmap for turning challenges into opportunities and delivering meaningful progress for patients, providers and payors alike.

The Double-Edged Sword of AI Disclosures: Insurance & AI Risk Mitigation

Artificial intelligence (AI) is reshaping the corporate landscape, offering transformative potential and fostering innovation across industries. But as AI becomes more deeply integrated into business operations, it introduces complex challenges, particularly around transparency and the disclosure of AI-related risks. A recent lawsuit filed in the US District Court for the Southern District of New York—Sarria v. Telus International (Cda) Inc. et al., No. 1:25-cv-00889 (S.D.N.Y. Jan 30, 2025)—highlights the dual risks associated with AI-related disclosures: the dangers posed by action and inaction alike. The Telus lawsuit underscores not only the importance of legally compliant corporate disclosures, but also the dangers that can accompany corporate transparency. Maintaining a carefully tailored insurance program can help to mitigate those dangers.
Background
On January 30, 2025, a class action was brought against Telus International (CDA) Inc., a Canadian company, along with its former and current corporate leaders. Known for its digital solutions enhancing customer experience, including AI services, cloud solutions and user interface design, Telus faces allegations of failing to disclose crucial information about its AI initiatives.
The lawsuit claims that Telus failed to inform stakeholders that its AI offerings required the cannibalization of higher-margin products, that profitability declines could result from its AI development and that the shift toward AI could exert greater pressure on company margins than had been disclosed. When these risks became reality, Telus’ stock dropped precipitously and the lawsuit followed. According to the complaint, the omissions allegedly constitute violations of Sections 10(b) and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5.
Implications for Corporate Risk Profiles
As we have explained previously, businesses face AI-related disclosure risks for affirmative misstatements. Telus highlights another important part of this conversation in the form of potential liability for the failure to make AI-related risk disclosures. Put differently, companies can face securities claims for both understating and overstating AI-related risks (the latter often being referred to as “AI washing”).
These risks are growing. Indeed, according to Cornerstone’s recent securities class action report, the pace of AI-related securities litigation has increased, with 15 filings in 2024 after only 7 such filings in 2023. Moreover, every cohort of AI-related securities filings was dismissed at a lower rate than other core federal filings.
Insurance as a Risk Management Tool
Considering the potential for AI-related disclosure lawsuits, businesses may wish to strategically consider insurance as a risk mitigation tool. Key considerations include:

Audit Business-Specific AI Risk: As we have explained before, AI risks are inherently unique to each business, heavily influenced by how AI is integrated and the jurisdictions in which a business operates. Companies may want to conduct thorough audits to identify these risks, especially as they navigate an increasingly complex regulatory landscape shaped by a patchwork of state and federal policies.
Involve Relevant Stakeholders: Effective risk assessments should involve relevant stakeholders, including various business units, third-party vendors and AI providers. This comprehensive approach ensures that all facets of a company’s AI risk profile are thoroughly evaluated and addressed.
Consider AI Training and Educational Initiatives: Given the rapidly developing nature of AI and its corresponding risks, businesses may wish to consider education and training initiatives for employees, officers and board members alike. After all, developing effective strategies for mitigating AI risks can turn in the first instance on a familiarity with AI technologies themselves and the risks they pose.
Evaluate Insurance Needs Holistically: Following business-specific AI audits, companies may wish to meticulously review their insurance programs to identify potential coverage gaps that could lead to uninsured liabilities. Directors and officers (D&O) programs can be particularly important, as they can serve as a critical line of defense against lawsuits similar to the Telus class action. As we explained in a recent blog post, there are several key features of a successful D&O insurance review that can help increase the likelihood that insurance picks up the tab for potential settlements or judgments.
Consider AI-Specific Policy Language: As insurers adapt to the evolving AI landscape, companies should be vigilant about reviewing their policies for AI exclusions and limitations. In cases where traditional insurance products fall short, businesses might consider AI-specific policies or endorsements, such as Munich Re’s aiSure, to facilitate comprehensive coverage that aligns with their specific risk profiles.

Conclusion
The integration of AI into business operations presents both a promising opportunity and a multifaceted challenge. Companies may wish to navigate these complexities with care, ensuring transparency in their AI-related disclosures while leveraging insurance and stakeholder involvement to safeguard against potential liabilities.

Regulation Round Up: January 2025

Welcome to the Regulation Round Up, a regular bulletin highlighting the latest developments in UK and EU financial services regulation.
Key developments in January 2025:
31 January
UK Listing Rules: The FCA published a consultation paper (CP25/2) on further changes to the public offers and admissions to trading regime and to the UK Listing Rules.
Cryptoassets: The European Securities and Markets Authority (“ESMA”) published a supervisory briefing on best practices relating to the authorisation of cryptoasset service providers under the Regulation on markets in cryptoassets ((EU) 2023/1114) (“MiCA”).
FCA Handbook: The Financial Conduct Authority (“FCA”) published Handbook Notice 126, which sets out changes to the FCA Handbook made by the FCA board on 30 January 2025.
Public Offer Platforms: The FCA published a consultation paper on further proposals for firms operating public offer platforms (CP25/3).
30 January
FCA Regulation Round-Up: The FCA published its regulation round-up for January 2025, which covers, among other things, the launch of “My FCA” in spring 2025 and changes to FCA data collection.
29 January
EU Competitiveness: The European Commission published a communication on a Competitiveness Compass for the EU (COM(2025) 30). Please refer to our dedicated article on this topic here.
EMIR 3: ESMA published a speech given by Klaus Löber, Chair of the ESMA CCP Supervisory Committee, that sets out ESMA’s approach to the mandates assigned to it by Regulation (EU) 2024/2987 (“EMIR 3”).
28 January
EMIR 3: The European Systemic Risk Board published its response to ESMA’s consultation paper on the conditions of the active account requirement under EMIR 3.
ESG: The FCA published its adaptation report, which provides an overview of the climate change adaptation challenges faced by financial services firms.
27 January
Artificial Intelligence: The Global Financial Innovation Network published a report setting out key insights on the use of consumer-facing AI in global financial services and the implications for global financial innovation.
DORA: The Joint Committee of the European Supervisory Authorities (“ESAs”) published the terms of reference for the EU-SCICF Forum established under the Regulation on digital operational resilience for the financial sector ((EU) 2022/2554) (“DORA”).
24 January
Cryptoassets: ESMA published an opinion on draft regulatory technical standards specifying certain requirements in relation to conflicts of interest for cryptoasset service providers under MiCA.
MiFIR: The European Commission adopted a Delegated Regulation (C(2025) 417 final) (here) supplementing the Markets in Financial Instruments Regulation (600/2014) (“MiFIR”) as regards OTC derivatives identifying reference data to be used for the purposes of the transparency requirements laid down in Articles 8a(2), 10 and 21.
ESG: The EU Platform on Sustainable Finance published a report providing advice to the European Commission on the development and assessment of corporate transition plans.
23 January
Financial Stability Board: The Financial Stability Board published its work programme for 2025.
20 January
Motor Finance: The FCA published its proposed summary grounds of intervention in support of its application under Rule 26 of the Supreme Court Rules 2009 to intervene in the Supreme Court motor finance appeals.
Motor Finance: The FCA published its response to a letter from the House of Lords Financial Services Regulation Committee relating to the Court of Appeal judgment on motor finance commissions.
Cryptoassets: ESMA published a statement on the provision of certain cryptoasset services in relation to asset-referenced tokens and electronic money tokens that are non-compliant under MiCA.
17 January
DORA: The ESAs published a joint report (JC 2024 108) on the feasibility of further centralisation of reporting of major ICT-related incidents by financial entities, as required by Article 21 of DORA.
Basel 3.1: The Prudential Regulation Authority published a press release announcing that, in consultation with HM Treasury, it delayed the UK implementation of the Basel 3.1 reforms to 1 January 2027.
16 January
Cryptoassets: The European Banking Authority and ESMA published a joint report (EBA/Rep/2025/01 / ESMA75-453128700-1391) on recent developments in cryptoassets under MiCA.
14 January
FMSB’s Workplan: The Financial Markets Standards Board (“FMSB”) published its workplan for 2025.
FSMA: The Financial Services and Markets Act 2000 (Designated Activities) (Supervision and Enforcement) Regulations 2025 (SI 2025/22) were published, together with an explanatory memorandum. The amendments allow the FCA to supervise, investigate and enforce the requirements of the designated activities regime.
Sanctions: HM Treasury and the Office of Financial Sanctions Implementation published a memorandum of understanding with the US Office of Foreign Assets Control.
13 January
BMR: The European Parliament published the provisionally agreed text (PE767.863v01-00) of the proposed Regulation amending the Benchmarks Regulation ((EU) 2016/1011) (“BMR”) as regards the scope of the rules for benchmarks, the use in the Union of benchmarks provided by an administrator located in a third country and certain reporting requirements (2023/0379(COD)).
10 January
Artificial Intelligence: The UK Government published its response to the House of Commons Science, Innovation and Technology Committee report on the governance of AI.
9 January
Collective Investment Schemes: The Financial Services and Markets Act 2000 (Collective Investment Schemes) (Amendment) Order 2025 (SI 2025/17) was published, together with an explanatory memorandum. The amendments clarify that arrangements for qualifying cryptoasset staking do not amount to a collective investment scheme.
8 January
EU Taxonomy: The EU Platform on Sustainable Finance published a draft report and a call for feedback on activities and technical screening criteria to be updated or included in the EU taxonomy. Please refer to our dedicated article on this topic here.
3 January
Consolidated Tape: ESMA published a press release launching the first selection for the consolidated tape provider for bonds.
 
Sulaiman Malik & Michael Singh contributed to this article.

Copyright Office Says AI-Generated Works Based on Text Prompts Are Not Protected

Highlights
The latest report from the U.S. Copyright Office clarifies that the use of AI to assist human creativity does not necessarily preclude copyright protection for the resulting work 
The key distinction lies in whether AI is merely a tool aiding human creativity or whether it serves as a substitute for human authorship
The Office reassures creators that using AI for tasks such as outlining a book or generating song ideas does not affect the copyrightability of the final work, provided the author is “referencing, but not incorporating, the output”

The U.S. Copyright Office released its January 2025 report to address the legal and policy issues related to artificial intelligence (AI) and copyright, as outlined in the Office’s August 2023 Notice of Inquiry. This report has clarified that outputs generated by AI based solely on text prompts – regardless of their complexity – are not protected under current copyright law.
According to the Office, while generative AI represents an evolving technology, existing copyright principles remain applicable without requiring changes to the law. These principles, however, provide limited protection for many AI-generated works.
The Office’s report states that AI-generated outputs lack the necessary human control to confer authorship on users, as AI systems themselves cannot hold copyrights. The Office emphasized that whether a prompt is simple or highly detailed, it does not establish the user as the author of the resulting work. The Office argues that even when users refine and resubmit prompts multiple times, the final output ultimately reflects the AI system’s interpretation rather than the user’s original authorship.
Exemplifying the distinction between AI-generated works and human authorship, the Office contrasts the AI-generation process with Jackson Pollock’s painting technique. While Pollock did not precisely control the placement of each paint splatter, he exercised creative authority over key artistic choices, such as color selection, layering, texture, and composition. His physical movements were integral to executing these choices, demonstrating a level of human control the Office says is absent in AI-generated content.
However, the Office says some degree of protection may also apply when artists modify their own work using AI. For instance, an artist who enhances an illustration with AI-generated 3D effects may retain copyright protection, provided the original work remains recognizable. While AI-generated elements themselves remain uncopyrightable, the “perceptible human expression” in the modified work could still qualify for protection.
Similarly, the Office notes that works that incorporate AI-generated content may be eligible for copyright if they involve significant human creative input. A comic book featuring AI-generated images could receive protection if a human arranges the images and pairs them with original text, though the AI-generated images alone would not be covered. Likewise, a film with AI-generated special effects or background artwork remains copyrightable, even if the individual AI-generated elements are not.
The Office notes that, on a case-by-case basis, even AI-generated images prompted by users could receive protection if a human selects, modifies, and remixes specific portions, drawing an analogy to derivative works of human-created art – except without an original human author.
The U.S. Copyright Office identifies three primary scenarios in which AI-generated material may qualify for copyright registration and receive an official certificate of copyright:

When AI-generated output includes human-authored content
When a human significantly modifies, arranges, or edits the AI-generated material
When the human contribution demonstrates a sufficient degree of creativity and originality

The report also addresses whether AI-generated text prompts themselves can be copyrighted. Generally, the Office likens prompts to “instructions” that convey uncopyrightable ideas. However, the Office acknowledges that particularly creative prompts may contain “expressive elements,” though this does not extend copyright protection to the outputs they generate.
This guidance forms part of the Copyright Office’s broader initiative to address AI-related legal and policy issues. It follows a July 2024 report advocating for new deepfake regulations, and the Office plans to release a final report examining “the legal implications of training AI models on copyrighted works.”
Takeaways
The Copyright Office does not rule out the possibility that this legal landscape could evolve alongside AI technology. It notes that, in theory, AI systems could eventually allow users to exert such a high degree of control over the output that the system’s role becomes purely mechanical. However, under current conditions, prompts do not “adequately determine the expressive elements produced, or control how the system translates them into an output.”
Ultimately, the Copyright Office emphasizes that the critical issue is not the predictability of the outcome but the degree of human control over the creative process. 

Copyright Office: Copyrighting AI-Generated Works Requires “Sufficient Human Control Over the Expressive Elements” – Prompts Are Not Enough

On January 29, 2025, the Copyright Office (the “Office”) released its second report in a three-part series on artificial intelligence and copyright. Part 1 was released in July 2024 and addressed digital replicas. Part 2 focuses on the copyrightability of AI-generated work – that is, providing greater detail into what level of human interaction is required for a work containing AI-generated works to rise to the level of copyrightability. The report includes eight conclusions to guide copyright applicants and concludes that existing law is sufficient to address copyrighting AI-generated works.
In short, the report finds that protection of AI-generated works requires “sufficient human control over the expressive elements [of a work]” (emphasis added). Thus, not surprisingly, the report finds that prompts alone do not meet this threshold because they are merely unprotectable ideas. Despite this bright-line rule on prompts, the Office seemingly makes an exception for when humans input their own original works as prompts, such as uploading an original drawing into an AI-art generator. If that human-authored work is “perceptible in the output,” copyright protection is at least available for that portion of the AI-generated work.
The Office distinguishes between using AI tools to “assist” with creation and using the AI as a “stand in” for the human’s creativity. Assistance should not impact the copyrightability of the overall work, but copyright protection is less likely once the generative AI stands in as the creative. The Office did not expand upon when “assisting” becomes “standing in,” but noted that using AI to “brainstorm” is likely not a bar to copyrighting the completed work so long as the AI output is not “incorporated” in the finished product.
While it is now clear that prompts alone are insufficient to “control” the expressive elements, it is less clear what will reach this “sufficient” threshold to garner copyright protection, as the Office will make these determinations on a case-by-case (and examiner-by-examiner) basis. For works including AI-generated content, applicants should continue to provide statements detailing their human contributions.
Importantly, post-Loper Bright, the Copyright Office’s report, while it may be influential for courts and academics, does not have the final say on the matter. Specifically, if the degree of human input necessary for copyright protection in works created using AI is found to be ambiguous under the statute, Loper Bright holds that the courts, not the agency, determine which AI-generated outputs are protectable, with the Supreme Court ultimately deciding the question if any case goes that far. For more on this shift of regulatory power from agencies to courts, see our Loper Bright blog post.
Takeaways from the Report
• Copyrightability of AI will be addressed with the existing law, which includes the “human authorship” requirement.
• No copyright protection for works purely generated by AI or where there was “insufficient human control over the expressive elements.”
• Prompts alone – even if extremely detailed – do not exert “sufficient control” over the output to make it copyrightable, but “where a human inputs their own copyrightable work and that work is perceptible in the output, they will be the author of at least that portion of the output.”
• Using AI tools to assist with the creation of the work should not interfere with the copyrightability of the overall work. If, however, the AI “stands in for human creativity,” copyright may not be available. Using AI to “brainstorm” (e.g., for song ideation or creating a preliminary outline for writing) should not affect copyrightability if the user is “prompting” the AI and “referencing, but not incorporating, the output in the development of her own work of authorship.” (Emphasis added.)
• Original expression by a human author is still copyrightable, “even if the work also includes AI-generated material.” For example, adding AI special effects to a human-authored film would not destroy the copyrightability of the film itself (though the AI special effects would not have protection and should be disclaimed when filing an application).
• Copyright protection is still available for “the creative selection, coordination, or arrangement of material” in the AI-generated outputs, or “creative modifications of the outputs.” Applications for such works will be analyzed by the Office on a case-by-case basis, so applicants should include a detailed statement of their human contributions.
The Copyright Office’s highly anticipated third report is expected to address “the training of AI models on copyrighted works, licensing considerations, and allocation of any liability.” We expect this report to be the most impactful on the AI market and its future.

Trump’s Executive Order Reshapes U.S. AI Policy; Italian Regulator Blocks DeepSeek’s Processing of Personal Data

On January 23, 2025, President Donald Trump issued an executive order aimed at reinforcing American leadership in artificial intelligence (AI) by eliminating regulatory barriers and revoking prior policies perceived as restrictive. This order follows an initial January 20, 2025, executive action that rescinded more than 50 prior executive orders, including Executive Order 14110 (2023), which established a framework for the responsible development and use of AI. Wilson Elser covered the prior Executive Order in our January 22, 2025, Insight.
The January 23, 2025, Executive Order adopts the definition of “artificial intelligence” from 15 U.S.C. 9401(3), which provides as follows: 
The term “artificial intelligence” means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to:
A. Perceive real and virtual environments
B. Abstract such perceptions into models through analysis in an automated manner
C. Use model inference to formulate options for information or action.
The adoption of this definition is significant: many definitions of artificial intelligence exist, and settling on one clarifies which systems the order encompasses.
Parameters of the New Executive Order
The new executive order prioritizes AI systems free from ideological bias, emphasizing economic competitiveness, national security, and human flourishing. It mandates the creation of an AI Action Plan within 180 days, led by top White House advisers, and the immediate review of prior AI regulations to ensure alignment with the new strategy. Additionally, the Office of Management and Budget will revise key memoranda to reflect this policy shift.
The revocation of Executive Order 14110 signals a shift away from prior AI governance principles, which emphasized safety, privacy, and international collaboration, toward a deregulatory approach focused on innovation and economic growth. The move aligns with the 2024 Republican Party platform, which criticized regulatory constraints on AI and advocated for a development model rooted in free speech and human flourishing. Notably, CEOs of major tech companies, including Amazon, Meta, Google, and Tesla, attended the recent inauguration, suggesting industry interest in the administration’s AI policy direction.
With the administration signaling that these revocations are only the beginning of broader regulatory reforms, it remains to be seen how future federal actions will shape AI governance in 2025 and beyond.
The DeepSeek Effect
Coinciding with the administration’s strides in revising U.S. AI governance policy, a major shake-up in the AI landscape occurred this week when DeepSeek, a Chinese startup, unveiled an advanced AI model on January 20, leading to a significant market downturn, with U.S. chipmakers including Nvidia suffering market losses. DeepSeek’s cost-efficient approach to building large language models has raised concerns about the demand for high-end AI chips and the power required for AI-centric data centers. The revelation challenges existing investment assumptions in AI infrastructure, with DeepSeek claiming it developed its model for under $6 million.
Industry experts suggested that this development could disrupt the dominance of U.S. firms such as OpenAI, forcing them to adopt similar cost-cutting strategies. Experts also commented that greater AI efficiency could lead to even more widespread adoption. Analysts predict a potential shift in AI investment strategies, with capital moving away from a chip-heavy infrastructure toward AI applications and services. Geopolitical implications also are at play, with venture capitalist Marc Andreessen calling DeepSeek’s R1 model “AI’s Sputnik Moment.”
The U.S. Response
The U.S. response to DeepSeek’s emergence has been swift. President Trump’s “Stargate” initiative, announced on January 21, 2025, aims to counter China’s AI advances by investing up to $500 billion in AI infrastructure. Meanwhile, export controls on Nvidia chips remain a contentious issue, with speculation that DeepSeek may have acquired advanced AI hardware through third-party sources.
A Global Response
Italy’s Data Protection Authority has urgently ordered a restriction on DeepSeek’s processing of Italian users’ data, citing unsatisfactory responses from the Chinese companies behind the chatbot. DeepSeek has gained millions of downloads globally but claimed that it does not operate in Italy and is not subject to European data laws, a stance the regulator rejected. An investigation has been opened.
Italy is not the only country concerned about DeepSeek. News outlets have reported that the data protection authorities of Ireland, South Korea, Australia, and France have made requests to DeepSeek about its data practices. This raises the question of whether the United States will do the same as part of its strategy to ensure U.S. primacy in AI development. On a related note, what will the United States do to protect its citizens at the national level with respect to the privacy of their data?
Summary
The competition between the United States and China now extends to global AI investments, particularly in the Middle East and Asia, where both nations seek partners to build energy-intensive AI data centers. Some analysts argue that cooperation on AI governance still may be possible, drawing parallels to past U.S.-China agreements on nuclear safety.
While DeepSeek’s breakthrough introduces volatility in AI markets and prompted swift action from the Italian regulator, it also accelerates the adoption of AI technology worldwide and is likely to spur further regulatory response from the Trump Administration. The United States has not enacted any comprehensive data privacy law at the national level, though there have been several proposals. State governments – 19 of which have enacted data privacy laws – may also move to address perceived gaps in privacy protection. Thus, it remains to be seen what will be considered in 2025.

AI at Work: Design Use Mismatches [Podcast]

In the final installment of our AI at Work series, partner Guy Brenner and senior counsel Jonathan Slowik tackle a critical issue: mismatches between how artificial intelligence (or AI) tools are designed and how they are actually used in practice. Many AI developers emphasize their rigorous efforts to eliminate bias, reassuring employers that their tools are fair and objective, but a system designed to be bias-free can still produce biased outcomes if used improperly. Tune in as we explore real-world examples of these risks and what employers can do to ensure they are leveraging AI responsibly.

Guy Brenner: Welcome to The Proskauer Brief: Hot Topics in Labor and Employment Law. I’m Guy Brenner, a partner in Proskauer’s Employment Litigation & Counseling group, based in Washington, D.C. I’m joined by my colleague, Jonathan Slowik, a special employment law counsel in the practice group, based in Los Angeles. This is the final installment of our initial multi-part series detailing what employers need to know about the use of artificial intelligence, or AI, when it comes to employment decisions, such as hiring and promotions. Jonathan, thank you for joining me today.
Jonathan Slowik: It’s great to be here, Guy.
Guy Brenner: So if our listeners haven’t heard the earlier installments of the series, we encourage you to go back and listen to them. In part one, we go through what we hope is a useful background about what AI is and the solutions it offers to employers. In part two, we talk about issues with training data and how that can lead to biased or otherwise problematic outputs with AI tools. In part three, we discussed so-called black box issues. In other words, issues that arise due to the fact that it may be difficult to understand the inner workings of many advanced AI systems. Today’s episode is about mismatches between the design of an AI tool and how the tool is used in practice. Jonathan, for background, AI developers generally put a lot of effort into eliminating bias from their products, isn’t that right?
Jonathan Slowik: Yes, that’s right. And that’s a major selling point for a lot of these developers. Employers obviously have a great interest in ensuring that they’re deploying a tool that’s not going to create bias in an unintended way. And so, if you go to just about any of these developers’ websites, you can find statements or even full pages about the efforts and lengths they’re going to in order to ensure that they’re putting out products that are bias free. And this should provide some measure of comfort for employers. It’s clearly something that the developers are competing on. But even if a product is truly bias free, it could still produce biased results if it’s deployed in a way that the developer didn’t intend. To make this concrete, I want to go through a few examples. So first, suppose an employer instructs their resume scanner to screen out applicants who are more than a certain distance from the workplace, perhaps on the theory that these people are less likely to be serious candidates for the position. And if you remember from part one of this series, hiring managers are overwhelmed with applications these days, given the ability to submit resumes at scale on platforms like LinkedIn or Indeed. Guy, do you see any problem with this particular screening criterion?
Guy Brenner: Well, Jonathan, I can see the attractiveness of it. And I can also see how AI can make something like this, which hiring managers may have thought of in the past, possible when otherwise it would be impossible, just by virtue of the speed and efficiency of AI and its ability to do things, you know, in a matter of seconds. And it sounds unbiased and objective, and it’s a rational basis for trying to cull through the numerous resumes that employers are inundated with whenever they’re trying to fill a position. But the fact is that many of the places in which we live are highly segregated by race and ethnicity. So depending on where the workplace is located, this kind of approach might disproportionately screen out legitimate candidates of certain races, even though that may not be the intent.
Jonathan Slowik: Right. And even though this is something that you could do manually – a hiring manager could just decide to toss out all the resumes from a certain zip code – doing this with technology increases the risk. So again, a hiring manager doing this manually might start to notice a pattern at some point and realize that this screening criterion was creating an unrepresentative pool. The difference with using software to do this kind of thing is that it can be done at scale very quickly and only show you the output. And so, the same hiring manager doing this with technology might screen out mostly racial minorities and have no idea that that was even the case. All right. Next hypothetical. What if an employer uses a tool that tries to verify candidates’ backgrounds by cross-referencing social media, and then boosts candidates whose backgrounds are verifiable in that way? Any issues with that one?
Guy Brenner: Well, the one that comes to mind is, I mean, I don’t think this is a controversial proposition that, generally speaking, younger applicants are more active on social media than older applicants. And I think that’s exacerbated depending on which platform we’re talking about.
Jonathan Slowik: So we actually have data on that. So it’s not a stereotype. Pew Research has issued data confirming what I think all of us suspect.
Guy Brenner: Right. And so it’s not hard to imagine an enterprising plaintiff’s lawyer arguing that a screening tool like this may have a disparate impact on older applicants. I would also be concerned if the scoring takes into account other information on social media pages that could be used as a proxy for discriminatory decisions.
Jonathan Slowik: Okay, one more hypothetical. Suppose an employer trying to fill positions for a call center uses a test that tries to predict whether the applicant would be adept at handling distractions under typical working conditions. And suppose this call center includes a lot of background noise. So this is clearly a screening mechanism that’s testing something job related. The employer wants to see how this person is going to perform under the conditions we expect them to be placed in when we actually put them in the job. Is there any problem with this kind of test?
Guy Brenner: Well, first, like any other test, you’d want to know if the test itself has any disparate impact on any particular group, and you would want to have it validated. But I would also want to know if the company had considered whether some applicants would be entitled to a reasonable accommodation. For example, you can imagine someone who’s neurodiverse performing poorly on this type of simulation, but doing just fine if they were provided with some noise-canceling headphones.
Jonathan Slowik: For sure. And this is something the EEOC has issued guidance about. Many of these types of job skills simulations are designed to test an applicant’s ability to perform tasks assuming typical working conditions, as the employer did in this example. But what the EEOC has made clear is that many employees with disabilities don’t work under typical working conditions, because they work with reasonable accommodations. So for that reason, over-reliance on the test without considering the impact on people with disabilities, and whether the test should allow for accommodations, is potentially problematic.
Guy Brenner: Well, thanks, Jonathan, and to those listening, thank you for joining us on The Proskauer Brief today. We hope you found this series informative. And please note that as developments warrant, we will be recording new podcasts to help you stay on top of this fascinating and ever-changing area of the law and technology.
 

Thinking Like a Lawyer: Agentic AI and the New Legal Playbook

In the 20th century, mastering “thinking like a lawyer” meant developing a rigorous, precedent-driven mindset. Today, we find ourselves on the cusp of yet another evolution in legal thinking—one driven by agentic AI models that can plan, deliberate, and solve problems in ways that rival and complement human expertise.
In this article, we’ll explore how agentic reasoning powers cutting-edge AI like OpenAI’s o1 and o3, as well as DeepSeek’s R1 model. We’ll also look at a technical approach, the Mixture of Experts (MoE) architecture, that makes these models adept at “thinking” through complex legal questions. Finally, we’ll connect the dots for practicing attorneys, showing how embracing agentic AI can boost profitability, improve efficiency, and elevate legal practice in an ever-competitive marketplace.

The Business of Law Meets Agentic Reasoning

Legal practice is as much about economics as it is about jurisprudence. When Richard Susskind speaks of technology forcing lawyers to reconsider traditional business models, or when Ethan Mollick highlights the way AI can empower us with co-intelligence, they’re tapping into the same reality: law firms are businesses first and foremost. Profit margins and client satisfaction matter, and integrating agentic AI is quickly becoming a competitive imperative.
Still, many lawyers hesitate, fearing automation will erode billable hours or overshadow human expertise. The key is to realize that agentic AI tools – tools that can autonomously plan, analyze, and even execute tasks – don’t aim to replace lawyers. Instead, they empower lawyers to practice at a higher level. By offloading rote tasks to AI, legal professionals gain the freedom to focus on nuanced advocacy, strategic thinking, and relationship-building.

A Quick Tour: o1, o3, and DeepSeek R1

OpenAI’s o1: Laying the Agentic Foundation
Introduced in September 2024, o1 marked a significant leap forward in AI’s reasoning capabilities. Its defining feature is its “private chain of thought,” an internal deliberation process that allows it to tackle problems step by step before generating a final output. This approach is akin to an associate who silently sketches out arguments on a legal pad before presenting a polished brief to the partner.
This internal “thinking” has proven especially useful in scientific, mathematical, and legal reasoning tasks, where superficial pattern-matching often falls short. The trade-off? Increased computational demands and slightly slower response times. But for most law firms, especially those dealing with complex litigation or regulatory analysis, accuracy often trumps speed.
OpenAI’s o3: Pushing Boundaries
Building on o1, o3 arrived in December 2024 with even stronger agentic capabilities. Designed to dedicate more deliberation time to each query, o3 consistently outperforms o1 in coding, mathematics, and scientific benchmarks. For lawyers, this improvement translates to more thorough statutory analysis, contract drafting, and fewer oversights in due diligence.
One highlight is o3’s performance on the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). It scores nearly three times higher than o1, underscoring the leap in its ability to handle abstract reasoning, akin to spotting hidden legal issues or anticipating an opponent’s argument.
DeepSeek R1: The Open-Source Challenger
January 2025 saw the release of DeepSeek R1, an open-source model from a Chinese AI startup. With performance on key benchmarks (like the American Invitational Mathematics Examination and Codeforces) that exceeds o1’s but falls just shy of o3’s, DeepSeek R1 has quickly attracted viral attention. Perhaps its biggest draw is cost-effectiveness: it’s reportedly 90-95% cheaper than o1. That kind of pricing is hard to ignore, especially for smaller firms or legal tech startups that need powerful AI without breaking the bank. DeepSeek R1’s open-source license also opens the door to customization: imagine a specialized “legal edition” any firm can adapt.
The market impact has been swift: DeepSeek R1’s launch catapulted its associated app to the top of the Apple App Store and triggered a sell-off in AI tech stocks. This frenzy underscores a critical lesson: the world of AI is volatile, competitive, and global. Law firms shouldn’t pin their entire strategy on a single vendor or model; instead, they should stay agile, ready to explore whichever AI solution best fits their needs.

How Agentic Reasoning Actually Works

All these models—o1, o3, and DeepSeek R1—share a common thread: agentic reasoning. They’re built to do more than just respond; they deliberate. Picture an AI “intern” that doesn’t just copy-and-paste from a template but weighs the merits of different statutes, checks your prior briefs, and even flags contradictory language before you finalize a contract.
But how do they manage this level of autonomy under the hood? Enter the Mixture of Experts (MoE) architecture.
Mixture of Experts (MoE) Architecture

Experts: Think of each expert as a specialized “mini-model” focusing on a single domain—perhaps case law parsing, contract drafting, or statutory interpretation.
Gating Mechanism: This is the brains of the operation. Upon receiving an input (e.g., “Draft a motion to compel in a federal product liability case”), the gating system selects the subset of experts most capable of handling that task.

The process is akin to sending your question to the right department in a law firm: corporate experts for an M&A agreement, litigation experts for a discovery motion. By activating only the relevant experts for a given task, the AI remains computationally efficient, scaling easily without ballooning resource needs. This sparse activation mirrors an attorney’s own approach to problem-solving; you don’t bring in your tax partner for a maritime dispute, and you don’t put your entire legal team on every single project.
For agentic reasoning, MoE models shine because they allow the AI to break down multi-faceted tasks into manageable chunks, using the best “sub-models” for each piece. In other words, the AI can autonomously plan which mini-experts to consult, deliberate internally on their advice, and then execute a cohesive final output, much like a senior partner synthesizing input from various practice groups into one winning brief.
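To make the gating idea more concrete, here is a deliberately simplified Python sketch of how a gate might route a legal query to a small set of specialized “experts.” The expert names, the keyword-based scoring, and the top-k cutoff are illustrative assumptions for this article, not a description of how o1, o3, or DeepSeek R1 actually work; in a real MoE model the gate is a learned neural network that scores neural sub-networks rather than matching keywords.

```python
# Toy illustration of Mixture-of-Experts routing for intuition only.
# The experts, keywords, and gating rule below are hypothetical stand-ins,
# not the architecture of any real model.

from typing import Callable

# Each "expert" stands in for a specialized sub-model.
EXPERTS: dict[str, Callable[[str], str]] = {
    "case_law": lambda q: f"[case-law expert] analyzing precedent relevant to: {q}",
    "contracts": lambda q: f"[contracts expert] reviewing clause language in: {q}",
    "statutes": lambda q: f"[statutory expert] interpreting provisions for: {q}",
}

# In a real MoE model the gate is a trained network; here we fake its scores
# with keyword overlap so the routing logic is easy to follow.
KEYWORDS = {
    "case_law": {"precedent", "motion", "verdict", "liability"},
    "contracts": {"contract", "clause", "agreement", "drafting"},
    "statutes": {"statute", "regulation", "compliance", "provision"},
}

def gate(query: str, top_k: int = 2) -> list[str]:
    """Score each expert against the query and activate at most top_k of them."""
    words = set(query.lower().split())
    scores = {name: len(words & kws) for name, kws in KEYWORDS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Sparse activation: only the best-matching experts run; the rest stay idle.
    return [name for name in ranked[:top_k] if scores[name] > 0] or ranked[:1]

def answer(query: str) -> list[str]:
    """Route the query to the selected experts and collect their outputs."""
    return [EXPERTS[name](query) for name in gate(query)]

if __name__ == "__main__":
    for line in answer("Draft a motion to compel in a federal product liability case"):
        print(line)
```

The takeaway is the sparse activation: only the best-matching experts do any work on a given query, which is what lets an MoE system scale its range of “expertise” without a matching increase in compute.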

Practical Impacts on Legal Workflows

Research and Drafting
Lawyers spend countless hours researching regulations and precedents. With agentic AI, that time shrinks dramatically. For instance, an MoE-based system could route textual queries to the “case law expert” while simultaneously consulting a “regulatory expert.” The gating mechanism ensures each question goes to the sub-model best suited to answer it. That means more accurate, tailored research in less time.
Document Review and Due Diligence
High-stakes M&A deals or massive litigation cases involve reviewing thousands of pages of documents. Agentic AI can quickly triage which documents to flag for deeper human review, finding hidden clauses or issues that might otherwise take an associate weeks to spot. The result? Faster, cheaper due diligence that can be billed in alternative ways: flat fees, success fees, or other value-based structures, enhancing client satisfaction and firm profitability.
Strategic Advisory
Perhaps the most exciting application is strategic planning. By running different hypothetical arguments or settlement options through an agentic model, attorneys can gain insights into possible outcomes. Imagine a “simulation-expert” sub-model that compares potential trial outcomes based on past jury verdicts, local court rules, and judge profiles. While final decisions rest with the lawyer (and client), AI offers a data-driven edge in deciding whether to settle, proceed, or counter-offer.

Profitability: Beyond the Billable Hour

One of the biggest hurdles to adopting AI is the fear that automated tasks will reduce billable hours. But consider how value-based billing or flat-fee arrangements can transform the equation. If AI cuts a 10-hour research task down to 2, you can offer clients a predictable cost and still maintain or even improve your margin. Clients often prefer certainty, and they value speed if it means resolving matters sooner.
Additionally, adopting agentic AI can allow your firm to take on more cases or offer new services, like real-time compliance monitoring or rapid contract generation. Scaling your practice to handle more volume without expanding headcount can be a powerful revenue driver.

The Human Element: Lawyers as Conductors

Agentic AI models are not a substitute for the judgment, empathy, and moral responsibility that define great lawyering. Rather, think of AI as your personal ensemble of experts, each playing a specialized instrument. You remain the conductor, guiding the orchestra to create a harmonious legal argument or transaction.
If anything, the lawyer’s role becomes more vital in an AI-driven world. Your expertise ensures the AI’s recommendations make sense in the real world of courts, regulations, and human relationships. Your ethical obligations and professional standards guarantee that client confidentiality is safeguarded, conflicts of interest are managed, and justice is served.
Closing Thoughts
The real paradigm shift here comes from recognizing how AI agents, powered by a Mixture of Experts architecture, can function like a fully staffed legal team, all contained within a single system. Picture a virtual army of associates, each specialized in a key practice area, with tasks dynamically routed to the right “expert.” The result? A law firm that can harness collective knowledge at scale, ensuring top-notch work product and drastically reducing turnaround times.
Rather than replacing human talent, this approach enhances it. Lawyers can channel their energy into strategic thinking, client relationships, and creative advocacy, those tasks that define the very essence of the profession. Meanwhile, agentic AI handles heavy lifting in research, analysis, and repetitive drafting, enabling teams to serve more clients, tackle more complex matters, and ultimately become more impactful and profitable than ever before.
Far from an existential threat, these AI advancements offer us the freedom to practice law at its best, delivering deeper insights with greater efficiency. In embracing these technologies, we build a future where legal professionals can make more meaningful contributions to both their firms and the broader society they serve.