New York Proposal to Protect Workers Displaced by Artificial Intelligence

On 14 January 2025, during her State of the State Address (the Address), New York Governor Kathy Hochul announced a new proposal aimed at supporting workers displaced by artificial intelligence (AI).1 This proposal would require employers to disclose whether AI tools played a role in mass layoffs or closings subject to New York’s Worker Adjustment and Retraining Notification Act (NY WARN Act). Governor Hochul announced that she is directing the New York State Department of Labor (DOL) to implement and enforce this requirement. The DOL does not have a timeline for implementing the new requirement, and Labor Commissioner Roberta Reardon acknowledged that “defining what counts as an AI-related layoff would be a challenge [to implementation].”2
In the Address, Governor Hochul acknowledged the benefits of AI, stating, “[innovations in AI] have the ability to change the way businesses operate, leading to greater efficiency, fewer business disruptions, and increased responsiveness to customer needs.” However, implementing AI tools in the workplace increases automation, which may result in job losses, wage stagnation or decline, reduced hiring, lower job satisfaction, and skill obsolescence—all of which are major concerns for US workers.3
The primary goals of imposing these employer disclosures are to: (i) aid transparency and gather data on the impact of AI technologies on employment and employees; and (ii) ensure the integration of AI tools into the workforce creates an environment where workers can thrive.
Implications for Employers
Disclosure Requirement
Employers in New York will need to disclose in their NY WARN Act notices whether layoffs are due to the implementation of AI tools replacing employees. 
Scope
While specific details about the scope of the new disclosure requirement are not yet available, employers should prepare for this additional obligation as part of the existing complex notice requirements under the NY WARN Act.4
Compliance
Employers contemplating a NY WARN Act-triggering event should consult with legal counsel to ensure compliance with these disclosure requirements and expanded NY WARN Act obligations.
NY WARN Act
The Worker Adjustment and Retraining Notification Act (WARN Act) is a federal law that requires covered employers to provide employees with 60-day advance notice before closing a plant or conducting a mass layoff.5 The purpose of the WARN Act is to give workers and their families time to adjust to potential layoffs and to seek or train for new jobs.6 New York is one of 18 states with its own “mini-WARN Act.” The NY WARN Act imposes stricter requirements than the federal WARN Act. For example, the NY WARN Act applies to employers with 50 or more employees, while the federal WARN Act applies to employers with 100 or more employees. The NY WARN Act also requires a 90-day advance notice, compared to the 60-day notice required under federal law. These early warning notices of closures and layoffs are provided to affected employees, their representatives, the New York State Department of Labor, and local officials. If Governor Hochul’s proposal is implemented, NY WARN Act notices will also need to include the required AI disclosure.
Takeaways for Employers
Employers should be well versed in how AI tools are being used and the impact they are having on workers, especially if such impacts may lead to mass layoffs. Specifically, legal and human resources leaders should understand how the business is automating certain processes through AI tools and the implications those tools have for headcount requirements, employee job satisfaction, and morale.
Our Labor, Employment, and Workplace Safety lawyers regularly counsel clients on a wide variety of emerging issues in labor, employment, and workplace safety law, and are well positioned to provide guidance and assistance to clients on AI developments.
Footnotes

1 https://www.governor.ny.gov/news/governor-hochul-announces-new-proposals-support-small-businesses-and-boost-economic-growth
2 https://news.bloomberglaw.com/product/blaw/bloomberglawnews/exp/eyJpZCI6IjAwMDAwMTk0LTcxNTYtZDIzYy1hYmZjLTc1ZmU5NDhiMDAwMSIsImN0eHQiOiJETE5XIiwidXVpZCI6ImhqMGRvcTNKdGdrSkpKckZyL01QaUE9PU9seW0rTExPbVdiODlZZ1N6aWtDZHc9PSIsInRpbWUiOiIxNzM3Mzc0NjI2MDg5Iiwic2lnIjoiYTZXMnkwZnczcGZ3SnVpdlFrclV0S3FERFlnPSIsInYiOiIxIn0=?source=newsletter&item=body-link&region=text-section&channel=daily-labor-report
3 https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity#:~:text=Roughly%20half%20the%20exposed%20jobs,of%20these%20jobs%20may%20disappear; https://cepr.org/voxeu/columns/workers-responses-threat-automation.
4 12 NYCRR Part 92
5 https://www.dol.gov/general/topic/termination/plantclosings
6 Id.

Are Employees Receiving Regular Data Protection Training? Are They AI Literate?

Employee security awareness training is a best practice and a “reasonable safeguard” for protecting the privacy and security of an organization’s sensitive data. The list of data privacy and cybersecurity laws mandating employee data protection training continues to grow and now includes the EU AI Act. The following list is a high-level sample of employee training obligations. 
EU AI Act. Effective February 2, 2025, Article 4 of the Act requires that all providers and deployers of AI models or systems ensure their workforce is “AI literate.” This means training workforce members to achieve a sufficient level of AI literacy, taking into account factors such as the intended use of the AI system. Training should incorporate privacy and security awareness given the potential risks. Notably, the Act applies broadly and has extraterritorial reach. As a result, this training obligation may apply to organizations including, but not limited to:

providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country (e.g., U.S.);
deployers of AI systems that have their place of establishment or are located within the Union; and
providers and deployers of AI systems that have their place of establishment or are located in a third country (e.g., U.S.), where the output produced by the AI system is used in the Union.

California Consumer Privacy Act, as amended (CCPA). Cal. Code Regs. tit. 11, § 7100 requires that all individuals responsible for the business’s compliance with the CCPA, or involved in handling consumer inquiries about the business’s information practices, be informed of all of the requirements in the CCPA, including how to direct consumers to exercise their rights under the CCPA. Under the CCPA, “consumer” means a California resident and includes employees, job applicants, and individuals whose personal data is collected in the business-to-business context.
HIPAA. Under HIPAA, a covered entity or business associate must provide HIPAA privacy training as well as security awareness training to all workforce members. Note that this training requirement may apply to employers in their role as a plan sponsor of a self-insured health plan.
Massachusetts WISP law (201 CMR 17.03). Organizations that own or license personal information about a resident of the Commonwealth are subject to a duty to protect that information. This duty includes implementing a written information security program that addresses ongoing employee training.
23 NYCRR 500. The New York Department of Financial Services’ cybersecurity regulation for financial services companies requires covered entities to provide cybersecurity personnel with cybersecurity updates and sufficient training to address relevant cybersecurity risks.
Gramm-Leach-Bliley Act and the Safeguards Rule. The Safeguards Rule requires covered financial institutions to implement a written information security program to safeguard non-public information. The program must include employee security awareness training. In 2023, the FTC expanded the definition of financial institutions to include additional industries such as automotive dealerships and retailers that process financial transactions. 
EU General Data Protection Regulation (“EU GDPR”). Under Art. 39 of the EU GDPR, the tasks of a Data Protection Officer include training staff involved in the organization’s data processing activities.
In addition to the above, express or implied security awareness training obligations appear in numerous other laws and regulations, applying to, among others, certain Department of Homeland Security contractors, licensees under state insurance laws modeled on the NAIC Insurance Data Security Model Law, and organizations that process credit card payments in accordance with PCI DSS.
Whether mandated by law or implemented as a best practice, ongoing employee privacy and security training plays a key role in safeguarding an organization’s sensitive data. Responsibility for protecting data is no longer the sole province of IT professionals. All workforce members with access to the organization’s sensitive data and information systems share that responsibility. And various stakeholders, including HR professionals, play a vital role in supporting that training.

Fair Use Falls Short: Judge Bibas Rejects AI Training Data Defense in Thomson Reuters v. ROSS

Fair use — a critical defense in copyright law that allows limited use of copyrighted material without permission — has emerged as a key battleground in the wave of artificial intelligence (AI) copyright litigation. In a significant revision of his earlier position, Judge Stephanos Bibas of the United States District Court for the District of Delaware has dealt a blow to artificial intelligence companies by rejecting this defense in Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc.
Fair use serves as a safety valve in copyright law, permitting uses of copyrighted works for purposes such as criticism, commentary, news reporting, teaching, scholarship, or research. Courts evaluate fair use through four factors, with particular emphasis on whether the use transforms the original work and how it affects the market for the copyrighted work.
The case originated when Thomson Reuters sued ROSS Intelligence for using content from Westlaw — the legal research platform’s headnotes and proprietary Key Number System — to train an AI-powered legal research competitor. ROSS had initially sought to license Westlaw content, but when Thomson Reuters refused, ROSS turned to a third-party company, LegalEase Solutions. LegalEase created approximately 25,000 legal question-and-answer pairs, allegedly derived from Westlaw’s copyrighted content, which ROSS then used to train its AI system.
In September 2023, Bibas denied Thomson Reuters’ motion for summary judgment on fair use, finding that the question was heavily fact-dependent and required jury determination. He particularly emphasized factual disputes about whether ROSS’s use was transformative and how it affected potential markets.
However, in August 2024, Bibas took the unusual step of continuing (postponing) the scheduled trial and inviting renewed summary judgment briefing. The resulting opinion represents a dramatic shift, explicitly acknowledging that his “prior opinion wrongly concluded that I had to send this factor to a jury.” While noting that fair use involves mixed questions of law and fact, Bibas recognized that the ultimate determination in this case “primarily involves legal work.”
Bibas’ unusual move to invite renewed briefing stemmed from his realization, upon deeper study of fair use doctrine, that his earlier ruling afforded too much weight to factual disputes and did not fully account for how courts should assess transformative use in AI-related cases. Rather than viewing transformation through the lens of whether ROSS created a novel product, Bibas recognized that the key question was whether ROSS’s use of Thomson Reuters’ content served substantially the same purpose as the original works.
He concluded that while fair use involves factual elements, the dispositive questions in the case were ultimately legal ones appropriate for resolution by the court. His key analytical shifts included rejecting the notion that ROSS’s use might be transformative merely because it created a “brand-new research platform.”
Critically, Bibas distinguished ROSS’s use from cases like Google v. Oracle where copying was necessary to access underlying functional elements. While Google needed to copy Java API code to enable interoperability between software programs, ROSS had no similar technical necessity to copy Westlaw’s headnotes and organizational system. ROSS could have developed its own legal summaries and classification scheme to train its AI — it simply found it more expedient to build upon Thomson Reuters’ existing work.
The concept of “transformative” use lies at the heart of the fair use analysis. This principle was recently examined by the Supreme Court in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, where the Court significantly narrowed the scope of transformative use under the first fair use factor. The Court held that when two works serve “substantially the same purpose,” the fact that the second work may add “new expression, meaning, or message” is not enough to tip the first factor in favor of fair use.
This framework for analyzing transformative use provides important context for understanding how courts may evaluate AI companies’ fair use defenses. AI companies have consistently argued that their use of copyrighted materials for training data is transformative because the AI systems learn patterns and relationships from the works rather than reproducing their expressive content. They contend that this process fundamentally transforms the original works’ purpose and character. However, Bibas’ ruling suggests courts may be increasingly skeptical of such arguments, particularly when the resulting AI products compete in similar markets to the original works.
While this ruling represents a setback for one of the key defenses believed to be available to AI companies in copyright litigation, fair use is only one of several defenses these companies are raising in more than 30 pending lawsuits. Other defenses include arguments about copyrightability, substantial similarity, and whether training data uses constitute copying at all. A weakening of the fair use defense, while significant, does not necessarily predict the ultimate outcome of these cases.
Additionally, this case involved a particularly direct form of market competition — an AI system trained on legal content to compete with the original legal research platform. Other cases involving different types of training data or AI applications that don’t directly compete with the source materials might be distinguished. For instance, an AI trained on literary works to generate news articles might present a more compelling case for transformative use since the end product serves a fundamentally different purpose than the training data.
Nevertheless, Bibas’ ruling may alter how AI companies approach training data acquisition. If other courts follow his lead in viewing fair use primarily as a legal rather than factual determination, these companies may explore licensing agreements with copyright holders — a process that some have already undertaken.

Clock is Ticking for Responses to UK Government Consultation on Copyright and Artificial Intelligence

Ever since the emergence of generative AI, a major concern for all participants has been the extent to which copyright works can and should be used in training AI models.
The application of UK copyright law for this purpose is disputed, leading to inevitable high-profile tension between, on one hand, rights holders keen to control and be paid for use of their work and, on the other, developers who argue that this legal uncertainty is undermining investment in, and the development of, AI in the UK.
Whilst cases are making their way through the courts in the UK and further afield (such as in Germany[1] and the US[2]) on this issue, there have been frequent calls for specific legislation (including by the UK government itself, which has publicly stated that the status quo cannot continue).
As a result, the UK government has launched a consultation[3] open until 25 February 2025 inviting interested parties to submit feedback on potential changes to UK copyright legislation in light of AI. The options set out in the consultation, and on which feedback is sought, range from doing nothing through to the introduction of broad data mining rights which would allow use of copyright works for AI training (including for commercial use), without rights holders’ permission and subject to few or no restrictions.
The Options
The consultation invites feedback on four potential options under consideration:

Do nothing and leave UK copyright and other related laws as they are – essentially this would defer the matter to the courts to resolve on a piecemeal and ad-hoc basis. Whilst feedback on this option is invited, the consultation makes clear this is not an option looked upon favourably by the government, given that it would prolong the current legal uncertainty.
The opt-in model requiring licensing in all cases – this would strengthen copyright protection for rights holders by providing that AI models could only be trained on copyright works in the UK if an express licence to do so has been granted. This option is likely to be popular with rightsholders, but at odds with the government’s desire to turbocharge the AI economy in the UK.
A broad data mining exception – this would follow a similar approach already seen in Singapore[4] (and to some extent in the US under its “fair use” standards) and allow data mining on copyrighted works in the UK, including for AI training, without the rights holder’s permission. Under this approach, copyrighted material could be used for commercial purposes, and would be subject to few, if any, restrictions. Needless to say, this option is likely to be very popular with AI developers but is the least favoured by rights holders.
Allow data mining but permit copyright holders to reserve their rights, along with increased transparency measures – this is the middle ground between options 2 and 3, and would allow AI developers to train AI models using material to which they have lawful access, but only to the extent that rights holders have not expressly reserved their rights. Any such use would also be subject to robust transparency measures requiring developers to be transparent about what material their AI models have been trained on. For rights holders, this means an “opt-out” as opposed to an “opt-in” model and proactive monitoring to identify unauthorised use.

The Unanswered Questions
Option 4 broadly follows the approach already seen in the EU under the not uncontroversial text and data mining exception in the EU Directive on the Digital Single Market,[5] as further enhanced by the EU AI Act,[6] which declared these text and data mining exceptions applicable to general-purpose AI models. The government’s view expressed in the consultation is that option 4 is the option which would balance the rights of all participants, although the EU approach was rejected by the previous government as a threat to rights holders’ interests.
However, at this stage it does not represent a “silver bullet” as many issues would still need to be resolved, including those set out below:

It is unclear how a “rights reserved” model would work in practice and how exactly copyright owners would be able to reserve their rights. The EU equivalent provision requires opt-outs to be machine readable, but query how this works once multiple copies are available. There is also the question of what “machine-readable” means in the context of machines designed to read anything (including handwriting).
How would such a model apply to works that are already publicly available and how does it address works which have been previously used to train current AI models? It is uncommon for legislation to have retroactive effect. This would then leave it open to debate what will happen with works that have been mined prior to the effective date of the legislation, and would leave early entrants into the AI market with a huge advantage (or disadvantage) depending on what shape any future legislation and court cases take.
Does this apply to works in non-digital formats? The EU legislation on data mining specifically refers to “automated analytical technique aimed at analysing text and data in digital form.” But how does this apply to books which are scanned in (a process which Google went through many years ago)?
What happens to AI models if all or a significant volume of rights holders opt-out? An AI opt-out could soon become ubiquitous at which point developers could find themselves wading through a significant volume of claims making the UK an unpopular location for AI development.
How will rights holders know that their material has been used? The consultation states that robust measures will be put in place to ensure that developers are transparent about the works their models are trained on, but what will be the penalties for failing to be transparent and will there be robust enforcement against non-compliance?
To what extent can and should any new legislation have extraterritorial application? With many major AI players headquartered outside of the UK, any new legislation which is limited to those based in the UK may have limited impact and an increased legislative burden in the UK could make it a less attractive location for AI businesses.

Ultimately the outcome may be collective licensing deals between rights holders and AI developers as has already happened for a number of news outlets and websites. However, that will be reliant on collective will and action by rights holders and a willingness to embrace AI, which so far has not been forthcoming.
[1] Breaking News from Germany! Hamburg District Court breaks new ground with judgment on the use of copyrighted material as AI training data | Global IP & Technology Law Blog
[2] Copyright Office: Copyrighting AI-Generated Works Requires “Sufficient Human Control Over the Expressive Elements” – Prompts Are Not Enough | Global IP & Technology Law Blog
[3] Copyright and Artificial Intelligence – GOV.UK
[4] Artificial Intelligence and Intellectual Property Legal Frameworks in the Asia-Pacific Region | Global IP & Technology Law Blog
[5] EU Directive 2019/790.
[6] EU Regulation 2024/1689.
Sumaiyah Razzaq contributed to this article

European Commission Withdraws ePrivacy Regulation and AI Liability Directive Proposals

On February 11, 2025, the European Commission made available its 2025 work program (the “Work Program”). The Work Program sets out the key strategies, action plans and legislative initiatives to be pursued by the European Commission.
As part of the Work Program, the European Commission announced that it plans to withdraw its proposals for a new ePrivacy Regulation (aimed at replacing the current ePrivacy Directive) and AI Liability Directive (aimed at complementing the new Product Liability Directive) due to a lack of consensus on their adoption. The withdrawal means that the current ePrivacy Directive and its national transposition laws will remain in force and postpones the regulation of non-contractual liability for damages arising from the use of AI at the EU level.
Read the Work Program.

State Regulators Eye AI Marketing Claims as Federal Priorities Shift

With the increase in AI-related litigation and regulatory action, it is critical for companies to monitor the AI technology landscape and think proactively about how to minimize risk. To help companies navigate these increasingly choppy waters, we’re pleased to present part two of our series, in which we turn our focus to regulators, where we’re seeing increased scrutiny at the state level amidst uncertainty at the federal level.
FTC Led the Charge but Unlikely to Continue AI “Enforcement Sweep”
As mentioned in part one of our series, last year regulators at the Federal Trade Commission (FTC) launched “Operation AI Comply,” which it described as a “new law enforcement sweep” related to using new AI technologies in misleading or deceptive ways.
In September 2024, the FTC announced five cases against AI technology providers for allegedly deceptive claims or unfair trade practices. While some of these cases involve traditional get-rich-quick schemes with an AI slant, others highlight the risks inherent in the rapid adoption of new AI technologies. Specifically, the complaints filed by the FTC involve:

An “AI lawyer” who was supposedly able to draft legal documents in the U.S. and automatically analyze a customer’s website for potential violations.
Marketing of a “risk free” business powered by AI that refused to honor money-back guarantees when the business began to fail.
Claims of a get-rich-quick scheme that attracted investors by claiming they could easily invest in online businesses “powered by artificial intelligence.”
Positioning a business opportunity supposedly powered by AI as a “surefire” investment and threatening people who attempted to share honest reviews.
An “AI writing assistant” that enabled users to quickly generate thousands of fake online reviews of their businesses.

Since these announcements, dramatic changes have occurred at the FTC (and across the federal government) as a result of the new administration. Last month, the Trump administration appointed FTC Commissioner Andrew N. Ferguson as the new FTC chair, and Mark Meador’s nomination to fill the FTC Commissioner seat left vacant by former chair Lina M. Khan appears on track for confirmation. These leadership and composition changes will likely impact whether and how the FTC pursues cases against AI technology providers.
For example, Commissioner Ferguson strongly dissented from the FTC’s complaint and consent agreement with the company that created the “AI writing assistant,” arguing that the FTC’s pursuit of the company exceeded its authority.
And in a separate opinion supporting the FTC’s action against the “AI lawyer” mentioned above, Commissioner Ferguson emphasized that the FTC does not have authority to regulate AI on a standalone basis, but only where AI technologies interact with its authority to prohibit unfair methods of competition and unfair or deceptive acts and practices.
While it is impossible to predict precisely how the FTC under the Trump administration will approach AI, Commissioner Ferguson’s prior writings provide insight into the FTC’s future regulatory focus for AI, as does Chapter 30 of Project 2025 (drafted by Adam Candeub, who served in the first Trump administration), with its emphasis on protecting children online.
The impact of the new administration’s different approach to AI regulation is not limited to the FTC and likely will affect all federal regulatory and enforcement activity. This is due in part to one of President Trump’s first executive orders, “Removing Barriers to American Leadership in Artificial Intelligence,” which “revokes certain existing AI policies and directives that act as barriers to American AI innovation.”
That order repealed the Biden administration’s 2023 executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which established guidelines for the development and use of AI. An example of this broader impact is the SEC’s proposed rule on the use of AI by broker-dealers and registered investment advisors, which is likely to be withdrawn based on the recent executive order, especially given the acting chair’s public hostility toward the rule and the emphasis on reducing securities regulation outlined in Chapter 27 of Project 2025. 
The new administration has also been outspoken in international settings regarding its view that regulating AI will give advantages to authoritarian nations in the race to develop the powerful technology.
State Attorneys General Likely to Take on Role of AI Regulation and Enforcement
Given the dramatic shifts in direction and focus at the federal level, it is likely that short-term regulatory action will increasingly shift to the states.
In fact, state attorneys general of both parties have taken recent action to regulate AI and issue guidance. As discussed in a previous client alert, Massachusetts Attorney General Andrea Campbell has emphasized that AI development and use must conform with the Massachusetts Consumer Protection Act (Chapter 93A), which prohibits practices similar to those targeted by the FTC.
In particular, she has highlighted practices such as falsely advertising the quality or usability of AI systems or misrepresenting the safety or conditions of an AI system, including representations that the AI system is free from bias.
Attorney General Campbell also recently joined a coalition of 38 other attorneys general and the Department of Justice in arguing that Google engaged in unfair methods of competition by making its AI functionality mandatory for Android devices, and by requiring publishers to share data with Google for the purposes of training its AI.
Most recently, California Attorney General Rob Bonta issued two legal advisories emphasizing that developers and users of AI technologies must comply with existing California law, including new laws that went into effect on January 1, 2025. The scope of his focus on AI seems to extend beyond competition and consumer protection laws to include laws related to civil rights, publicity, data protection, and election misinformation.
Bonta’s second advisory emphasizes that the use of AI in health care poses increased risks of harm that necessitate enhanced testing, validation, and audit requirements, potentially signaling to the health care industry that its use of AI will be an area of focus for future regulatory action.
Finally, in a notable settlement that was the first of its kind, Texas Attorney General Ken Paxton resolved allegations that an AI health care technology company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of its AI products, including error and hallucination rates.
As AI technology continues to impact consumers, we expect other attorneys general to follow suit in bringing enforcement actions based on existing consumer protection laws and future AI legislation.
Moving Forward with Caution
Recent success by plaintiffs, combined with an active focus on AI by state regulators, should encourage businesses to be thoughtfully cautious when investing in new technology. Fortunately, as we covered in our chatbot alert, there are a wide range of measures businesses can take to reduce risk, both during the due diligence process and upon implementing new technologies, including AI technologies, notwithstanding the change in federal priorities. Other countries – particularly in Europe – may also continue their push to regulate AI. 
At a minimum, businesses should review their consumer-facing disclosures — usually posted on the company website — to ensure that any discussion of technology is clear, transparent, and aligned with how the business uses these technologies. Companies should expect the same transparency from their technology providers. Businesses should also be wary of so-called “AI washing,” which is the overstatement of AI capabilities and understatement of AI risks, and scrutinize representations to business partners, consumers, and investors.
Future alerts in this series will cover:

Risk mitigation steps companies can take when vetting and adopting new AI-based technologies, including chatbots, virtual assistants, speech analytics, and predictive analytics.
Strategies for companies that find themselves in court or in front of a regulator with respect to their use of AI-based technologies.

The BR International Trade Report: February 2025

Recent Developments
President Trump drives forward with “America First” trade policy. Shortly after taking office on January 20, President Trump issued a memorandum to various department heads outlining his “America First” trade policy. Notably, the memorandum paves the way for robust tariffs and calls for executive branch review of various elements of U.S. trade policy. Read our alert for additional analysis. 
United States delays tariffs on imports from Canada and Mexico but imposes 10 percent tariffs on imports from China. On February 1, President Trump, acting under the authority of the International Emergency Economic Powers Act (“IEEPA”), imposed a 25 percent tariff on imports from Canada and Mexico (excluding energy resources from Canada, which were subject to a tariff of 10 percent) and a 10 percent tariff on imports from China. After first threatening to respond in kind—with retaliatory tariffs or other measures—both Canada and Mexico negotiated a 30-day pause in exchange for increased enforcement measures at America’s borders. There was no similar agreement between the United States and China, which became subject to additional tariffs on February 4. Notably, the president initially eliminated the de minimis exemption for certain Chinese-origin imports of items valued under $800, but then later reinstated the exemption.
President Trump announces 25 percent tariff on all steel and aluminum imports entering the United States. On February 10, President Trump signed a proclamation imposing 25 percent tariffs on imports of steel and aluminum from all countries and cancelling previous tariff exemptions. Peter Navarro, a trade advisor to the president, remarked that “[t]he steel and aluminum tariffs 2.0 will put an end to foreign dumping, boost domestic production, and secure our steel and aluminum industries as the backbone and pillar industries of America’s economic and national security.” The new tariffs will take effect on March 12. 
President Trump announces reciprocal tariff regime. On February 13, the president paved the way for what he called “the big one,” reciprocal tariffs directed against countries that impose trade barriers on the United States. Under the new framework, the United States will impose tariffs on imports from countries that levy tariffs on imports of U.S. goods, maintain a value-added tax (“VAT”) system, issue certain subsidies, or implement “nonmonetary trade barriers” against the United States. The president stated that the U.S. Department of Commerce will conduct an assessment, expected to be completed by April 1, to determine the appropriate tariff level for each country.
President Trump sets tariff sights on European Union. President Trump has said he “absolutely” plans to impose tariffs on goods from the European Union to address what he considers “terrible” treatment on trade. In an effort to stave off such measures, the European Union reportedly has offered to lower tariffs on imports of U.S. automobiles. Experts suggest that, in the event of U.S. tariffs, the European Union may retaliate with countermeasures against U.S. technology services. 
Trump and Putin discuss commencing negotiations to end the war in Ukraine. President Trump stated on February 12 that he had a “lengthy and productive” phone call with Russian President Vladimir Putin in which the two leaders discussed “start[ing] negotiations immediately” and “visiting each other’s nations.” The president followed up with a call to Ukrainian President Volodymyr Zelensky, who reported that the call was “meaningful” and focused on “opportunities to achieve peace.” The dialogue comes amidst Russia and Belarus releasing American detainees in recent days.
President Trump and Indian Prime Minister Narendra Modi meet to discuss deepening cooperation. On January 27, President Trump spoke with Indian Prime Minister Narendra Modi to discuss regional security issues, including in the Indo-Pacific, the Middle East, and Europe. Notably, following the phone call, India cut import duties on certain U.S.-origin motorcycles, potentially in an effort to distance itself from President Trump’s claims on the campaign trail that India was a “very big abuser” of the U.S.-India trade relationship. Prime Minister Modi followed up the discussion with a meeting with President Trump at the White House on February 13.
Secretary of State Marco Rubio meets with “Quad” ministers on President Trump’s first full day in office. On January 21, foreign ministers of the “Quad”—a diplomatic partnership between the United States, India, Japan and Australia—convened in Washington, D.C. In a joint statement, the group expressed its opposition to “unilateral actions that seek to change the status quo [in the Indo-Pacific] by force or coercion.”
U.S. Secretary of State Marco Rubio meets with Panamanian President José Raúl Mulino. In early February, Secretary of State Marco Rubio traveled to Panama to meet with Panama’s President José Raúl Mulino and Foreign Minister Javier Martínez-Acha. During the meeting, Secretary Rubio criticized Chinese “influence and control” over the Panama Canal area. Notably, following the meeting with Secretary Rubio, Panama announced that it would let its involvement in China’s Belt and Road initiative expire.
DeepSeek launches an artificial intelligence app, prompting U.S. national security concerns. In January, DeepSeek—a Chinese artificial intelligence (“AI”) startup—released DeepSeek R1, an AI app reportedly less expensive to develop than rival apps. Reports indicate that the United States is investigating whether DeepSeek, in developing its platform, accessed AI chips subject to U.S. export controls in contravention of U.S. law. Commerce Secretary nominee Howard Lutnick echoed these concerns in his recent confirmation hearing.
President Trump issues memorandum launching “maximum pressure” campaign against Iran. On February 4, the president issued a National Security Presidential Memorandum (“NSPM”) restoring his prior administration’s “maximum pressure” policy towards Iran, with a focus on denying Iran a nuclear weapon and intercontinental ballistic missiles. The NSPM directs the U.S. Department of the Treasury and the U.S. Department of State to take various measures exerting such pressure, including imposing sanctions or pursuing enforcement against parties that have violated sanctions against Iran; reviewing all aspects of U.S. sanctions regulations and guidance that provide economic relief to Iran; issuing updated guidance to the shipping and insurance sectors and to port operators; modifying or rescinding sanctions waivers, including those related to Iran’s Chabahar port project (which India has developed at considerable expense); and “driv[ing] Iran’s export of oil to zero.” See the White House fact sheet.
President Trump signs executive order calling for establishment of a U.S. sovereign wealth fund. On February 3, the president issued an executive order directing the Secretary of the Treasury, the Secretary of Commerce, and the Assistant to the President for Economic Policy to develop a plan for the creation of a sovereign wealth fund. A corresponding fact sheet describes the White House’s goals for the fund, including “to invest in great national endeavors for the benefit of all of the American people.” Treasury Secretary Scott Bessent stated that he expects the fund to be operational within the next year.
Dispute between the United States and Colombia over deportation flights prompts brief tariff threat. On January 26, Colombian President Gustavo Petro barred “U.S. planes carrying Colombian migrants from entering [Colombia’s] territory” due to concerns over migrants’ treatment. President Trump responded by ordering 25 percent tariffs on Colombian goods, to be raised to 50 percent in one week, visa restrictions on Colombian government officials and their families, and cancellation of visa applications. The standoff between the two countries was resolved later that same day, signaling President Trump’s intention to use tariffs as a key foreign policy tool. 
Impeached South Korean President Yoon Suk Yeol officially charged with insurrection. On January 26, South Korean prosecutors formally charged impeached President Yoon Suk Yeol with insurrection. Yoon becomes the first president in South Korean history to be criminally charged while still in office. In addition to criminal charges, Yoon faces potential removal from office via impeachment. Should the Constitutional Court uphold the impeachment, as many experts anticipate, South Korea will have two months to hold a new election.

Global Data Protection Authorities Issue Joint Statement on Artificial Intelligence

On February 11, 2025, the data protection authorities (“DPAs”) of the UK, Ireland, France, South Korea and Australia issued a joint statement on building trustworthy data governance frameworks to encourage development of innovative and privacy-protective artificial intelligence (“AI”) (the “Joint Statement”). In the Joint Statement, the DPAs recognize “the importance of supporting players in the AI ecosystem in their efforts to comply with data protection and privacy rules and help them reconcile innovation with respect for individuals’ rights.”
The Joint Statement refers to the “leading role” DPAs have in “shaping data governance” to address the evolving challenges of AI. Specifically, the Joint Statement indicates that the authorities will commit to:

Foster a shared understanding of lawful grounds for processing personal data in the context of AI training.
Exchange information and establish a shared understanding of proportionate safety measures, to be updated in line with evolving AI data processing activities.
Monitor technical and societal impacts of AI.
Reduce legal uncertainties and create opportunities for innovation where data processing is considered essential for AI.
Strengthen interactions with other authorities to enhance consistency between different regulatory frameworks for AI systems, tools and applications, including those responsible for competition, consumer protection and intellectual property.

Read the full Joint Statement.

Minnesota AG Publishes Report on the Negative Effects of AI and Social Media Use on Minors

On February 4, 2025, the Minnesota Attorney General published the second volume of a report outlining the negative effects that AI and social media use is having on minors in Minnesota (the “Report”). The Report examines the harms experienced by minors caused by certain design features of these emerging technologies and advocates for legislation that would impose design specifications for such technologies.
Key findings from the Report include:

Minors are experiencing online harassment, bullying and unwanted contact as a result of their use of AI and social media.
Social media and AI platforms are enabling misuse of user information and images.
Lack of default privacy settings in these technologies is resulting in user manipulation and fraud.
Social media and AI platforms are designed to optimize user attention in ways that negatively impact minor users’ wellbeing.
Opt-out options generally have not been effective in addressing these harms.

In the final section of the Report, the Minnesota AG sets forth a number of recommendations to address the identified harms, including:

Develop policies that regulate technology design functions, rather than content published on such technologies.
Prohibit the use of dark patterns that compel certain user behavior (e.g., infinite scroll, auto-play, constant notifications).
Provide users with tools to limit deceptive design features.
Mandate a privacy by default approach for such technologies.
Limit engagement-based optimization algorithms designed to increase time spent on platforms.
Advocate for limited technology use in educational settings.

Thomson Reuters Wins Copyright Case Against Former AI Competitor

Thomson Reuters scored a major victory in one of the first cases dealing with the legality of using copyrighted data to train artificial intelligence (AI) models. In 2020, Thomson Reuters sued the now-defunct AI start-up Ross Intelligence for alleged improper use of Thomson Reuters materials, including case headnotes from its Westlaw search engine, to train Ross’s new AI model.
A key issue before the court was whether Ross Intelligence’s use of the headnotes constituted fair use, which permits a person to use portions of another’s work in limited circumstances without infringing the copyright. Courts weigh four factors to determine whether a defendant can successfully invoke the fair use defense: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion of the work that was copied; and (4) the effect of the defendant’s use on the market for, or value of, the work.
In this case, federal judge Stephanos Bibas determined that each side had two factors in its favor. But the fourth factor, which supported Thomson Reuters, weighed most heavily in his finding that the fair use defense was inapplicable because Ross Intelligence sought to develop a competing product. Lawsuits against other companies, like OpenAI and Microsoft, are currently pending in courts throughout the country, and decisions in those cases may involve similar questions about the fair use defense. However, Judge Bibas noted that Ross Intelligence’s AI model was not generative and that his decision addressed only that non-generative model. The distinction between the training data and resulting outputs of generative and non-generative AI will likely be key to deciding future cases.

Three States Ban DeepSeek Use on State Devices and Networks

New York, Texas, and Virginia are the first states to ban DeepSeek, the Chinese-owned generative artificial intelligence (AI) application, on state-owned devices and networks.
Texas was first to tackle the problem when it banned state employees from using both DeepSeek and RedNote on January 31, 2025. The Texas ban includes other apps affiliated with the People’s Republic of China, including “Webull, Tiger Brokers, Moomoo[,] and Lemon8.”
According to the Texas Governor’s press release:
“Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. To achieve that mission, I ordered Texas state agencies to ban Chinese government-based AI and social media apps from all state-issued devices. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.” 

New York soon followed on February 10, 2025, banning DeepSeek from being downloaded on any devices managed by the New York State Office of Information Technology. According to the New York Governor’s release, “DeepSeek is an AI start-up founded and owned by High-Flyer, a stock trading firm based in the People’s Republic of China. Serious concerns have been raised concerning DeepSeek AI’s connection to foreign government surveillance and censorship, including how DeepSeek can be used to harvest user data and steal technology secrets.” The release further states: “The decision by Governor Hochul to prevent downloads of DeepSeek is consistent with the State’s Acceptable Use of Artificial Intelligence Technologies policy that was established at her direction over a year ago to responsibly evaluate AI systems, better serve New Yorkers, and ensure agencies remain vigilant about protecting against unwanted outcomes.”
The Virginia Governor signed Executive Order 26 on February 11, 2025, “banning the use of China’s DeepSeek AI on state devices and state-run networks.” According to the Governor’s press release, “China’s DeepSeek AI poses a threat to the security and safety of the citizens of the Commonwealth of Virginia…We must continue to take steps to safeguard our operations and information from the Chinese Communist Party. This executive order is an important part of that undertaking.”
The ban “directs that no employee of any agency of the Commonwealth of Virginia shall download or use the DeepSeek AI application on any government-issued devices, including state-issued cell phones, laptops, or other devices capable of connecting to the internet. The Order further prohibits downloading or accessing the DeepSeek AI app on Commonwealth networks.”
These three states determined that the Chinese-owned applications DeepSeek and RedNote pose threats by granting a foreign adversary access to critical infrastructure data. The proactive bans by these states will no doubt be followed by others, much as state TikTok bans spread before the federal government, on a bipartisan basis, enacted one nationwide. President Trump has paused that ban, despite the well-documented national security threats posed by the social media platform. Hopefully, more states will follow suit in banning DeepSeek and RedNote. Consumers and employers can take matters into their own hands by not downloading either app and banning them from the workplace. Get ahead of the curve, learn from the TikTok experience, and avoid DeepSeek and RedNote now.

USCO Releases Part 2 of its Report on Copyright and Artificial Intelligence

On January 29, 2025, the Copyright Office (USCO) issued the second of its three-part report, Copyright and Artificial Intelligence, relating to its study of how copyright law and policy should respond to the development and use of artificial intelligence. The report grows out of the USCO’s Notice of Inquiry issued in August of 2023, which garnered over 10,000 responses from the public, including from businesses, trade associations, academics, non-profits, artists, and computer scientists. Part 2 deals with the copyrightability of AI-generated materials. Part 1, issued on December 16, 2024, discussed “digital replicas” (digital depictions of an individual) and proposed a new federal law to fill the gap in coverage of existing copyright law and other aligned areas such as rights of publicity. Part 3, expected in early 2025, will discuss the use of copyrighted works for training AI models.
In Part 2, the USCO concludes that existing law on copyrightability adequately accommodates AI-generated output and accordingly makes no recommendation for legislative action. The Copyright Act grants authors limited monopolies on their creations to assure that they have sufficient incentive to create and thereby to enrich culture (i.e., in the language of Article I, Section 8, “to promote the Progress of Science and useful Arts”). As interpreted, authors must be humans. Thaler v. Perlmutter, 687 F. Supp. 3d 140, 149–50 (D.D.C. 2023), Notice of Appeal, No. 23-5233 (D.C. Cir. October 18, 2023) (argued September 19, 2024).
Although the requirement of human authorship would appear, at present, to preclude AI-generated works from receiving copyright protection under the Copyright Act, the USCO considered the pros and cons of creating a new, perhaps more limited, sui generis protection targeted at AI-generated works. In support of such protection, a robust repository of work, even if created by non-human actors, would arguably further the goal of promoting the progress of science and the arts. Such protection might also encourage many people (e.g., non-professionals) to participate in the excitement of creation and to express themselves, and even monetize those expressions, in ways they never dreamed possible.
The USCO was ultimately not persuaded by these arguments. Focusing initially on the AI technology itself, the USCO noted that, unlike humans, machines, software, and algorithms need no incentive to create and therefore need no protection. The USCO was also cautious about increasing incentives to rely on AI-generated works and the concomitant creation of a synthetically diluted culture at the expense of a robust and perpetually renewing repository of human creativity. See Part 2, at 36-37. Citing as evidence the recent challenges faced by writers and musicians, the USCO was also sympathetic to concerns that such reliance might dampen the ability of human creators to monetize their works and thereby degrade the societal incentive to create. The USCO determined that the case of persons with disabilities did not require a different result. While expressing strong support for the empowerment of all creators, the USCO noted that AI is used as an assistive technology to facilitate human creation. Copyright protection remains available so long as AI is used as a tool to recast, transform, or adapt an author’s expression and not to generate that expression. The USCO provided the example of singer Randy Travis, who received a copyright registration for a song he created after suffering a stroke, under circumstances where he used AI to recreate his voice and help realize the musical sounds that he and his musical team desired. Part 2, at 37-38.
Although a work must be “authored” by a human to receive protection, humans can play a variety of roles in creating AI-generated works. When does human activity rise to authorship? The USCO devoted a large part of its discussion, perhaps the most interesting part, to this question, the answer to which, as explained by the USCO, is rooted in copyright law’s distinction between ideas and expression. Only the latter is protected under copyright law (17 U.S.C. § 102(b)).
AI output is typically generated by user inputs (“prompts”), often in the form of text (e.g., “draw an image of dolphins at a birthday party”). Typical prompts, according to the USCO, contain merely unprotectible ideas that are then translated by AI into expression. These translation processes are largely random (“black box”) processes that are not predictable or understood by the user. The USCO noted that frequently the output includes many items that were not specified and excludes items that were, and the same prompt often can yield very different results. Viewed in this light, the AI process is akin to the novel writer who translates the high-level suggestions of his or her editor, or the scriptwriter who follows the general ideas and suggestions of a movie treatment. The suggestions of the editor do not make him or her an author of the novel, and the treatment does not make its creator an author of the script for the movie. Community for Creative Non-Violence v. Reid, 490 U.S. 730, 737 (1989) (“person who translates an idea into a fixed, tangible expression entitled to copyright protection”) (emphasis added), cited in Part 2 at 9; cf., Andrien v. Southern Ocean County Chamber of Commerce, 927 F.2d 132, 135 (3d Cir. 1991) cited in Part 2 at 9 (“[A] party can be considered an author when his or her expression of an idea is transposed by mechanical or rote transcription into tangible form under the authority of the party”) (emphasis added); Milano v. NBC Universal, Inc., 584 F. Supp. 2d 1288, 1294 (C.D. Cal. 2008), citing and quoting from Berkic v. Crichton, 761 F.2d 1289, 1293 (9th Cir.1985) (distinguishing a television treatment from “the actual concrete elements that make up the total sequence of events and the relationships between the major characters”). 
Importantly, even a detailed string of prompts, involving a selection and adoption process leading iteratively to the final output, does not, according to the USCO, confer the requisite control for authorship. As stated in Part 2 (at 20):
“Repeatedly revising prompts does not change this analysis or provide a sufficient basis for claiming copyright in the output. . . . By revising and submitting prompts multiple times, the user is “re-rolling” the dice, causing the system to generate more outputs from which to select, but not altering the degree of control over the process. No matter how many times a prompt is revised and resubmitted the final output reflects the user’s acceptance of the AI system’s interpretation, rather than authorship of the expression it contains.”
This conclusion may be hard for some to swallow. The iterative process more or less reflects the way many artists work, adopting small random acts on the page or canvas into integral parts of their work. The action painting of Jackson Pollock (referenced in Part 2), involving the dripping of paint onto a canvas, is only a more extreme example of this regular phenomenon. While acknowledging a certain randomness in both prompting and the artistic process, the USCO distinguished the latter by the control that the artist, unlike the AI user, exerts physically over a process that is transparent and understood. “Jackson Pollock’s process of creation,” said the USCO, “did not end with his vision of a work. He controlled the choice of colors, number of layers, depth of texture, placement of each addition to the overall composition – and used his own body movements to execute each of these choices.” Part 2 at 20.
The USCO’s approach differs markedly from that of a highly publicized case in China, cited by the USCO at 28-29, which accorded copyright protection to an image created using Stable Diffusion and recognized the person using the AI tool as the author. In addition to making subsequent adjustments and modifications, the “author” used over 150 prompts to refine the image. As distinguished from the USCO’s position on iterative prompts, the court in that case considered the use of prompts as evidence of the control and creativity exerted by the author. The spirit of the Chinese court’s ruling is in striking contrast to the USCO’s:
“Therefore, when people use an AI model to generate pictures, there is no question about who is the creator. In essence, it is a process of man using tools to create, that is, it is man who does intellectual investment throughout the creation process, [not the] AI model. The core purpose of the copyright system is to encourage creation. And creation and AI technology can only prosper by properly applying the copyright system and using the legal means to encourage more people to use the latest tools to create. Under such context, as long as the AI-generated images can reflect people’s original intellectual investment, they should be recognized as works and protected by the Copyright Law.”
Li v. Liu, Dispute over Copyright Infringement of the Right of Attribution and Right of Information Network Distribution of works (as translated), at 13 (Beijing Internet Ct. November 27, 2023).
Various statutes in the UK, Hong Kong, New Zealand, and India, though enacted before the proliferation of AI over the last couple of years, allow computer-generated works to be copyrightable in the name of the person causing the creation of the work. The USCO noted that it remains to be seen how foreign countries interpret and apply their laws, how they harmonize their laws among themselves, and how content creators and information technology developers respond to these emerging laws. Part 2, at 29. Finally, with respect to its own conclusions on prompts, the USCO left open the possibility that advances in technology giving humans more control over generated content might merit a reevaluation of the conclusions in the report. Part 2, at 21.
Less nuanced, and perhaps less debatable, than its conclusions with respect to prompts were some of the USCO’s other conclusions. For example, even if a particular output might not be protectible, the creativity embodied in a modification of an AI output (the derivative aspects) may well be protectible. Likewise, if certain AI-generated components are themselves not protectible, a human compilation of those components may well be (e.g., organizing/compiling the AI-generated frames of a graphic novel or comic book). Additionally, the USCO noted that creative prompts (e.g., a drawing combining flowers and a human head) may result in a protectible AI image to the extent that the image transcribes the prompt. Finally, the USCO saw no need to enhance protections for AI technologies themselves and noted that first-mover incentives, along with existing copyright, patent, and trade secret protections for such technologies, are sufficient.
For entities in the U.S. wanting to assure ownership of their creative works, the USCO’s position on AI-generated content might induce a bit of caution in using AI in the creative process, and it may offer some relief, at least at the margins, for people whose livelihood depends on their human creativity. Of course, ultimately it will be Congress and the courts that decide the direction of U.S. law.