FDA Dumps Trio of Device-Related Guidances Prior to Administration Change

Among the wave of guidance documents issued by the U.S. Food and Drug Administration (“FDA” or the “Agency”) in the first week of 2025 were three notable draft guidance documents pertaining to medical devices (together, the “Draft Guidances”). The Draft Guidances hit on the topics of in vitro diagnostic (“IVD”) devices, artificial intelligence (“AI”) enabled device software functions, and pulse oximeters. This uncharacteristic deluge of guidance all within the span of a week illustrates the Agency’s desire to disseminate policy ahead of the incoming administration – especially as it relates to medical devices, which for a variety of reasons that any follower of this blog could intuit, have become a hot-button issue across the various corners of the healthcare and life sciences industries.
I. In Vitro Diagnostic Devices
On January 6, FDA released a draft guidance titled “Validation of Certain In Vitro Diagnostic Devices for Emerging Pathogens During a Section 564 Declared Emergency” (the “IVD Draft Guidance”).[1] This guidance aims to provide a framework for manufacturers to efficiently validate IVDs for emerging pathogens – part of FDA’s continuing effort to lay the groundwork for a timely and effective response to future public health emergencies. FDA is inviting comments to the Draft Guidance with a deadline set for March 7, 2025.
A. Background
The Food, Drug, and Cosmetic Act (“FD&C Act”) grants FDA authority to facilitate the availability and use of medical countermeasures (“MCMs”) to address chemical, biological, radiological, and nuclear threats to the nation’s public health.[2] This power is referred to as Emergency Use Authorization (“EUA”) and allows FDA to authorize the use of certain unapproved medical products if the Secretary of Health and Human Services (the “Secretary”) declares that justifying circumstances exist. FDA has used EUA to authorize emergency use of IVDs for eight infectious diseases over the years – most recently and notably, for COVID-19.
During COVID-19, FDA had to play catch-up by issuing enforcement discretion policies, through guidance, for certain unauthorized tests to help rapidly increase testing capacity on a nationwide scale – meaning certain tests were made available without EUA. Whether tests are authorized through EUA or described in enforcement discretion policies, the key concern for FDA is that these tests are properly validated. To this end, FDA can, and has, taken appropriate action against tests lacking the proper validation. In the IVD Draft Guidance, FDA provides recommendations for test validation so that IVD manufacturers have a framework to efficiently secure authorization under EUA, and get much-needed tests to the public, in the event of a new infectious disease outbreak.
B. Takeaways
The IVD Draft Guidance is clearly underscored by a desire to be better prepared to provide efficient, safe, and effective testing in the event of another disease outbreak like COVID-19 – in fact, FDA says as much in the guidance itself. What FDA does not explicitly say, but what could also explain the Agency’s timing in issuing the guidance when it did, is a concern about how the incoming administration might handle such an outbreak in terms of testing and therapeutics, given some of the discourse we’ve heard to date.
Aside from emergency preparation, the IVD Draft Guidance also underscores FDA’s concerns about the efficacy of IVDs, generally, especially those that are subject to abbreviated validation standards. For example, last year, the Agency issued a lengthy (and controversial) final rule outlining a plan to end its previous policy of enforcement discretion for laboratory-developed tests (“LDTs”) – a subset of IVDs – based on over a decade of concerns over the efficacy of these tests, which have historically not been subject to any oversight, including validation standards, from FDA at all. The framework outlined in the IVD Draft Guidance similarly addresses concerns over the efficacy of testing during emergency scenarios, when manufacturers are subject to urgent time constraints and abbreviated EUA standards.
II. AI-Enabled Device Software
On January 7, FDA released a draft guidance titled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations” (the “AI Draft Guidance”).[3] This is only the third guidance FDA has released related to Artificial Intelligence (“AI”), and the second relating specifically to AI-enabled device software functions.[4] The AI Draft Guidance was expected, as it appeared on FDA’s Center for Devices and Radiological Health’s (“CDRH”) “A-list” of guidances to publish in Fiscal Year 2025.[5] The AI Draft Guidance signifies an acknowledgement by FDA of the need to keep pace – as best it can – with technological advancements in the medical device space, particularly in light of the attention and concern swirling around AI use. FDA is inviting comments to the Draft Guidance with a deadline set for April 7, 2025.
A. Background
The rapid advance of AI technology in recent years has significantly influenced the development and implementation of medical device software functions. Device manufacturers are increasingly integrating these AI-enabled device software functions (“AI-DSFs”) to enhance diagnostic, monitoring, and treatment capabilities. In recent years, FDA has focused on promoting a Total Product Life Cycle (“TPLC”) approach to the oversight of AI-DSF, which emphasizes ongoing management and continuous improvement of AI-enabled devices both pre- and post-market according to guiding principles for Good Machine Learning Practice (“GMLP”), to ensure that AI-DSF remain safe and effective from design through decommissioning. In the AI Draft Guidance, FDA continues its effort by establishing lifecycle management and marketing submission recommendations for AI-DSF and, as always, encouraging early interaction with FDA to ensure the development of a safe and effective product for patients.
B. Takeaways
In its AI Draft Guidance, FDA makes clear that the integration of AI modeling into medical device software is being scrutinized in a way that at least parallels, or even exceeds, the oversight given to general device software functions. The AI Draft Guidance sets forth many FDA recommendations for lifecycle management, suggesting that a large overhaul is needed to come up to speed with rapidly evolving AI development. Significantly, the scope of the AI Draft Guidance includes the device itself and any device constituent parts of a combination product, which may include AI-DSFs.
Much of the AI Draft Guidance focuses on premarket notification (e.g., 510(k)) submissions for devices that include AI-DSFs, notably requiring a thorough explanation of how AI is integrated into the device. In light of all the uncertainties surrounding AI use and its seemingly unlimited application, FDA seems to be looking for some level of assurance that AI-DSF developers will properly leverage parameters and safeguards so that this “unlimited” use potential does not transform a device beyond its cleared and/or approved intended use.
Another key focus of the AI Draft Guidance is transparency and bias reduction. This is a typical, and growing, area of concern for FDA when it comes to devices that collect and store information; however, incorporating AI complicates the issue because of the unknown risk of bias. Specifically, FDA notes that AI models may rely on data correlations and other machine learning-derived processes that do not connect to biologically plausible mechanisms of action. Therefore, while AI’s strength is its adaptability, risk lies in the fact that its decision-making processes are not fully predictable. To mitigate this risk, FDA provides a recommended design approach to transparency throughout the product lifecycle, especially with respect to data collection and monitoring.
Another key focus of the AI Draft Guidance – and another growing area of concern for FDA and stakeholders alike – is cybersecurity. Here, FDA builds off of its 2023 guidance (“2023 Guidance”), which addressed, more generally, cybersecurity in medical devices,[6] to contextualize it within the AI sphere. The application of AI to medical device software adds a new layer of security concern because, if AI systems are hacked or accessed, the consequences can be much more widespread. To mitigate this risk, FDA provides comprehensive, AI-specific recommendations for handling cybersecurity threats, while also deferring to the 2023 Guidance for the complete framework that should be implemented prior to marketing submission.
The emergence of AI necessitates that FDA alter its long-standing framework for ensuring the safety and efficacy of medical devices in light of the unique way that AI-enabled device functions operate – and the AI Draft Guidance illustrates FDA’s continued recognition of, and response to this need.
III. Pulse Oximeters for Medical Purposes
On January 7, FDA released a draft guidance titled “Pulse Oximeters for Medical Purposes – Non-Clinical and Clinical Performance Testing, Labeling, and Premarket Submission Recommendations” (the “PO Draft Guidance”),[7] which provides recommendations for performance testing, labeling, and premarket submissions of pulse oximeters. Once finalized, the PO Draft Guidance will supersede FDA’s existing pulse oximeter guidance, which was issued on March 4, 2013 (“2013 Guidance”).[8] FDA is inviting comments to the PO Draft Guidance with a deadline set for March 10, 2025.
A. Background
In recent years, there has been growing concern over the accuracy of readings from pulse oximeters, which are devices that measure the amount of oxygen in arterial blood and pulse rate.[9] In addressing this concern, FDA found that a host of factors affects the accuracy of pulse oximeter readings, especially a person’s skin pigmentation. In light of this particular concern, FDA engaged interested parties, and partnered with the University of California San Francisco, as part of the Centers of Excellence in Regulatory Science and Innovation (“CERSI”) program, to conduct a study comparing pulse oximeter errors in clinical patients with varying skin tones. Based on results from this study, as well as input from interested stakeholders, FDA has created enhanced recommendations for marketing submissions to ensure that pulse oximeters used as standalone medical devices, or as part of a multi-parameter medical device, accurately fulfill their intended use. This comprehensive list of marketing submission recommendations is laid out in the new PO Draft Guidance and includes enhanced clinical performance testing procedures that specifically account for disparities in performance across different populations, such as diverse skin tones and pediatric populations.
FDA is showing that it is concerned not only with whether the device performs its intended function accurately, but whether that performance is consistent across all patient populations. Significantly, FDA is exhibiting concern regarding the diversity of patient populations across this country, urging manufacturers to ensure accuracy for all.
B. Takeaways
This new PO Draft Guidance underscores FDA’s continuing commitment to ensuring that regulated products are safe and effective for all individuals – not only those majority populations who have typically been the subject of clinical testing and validation. It is incumbent on manufacturers, FDA, and providers to ensure that medical devices perform properly for each and every person, irrespective of differences in identifying characteristics. And where a certain product has not been tested on and/or cannot be confirmed safe and effective for a certain population, FDA is clear that this limitation needs to be made known to prescribers and end users by limiting the product’s intended use and associated labeling. Bottom line – we can’t have patients relying on products that do not operate safely and/or specifically for them and others like them.
Conclusion
The common thread among these device-specific Draft Guidances is an emphasis on early collaboration with FDA to get ahead of certain identified issues that pose public health threats—an infectious disease emergency, an unmanageable and/or unsecure AI algorithm, or a test result that was not clinically verified for a certain patient’s skin tone. Now, we have heard this refrain before, especially on the drug side of the house—we often roll our eyes when we see it, given that the Agency holds the ultimate power in just about any facet of inquiry, decision-making, and enforcement. But given the blistering speed with which these technologies have been and will continue to develop, FDA’s entreaties here might mean something more.
Indeed, despite the myriad other critical issues that FDA needs to address, it is clear that CDRH policymakers did not intend for devices to fall by the wayside as the administration changed guard. Whatever happens over the coming months, all eyes in our industry will be on FDA policy—in guidance, enforcement, or otherwise.

FOOTNOTES
[1] IVD Draft Guidance available here: Validation of Certain In Vitro Diagnostic Devices for Emerging Pathogens During a Section 564 Declared Emergency | FDA
[2] FD&C Act Section 564 available here: 21 U.S.C. 360bbb-3.
[3] AI Draft Guidance available here: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations | FDA
[4] See December 2024 guidance available here: Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions | FDA; January 2025 draft guidance available here: Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products | FDA
[5] 2025 A-List available here: CDRH Proposed Guidances for Fiscal Year 2025 (FY2025) | FDA
[6] 2023 Guidance available here: Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions | FDA
[7] PO Draft Guidance available here: Pulse Oximeters for Medical Purposes – Non-Clinical and Clinical Performance Testing, Labeling, and Premarket Submission Recommendations | FDA
[8] 2013 Guidance available here: Pulse Oximeters – Premarket Notification Submissions [510(k)s]: Guidance for Industry and Food and Drug Administration Staff | FDA
[9] See, e.g., Pulse Oximeter Accuracy and Limitations: FDA Safety Communication, FDA (Feb. 19, 2021); Multistate Letter Urging FDA to Address Concerns about Dangerous Pulse Oximeter Inaccuracies Impacting Communities of Color, Cal. Atty. Gen. (Nov. 1, 2023).

Is the Future of Digital Assets in the United States Bright Again?

Yes, indeed! What Brad Garlinghouse of Ripple Labs called “Gensler’s reign of terror” ended with Securities and Exchange Commission (SEC) Chair Gary Gensler’s resignation upon President Donald Trump’s inauguration. Paul Atkins, who has co-chaired the Token Alliance, spoke of the need for a “change of course” at the SEC and will be given charge of the SEC when he is confirmed as its new Chairman.
While the greatest deliberative body takes time to exercise its constitutional role of advice and consent, President Trump and Acting SEC Chairman Mark Uyeda are moving ahead at lightning speed, each taking action in the first week of the new administration. The long-awaited paradigm shift in regulation for digital assets is here and the market likes what it sees, with Bitcoin now trading near an all-time high and the total market capitalization of digital assets topping the US$3 trillion mark. Projects are once again being funded in—and development teams are returning to—the United States.
The day after his inauguration, President Trump signed an Executive Order, Strengthening American Leadership in Digital Finance Technology, aiming to “support the responsible growth and use of digital assets, blockchain technology, and related technologies across all sectors of the economy.” This comes on the heels of a newly announced Crypto Task Force at the SEC, dedicated to developing a comprehensive and clear regulatory framework for digital assets, including “crypto” assets.
The Executive Order
In his Executive Order, President Trump points to the crucial role that the digital assets industry plays in the innovation and economic development of the United States, declaring it to be the policy of his administration to:

Protect and promote public blockchain networks, mining and validating, and self-custody of digital assets.
Protect and promote the U.S. dollar by promoting stablecoins worldwide.
Provide regulatory clarity and certainty built on technology-neutral regulations, including well-defined jurisdictional regulatory boundaries.

President Trump’s 2025 Executive Order revokes former President Biden’s 2022 Executive Order regarding crypto assets and orders the Secretary of the Treasury to likewise revoke all prior inconsistent Treasury policies.
Most significantly, the Executive Order establishes the “President’s Working Group of Digital Asset Markets” to be chaired by the “Special Advisor for AI and Crypto,” Silicon Valley venture capitalist David Sacks, who is sometimes called the “Crypto Czar.” Its Executive Director will be “Bo” Hines of North Carolina. The Working Group will consist of specified officials (or their designees) such as the Secretaries of the Treasury, Commerce, and Homeland Security, the Attorney General, the Director of the Office of Management and Budget, the Homeland Security Advisor, and the Chairs of the SEC and the Commodity Futures Trading Commission (CFTC).
The Working Group has been charged to hit the ground running:

By February 22, 2025, the Treasury, DOJ, SEC and other relevant agencies included in the Working Group shall identify all regulations, guidance documents, orders, or other items that affect the digital assets sector. In other words, what has the federal government done so far?
By March 24, 2025, each agency shall submit recommendations with respect to whether each identified regulation, guidance document, order, or other item should be rescinded or modified, or, for items other than regulations, adopted in a regulation.
By the end of last week, the SEC had already rescinded Staff Accounting Bulletin 121, an especially troubling piece of guidance that the SEC never approved and that Congress had sought to overturn but former President Biden retained. SAB 121 required crypto custodial banks to carry customer assets on their balance sheets—something required for no other asset. Upon rescinding SAB 121, SEC Commissioner Hester Peirce tweeted, “Bye, bye SAB 121! It’s not been fun.” Another piece of SEC guidance that might be on the chopping block is the so-called “Framework for ‘Investment Contract’ Analysis of Digital Assets,” which has confounded the digital assets industry since it was first adopted.
By July 22, 2025, the Working Group shall submit a report to the President recommending regulatory and legislative proposals that advance the policies established in the order. In particular:

The Working Group will propose a federal regulatory framework governing the issuance and operation of digital assets, including stablecoins, in the United States. The Working Group’s report shall consider provisions for market structure, oversight, consumer protection, and risk management.
The Working Group will have significant choices to make in this regard: Will it back the “FIT 21” bill that has already been approved by the U.S. House of Representatives, or will it seek to chart a different course? Will it back a merger of the CFTC with the SEC? How will it reconcile the desire to support technology innovation with national security interests and investor protection?
The Working Group will evaluate the potential creation and maintenance of a national digital asset stockpile and propose criteria for establishing such a stockpile, potentially derived from cryptocurrencies lawfully seized by the federal government through its law enforcement efforts. In this regard, President Trump might be seen as having backed off his earlier promise to create a Bitcoin reserve in the United States, as it is now being considered rather than proposed for immediate adoption. The word “Bitcoin” does not appear even once in the Executive Order.

President Trump’s Executive Order also prohibits the establishment, issuance, or promotion by federal agencies of Central Bank Digital Currencies (CBDCs) within the United States or abroad, terminating any ongoing plans or initiatives related to the creation of a CBDC within the United States. The libertarians who dominate appointments in the financial services sector of the administration are strongly opposed to CBDCs, viewing them as a threat to personal liberty.
In issuing this Executive Order, President Trump fulfilled his campaign promises relating to crypto assets. In a July 27, 2024, address to the Bitcoin 2024 Conference in Nashville, he promised to “end Joe Biden’s war on crypto.” He promised:

To “fire Gary Gensler,” who resigned upon Trump’s inauguration.
To “immediately shut down Operation Chokepoint 2.0,” which he is carrying out in his order to the Department of the Treasury.
To appoint the aforementioned Working Group.
To defend the right to self-custody.
To ban CBDCs.

In the first week, we are seeing that, at least thus far, promises made are promises kept.
SEC Crypto Task Force
On the SEC side, Commissioner Hester Peirce, known as “Crypto Mom,” will head the Crypto Task Force that will work to develop a “sensible regulatory path that respects the bounds of the law.” The SEC under former President Biden used “regulation by enforcement” rather than “regulation by rulemaking and interpretation” to regulate the crypto asset industry. President Trump’s SEC has already signaled the “course correction” that Paul Atkins called for before the election. Both Commissioners Peirce and Uyeda worked for Atkins in his prior stint as an SEC Commissioner. Others have observed that the Atkins-Peirce-Uyeda “triumvirate” might be the most powerful cohort of Commissioners that the SEC has ever seen.
The SEC announcement states that the Task Force will be focused on developing clear regulatory lines, realistic paths to registration, sensible disclosure frameworks, and deploying enforcement resources judiciously. The Task Force plans to hold future roundtables and is asking for public input as well.
The day that the SEC Task Force was announced, Foley & Lardner submitted suggestions to the SEC for roundtable topics. Our suggestions included:

What Securities Act registration exemptions should be adopted to broaden market access to digital assets? An example might be the “safe harbor” that Commissioner Peirce proposed and refined, only to have it ignored by the Gensler SEC.
What guidance should the staff have given that it has failed to give? What guidance should be withdrawn? There has been no guidance about how Regulation S applies to digital asset offerings, to point out one shortcoming. The staff might have given guidance, but Chairman Gensler prohibited it, adopting the view that the SEC does not give legal advice. 
What needs to change for you to “come in and register” if you are a token “issuer”? Plainly the system is broken now, as those who have tried to register were delayed indefinitely and ultimately conceded defeat. Others, seeing this, never even tried.
What needs to change for you to “come in and register” if you are a token “dealer” or “exchange”? These questions are paramount for crypto exchanges that do business in the United States and have been sued by the SEC for failing to register.
What needs to change for you to “come in and register” your crypto brokerage firm? What more can be done for you to “come in and register” your crypto fund? How can the SEC facilitate trading in securities tokens and other tokenized assets? How can the SEC better collaborate with the CFTC regarding digital assets? What legislation should the SEC recommend for adoption by Congress? All these questions, and more, need to be addressed by the SEC, engaging the public as the answers are determined. In each case, the SEC would act consistently with its statutory mandate to protect securities investors and assure fair and orderly markets.

Next Steps
Foley has offered to assist the SEC in its consideration of these questions and expects to be involved in some capacity along the way. Likewise, we expect to make submissions to the President’s Working Group. If you would like to be represented in that process to make sure that your views are considered, please reach out to either of the authors. We are engaging with the House Financial Services Committee and the Senate Banking Committee in addition to the Trump Administration, the SEC, and the CFTC.
Similarly, if you have a development team or a product and are looking to access the U.S. digital asset markets lawfully, we are standing by to help.

5 Trends to Watch in 2025: AI and the Israeli Market

Israel’s AI sector to continue emerging as a pillar of the country’s tech ecosystem. Currently, approximately 25% of Israel’s tech startups are dedicated to artificial intelligence, according to The Jerusalem Post, with these companies attracting 47% of the total investments in the tech sector (Startup Nation Finder). This strong presence highlights Israel’s focus on AI-driven innovation and entrepreneurs’ belief in the growth opportunities related to AI. The Israeli AI market is expected to grow at a compound annual growth rate of 28.33% from 2024 through 2030, reaching a value of $4.6 billion by 2030 (Statista). This growth is driven by increasing demand for AI applications across diverse industries such as health care, cybersecurity, and fintech. Government-backed initiatives, including the National AI Program, play a critical role in supporting startups by providing accessible and non-dilutive funding for research and development (R&D) purposes. Despite facing significant challenges since the start of the war in Gaza, Israel has continued to produce cutting-edge technologies that are getting the attention of global markets. Additionally, Israel’s highly skilled workforce and partnerships with academic institutions provide a steady supply of talent to meet the sector’s demands. With innovation, resilience, and collaboration at its core, the Israeli AI landscape is poised to remain a global force in 2025 and beyond.
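As a back-of-the-envelope illustration of what the cited compound annual growth rate implies (the variable names below are our own, not Statista's), a 28.33% CAGR over the six compounding years from 2024 through 2030, ending at the projected $4.6 billion, can be inverted to estimate the implied starting market size:

```python
# Sanity check on the cited projection: value_2030 = value_2024 * (1 + CAGR)^years,
# so the implied 2024 market size is value_2030 / (1 + CAGR)^years.
cagr = 0.2833          # 28.33% compound annual growth rate (Statista, as cited above)
years = 6              # 2024 through 2030
value_2030 = 4.6       # projected 2030 market value, USD billions

implied_2024 = value_2030 / (1 + cagr) ** years
print(f"Implied 2024 market size: ${implied_2024:.2f}B")  # roughly $1.03B
```

The figures are internally consistent: a market a little over $1 billion today compounding at roughly 28% per year reaches approximately $4.6 billion by 2030.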
Mergers and acquisitions to remain a cornerstone of deals. According to IVC Research Center, 47 Israeli AI companies successfully completed exits in 2024, showcasing the global demand for AI-driven innovation. Investors are continually identifying the differences between companies whose foundations were built on AI, versus those leveraging AI to enhance other core elements of their value proposition—sometimes only marginally. Savvy buyers look beyond the “AI label” and seek out companies with genuine, scalable AI solutions rather than superficial integrations, understanding that value lies in robust and transformative applications. AI is also sector agnostic and may disrupt virtually every vertical. From health care and finance to retail and manufacturing and others, numerous industries are increasingly leveraging AI to enhance or even change their core competency to gain competitive advantages. Deals in this space are coming from strategics such as automobile manufacturers, banks, digital marketing companies and life science firms, among others. As AI continues to permeate multiple sectors, Israeli companies are poised to receive increased attention from strategic M&A buyers looking to unlock new technologies and business opportunities in the market.
Intersection of PropTech and AI to further revolutionize the global real estate industry. Israeli innovation is expected to be at the forefront of this trend. According to IVC Research Center, over 70 PropTech companies headquartered in Israel are leveraging AI to develop cutting-edge technologies that are reshaping the industry on a global scale. We anticipate these companies will continue advancing AI-driven tools and third-party solutions to streamline acquisition strategies, enhance underwriting processes, and drive operational efficiencies. By harnessing AI to identify leasing opportunities, forecast rental trends, and optimize costs, Israeli PropTech firms are set to solidify their position as global leaders in real estate innovation in the year ahead.
AI to become increasingly important across global industries. Israeli companies have demonstrated genuine thought/R&D leadership in AI innovation. Some of the AI-centric legal trends that may stand out in 2025 include (1) a greater focus on data rights management as Agentic AI continues to carve new learning standards; (2) regulatory advancements in science, highlighted by two AI-related Nobel Prizes in science, that will likely materialize in the U.S. Food and Drug Administration adopting new rules for AI-driven drug approvals, as well as new AI patenting standards and requirements; (3) greater emphasis on responsible AI usage, particularly around ethics, privacy, and transparency; (4) the adoption of quantum AI across many industries, including in the area of securities trading, which will likely challenge securities regulators to address its implications; and (5) turning to AI-powered LegalTech strategies (both in Israel and in other countries). Israeli entrepreneurs are likely to continue working within each of these industries and help drive the AI transformation wave.
AI-based technology to continue changing how companies handle recruitment and hiring. While targeted advertising enables employers to find strong talent, and AI-assisted resume review facilitates an efficient focus on suitable candidates, the use of AI to identify “ideal” employees and filter out “irrelevant” applicants may actually discriminate (even if unintentionally) against certain groups protected under U.S. law (for example, women, older employees, and/or employees with certain racial profiles). In addition, AI-assisted interview analysis may inadvertently use racial or ethnic bias to eliminate certain candidates. Israeli companies doing business in the United States should not assume their AI-assisted recruitment and hiring tools used in Israel will be permitted to be utilized in the United States. Also, Israeli companies should be mindful of newly enacted legislation in certain U.S. states requiring companies to notify candidates of AI use in hiring, as well as conduct mandatory self-audits of AI-based employee recruitment and hiring systems. AI regulation on the state level in the United States is likely to increase, and Israeli companies that recruit and hire in the United States will be required to balance their use of available technology with applicable U.S. legal constraints.

Latest Changes to ISS and Glass Lewis Proxy Voting Guidelines

Institutional Shareholder Services (ISS) and Glass Lewis, two leading proxy advisory firms, recently announced updates to their U.S. proxy voting policies in advance of the 2025 proxy and annual meeting season. Public companies need to consider how these updates could impact voting recommendations and any governance changes that could be implemented to improve the likelihood of favorable recommendations.
Background on Proxy Advisory Firms
ISS and Glass Lewis have risen to prominence for making proxy voting recommendations to their investor clients ahead of shareholder meetings for public companies. 
ISS and Glass Lewis publish their respective proxy voting guidelines and policies that describe the factors each firm will take into consideration in making voting recommendations. While these policies remain largely consistent year over year, the annual updates often address new and emerging issues, such as artificial intelligence, or revise or clarify existing stances on evolving matters of corporate governance, such as executive compensation, director independence, and environmental, social, and governance (ESG) policies and disclosures. These changes can be based on a number of factors, such as changing shareholder attitudes, new legislation or exchange rules, or general industry trends.
These proxy voting guidelines can be used by institutional investors as either determinative or informative of their voting decisions, and the recommendations by ISS and Glass Lewis can significantly sway the outcome of shareholder voting proposals.
2025 ISS Proxy Voting Guideline Changes
Executive Compensation
In addition to revisions to its proxy voting guidelines, ISS also updated its FAQs on executive compensation policies:

Computation of Realizable Pay – The realizable pay chart will not be displayed for companies that have experienced multiple (two or more) CEO changes within the three-year measurement period.
Pay-for-Performance Qualitative Review – ISS will place greater focus on performance-vesting equity disclosures and plan designs, subjecting existing qualitative considerations around performance equity programs to greater scrutiny in the context of a quantitative pay-for-performance misalignment. ISS provided a non-exhaustive list of typical considerations for such analysis, including:

Non-disclosure of forward-looking goals (note: retrospective disclosure of goals at the end of the performance period will carry less mitigating weight than it has in prior years);
Poor disclosure of closing-cycle vesting results;
Poor disclosure of the rationale for metric changes, metric adjustments or program design;
Unusually large pay opportunities, including maximum vesting opportunities;
Non-rigorous goals that do not appear to strongly incentivize for outperformance; and/or
Overly complex performance equity structures.

Evaluation of Incentive Program Metrics – ISS reaffirmed its stance that it does not favor total shareholder return (TSR) or any specific metric in executive incentive plans, holding that the board and its compensation committee are best suited to choose metrics that lead to long-term shareholder value. However, ISS acknowledged that shareholders prefer an emphasis on objective metrics that lead to increased transparency in compensation decisions. In evaluating the metrics of an incentive program, ISS may consider several factors, including:

Whether the program emphasizes objective metrics linked to quantifiable goals, as opposed to highly subjective or discretionary metrics;
The rationale for selecting metrics, including the linkage to company strategy and shareholder value;
The rationale for atypical metrics or significant metric changes from the prior year; and/or
The clarity of disclosure around adjustments for non-GAAP metrics, including the impact on payouts.

Changes to In-Progress Incentive Programs – ISS reiterated its position against midstream changes to ongoing incentive programs (such as metrics, performance targets, and/or measurement periods). Similar to other kinds of unusual pay program interventions, ISS states that companies should disclose a compelling rationale for such actions and explain how they do not circumvent pay-for-performance outcomes.
Robust Clawback Policies – This year, ISS added a new FAQ concerning the requirements for a clawback policy to be considered “robust” under the “Executive Compensation Analysis” section of the ISS research report. In order to qualify, a clawback policy must:

Extend beyond minimum Dodd-Frank requirements; and
Explicitly cover all time-vesting equity awards.

Poison Pills
ISS made significant revisions to its voting policies concerning shareholder rights plans, more commonly referred to as “poison pills,” which are used by boards of directors to prevent hostile takeovers. Currently, when considering whether or not to vote for director nominees who have adopted a short-term poison pill (one year or less) without shareholder approval, ISS evaluates director nominees on a case-by-case basis. This year, ISS revised its guidelines to increase transparency surrounding the factors considered in this evaluation.
The revised list of factors now includes:

The trigger threshold and other terms of the pill;
The disclosed rationale for the adoption;
The context in which the pill was adopted (e.g., factors such as the company’s size and stage of development, sudden changes in its market capitalization, and extraordinary industry-wide or macroeconomic events);
A commitment to put any renewal to a shareholder vote;
The company’s overall track record on corporate governance and responsiveness to shareholders; and
Other factors as relevant.

Natural Capital
Next, ISS renamed references to “General Environmental Proposals” as “Natural Capital-Related and/or Community Impact Assessment Proposals.” ISS also revised the list of factors it considers when voting on requests for reports on policies and/or the potential community, social, and/or environmental impact of company operations.
The revised list of factors now includes:

Alignment of current disclosure of applicable company policies, metrics, risk assessment report(s), and risk management procedures with any relevant, broadly accepted reporting frameworks;
The impact of regulatory non-compliance, litigation, remediation, or reputational loss that may be associated with failure to manage the company’s operations in question, including the management of relevant community and stakeholder relations;
The nature, purpose, and scope of the company’s operations in the specific region(s);
The degree to which company policies and procedures are consistent with industry norms; and
The scope of the resolution.

SPAC Extensions
ISS also revised its policies with respect to SPAC termination dates and extension requests. Now, ISS will generally recommend that shareholders vote in favor of requests to extend the termination date of a SPAC by up to one year from the SPAC’s original termination date, inclusive of any built-in extension options, and accounting for prior extension requests.
ISS may also consider the following factors:

Any added incentives;
Business combination status;
Other amendment terms; and
If applicable, use of money in the trust fund to pay excise taxes on redeemed shares.

2025 Glass Lewis Proxy Voting Guideline Changes
Approach to Executive Pay Programs
Glass Lewis provided clarification on its pay-for-performance policy to emphasize Glass Lewis’ holistic approach to analyzing executive compensation programs. Glass Lewis’ analysis reviews pay programs on a case-by-case basis, and there are few program features that, standing alone, will lead to an unfavorable recommendation from Glass Lewis on a say-on-pay proposal.
Glass Lewis does not utilize a pre-determined scorecard approach when considering individual features such as the allocation of the long-term incentive between performance-based awards and time-based awards. Unfavorable factors in executive compensation programs are reviewed in the context of rationale, overall structure, overall disclosure quality, the program’s ability to align executive pay with performance and the shareholder experience, and the trajectory of the pay program resulting from changes introduced by the board’s compensation committee, all as reflected in the compensation disclosures in the company’s proxy statement.
Additionally, while regulatory disclosure rules may allow for the omission of key executive compensation information, such as for smaller reporting companies, Glass Lewis believes that companies should use proxy statements to provide sufficient information to enable shareholders to vote in an informed manner.
Glass Lewis also revised how it identifies peer groups for its pay-for-performance model, including with reference to the peers of a company’s self-disclosed peers.
Board Oversight of Artificial Intelligence
Glass Lewis has adopted new guidelines dedicated to board oversight of AI, similar to the oversight of cybersecurity that was added in 2023. Glass Lewis believes that boards should take steps to mitigate exposure to material risks that could arise from their use or development of AI. 
In the absence of material incidents related to a company’s use or management of AI, Glass Lewis’ policy will generally not make voting recommendations on the basis of AI-related issues. However, when there is evidence that insufficient oversight and/or management of AI technologies has resulted in material harm to shareholders, Glass Lewis will review a company’s overall governance practices and identify which directors or board-level committees have been charged with oversight of AI-related risks. Glass Lewis will also closely evaluate the board’s management of this issue, as well as any associated disclosures, and may recommend against directors it deems appropriate should it find the board’s oversight, response, or disclosure concerning AI-related issues to be insufficient. Glass Lewis recommends that all companies that develop or use AI in their operations disclose the board’s role in AI oversight and how they are ensuring their directors are fully educated on this topic.
Change-in-Control Procedures
Glass Lewis also has updated its policies surrounding change-in-control provisions to clarify that companies that allow for committee discretion over the treatment of unvested awards should commit to providing a clear rationale for how such awards are treated in the event a change in control occurs. This change underscores the importance of clear disclosure surrounding equity awards.
Board Responsiveness to Shareholder Proposals
Glass Lewis revised its policy for shareholder proposals to clarify that when shareholder proposals receive “significant” shareholder support (generally more than 30%, but less than a majority of votes cast), boards should engage with shareholders on the issue and provide future disclosure addressing shareholder concerns and outreach initiatives. 
Reincorporation
Glass Lewis also revised its policy on reincorporation to reflect that Glass Lewis reviews all proposals to reincorporate to a different state or country on a case-by-case basis. Glass Lewis considers a number of factors, including the changes in corporate governance provisions, especially those relating to shareholder rights, material differences in corporate statuses and legal precedents, and relevant financial benefits, among other factors, resulting from the change in domicile.
Key Takeaways
You can find copies of the 2025 policies of ISS and Glass Lewis on their respective websites, as well as summaries of their 2025 policy updates. These policy updates will be important as public companies prepare for their 2025 proxy statements and annual shareholders’ meetings. Companies should review these voting guidelines to proactively make disclosures necessary to secure favorable voting recommendations from ISS and Glass Lewis. Companies may also want to consider changes in governance and compensation practices to decrease the likelihood of an adverse voting recommendation from ISS or Glass Lewis, although any such change should also be weighed against the overall governance needs and strategy of the company.
In addition to ISS and Glass Lewis and other third-party proxy advisory firms, companies should review the voting policies of any large institutional investors who have significant shareholdings in the company. These institutional investors often have their own voting policies that can change over time, like ISS and Glass Lewis.

President Trump’s Executive Order Steering Digital Assets Policy

As promised during his campaign, President Trump has taken significant steps to support the digital asset industry during his first week in office. On 23 January 2025, he signed an executive order initiating digital asset regulatory rollbacks and a new federal framework governing cryptocurrencies, stablecoins, and other digital assets (the Order).
On the same day, the Securities and Exchange Commission (SEC) rescinded the controversial Staff Accounting Bulletin 121, which required crypto custodians and banks to reflect digital assets in their custody as both an asset and a liability on their balance sheets. Earlier in the week, the SEC established Crypto 2.0, a crypto task force designed to provide paths for registration and reasonable disclosure frameworks, and to allocate enforcement resources “judiciously.”
The Order recognizes the role the digital asset industry serves in our economy and aims to support the responsible growth and use of digital assets by promoting dollar-backed stablecoins and providing regulatory clarity. The Order lays the groundwork for a regulatory shift furthering digital assets policy, focusing on the creation of “technology-neutral regulations” tailored to digital assets.
In addition to prohibiting agencies from facilitating any central bank digital currencies, the Order establishes a working group composed of the heads of various agencies (the Working Group) and sets three deadlines:

22 February 2025: Federal agencies must report to the Special Advisor for AI and Crypto with the regulations or other agency guidance that affect the digital asset sector.
24 March 2025: Federal agencies must submit recommendations on whether to rescind or modify these regulations and guidance.
22 July 2025: The Working Group must submit a report to the President on regulatory and legislative proposals to advance digital assets policy. This report must include a proposed Federal framework for the issuance and operation of digital assets, including stablecoins, and evaluate whether establishing a national digital assets stockpile is possible.

Cybersecurity Executive Order—Key Implications for the Manufacturing Industry

On January 16, 2025, President Joe Biden issued the “Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity,” a comprehensive directive designed to address the growing complexity and sophistication of cyber threats targeting the United States. The Executive Order aims to establish a cohesive national strategy for improving cybersecurity across federal agencies, private businesses, and critical infrastructure sectors. The Executive Order governs a wide array of critical issues, including new cybersecurity standards for federal contractors, enhanced public-private information sharing, the promotion of advanced technologies like quantum-resistant cryptography and artificial intelligence (AI), and the imposition of sanctions on foreign cyber actors. The Executive Order’s initiatives demonstrate a commitment to strengthening the nation’s cybersecurity defenses in a rapidly evolving digital landscape and incorporate approaches generally understood as best practices to enhance cybersecurity.
To further advance the initiatives outlined in the order, the Cybersecurity and Infrastructure Security Agency (CISA), a key federal entity responsible for coordinating national efforts to safeguard critical infrastructure, expanded on the directive with detailed implementation frameworks and additional guidance. CISA’s involvement underscores its crucial role in operationalizing the Executive Order and transforming its policy directives into actionable strategies. Through collaboration with industry leaders, technology innovators, and government stakeholders, CISA has addressed specific challenges, including adopting quantum-resistant cryptography, deploying artificial intelligence in cybersecurity defenses, and improving public-private information-sharing mechanisms. These efforts emphasize fostering innovation, enhancing resilience, and protecting the nation’s digital ecosystem from emerging threats. By building on the Executive Order, CISA seeks to bridge the gap between policy objectives and on-the-ground cybersecurity practices, ensuring that the nation’s cybersecurity posture evolves in tandem with the rapidly changing threat landscape.
The transition of the presidency to President Donald Trump on January 20, 2025, has led to questions about the future of the Biden Executive Order. Historically, President Trump has favored deregulation and, during his first term, repealed several executive orders issued by previous administrations. The possibility of modification or repeal of the Executive Order is particularly significant for the manufacturing sector, which is both a critical component of the U.S. economy and a frequent target of cyberattacks.
The purpose of this guide is three-fold. First, it examines the key elements of the existing Executive Order. Next, it explores the potential modifications that the Trump administration may implement. Finally, it provides guidance tailored to manufacturing companies for navigating this evolving regulatory and threat environment, building on previous related resources published by Foley & Lardner and the Cybersecurity Manufacturing Innovation Institute (CyManII), which are referenced at the end of this alert.
Key Provisions of the Executive Order and their Impact on Manufacturing
Minimum Cybersecurity Standards for Federal Contractors
A central provision of the Executive Order mandates baseline cybersecurity measures for federal contractors. These include securing access to critical systems and data using multi-factor authentication (MFA), incorporating endpoint detection and response (EDR) tools to monitor, detect, and respond to cybersecurity threats, and using encryption to protect sensitive data both in transit and at rest.
Manufacturers supplying goods or services to the federal government must adhere to these cybersecurity standards to maintain their eligibility for governmental contracts. For many companies, this may require substantial investments in upgrading systems, adopting new technologies, and training personnel. Non-compliance could lead to the loss of profitable federal contracts and potential reputational damage.
Enhanced Public-Private Information Sharing
The Executive Order directs federal agencies to enhance mechanisms for sharing threat intelligence with private-sector entities. This collaboration aims to provide timely and actionable insights to help businesses defend against emerging cyber threats.
This initiative benefits the manufacturing sector as it is a primary target for ransomware attacks and intellectual property theft. Access to real-time threat intelligence allows manufacturers to identify vulnerabilities, respond swiftly to incidents, and mitigate risks more effectively. A ransomware incident plan focused on manufacturing can be found here: Ransomware Playbook.
Transition to Quantum-Resistant Cryptography
The Executive Order highlights the urgent need to adopt quantum-resistant cryptographic algorithms to tackle the long-term threat arising from advancements in quantum computing. As manufacturing increasingly incorporates digital technologies and interconnected systems, safeguarding proprietary designs, supply chain data, and other sensitive information is essential to business. Early adoption of quantum-resistant encryption may provide a competitive advantage and safeguard critical assets against existing and future threats. Guidelines for approaching quantum-resistant cryptography are available from NIST and the first post-quantum encryption standards are found here.
Leveraging AI for Cybersecurity
The Executive Order promotes the use of AI-driven cybersecurity tools to identify and counter advanced cyber threats in real time. AI is potentially transformative for the manufacturing sector because it can automate threat detection and response strategies. AI is also a proven tool for minimizing operational disruptions, protecting intellectual property, and ensuring the integrity of production lines. The pilot programs outlined in the Executive Order could serve as a model for broader adoption across the industry. AI may significantly accelerate the detection and mitigation of cyber-attacks, an area under development by CyManII. 
Sanctions on Foreign Cyber Actors
The Executive Order grants the federal government the authority to impose sanctions on individuals and entities responsible for cyberattacks targeting U.S. organizations. Sanctions serve as a deterrent against state-sponsored cyberattacks and industrial espionage. For manufacturers, this provision provides an extra layer of protection and highlights the government’s commitment to safeguarding critical industries.
Potential Changes Under the Trump Administration
Deregulation of Cybersecurity Standards
President Trump’s emphasis on minimizing regulatory burdens may result in a rollback of the cybersecurity requirements in the Executive Order. This could shift the responsibility for implementing robust cybersecurity measures from the federal government to individual companies.
Focus on Supply Chain Resiliency
Based on the criticality of U.S. manufacturing and its role in global competitiveness and economic stability, we anticipate President Trump will issue guidance on securing supply chain resiliency to enhance the productivity of U.S. manufacturers. We will monitor these anticipated changes and publish future alerts as applicable.
Reprioritization of Cybersecurity Initiatives
While the current Executive Order emphasizes quantum-resistant cryptography and AI, the Trump administration might focus first on immediate cybersecurity challenges and delay longer-term solutions that require significant investment.
Reduced Emphasis on Public-Private Collaboration
Changes to information-sharing initiatives could decrease government support for private-sector cybersecurity efforts, which may compel manufacturers to seek alternative sources of threat intelligence.
Selective Sanctions Enforcement
A more selective approach to sanctions could change the deterrent effect on foreign cyber actors, potentially raising the risk of targeted attacks on U.S. manufacturing companies.
Guidance for Manufacturing Companies
Given the uncertainty surrounding the future of the Executive Order, manufacturers must adopt a proactive approach to cybersecurity. Below are actionable steps to enhance resilience:
Strengthen Core Cybersecurity Measures

Adopt Industry Best Practices: Ensure the deployment of MFA, EDR, and encryption on all critical systems.
Secure Operational Technology (OT): Safeguard industrial control systems (ICS) and other OT components essential to manufacturing operations.
Conduct Regular Assessments: Regular audits can help identify vulnerabilities and prioritize remediation efforts.
Invest in Employee Training: Over 80% of ransomware and other cyber-attacks can be traced to the “human in the loop.” Thus, cybersecurity training is a solid investment to protect your company and its operations.

Monitor Regulatory Developments

Stay Informed: Stay informed about updates to the Executive Order and other relevant cybersecurity policies.
Engage Legal Counsel: Consult legal and compliance experts to assess the potential impact of policy changes on your business operations.

Invest in Advanced Cybersecurity Technologies

Explore AI Solutions: Leverage AI tools for predicting threats, identifying anomalies, and automating incident responses.
Transition to Quantum-Resistant Cryptography: Start planning cryptographic upgrades to protect sensitive data from emerging threats.
Collaborate with Industry Peers: Participate in forums and consortia to exchange best practices and establish standardized cybersecurity protocols.

Secure the Supply Chain

Evaluate Vendor Risks: Perform comprehensive cybersecurity assessments of suppliers and third-party partners.
Develop Redundancy Plans: Identify critical supply chain dependencies and develop contingency plans to mitigate potential disruptions.
Encrypt Communications: Safeguard data transfers throughout the supply chain to minimize the risk of interception.

Build Robust Incident Response Plans

Establish Comprehensive Protocols: Develop incident response plans tailored to manufacturing-specific threats, such as ransomware attacks on production systems. An example of industry guidance and template is available in CyManII’s Ransomware Preparation Guide: Prevention, Mitigation, and Recovery for Manufacturers.
Train Employees: Provide ongoing cybersecurity training to improve awareness and minimize human error.
Test and Refine Plans: Perform regular simulations to assess the effectiveness of response strategies and implement necessary adjustments.

Final Thoughts
The “Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity” highlights the urgent need for robust cybersecurity measures, particularly within the manufacturing sector, which is vital to national security, economic stability, and global competitiveness. This sector faces an increasing number of sophisticated threats, including ransomware attacks, vulnerabilities in the supply chain, and intellectual property theft. While the future of the Executive Order under the Trump administration is uncertain, manufacturers cannot afford to delay action. Cyber-attacks on manufacturers will continue to rise in volume and sophistication over the coming years. Proactive measures such as implementing advanced security technologies, strengthening supply chain defenses, and keeping abreast of regulatory changes are essential for mitigating risks and ensuring operational continuity.
Furthermore, adhering to strict cybersecurity standards allows manufacturers to secure federal contracts, establish trust with stakeholders, and gain a competitive edge in the market. As potential changes to the Executive Order could lead to a fragmented regulatory landscape—spanning federal, state, and international levels—manufacturers must prepare for diverse compliance requirements. By prioritizing cybersecurity, the manufacturing sector not only safeguards its critical assets and processes but also reinforces its vital role in driving economic growth and technological innovation.
About CyManII
Launched in 2020 by the U.S. Department of Energy, CyManII works across the manufacturing industry, research and academic institutions, and federal government agencies to develop technologies that enable the security and growth of the U.S. manufacturing sector. 
Additional information on cybersecurity risks faced by manufacturers can be found in prior articles authored by Foley & Lardner and CyManII, including:
Recommendations for Managing Cybersecurity Threats in the Manufacturing Sector
So, You Think of Cybersecurity Only as a Cost Center? Think Again. 
CyManII also contributed to this article.

LinkedIn Sued for Using Private DMs to Train AI

A class-action lawsuit has been filed against LinkedIn, accusing the social networking giant of using private direct messages (DMs) to train its artificial intelligence (AI) models starting in August 2024, without obtaining explicit consent from its users. The legal action, filed Tuesday in the U.S. District […]

New York’s Impending WARN Notice Requirement for Artificial Intelligence Related Layoffs Highlights Proliferating Nationwide Requirements

During her 2025 State of the State Address on January 14, 2025, New York Governor Kathy Hochul announced a plan to support workers displaced by Artificial Intelligence (AI) by requiring employers who engage in mass layoffs or closings subject to New York’s state Worker Adjustment and Retraining Notification law (“NY WARN”) to disclose whether AI automation played a role in the layoffs. Governor Hochul stated that the goal of these disclosures is to understand “the potential impact of new technologies through real data.” 
The Governor’s announcement states that she is directing the New York Department of Labor to impose this requirement, so presumably the change will be imposed without the need for legislative action. Specific details about the scope of the new disclosure requirement are not yet available.
The rise of AI in the workplace has been a matter of concern to many state lawmakers across the nation, as well as federal regulators. In New York, for example, New York City’s 2021 Local Law 144 placed guardrails on employers utilizing AI and other Automated Employment Decision Tools (“AEDTs”) in employment-related decisions by requiring bias audits of AEDTs and employer notice to employees and candidates of their use. Similarly, California nearly passed a law in 2024, SB 1047, requiring notice to employees when an AI system is used in employment decisions. While the bill stalled at the end of the 2024 California legislative session, California is expected to propose more AI safety legislation in 2025. Colorado will also impose a new requirement in 2026 for developers and users of employment-related AI to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination in the high-risk system.”
At the federal level, the Equal Employment Opportunity Commission (EEOC) issued two guidance documents in 2023 concerning the issues of adverse impact and disability accommodations in the use of AI and machine learning tools in making workplace decisions.
These proliferating laws show the need for employers to be intentional about their use of AI tools in making employment decisions. Legal and human resources leaders should familiarize themselves with how their organizations are using AI tools in the employment context, and design policies to ensure that the rapidly proliferating state and local requirements around AI usage are met. 

5 Trends to Watch: 2025 Financial Services Litigation

Increasing Focus on Payments — Payments litigation will likely continue and increase in 2025 in the United States and globally, along with increased use of Automated Clearing House (ACH) transfers and wires, bank and non-bank competition, state regulation, and more sophisticated fraud schemes. This trend should continue regardless of the incoming administration’s enforcement priorities. Borrowing from Europe, the United States could see increasing pressure for a Payment Services Regulation or other laws to shift more risk of payment fraud to financial institutions. State-based efforts to regulate interchange fees may create additional risk.
Increasing Use of Mass Arbitration and Rise of International Arbitration — Mass arbitration in the United States is likely to continue and increase, particularly as plaintiffs’ counsel become more equipped, efficient, and coordinated at lodging these attacks. International arbitration also is likely to increase, given globalization and diversification, driven by the growing complexity of cross-border issues. The strategic advantage of leveraging global litigation offices in regions like Latin America, Europe, and the Middle East will be crucial, as these areas continue to be hot spots for international business activities and disputes. Reliance on local knowledge will become increasingly important as parties seek more efficient and culturally sensitive resolutions.
Anti-Money Laundering (AML), Know Your Customer (KYC), and Compliance-Related Issues — There was increased activity over the past year on AML-related matters globally, and this trend appears likely to continue. This increase also is likely to carry over to civil litigation, including complex fraud and Ponzi schemes and allegations relating to improper asset management or trust disputes, where financial institutions are being more heavily scrutinized over actions taken by their customers, and the plaintiffs’ bar is expected to try to create more hospitable case law and jurisdictions. As regulatory scrutiny intensifies globally, financial institutions will continue to find themselves at the intersection of civil litigation and concurrent regulatory/criminal investigations, creating additional risks. The growing complexity of these cases underscores the need for banks to maintain vigilance and adaptability.
Changing Enforcement and Regulatory Risks — A slowdown of Consumer Financial Protection Bureau (CFPB)-related activity, including a relative slowdown of crypto enforcement, could take place over the course of the year due to the change of administration and agency leadership, but there could be an increase in activity by certain states’ attorneys general. State-based regulation and legislation would pose additional risks, creating jurisdictional and other challenges. State regulatory agencies may continue enforcement efforts related to consumer protections in the financial services space. There also may be continued focus on fair lending practices, with potential litigation concerning artificial intelligence’s (AI) role in lending or other decisions. The rise of digital currencies also has introduced new legal challenges: cryptocurrency exchanges are being held accountable for frauds occurring on their platforms, and ongoing uncertainties in digital asset regulation are resulting in compliance challenges and related litigation.
Information Use and Security — The increasing use of new technologies and AI likely will result in increased risks and a rise in civil litigation. Litigation may emerge over AI tools allegedly infringing on copyrights. AI-based pricing algorithms also may be scrutinized for potential collusion and antitrust violations or for discrimination and bias. More U.S. states are proposing and passing comprehensive AI and other laws that do not include broad financial institution or Gramm-Leach-Bliley Act-type exemptions, so there could be additional regulation. States also could continue efforts to pass new privacy laws addressing areas not currently regulated by federal law.

CNIL Publishes 2025-2028 Strategic Plan

On January 16, 2025, the French Data Protection Authority (“CNIL”) unveiled its strategic plan for 2025-2028, highlighting its priorities for the coming years. Summarized below are the four key focus areas outlined in the CNIL’s strategic plan:

Artificial Intelligence (“AI”): With respect to AI, the CNIL commits to: (1) collaborating with European and international partners to promote harmonized AI governance; (2) providing guidance to stakeholders, clarifying applicable rules and implementing effective and balanced regulation of AI; (3) raising public awareness of the challenges raised by AI and the importance of exercising individuals’ rights; and (4) ensuring AI systems comply with applicable rules, including by creating a methodology and tools allowing such monitoring throughout the lifecycle of an AI system, and collaborating with other data protection authorities on EU-wide monitoring actions.
Protection of Minors: Recognizing the vulnerabilities of children in digital environments, the CNIL will prioritize safeguarding their personal data. Key actions include: (1) strengthening requirements for online platforms to ensure age-appropriate protections; (2) promoting tools and resources to enhance children’s understanding of their digital rights; (3) enabling minors to exercise their rights effectively; and (4) engaging with educators, parents, and industry stakeholders to create safer digital spaces for minors.
Cybersecurity and Resilience: With increasing cyber threats targeting organizations and individuals, the CNIL will focus on: (1) strengthening cooperation with all cybersecurity stakeholders; (2) supporting businesses and individuals in enhancing their data security practices and in facing cyber risks; (3) advocating for privacy-by-design approaches to mitigate cybersecurity risks; and (4) conducting investigations and imposing sanctions to reinforce compliance with the data breach notification requirements of the EU General Data Protection Regulation.
Everyday Digital Life: Apps and Online Identity: To address the pervasive role of technology in daily life, the CNIL commits to: (1) continuing the implementation of its apps strategy to protect individuals’ privacy, including by raising public awareness of the importance of privacy, monitoring the compliance of apps with applicable rules, and updating its guidelines for professionals working with apps; and (2) monitoring the development of apps and encouraging companies to adopt user-centric approaches that respect privacy.

Read the CNIL’s press release and strategic plan (in French).