Mr. Robot Goes To Washington: The Shifting Federal AI Landscape Under the Second Trump Administration

President Trump’s inauguration on January 20, 2025, has already resulted in significant changes to federal artificial intelligence (AI) policy, marking a departure from the regulatory frameworks established during the Biden administration. This shift promises to reshape how businesses approach AI development, deployment, governance, and compliance in the United States.
Historical Context and Initial Actions
The first Trump administration (2017–2021) prioritized maintaining US leadership in AI through executive actions, including the 2019 Executive Order (EO) on Maintaining American Leadership in Artificial Intelligence and the establishment of the National Artificial Intelligence Initiative Office. This approach emphasized US technological preeminence, particularly in relation to global competition.
For its part, the Biden administration’s approach to AI development emphasized “responsible diffusion” — allowing AI advances and deployment while maintaining strategic control over frontier capabilities.
In a swift and significant move, President Trump revoked President Biden’s October 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence on his first day in office. This action signals a clear pivot toward prioritizing innovation and private sector growth and development over regulatory oversight and AI safety (or at least a move away from government mandates and toward market-driven safety measures).
Emerging Policy Priorities
Several key priorities have emerged that will likely shape AI development under the second Trump administration:

Focus on National Security: The administration’s executive actions have framed AI development as a matter of national security, particularly with respect to competition with China. This is an area where the administration is likely to enjoy bipartisan support.
Energy Infrastructure: President Trump’s declaration of a national energy emergency on his first day in office highlights the administration’s recognition of AI’s substantial computational and energy demands. On his second day in office, President Trump followed the declaration with the announcement of a private-sector $500 billion investment in AI infrastructure assets known as “Project Stargate,” with the first of the project’s data centers already under construction in Texas.
Defense Integration: Increased military spending on AI capabilities and the administration’s focus on military might indicate an emphasis on accelerated development of defense-related AI applications.

Regulatory Shifts and Business Impacts
The new administration’s approach signals several potential changes to the AI regulatory landscape:

Federal Agency Realignment: Key agencies like the Federal Trade Commission may relax their focus on consumer protection to allow more free market competition and innovation.
Preemption Considerations: The administration might pursue federal legislation to create uniform standards that preempt the current patchwork of state and local AI laws and regulations.
International Engagement: Restrictions on international AI collaboration and technology sharing, particularly regarding semiconductor exports used for AI (which had already been tightened under the Biden administration), are likely to be tightened further.

Strategic Planning Considerations
The AI policy shift creates new imperatives for business leaders, including:

Multi-jurisdictional Compliance: Despite potential reduced federal oversight, businesses must maintain compliance with any applicable federal, state, and local regulations and international requirements, including the EU AI Act for those organizations doing business in EU countries.
Investment Strategy: Changes in federal policy and potential international trade restrictions could transform AI development costs, investment patterns, and technology budgets.
Risk Management: Businesses should maintain robust internal governance frameworks regardless of regulatory requirements, particularly considering the ongoing operational and reputational risks.

Looking Ahead
While specific policy developments remain in flux, the Trump administration’s emphasis on technological leadership and reduced regulatory oversight suggests a significant departure from previous approaches. The continued integration of AI into critical business functions, however, necessitates ongoing attention to responsible development and deployment practices, even as the regulatory landscape evolves.
Businesses should stay informed of policy developments and maintain robust AI governance frameworks that can adapt to changing federal priorities while ensuring compliance with applicable legal and regulatory obligations and standards.

Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity and Potential Implications Under the Trump Administration

On January 16, 2025, President Joe Biden signed the “Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity.” This directive seeks to tackle the increasingly complex and evolving cybersecurity threats confronting the United States. From nation-state actors to sophisticated cybercriminal organizations, the U.S. faces unprecedented challenges to its critical infrastructure, government systems, and private sector networks. The executive order outlines a multifaceted strategy aimed at safeguarding the nation’s digital landscape while encouraging innovation and collaboration in cybersecurity technologies.
However, the future of this order has come into question following President Donald Trump’s inauguration on January 20, 2025. President Trump has shown a readiness to reassess policies set by his predecessor, including the potential revocation of previous executive orders. This client alert offers a summary of President Biden’s cybersecurity order, explores potential implications under the Trump administration, and provides guidance for businesses navigating this uncertain regulatory landscape.
Overview of President Biden’s Executive Order
President Biden’s executive order is a comprehensive initiative aimed at addressing the most pressing challenges in cybersecurity. The directive outlines crucial measures that federal agencies, contractors, and private sector partners are required to adopt to enhance their resilience against cyber threats. Key components of the order include:
Development of Minimum Cybersecurity Standards
The order requires the development of baseline cybersecurity standards for federal contractors and suppliers. These standards encompass requirements for multi-factor authentication (MFA), endpoint detection and response (EDR) systems, and the encryption of sensitive data both in transit and at rest. Contractors must demonstrate compliance to secure or maintain government contracts.
Enhanced Public-Private Collaboration
Acknowledging the interconnected nature of the public and private sectors, the order establishes a framework for improved information sharing. Federal agencies are directed to share threat intelligence and vulnerability information with private entities to enable faster responses to emerging threats.
Sanctions on Foreign Cyber Actors
To deter nation-state-sponsored cyberattacks, the executive order allows for sanctions against foreign actors targeting U.S. entities, including critical infrastructure such as health care facilities and energy systems. This provision underscores the administration’s commitment to holding adversaries accountable for malicious cyber activities.
Quantum-Resistant Cryptography
The order prioritizes transitioning federal systems to quantum-resistant cryptographic algorithms to safeguard sensitive data from future quantum computing threats. Agencies are required to develop implementation plans and timelines for this transition.
Artificial Intelligence in Cybersecurity
The executive order calls for pilot programs to investigate the use of artificial intelligence (AI) in cybersecurity applications, particularly in the energy sector. These programs seek to leverage AI for real-time threat detection, automated responses, and enhanced incident recovery.
Potential Impacts Under the Trump Administration
The Trump administration’s approach to cybersecurity remains uncertain, but early signs indicate possible adjustments to Biden’s executive order. During President Trump’s first term, the administration focused on minimizing regulatory burdens and encouraging industry-led solutions, an approach that may influence the implementation of this directive.
Adjustments to Cybersecurity Standards
The administration may choose to implement less prescriptive cybersecurity requirements, encouraging businesses to adopt voluntary best practices rather than enforceable mandates for federal contractors. This could lead to greater flexibility but might also introduce variability in security practices.
Reevaluation of Quantum-Resistant Cryptography
While quantum-resistant cryptography addresses long-term risks, the administration might prioritize immediate cybersecurity challenges, potentially delaying the transition to quantum-resistant algorithms.
Focus on Targeted Sanctions
The Trump administration may refine its sanctions policy to focus on specific high-impact cases rather than broad deterrence, which could influence the overall effectiveness of this measure.
Shifts in Public-Private Collaboration
Efforts to enhance public-private collaboration may evolve, with businesses potentially taking on a larger role in independently managing cybersecurity risks. This could lessen the emphasis on centralized federal support for information sharing.
Guidance for Companies
In light of these developments, businesses must proactively adapt to an evolving cybersecurity landscape. Regardless of whether the executive order remains in effect, organizations should prioritize cybersecurity to mitigate risks and uphold resilience. Below are suggested actions for companies:
Strengthen Internal Cybersecurity Measures

Conduct a thorough assessment of existing cybersecurity protocols to identify vulnerabilities and opportunities for enhancement.
Implement multi-factor authentication (MFA), endpoint detection and response (EDR) tools, and robust encryption practices to protect sensitive data (a brief illustrative encryption sketch follows this list).
Develop and test incident response plans to ensure rapid recovery from cyber incidents.
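By way of illustration only, the following minimal Python sketch shows the kind of encryption-at-rest practice referenced in the checklist above. It assumes the widely used third-party cryptography package; the record and key handling are hypothetical, and a real deployment would add key management, rotation, and access controls rather than generating a key inline.

# Minimal illustration of encrypting sensitive data at rest.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in practice, keep this in a secrets manager or HSM
cipher = Fernet(key)

record = b"employee SSN: 000-00-0000"     # hypothetical sensitive record
encrypted = cipher.encrypt(record)        # ciphertext safe to persist to disk or a database
assert cipher.decrypt(encrypted) == record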

Monitor Regulatory Changes

Stay updated on possible changes to the executive order and associated cybersecurity policies from the Trump administration.
Engage with legal and compliance teams to assess the effects of regulatory changes on business operations.
Monitor state and international regulations to ensure compliance with relevant standards.

Invest in Cybersecurity Innovation

Investigate emerging technologies, such as AI-driven cybersecurity tools, to enhance threat detection and response capabilities.
Evaluate the feasibility of transitioning to quantum-resistant cryptographic algorithms, even in the absence of federal mandates.
Collaborate with industry partners to embrace innovative solutions and exchange best practices.

Foster Public-Private Partnerships

Engage in information-sharing initiatives like the Cybersecurity and Infrastructure Security Agency’s (CISA) programs to remain informed about threat intelligence.
Promote policies that encourage collaboration between the public and private sectors to strengthen collective security.

Prepare for Geopolitical Risks

Monitor geopolitical developments and their potential impact on cyber threats, particularly those originating from nation-states.
Strengthen supply chain security to reduce risks associated with foreign adversaries.
Conduct tabletop exercises to simulate responses to nation-state cyberattacks.

Implications for the Private Sector
The uncertainty surrounding the executive order underscores the necessity for businesses to adopt a proactive and flexible approach to cybersecurity. Key implications include:
Increased Responsibility on Businesses
With potential adjustments to federal oversight, companies may need to be more proactive in managing their cybersecurity risks. Implementing strong internal policies and investing in advanced security technologies will be crucial.
Fragmented Regulatory Environment
If federal mandates are modified, businesses may face a patchwork of state and international regulations. Navigating this fragmented landscape will demand considerable resources and expertise.
Heightened Cyber Threats
The evolving threat landscape, along with potential policy changes, could make critical infrastructure and private networks more vulnerable to sophisticated attacks. Companies must remain vigilant and prepared to respond to emerging threats.
Competitive Differentiation
Organizations that prioritize cybersecurity and demonstrate a commitment to protecting customer data may gain a competitive advantage in the market. Establishing trust with stakeholders through transparency and robust security measures will be crucial.
Final Thoughts
President Biden’s executive order marks a significant advancement in addressing the nation’s cybersecurity challenges. However, its future under the Trump administration remains uncertain, with the potential for policy adjustments. Businesses must navigate this evolving landscape by bolstering internal measures, staying updated on regulatory shifts, and investing in innovation.
While the federal government’s role in cybersecurity may evolve, the responsibility for safeguarding critical systems and data ultimately rests with the private sector. By implementing proactive strategies and encouraging collaboration, companies can enhance their resilience against cyber threats and contribute to a more secure digital ecosystem.
For additional information about President Biden’s executive order, check out President Biden Issues Second Cybersecurity Executive Order.

Key Developments in German Labor and Employment Law for 2025

Labor and employment law in Germany will see a number of important developments in 2025. The Bureaucracy Reduction Act IV took effect on January 1; the EU AI Act’s initial provisions on unauthorized AI practices take effect on February 2; and the Self-Determination Act took effect late last year.
Important developments are also on the near horizon regarding pay equity and professional validation for experienced employees who lack degrees. The following is a summary of some of the key developments for employers to put on their radars.
Quick Hits

The Fourth Bureaucracy Reduction Act, effective January 1, 2025, simplifies the requirements for documenting the essential terms of employment contracts, now allowing digital agreements for open-ended contracts while maintaining the written form for fixed-term contracts.
The EU’s AI Act, effective from August 2024, introduces regulations on AI systems, with initial provisions on unauthorized AI use starting February 2, 2025, and further regulations on general-purpose AI models and sanctions taking effect on August 2, 2025.
The Self-Determination Act, effective late 2024, mandates that employers update relevant documents for transgender, intersex, and nonbinary employees upon request, with fines of up to EUR 10,000 for noncompliance.

Bureaucracy Reduction Act IV
The Fourth Bureaucracy Reduction Act (BEG IV) took effect on January 1, 2025. The aim is to reduce bureaucratic hurdles and relieve the burden on employers. Here is an overview of the most important points:
Simplification of the Formal Requirement of the Evidence Act
The BEG IV partially simplifies the formal requirements of the Evidence Act introduced in 2022. The essential terms and conditions of employment, and changes to them, no longer have to be set out in writing (i.e., signed by hand) but can be drawn up and transmitted in text form. This means that open-ended employment contracts can be concluded entirely digitally; in the future, an email with a scanned signature may suffice. For the text form to be valid, the essential terms and conditions of employment must be capable of being stored, accessed, and printed by the employee, and the employer must ask the employee to confirm receipt of the transmission. However, these changes do not apply to certain sectors—listed in § 2a of the Act to Combat Clandestine Employment (“SchwarzArbG”)—for which a handwritten signature is still required.
Fixed-Term Contracts Remain Strictly Regulated
While there are simplifications for open-ended contracts, the written form is still required for fixed-term contracts. Such agreements must still be set out in writing to ensure legal certainty; a purely electronic fixed-term contract remains inadmissible and open to challenge in 2025. The exception is a fixed term tied to reaching retirement age, for which the text form will also suffice. In this respect, the legislator has taken on board criticism of the draft law: had the written form also been retained for clauses terminating the contract upon reaching retirement age, no employer could have used the bureaucratic relief without concern, because, as a precautionary measure, almost all open-ended contracts contain a termination clause for reaching retirement age, given that the employment relationship does not automatically end when employees become eligible to draw a pension.
Job References in Electronic Form Permitted
A new development is that employers may now issue employment references electronically, provided that a qualified electronic signature is used and the employee agrees. The European Commission has created a general tool for checking the validity of the e-signature on a PDF. To what extent electronic employment references will be accepted in practice remains to be seen, and not only because of the more involved procedure. If an employment reference is signed electronically, the time of the electronic signature is recorded immutably, so the customary backdating to the leaving date is not possible; any discrepancy between the date of issue and the date of signature, and the conclusions it might invite about a supposedly nonconsensual separation, could then not be concealed. Employees can, however, continue to request the traditional paper form if they prefer it.
No Changes to Termination Notices and Termination Agreements
The written form (wet ink signature) continues to apply to notices of termination and termination agreements. These must be signed by hand—electronic formats are expressly excluded here.
First Provisions of EU AI Act Take Effect
The European Union’s AI Act came into force at the beginning of August 2024 as the world’s first comprehensive law on the regulation of artificial intelligence. The regulation classifies AI systems according to their risk and sets standards and requirements for them accordingly. Most of the requirements are aimed at the developers of the systems, although users are also subject to obligations. The EU regulation does not need to be transposed into national law, and its individual provisions will take effect in stages over the next few years.
The provisions on the unauthorized use of artificial intelligence apply first, beginning February 2, 2025. Art. 5 of the AI Act lists various types of AI-based practices whose use is generally prohibited; examples include systems for social scoring or for monitoring emotions in the workplace. In the view of the EU legislator, these practices violate core European values, above all fundamental rights, and pose an unacceptable risk.
One year after the AI Act entered into force, the rules on general-purpose AI models take effect on August 2, 2025. Such models are not limited to one purpose but are broadly applicable and capable of competently performing a wide range of different tasks. The providers of such models must ensure that copyrights are observed, keep detailed technical documentation of the development and testing of their AI, and make it available to other companies that wish to use their model. For providers of AI models that are open source and freely available to the public and do not pose a systemic risk, a reduced obligation applies: they must comply with copyright law and publish a summary of the training data.
In addition, as of August 2, 2025, the sanctions provisions of the AI Act will apply, apart from fines for providers of AI models with a general purpose.
Self-Determination Act
Shortly before the turn of the year, the Self-Determination Act (SBGG) came into force. It makes it easier for transgender, intersex, and nonbinary people to change their gender entry and their first names in the civil status register. This also has implications for the employment relationship. Employers are obliged to amend all relevant documents at the request of the employees concerned. This includes, for example, employment contracts, certificates, performance records and payment cards.
If the gender entry or first name has been changed, previous details may not be disclosed or researched without the consent of the employees concerned. A violation of this prohibition of disclosure can result in a fine of up to EUR 10,000.
New Attempt to Amend the Pay Transparency Act
The EU Directive on pay transparency (Directive (EU) 2023/970) aims to reduce gender pay gaps and promote equal pay. It has been in force since May 17, 2023, and must be transposed into national law by June 7, 2026. A draft bill had been announced for summer 2024 but had not been published by the end of the year. In view of the necessary lead time for a legislative procedure and the preparation time to be granted to companies for far-reaching changes, it is to be expected that a new federal government will address the matter promptly. Germany will probably closely follow the directive when implementing it.
The directive will require employers to provide applicants with information on starting salaries or salary ranges—either directly in job advertisements or at the latest before an interview. Employers, however, will no longer be permitted to ask applicants about their salary history. Employees are also entitled to comprehensive information on pay criteria, individual remuneration, and average salaries, broken down by gender and employee group. This information must be provided in writing within two months, regardless of the size of the company.
The reporting obligations will also be expanded: Companies with at least one hundred employees will have to prepare regular reports on pay-related indicators such as the gender pay gap. If such reports identify an unjustifiable gender pay gap of more than 5 percent, a pay assessment is required, usually carried out together with the works council or an alternative body.
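As a purely illustrative aid (the directive and the implementing German legislation will define the precise methodology and reporting format), the short sketch below computes a simple unadjusted gender pay gap from invented figures and checks it against the 5 percent threshold.

# Illustrative only: unadjusted gender pay gap compared against the directive's
# 5 percent threshold. Figures are invented; the actual calculation method will be
# set by the implementing legislation.
male_salaries = [52_000, 61_000, 58_000, 70_000]
female_salaries = [50_000, 55_000, 57_000, 62_000]

avg_male = sum(male_salaries) / len(male_salaries)
avg_female = sum(female_salaries) / len(female_salaries)
pay_gap = (avg_male - avg_female) / avg_male   # share by which average female pay trails male pay

print(f"Unadjusted gender pay gap: {pay_gap:.1%}")
if pay_gap > 0.05:
    print("Gap exceeds 5% - a joint pay assessment may be required if the gap cannot be justified.")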
The directive stipulates serious consequences for violations: Those affected can claim damages or compensation, and employers bear the burden of proving that there has been no discrimination. In addition, fines may be imposed. Trade unions and anti-discrimination bodies are given standing to actively support those affected or to sue on their behalf.
Employers may want to start preparing early to comply with these upcoming requirements.
Professional Validation: New Opportunities for Experienced Employees Without a Degree
The Vocational Training Validation and Digitization Act (BVaDiG), which took effect on August 1, 2024, makes it possible, as of January 2025, to officially recognize the professional skills of people without a formal qualification. The aim is to assess a person’s skills against the training regulations for a reference occupation and, to the extent they are comparable with completed vocational training, to certify them in a Chamber of Industry and Commerce (IHK) certificate. Importantly, professional validation is not available for advanced training qualifications such as the master craftsman.
The procedure is aimed at adults who:

have several years of professional experience;
do not have a formal vocational qualification in their profession;
are seeking proof of their professional competencies; and
for whom an external examination is currently not an option.

The BVaDiG offers employers a valuable opportunity to better utilize the potential of their workforce. The validation process gives long-serving employees who do not have a formal qualification the chance to have their skills officially recognized. This not only boosts employees’ motivation but also broadens the roles in which they can be deployed within the company.
In addition, employers can use professional validation to secure existing know-how within the company and close potential gaps in the supply of skilled workers.
Additional Employment Law–Related Developments
As has become customary in recent years, the minimum wage will be raised on January 1, 2025. It will rise from EUR 12.41 per hour to EUR 12.82 gross. At the same time, the mini-job threshold will increase from EUR 538 to EUR 556 gross. The proposals for the further development of the statutory minimum wage from the independent Minimum Wage Commission are expected in June 2025.
The Growth Opportunities Act amends Section 34 of the Income Tax Act, making it easier for employers to account for severance payments. Previously, the tax benefit of severance pay as a large one-off payment was only partially taken into account by the employer in payroll accounting. As of the beginning of the new year, employers no longer have to apply the so-called one-fifth rule to severance payments but can process them in payroll without any special treatment. Employees can, however, still claim the preferential treatment of severance pay in their income tax assessment.
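For readers unfamiliar with the mechanics of the one-fifth rule: the tax attributable to the severance is calculated as five times the additional tax that results from adding one fifth of the severance to the employee’s other taxable income. The toy calculation below illustrates the effect with a made-up progressive schedule; the real calculation uses the statutory tariff in Section 32a of the Income Tax Act, and all figures are hypothetical.

# Toy illustration of the "one-fifth rule" (Fünftelregelung) for severance pay.
# The tax schedule below is invented for readability and is not the German tariff.
def toy_tax(income: float) -> float:
    tax = max(0.0, min(income, 60_000) - 12_000) * 0.25   # hypothetical 25% band
    tax += max(0.0, income - 60_000) * 0.42               # hypothetical 42% band
    return tax

other_income = 50_000
severance = 40_000

base_tax = toy_tax(other_income)
tax_with_rule = 5 * (toy_tax(other_income + severance / 5) - base_tax)   # one-fifth rule
tax_without_rule = toy_tax(other_income + severance) - base_tax

print(f"Tax on severance with the rule:    {tax_with_rule:,.0f} EUR")
print(f"Tax on severance without the rule: {tax_without_rule:,.0f} EUR")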
For births from April 1, 2025, the income limit above which parents are no longer entitled to parental allowance will fall from EUR 200,000 to EUR 175,000. This limit applies equally to couples and single parents. Furthermore, it is only possible to simultaneously receive basic parental allowance for a maximum of one month at a time and only within the first twelve months of the child’s life.
The new version of the Postal Act gives the postal service more time to deliver letters. Previously, 80 percent of letters had to arrive on the working day after posting and 95 percent within two working days. Section 18 I PostG now stipulates that only 95 percent of all letters must be delivered by the third working day after posting and 99 percent by the fourth. According to Deutsche Post AG, the standard for ordinary letters will shift so that they will generally be delivered on the working day after next. At the same time, Deutsche Post AG increased postage prices on January 1.
The rates for the minimum training allowance will also be increased at the turn of the year. In the first year of training, apprentices will receive EUR 682 per month (2024: EUR 649), and in the second year EUR 805 instead of the previous EUR 766. In the third year of training, this will be at least EUR 921 (2024: EUR 876), and in the fourth year, the prospective skilled workers can expect to receive at least EUR 955 (2024: EUR 909).
For various products placed on the market after June 28, 2025, as well as for various services provided to consumers after June 28, 2025, the provisions of the Accessibility Act (“BFSG”) will then apply. Among other things, online commerce, e-commerce services, and electronic communication services must be more accessible. Small companies are exempt from the obligations.
The German government has decided to double the maximum period of entitlement to short-time working allowance to twenty-four months. This regulation came into force on January 1 and is limited until December 31, 2025. From 2026, the regular maximum entitlement period of twelve months will apply again. Entitlements that extend beyond this period will expire on December 31, 2025. With this measure, the German government is responding to the increase in short-time working in Germany.

Potential Changes in the Regulation of Artificial Intelligence in 2025

On January 20, 2025, within hours of taking the oath of office for the second time, President Donald Trump issued an executive order that revoked an executive order from the prior administration regarding the use of artificial intelligence (AI). During President Trump’s first term, from 2017 to 2021, the use of AI, and in particular generative AI, was not as prevalent as it is today, and AI regulation at the federal, state, and local levels was correspondingly limited. The significant technological advances and adoption of the past four years have shaped government regulation and will likely continue to do so.
On the first day of his second term in office, President Trump issued an executive order titled “Initial Rescissions of Harmful Executive Orders and Actions.” It reads that to “commence the policies that will make our Nation united, fair, safe, and prosperous again, it is the policy of the United States to restore common sense to the federal government and unleash the potential of the American citizen.” In connection with that, the executive order revokes more than 50 prior executive orders, including Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence).
Executive Order 14110 explained that the administration “places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government–wide approach to doing so.” The executive order also outlined eight “guiding principles and priorities”: 

Making AI safe and secure
Promoting “responsible innovation, competition, and collaboration” 
Committing to supporting American workers in the development and use of AI
Advancing equity and civil rights with AI
Protecting the interest of Americans using AI and AI-enabled products in their daily lives
Protecting Americans’ privacy and civil liberties
Managing the risks from the federal government’s own use of AI
Engaging with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.

Executive Order 14110 was one of many executive orders issued in the past four years. Other prior executive actions on AI include the October 2024 “Framework to Advance AI Governance and Risk Management in National Security”; the U.S. Office of Special Counsel’s principles and policies for the use of AI; and, most recently, a January 2025 executive order on federal support for AI data centers.
Chain of Executive Orders on AI
In his first term in office, President Trump issued Executive Order 13859 in February 2019, “Maintaining American Leadership in Artificial Intelligence,” followed in December 2020 by Executive Order 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” Under the latter order, “Agencies are encouraged to continue to use AI, when appropriate, to benefit the American people,” and “Agencies must therefore design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law and the goals of Executive Order 13859.” Executive Order 13960 outlined nine principles for the use of AI in government: 

Lawful and respectful of the Nation’s values
Performance driven
Accurate
Safe and secure
Understandable
Responsible and traceable
Regularly monitored
Transparent
Accountable. 

Many of these same principles can be found in the October 2023 Executive Order 14110 that has now been revoked. The main difference is that the 2023 order addressed the implementation of AI in both the private sector and the government, while the first-term orders focused only on the government’s implementation of artificial intelligence.
The recent executive order revoking the 2023 Executive Order 14110 appears to reflect this difference, as the other executive orders on AI that focused on the government’s use of AI were not revoked. This concern about regulation of the private sector can also be found in the 2024 Republican Party platform on AI, which asserted that certain executive orders hindered AI innovation and that Republicans “support AI Development rooted in Free Speech and Human Flourishing.” Potentially as a sign of things to come, CEOs of large technology companies such as Amazon, Meta, Google, and Tesla were in attendance at the recent inauguration. 
Conclusion
It is worth noting that the recent executive order states that the “revocations within this order will be the first of many steps the United States federal government will take to repair our institutions and our economy.” Thus, it remains to be seen how future government action at the federal level in 2025 will impact the regulation of artificial intelligence going forward.

5 Trends to Watch: 2025 Trade Secrets

Large Damage Awards May Face Scrutiny. In 2024, courts issued several significant decisions concerning damages awards in trade secret misappropriation cases. Three recent federal court decisions overturned exceptionally large damages awards on the ground that the plaintiff had failed to prove causation between the proven misappropriation and the claimed damages. Plaintiffs seeking recovery for misappropriation still have powerful tools at their disposal, including the potential for large damages awards and injunctive relief, but they should take care to establish the causal nexus between the claimed damages and the proven misappropriation at trial.
Noncompete Enforceability Limited. The Federal Trade Commission (FTC) under President Trump’s pick for chair, Andrew Ferguson, may well rescind the FTC ban on noncompete agreements and withdraw appeals in the Fifth and Eleventh Circuits. This action may prompt states that have allowed their own proposed noncompete legislation to languish (e.g., New York, Illinois) to refocus on narrowing the ability of employers to impose such restrictions. More states are also expected to narrow the enforceability of noncompetes in the health care space (such as Rhode Island, Pennsylvania, Maryland, and Iowa). Within this unpredictable and inconsistent landscape concerning the enforcement of noncompetes, it is critical for companies to protect and defend their trade secrets.
Plaintiffs Prevail in Trade Secret Trials. A recent analysis of federal trade secret cases that go to trial may be either alarming or heartening, depending on which side of the dispute clients find themselves on (see Stout’s “Trends in Trade Secret Litigation Report,” Nov. 4, 2024). Among the report’s findings: of 271 federal trade secret cases that went to verdict since 2017, 84% were decided in favor of the claimant. This may mean more trade secret misappropriation filings, and settlements before verdict may become more common. Such settlements may reach higher amounts, further ratcheting up filings. As more cases are tried to verdict, the odds may stabilize toward more even outcomes, but clients and counsel should take note of these numbers.
The AI Revolution Could Dramatically Affect Trade Secrets. AI may create innumerable systems, algorithms and other material that constitute trade secrets, raising a host of issues, like who owns them and how to protect them. AI also poses a threat to owners of trade secrets that can be reverse-engineered by AI but perhaps not nearly as easily (or at all) by a human. We expect the law to evolve to address ownership of trade secrets created by AI and to bolster protection against AI-generated reverse-engineering.
Foreign Damages Are Available for Trade Secret Misappropriation. In 2024, the Seventh Circuit held that the federal Defend Trade Secrets Act (DTSA) has extraterritorial reach. This was the first circuit court in the country to find this explicitly. In so holding, the Seventh Circuit affirmed a nine-figure compensatory damages award that consisted entirely of the defendant’s foreign sales. All that is necessary to obtain foreign damages is an “act in furtherance” of the misappropriation in the United States, such as advertising products at a trade show that make use of the misappropriated information. Importantly, proximate causation between the act in the United States and damages is not required. The Seventh Circuit’s decision is likely to lead to an uptick in discovery battles over foreign damages and has the potential to increase damages in trade secret cases.

Bryan Harrison also contributed to this article.

Managing Artificial Intelligence: The Monetary Authority of Singapore’s Recommendations on AI Model Risk Management

This publication is issued by K&L Gates Straits Law LLC, a Singapore law firm with full Singapore law and representation capacity, and to whom any Singapore law queries should be addressed. K&L Gates Straits Law is the Singapore office of K&L Gates, a fully integrated global law firm with lawyers located on five continents.
Introduction and Background
On 5 December 2024, as part of the Monetary Authority of Singapore’s (MAS) incremental efforts to ensure responsible use of artificial intelligence (AI) in Singapore’s financial sector, MAS published recommendations on AI model risk management in an information paper1 following a review of AI-related practices of selected banks.
MAS stressed that the good practices highlighted in the information paper should also apply to other financial institutions. This alert briefly outlines the paper’s key recommendations across the three focus areas that MAS expects banks and financial institutions to keep in mind when developing and deploying AI: (1) oversight and governance of AI, (2) key risk management systems and processes for AI, and (3) development, validation and deployment of AI.
Key Focus Area 1: Oversight and Governance of AI2 
Existing risk governance frameworks and structures (such as those related to data, technology and cyber; third-party risk management; and legal and compliance) remain relevant for AI governance and risk management. In tandem with these existing control functions, MAS deems it good practice for banks to do the following: 

Establish cross-functional oversight forums to avoid gaps in AI risk management and to ensure that the bank’s standards and processes are aligned across the bank and keep pace with the state of the bank’s AI usage.
Update control standards, policies, and procedures to keep pace with the increasing use of AI and new AI developments, including performance testing of AI for new use cases, and clearly set out roles and responsibilities for addressing AI risk. 
Develop clear statements and guidelines to govern areas such as fair, ethical, accountable and transparent use of AI across the bank to prevent potential harms to consumers and other stakeholders arising from the use of AI.
Build capabilities in AI across the bank to support both innovation and risk management.

Key Focus Area 2: Key Risk Management Systems and Processes3
MAS also observed that most banks recognised the need to establish or update key risk management systems and processes for AI, particularly in the following areas: 

Policies and procedures for identifying AI usage and risk across the bank, so that commensurate risk management can be applied to the respective AI model.
Systems and processes to ensure the completeness of a bank’s AI inventories, which also capture the approved scope of use for that particular AI (e.g., the purpose, use case, application, system and other relevant conditions) and provide a central view of AI usage to support oversight (a simple illustrative inventory record follows this list).
Assessment of the risk materiality of AI that covers key risk dimensions, such as AI’s impact on the customer, bank and stakeholders; the complexity of AI model or system used; and the bank’s reliance on AI, which takes into account the autonomy granted to AI and the involvement of humans, so that relevant controls can be applied proportionately. 
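By way of illustration only (the information paper describes what an AI inventory should capture but does not prescribe a schema), an inventory entry recording the approved scope of use might be represented as in the following Python sketch; all field names and values are assumptions.

# Hypothetical shape of a single AI inventory entry; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    model_id: str
    purpose: str                          # approved purpose of the AI model
    use_case: str                         # approved use case or business process
    application: str                      # application or system the model is embedded in
    risk_materiality: str                 # e.g., "low", "medium", "high"
    approved_conditions: list[str] = field(default_factory=list)
    owner: str = ""                       # accountable business or risk owner

entry = AIInventoryEntry(
    model_id="credit-scoring-v3",
    purpose="retail credit underwriting",
    use_case="pre-screening of personal loan applications",
    application="loan origination system",
    risk_materiality="high",
    approved_conditions=["human review of all declines"],
    owner="Retail Credit Risk",
)
print(entry)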

Key Focus Area 3: Development and Deployment of AI4
Most banks have established standards and processes for development, validation and deployment of AI to address key risks. MAS deems it good practice for banks and financial institutions to do the following:

In relation to the development of AI, to focus on data management, model selection, robustness and stability, explainability and fairness, as well as reproducibility and auditability. 
In relation to the validation of AI, to require independent validations or reviews of AI of higher risk materiality prior to deployment and to ensure that development and deployment standards have been adhered to. For AI of lower risk materiality, most banks conduct peer reviews that are calibrated to the risks posed by the use of AI prior to deployment. 
In relation to the deployment, monitoring and change management of AI, to perform predeployment checks, closely monitor deployed AI based on appropriate metrics, and apply the appropriate change management standards and processes to ensure that AI would behave as intended when deployed.

Generative AI and Third-Party AI5
MAS has noted that the use of generative AI is still in its early stages in banks and financial institutions. However, MAS suggests that banks and financial institutions should generally try to apply existing governance and risk management structures and processes where relevant and practicable. Innovation and risk management should be balanced by adopting the following: 

Strategies and approaches in which a bank leverages the general-purpose nature of generative AI for key enabling modules or services but limits its current scope to use cases that assist or augment human work and operational efficiency and are not directly customer-facing. 
Process controls, such as setting up cross-functional risk control checks at key stages of the generative AI’s life cycle and requiring human oversight for generative AI decisions with attention on user education and training on the limitations of generative AI tools.
Technical controls, such as selection, testing and evaluation of generative AI models in the bank’s use cases; developing reusable modules to facilitate testing and evaluation; assessing different aspects of generative AI model performance and risks; establishing input and output filters as guardrails to address toxicity, bias and privacy issues; and mitigating data security risk via measures such as the use of private clouds or on-premise servers and limiting the access of generative AI to sensitive information.
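As a deliberately simplified illustration of the input and output filters mentioned above, the Python sketch below wraps a stand-in generation function with pattern-based guardrails. Production guardrails typically rely on classifier models, policy engines, or vendor moderation services rather than regular expressions; every pattern and function name here is hypothetical.

# Simplified input/output guardrails around a generative AI call; patterns are illustrative.
import re

BLOCKED_INPUT_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",                      # looks like a US SSN (privacy)
    r"(?i)ignore (all )?previous instructions",    # naive prompt-injection signal
]
BLOCKED_OUTPUT_PATTERNS = [
    r"(?i)guaranteed investment returns",          # example of a disallowed claim
]

def passes(text: str, patterns: list[str]) -> bool:
    return not any(re.search(p, text) for p in patterns)

def guarded_generate(prompt: str, generate) -> str:
    if not passes(prompt, BLOCKED_INPUT_PATTERNS):
        return "[request blocked by input guardrail]"
    response = generate(prompt)
    if not passes(response, BLOCKED_OUTPUT_PATTERNS):
        return "[response withheld by output guardrail]"
    return response

# Usage with a stand-in for the underlying model:
print(guarded_generate("Summarise our complaints policy", lambda p: "Here is a summary..."))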

In relation to third-party AI, existing third-party risk management standards and processes continue to play an important role in banks’ efforts to mitigate risks. As far as practicable, MAS suggests that banks extend controls for internally developed AI to third-party AI. Banks should also supplement controls for third-party AI with other approaches to mitigate additional risks. These include the following:

Conducting compensatory testing to verify the third-party AI model’s robustness and stability and detect potential biases.
Developing robust contingency plans to address potential failures, unexpected behaviour of third-party AI, or the discontinuation of vendor support.
Updating legal agreements and contracts with third-party AI providers to include clauses that provide for performance guarantees, data protection, the right to audit and notification when AI is introduced in third-party providers’ solutions to the banks and financial institutions.
Improving staff training on AI literacy, risk awareness, and mitigation. 

Conclusion
In conclusion, MAS has highlighted that robust oversight and governance of AI, supported by comprehensive identification of AI usage, recording of AI inventories, appropriate risk materiality assessment, and development, validation and deployment standards, are important areas that financial institutions and banks will need to focus on when using AI. They will also need to keep in mind that the AI landscape will continue to evolve and that existing standards and processes will need to be reviewed and updated in line with MAS guidance and industry best practices to ensure proper governance and risk management of AI and generative AI.

Footnotes

1 “Artificial Intelligence Model Risk Management: Observations from a Thematic Review,” accessible at https://www.mas.gov.sg/publications/monographs-or-information-paper/2024/artificial-intelligence-model-risk-management (the Information Paper).
2 Information Paper paras. 4.1–4.5.
3 Information Paper paras. 5.1–5.3.
4 Information Paper paras. 6.1–6.5.
5 Information Paper paras. 7.1–7.2.

AI Tools on Trial: Emerging Litigation Trends Impacting AI-Powered Technologies

With the increase in AI-related litigation and regulatory action, it is critical for companies to monitor the AI technology landscape and think proactively about how to minimize risk. To help companies navigate these increasingly choppy waters, we’re pleased to present part one of our series discussing emerging legal trends. Future alerts in the series will cover:

Deep dives into regulatory activity at both the federal and state levels.
Risk mitigation steps companies can take when vetting and adopting new AI-based technologies, including chatbots, virtual assistants, speech analytics, and predictive analytics.
Strategies for companies that find themselves in court or in front of a regulator with respect to their use of AI-based technologies.

But first, some background.
AI on Trial: How Did We Get Here?
In October 2023, we wrote about the emerging legal risks impacting businesses using new technologies, such as AI-powered website chat functions. In Chat with Caution, we discussed a new wave of privacy litigation seeking to dramatically expand state wiretapping laws to encompass new customer service technologies, and we identified measures that companies should take to avoid being targeted.
During 2024, class action plaintiffs increased their focus on new technologies and wiretapping laws, and courts began to address some of the thornier legal issues as claims proceeded past initial pleading stages. In late October 2024, we wrote about a critical decision at the Massachusetts Supreme Judicial Court rejecting plaintiffs’ theory that wiretapping laws could be extended to website tracking technologies. Nonetheless, we noted that the decision in Massachusetts doesn’t help to resolve cases in courts in other states, especially in California, where judges may reach different conclusions.
Meanwhile, regulators at the Federal Trade Commission (FTC) recently launched “Operation AI Comply,” which it describes as a “new law enforcement sweep,” related to using new AI technologies in misleading or deceptive ways.
And, while the incoming administration may have different enforcement priorities regarding AI, Operation AI Comply is rooted in concerns about Big Tech that are shared by the leadership in both political parties.
Additionally, the FTC’s actions to date under Operation AI Comply are tied to its longstanding authority over deceptive trade practices — meaning that any substantial shifts in focus are unlikely in the short term. Regulatory action is not limited to the federal level: state attorneys general are taking notice as well, including in states that are litigation hotbeds, such as Massachusetts and California.
Plaintiffs Break Through in Federal Court
Though many courts have rejected the argument that wiretapping laws apply to new technologies such as chatbots, including the recent decision by the Massachusetts Supreme Judicial Court in Vita v. New England Baptist Hospital, plaintiffs have found success in the U.S. District Court for the Northern District of California.
In Yockey v. Salesforce, plaintiffs survived a motion to dismiss after plausibly alleging that an undisclosed third-party chatbot service provider violated wiretapping laws because it both intercepted the chats before they reached the intended recipient (the pharmacies with whom the customers thought they were chatting) and had the ability to use the intercepted chats for its own purposes, such as to improve its own products and services and for analytics.
Even though the pharmacies authorized the third party to provide a chatbot service, users were not made aware of this arrangement and did not consent to that third party receiving and transmitting their communications.
In courts that adopt a broad interpretation of wiretap laws, future cases could extend to other technologies beyond chatbots, such as scribing technologies, customer service center analytics and evaluation software, and other digital customer service tools.1
By the same token, future decisions could reject the reasoning in Yockey, at least with respect to technologies that simply record basic data points about a user’s behavior on a site but do not record the contents of a communication.
Even in courts that have rejected a broad interpretation of state wiretap laws, the risk of litigation remains, as plaintiffs shift their focus to federal wiretap laws paired with other state laws. This strategy was seen most recently in the amended complaint in Doe v. Atrius Health, Inc., where in response to the Vita decision, the plaintiff replaced a Massachusetts Wiretap Act claim with a Federal Wiretap Act claim and six state law claims, and subsequently removed the case to federal court.2 A similar strategy was also adopted in McManus v. Tufts Medical Center, Inc.3
Moving claims to federal court, however, may not be a panacea, as plaintiffs are likely to face additional hurdles. A recent example can be found in the Ninth Circuit. In Daghaly v. Bloomingdales, LLC, the plaintiff argued that Bloomingdales’ use of advertising technologies on its website violated the California Invasion of Privacy Act (CIPA).
The Ninth Circuit affirmed dismissal of the case without reaching the question of whether CIPA applies. Because the data transmitted to third parties was limited to information about the site visit and did not include meaningful communications, the court found that the plaintiff had not met the injury threshold required to access federal courts.
The law in this area is quickly evolving, and courts will likely continue to adopt differing views. We will continue to monitor the evolving case law and share additional information in future alerts in this series.
1See, e.g., Class Action Complaint and Demand for Jury Trial, Paulino v. Navy Federal Credit Union, No. 24-cv-03298 (N.D. Cal. May 31, 2024) (customer call center technology). The case has since been voluntarily dismissed and, at the time of writing, no information could be found on subsequent filings by the plaintiff.
2See Defendant’s Notice of Removal, at 3, Doe v. Atrius Health, Inc., No. 1:25-cv-10020 (D. Mass. Jan. 3, 2025).
3No. 1:25-cv-10008 (D. Mass. Jan. 2, 2025).

New Jersey Division on Civil Rights Issues New Guidance on ‘Algorithmic Discrimination’

On January 9, 2025, New Jersey Attorney General Matthew J. Platkin and the New Jersey Division on Civil Rights (DCR) issued a thirteen-page “Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination.”

Quick Hits

The New Jersey Division on Civil Rights (DCR) issued guidance that explains how an employer’s use of automated decision-making tools can lead to algorithmic discrimination that violates the New Jersey Law Against Discrimination (NJLAD).
The guidance does not impose any new obligations on employers but reinforces the importance of NJLAD compliance and instructs that the NJLAD “draws no distinctions based on the mechanism of discrimination.”
Given the increasing use of AI tools to make employment decisions, the DCR explains that all “New Jerseyans [should] understand what these tools are, how they are being used, and the risks and benefits associated with their use.”

The DCR rolled out the guidance as part of the agency’s launch of a new Civil Rights and Technology Initiative “to address the risks of discrimination and bias-based harassment stemming from the use of artificial intelligence (AI) and other advanced technologies” and provide guidance concerning how the New Jersey Law Against Discrimination (NJLAD) applies to these new technologies. The guidance does not impose any new requirements that are not already included in the NJLAD or establish any new rights or obligations. However, given the DCR’s decision to release guidance on the topic, employers doing business in New Jersey may wish to audit their existing uses of AI to ensure that their policies and practices comply with the NJLAD. While AI technology can be complex, and an employer may not fully grasp how a particular tool generates results, the guidance reinforces that employers are fully responsible for the AI technology they utilize and may not delegate their compliance responsibilities to third parties.
What Are Automatic Decision-Making Tools?
The guidance defines an “automatic decision-making tool” as “any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process.” An automated decision-making tool might be used to determine whether a human resources professional will review a certain resume, hire a job applicant or terminate an employee. The DCR referenced May 18, 2023, guidance from the U.S. Equal Employment Opportunity Commission (EEOC) in providing these examples. See “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”
The DCR explained that automated decision-making tools “accomplish their aims by using algorithms, or sets of instructions that can be followed, typically by a computer, to achieve a desired outcome.” Depending upon how the algorithms are designed, the tools “can create classes of individuals who will be either advantaged or disadvantaged in ways that may exclude or burden them based on their protected characteristics.” Given the role that algorithms play in the operation of these tools, the DCR defines any discrimination resulting from them as “algorithmic discrimination.”
Citing recent studies, the guidance explains how, for example, an automated decision-making tool that ranks job applicants of a particular race or gender more favorably (or less favorably) than applicants in another group could lead to discriminatory hiring. The DCR further explained that while these tools can also be used in a positive way to reduce bias and discrimination, given the risk of achieving the wrong outcome, employers must fully understand the mechanics of any AI tool upon which they rely to make employment decisions, including the risks and benefits involved.
How Do Automated Decision-Making Tools Lead to Discriminatory Outcomes?
The DCR acknowledges that it may not be easy to detect whether a particular automated decision-making tool might lead to discriminatory outcomes because the calculations made by these tools “can be invisible and not well understood.” Nevertheless, the agency explained that when discriminatory outcomes do arise, it is generally because of the way the tools are (1) designed, (2) trained, or (3) deployed.
Design
The guidance explains that a tool’s design may be intentionally or unintentionally skewed. The tool’s developer makes decisions about “the output the tool provides, the model or algorithms the tool uses, and what inputs the tool assesses.” Each of these decisions could introduce bias into the tool, which could then generate discriminatory outcomes. Referring to an example from an EEOC enforcement action, the agency explains how a tool was programmed to exclude job applicants above a certain age, with the age cutoff differing by gender. The case was resolved with the company agreeing to stop requesting age-related information from applicants in the future.
Training
The DCR explained that before an automated decision-making tool is used in a real-world environment, the tool must be “trained.” This training “occurs by exposing the tool to training data from which the tool learns correlations or rules.” If the training data that is relied upon reflects the developer’s own biases, or otherwise reflects institutional inequities, the tool can become biased through the training process itself.
Deployment
Finally, the guidance explains that algorithmic discrimination can occur once the tool is utilized in the real world. If, for example, the employer intentionally uses the tool with members of a particular protected class, doing so can lead to purposeful discrimination. Or “[i]f a tool is used to make decisions that it was not designed to assess, its deployment may amplify any bias in the tool and systemic inequities that exist outside of the tool.” Real-world use of the tool may also reveal biases that did not reveal themselves during the testing process. If the tool is flawed, it can contribute to discriminatory decisions that are then fed back into the tool for further training. “Each iteration of this loop exacerbates the tool’s bias.”
NJLAD ‘Draws No Distinctions’ Based on Discrimination Mechanism
The DCR concluded its guidance by reinforcing the NJLAD’s prohibitions on employment discrimination. Whether prohibited discrimination occurs because of the actions of a “live” human being or based on the decisions of an AI tool is immaterial. As always, the impact of an employer’s decision is the critical issue. As the DCR put it, “the LAD draws no distinctions based on the mechanism of discrimination.” If an employer uses an automated decision-making tool to discriminate against a protected class, that employer is liable for unlawful discrimination, the same as if the employer engaged in that behavior without a tool. Depending on the circumstances, such actions may constitute intentional discrimination or disparate impact discrimination.
If use of an automated decision-making tool generates decisions that disproportionately impact members of a protected class, the employer that used the tool may be liable for disparate impact discrimination. Under well-established principles of disparate impact discrimination, even if the tool serves a “substantial, legitimate nondiscriminatory interest,” use of the tool could be argued to be unlawful if a “less discriminatory alternative” exists. The guidance shares an example about a company that uses an automated decision-making tool to assess contract bids. If that tool disproportionately excludes women-owned businesses, the tool is problematic and may cause disparate impact discrimination. Similarly, if a store uses facial recognition software to flag shoplifters, and the software disproportionately generates false positives for customers who wear certain religious headwear, the tool’s design is flawed, and the store may be liable for disparate impact discrimination.
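For employers weighing the kind of bias audit discussed later in this update, the sketch below shows one common first-pass screen for disproportionate impact: the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group whose selection rate falls below 80% of the highest group’s rate is often flagged for further review. The DCR guidance does not prescribe any particular statistical test, and the group labels and numbers below are hypothetical.

```python
# Illustrative only: a simplified disparate-impact screen based on the
# EEOC's "four-fifths" rule of thumb. The DCR guidance does not mandate a
# particular statistical test; the group labels and data here are hypothetical.

from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group.
    Ratios below 0.80 are commonly treated as evidence of adverse impact."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical output of an automated resume-screening tool.
    decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40
                 + [("group_b", True)] * 35 + [("group_b", False)] * 65)
    rates = selection_rates(decisions)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.80 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

Passing the four-fifths screen does not end the inquiry; under the principles described above, an employer may still need to consider whether a less discriminatory alternative exists.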
Use of Automated Decision-Making Tools and Reasonable Accommodations
The guidance also provides examples of how AI tools can affect applicants or employees who require reasonable accommodations. If, for example, the employer relies upon an AI tool to test an applicant’s typing speed, and the tool cannot assess the speed of an applicant who utilizes a nontraditional keyboard due to a disability, the employer’s use of the tool may cause discrimination against a disabled applicant.
In another context, if an AI tool is not “trained” (see above) on data that includes individuals who require accommodations, the tool may unintentionally penalize those individuals. Similarly, an AI screening tool used in the hiring process may screen out individuals who state in their applications that they require an accommodation to perform the job. Another example is an AI tool that tracks employee productivity by the number of breaks an employee takes; such a tool may disproportionately flag for discipline employees who require additional break time to accommodate a disability or to express breast milk. If an employer relies upon such tools to discipline employees, the employer could violate the NJLAD.
Next Steps
While the guidance creates no new obligations for employers, its issuance strongly suggests that the DCR, like the EEOC, the Office of Federal Contract Compliance Programs (OFCCP), and the U.S. Department of Labor (DOL), may focus increased attention on employers’ use of automated decision-making tools. New Jersey employers may want to consider reviewing and evaluating their use of these tools and subjecting them to a bias audit. Additionally, because employers can be liable for unlawful algorithmic discrimination even if they rely on a vendor’s representation that the tool it offers is sound and will not lead to discriminatory outcomes, employers may want to evaluate their vendor contracts and work closely with their vendors to determine how these potential risks and liabilities are spelled out.
Employers may want to stay tuned for new developments on the legislative front involving the use of AI. The New Jersey Legislature introduced two bills early last year (A. 3854 and A. 3911) that seek to regulate employers’ use of this technology in the hiring process. Among other provisions, A. 3854 would require companies that sell automated decision-making tools to conduct an annual bias audit and would require employers relying on such tools to notify job candidates that such technology was used in the hiring process and provide a summary of its most recent bias audit. The proposed legislation would also impose monetary penalties of $500 for a first offense and $500 to $1,500 for each subsequent offense. A. 3911 addresses the use of AI-enabled video interviews and, among other provisions, would require employers to obtain a candidate’s consent to use the technology. If either bill is enacted, New Jersey would join other jurisdictions, including Colorado, Illinois, and New York City, that have taken steps to regulate the use of AI in employment decision making.

California AG Issues Legal Advisories on the Application of California Law to the Use of AI

On January 13, 2025, California Attorney General Rob Bonta issued two legal advisories on the use of AI, including in the healthcare context. The first legal advisory (“AI Advisory”) advises consumers and entities about their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws with respect to the use of AI, while the second (“Healthcare AI Advisory”) provides guidance specific to healthcare entities about their obligations under California law regarding the use of AI.
The AI Advisory notes that businesses have obligations with respect to their use of AI under existing California law, including the California Consumer Privacy Act of 2018, the California Invasion of Privacy Act, the Student Online Personal Information Protection Act, and the Confidentiality of Medical Information Act.
The AI Advisory also notes the applicability of recently passed AI laws (with effective dates in 2025 and 2026) to businesses’ use of AI, including laws providing:

disclosure requirements for businesses (e.g., regarding training data used in AI models, AI-generated telemarketing, detection tools for content created by generative AI);
contractual and consent requirements relating to the unauthorized use of likeness in the entertainment industry and other contexts;
disclosure and content removal requirements relating to the use of AI in election and campaign materials;
prohibition of and reporting requirements related to exploitative uses of AI (i.e., child pornography, nonconsensual pornography using deepfake technology, sexually explicit digital identity theft); and
supervision requirements for use of AI tools in healthcare settings.

The Healthcare AI Advisory provides guidance specific to healthcare providers, insurers, vendors, investors and other healthcare entities about their obligations with respect to their use of AI under California law, including:

health consumer protection laws (e.g., prohibition on unlawful, unfair or fraudulent business acts or practices; professional licensing standards and other prohibitions relating to the practice of medicine by non-human entities; requirements relating to management of health insurance);
anti-discrimination laws (e.g., requirements relating to protected classifications); and
patient privacy and autonomy laws (e.g., use and disclosure of patient data, confidentiality of patient data, patient consent, patient rights).

The Healthcare AI Advisory emphasizes the importance of taking proactive steps to comply with existing California law, even as additional AI laws and regulations are anticipated, given the potential risk of harm to patients, healthcare systems and public health.

House Bipartisan Task Force on Artificial Intelligence Report

In February 2024, the House of Representatives launched a bipartisan Task Force on Artificial Intelligence (AI). The group was tasked with studying and providing guidance on ways the United States can continue to lead in AI and fully capitalize on the benefits it offers while mitigating the risks associated with this emerging technology. On December 17, 2024, after nearly a year of holding hearings and meeting with industry leaders and experts, the group released the long-awaited Bipartisan House Task Force Report on Artificial Intelligence. The report touches on how the technology affects nearly every industry, from rural agricultural communities to energy and the financial sector, to name just a few. It is clear that the AI policy and regulatory space will continue to evolve while remaining front and center for both Congress and the new administration as lawmakers, regulators, and businesses continue to grapple with this technology.
The 274-page report highlights “America’s leadership in its approach to responsible AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.” Specifically, it outlines the Task Force’s key findings and recommendations for Congress to legislate in over a dozen different sectors. The Task Force co-chairs, Representative Jay Obernolte (R-CA) and Representative Ted Lieu (D-CA), called the report a “roadmap for Congress to follow to both safeguard consumers and foster continued US investment and innovation in AI,” and a “starting point to tackle pressing issues involving artificial intelligence.” 
There was a high level of bipartisan work on AI in the 118th Congress, and although most of the legislation in this area did not end up becoming law, the working group report provides insight into what legislators may do this year and which industries may be of particular focus. Our team continues to monitor legislation, Congressional hearings, and the latest developments writ large in these industries as we transition into the 119th Congress. See below for a sector-by-sector breakdown of a number of findings and recommendations from the report.
Data Privacy
The report’s section on data privacy discusses advanced AI systems’ need to collect huge amounts of data, the significant risks this creates for the unauthorized use of consumers’ personal data, the current state of US consumer privacy protection laws, and recommendations to address these issues. 
It begins with a discussion of AI systems’ need for “large quantities of data from multiple diverse sources” to perform at an optimal level. Companies collect and license this data in a variety of ways, including collecting data from their own users, scraping data from the internet, or some combination of these and other methods. Further, some companies collect, package, and sell scraped data “while others release open-source data sets.” These collection methods raise their own set of issues. For example, according to the report, many websites use “a voluntary standard” to state that they should not be scraped, but those requests are often ignored and litigation has ensued. The report also notes that some companies “are updating their privacy policies in order to permit the use of user data to train AI models” without otherwise informing users that their data is being used for this purpose, a practice the European Union and the Federal Trade Commission have challenged. In response, “some companies are turning to privacy-enhanced technologies, which seek to protect the privacy and confidentiality of data when sharing it.” Companies also are looking at “synthetic data.”
In turn, the report discusses the types of harms that consumers frequently experience when their personal and sensitive data is shared intentionally or unintentionally without their authorization. The list includes physical, economic, emotional, reputational, discrimination, and autonomy harms.
The report follows with a discussion of the current state of US consumer privacy protection laws. It kicks off with a familiar tune: “Currently, there is no comprehensive US federal data privacy and security law.” It notes that there are several sector-specific federal privacy laws, such as those intended to protect health, financial, and children’s data, but, as has become clear from this year’s Congressional debate, even these laws need to be updated. It also notes that 19 states have adopted their own privacy laws but that their standards vary, which, as in the case of state data breach laws, has “created a patchwork of rules and regulations with many drawbacks,” causing confusion among consumers and increased costs and lawsuits for businesses. It concludes by observing that federal legislation preempting state data privacy laws would have both advantages and disadvantages. The report outlines three Key Findings: (1) “AI has the potential to exacerbate privacy harms;” (2) “Americans have limited recourse for many privacy harms;” and (3) “Federal privacy laws could potentially augment state laws.”
Based on its findings, the report recommends that Congress should: (1) help “in facilitating access to representative data sets in privacy-enhanced ways” and “support partnerships to improve the design of AI systems” and (2) ensure that US privacy laws are “technology neutral” and “can address the most salient privacy concerns with respect to the training and use of advanced AI systems.”
National Security 
The report highlights both the potential benefits of emerging technologies to US defense capabilities, as well as the risks, especially if the United States is outpaced by its adversaries in development. The report discusses the status and successes of current AI programs at the Department of Defense (DOD), the Army, and the Navy. The report categorizes issues facing development of AI in the national security arena into technical and nontechnical impediments. The technical impediments include increased data usage, infrastructure/compute power, attacks on algorithms and models, and talent acquisition, especially when competing with the private sector in the workforce. The report also identifies perceived institutional challenges facing DOD, saying “acquisition professionals, senior leaders, and warfighters often hesitate to adopt new, innovative technologies and their associated risk of failure. DOD must shift this mindset to one more accepting of failure when testing and integrating AI and other innovative technologies.” The nontechnical challenges identified in the report revolved around third-party development of AI and the inability of the United States to control systems it does not create. The report notes that advancements in AI are driven primarily by the private sector and encourages DOD to capitalize on that innovation, including through more timely procurement of AI solutions at scale with nontraditional defense contractors. 
Chief among the report’s findings and recommendations is a call to Congress to explore ways that the US national security apparatus can “safely adopt and harness the benefits of AI” and to use its oversight powers to home in on AI activities for national security. Other findings focus on the need for advanced cloud access, the value of AI in contested environments, and the ability of AI to manage DOD business processes. The additional recommendations were to expand AI training at DOD, continue oversight of autonomous weapons policies, and support international cooperation on AI through the Political Declaration on Responsible Military Use of AI. The report indicates that Congress will be paying much more attention to the development and deployment of AI in the national security arena going forward, and now is the time for impacted stakeholders to engage on this issue.
Education and the Workforce
The report also highlights the role of AI technologies in education and the promise and challenges they could pose for the workforce. The report recognizes that despite the worldwide demand for science, technology, engineering, and mathematics (STEM) workers, the United States has a significant gap in the talent needed to research, develop, and deploy AI technologies. As a result, the report found that training and educating US learners on AI topics will be critical to continuing US leadership in AI technology. The report notes that training the future generations of talent in AI-related fields needs to start with AI and STEM education. Digital literacy has expanded to encompass new literacies, such as media, computer, data, and now AI literacy, and a key challenge is securing the resources needed to build AI literacy.
US leadership in AI will require growing the pool of trained AI practitioners, including people with skills in researching, developing, and incorporating AI techniques. The report notes that this will likely require expanding workforce pathways beyond the traditional educational routes and a new understanding of the AI workforce, including its demographic makeup, changes in the workforce over time, employment gaps, and the penetration of AI-related jobs across sectors. A critical aspect to understanding the AI workforce will be having good data. US leadership in AI will also require public-private partnerships as a means to bolster the AI workforce. This includes collaborations between educational institutions, government, and industries with market needs and emerging technologies.
While the automation of human jobs is not new, using AI to automate tasks across industries has the potential to displace jobs that involve repetitive or predictable tasks. In this regard, the report notes that while AI may displace some jobs, it will augment existing jobs and create new ones. Such new jobs will inevitably require more advanced skills, such as AI system design, maintenance, and oversight. Other jobs, however, may require less advanced skills. The report adds that harnessing the benefits of AI systems will require a workforce capable of integrating these systems into their daily jobs. It also highlights several existing programs for workforce development, which could be updated to address some of these challenges.
Overall, the report found that AI is increasingly used in the workplace by both employers and employees. US AI leadership would be strengthened by utilizing a more skilled technical workforce. Fostering domestic AI talent and continued US leadership will require significant improvements in basic STEM education and training. AI adoption requires AI literacy and resources for educators.
Based on the above, the report recommends the following:

Invest in K-12 STEM and AI education and broaden participation.
Bolster US AI skills by providing needed AI resources.
Develop a full understanding of the AI workforce in the United States.
Facilitate public-private partnerships to bolster the AI workforce.
Develop regional expertise when supporting government-university-industry partnerships.
Broaden pathways to the AI workforce for all Americans.
Support the standardization of work roles, job categories, tasks, skill sets, and competencies for AI-related jobs.
Evaluate existing workforce development programs.
Promote AI literacy across the United States.
Empower US educators with AI training and resources.
Support National Science Foundation curricula development.
Monitor the interaction of labor laws and worker protections with AI adoption.

Energy Usage and Data Centers
AI has the power to modernize our energy sector, strengthen our economy, and bolster our national security, but only if the grid can support it. As the report details, electrical demand is predicted to grow over the next five years as data centers, among other major energy users, continue to come online. When demand from these technologies outpaces new power capacity, it can “cause supply constraints and raise energy prices, creating challenges for electrical grid reliability and affordable electricity.” While data centers take only a few years to construct, new sources of power, such as power plants and transmission infrastructure, can take a decade or more to complete. To meet growing electrical demand and support US leadership in AI, the report recommends the following:

Support and increase federal investments in scientific research that enables innovations in AI hardware, algorithmic efficiency, energy technology development, and energy infrastructure.
Strengthen efforts to track and project AI data center power usage.
Create new standards, metrics, and a taxonomy of definitions for communicating relevant energy use and efficiency metrics.
Ensure that AI and the energy grid are a part of broader discussions about grid modernization and security.
Ensure that the costs of new infrastructure are borne primarily by those customers who receive the associated benefits.
Promote broader adoption of AI to enhance energy infrastructure, energy production, and energy efficiency.

Health Care
The report highlights that AI technologies have the potential to improve multiple aspects of health care research, diagnosis, and care delivery. The report provides an overview of use to date and its promise in the health care system, including with regard to drug, medical device, and software development, as well as in diagnostics and biomedical research, clinical decision-making, population health management, and health care administration. The report also highlights the use of AI by payers of health care services both for the coverage of AI-provided services and devices and for the use of AI tools in the health insurance industry.
The report notes that the evolution of AI in health care has raised new policy issues and challenges. These include issues involving data availability, utility, and quality, as the data required to train AI systems must exist, be of high quality, and be able to be transferred and combined. They also include issues concerning interoperability and transparency: AI-enabled tools must be able to integrate with health care systems, including electronic health record (EHR) systems, and they must be transparent enough for providers and other users to understand how an AI model makes decisions. Data-related risks also include the potential for bias, which can be introduced during development or emerge as the system is deployed. Finally, there is a lack of legal and ethical guidance regarding accountability when AI produces incorrect diagnoses or recommendations.
Overall, the report found that AI’s use in health care can potentially reduce administrative burdens and speed up drug development and clinical diagnosis. When used appropriately, these uses of AI could lead to increased efficiency, better patient care, and improved health outcomes. The report also found that the lack of standards for medical data and algorithms impedes system interoperability and data sharing. The report notes that if AI tools cannot easily connect with all relevant medical systems, their adoption and use could be impeded.
Based on the above, the report recommends the following:

Encourage the practices needed to ensure AI in health care is safe, transparent, and effective.
Maintain robust support for health care research related to AI.
Create incentives and guidance to encourage risk management of AI technologies in health care across various deployment conditions to support AI adoption and improve privacy, enhance security, and prevent disparate health outcomes.
Support the development of standards for liability related to AI issues.
Support appropriate payment mechanisms without stifling innovation.

Financial Services
With respect to financial services, the report emphasizes that AI has been used within the financial services system for decades, by both industry and financial regulators alike. Key use cases have included fraud detection, underwriting, debt collection, customer onboarding, real estate, investment research, property management, customer service, and regulatory compliance, among others. The report also notes that AI presents both significant risks and opportunities for the financial system, so it is critical to be thoughtful when considering and crafting regulatory and legislative frameworks in order to protect consumers and the integrity of the financial system while not stifling technological innovation. As such, the report states that lawmakers should adopt a principles-based approach that is agnostic to technological advances, rather than a technology-based approach, in order to preserve the longevity of the regulatory ecosystem as technology evolves, particularly given the rapid rate at which AI technology is advancing. Importantly, the report notes that small financial institutions may be at a significant disadvantage with respect to AI adoption, given a lack of sufficient resources to leverage AI at scale, and states that regulators and lawmakers must ensure that larger financial institutions are not inadvertently favored in policy so as not to limit the ability of smaller institutions to compete or enter the market. Moreover, the report stresses the need to maintain relevant consumer and investor protections as AI is utilized, particularly with respect to data privacy, discrimination, and predatory practices.
A Multi-Branch Approach to AI/Next Steps
The Task Force recognizes that AI policy will not fall strictly under the purview of Congress. Co-chair Obernolte shared that he has met with David Sacks, President Trump’s “AI Czar,” as well as members of the transition team to discuss what is in the report. 
We will be closely following how both the administration and Congress act on AI in 2025, and we are confident that no industry will be left untouched.
 
Vivian K. Bridges, Lauren E. Hamma, and Abby Dinegar contributed to this article.

USPTO Announces New Effort to Promote AI and Emerging Technologies

The U.S. Patent and Trademark Office (USPTO) recently announced an official Artificial Intelligence Strategy that outlines how the Office plans to address the promise and challenges of artificial intelligence (AI) in its internal operations as well as in the development of intellectual property (IP) policy.  According to information provided in the newly released document, annual filings of AI-related patent applications have increased more than two-fold since 2002 and are up 33% since 2018. Additionally, AI has permeated a wide range of technology sectors with AI-related patent applications appearing in 60% of all the technology subclasses used by the USPTO in 2023.
The initiative outlined in the AI Strategy document seeks to support responsible and inclusive AI innovation, implement AI in furtherance of the USPTO’s mission, and maintain the U.S.’s competitive edge in global innovation. At the same time, USPTO officials say the new AI Strategy mitigates risks and fosters responsible use of artificial intelligence.
Specifically, the AI Strategy document provides a roadmap designed to enhance the agency’s efforts in promoting AI innovation within its operations and the broader intellectual property sector, through five key focus areas:
(1) Advance the development of IP policies that promote inclusive AI innovation and creativity.
(2) Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development.
(3) Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
(4) Develop AI expertise within the USPTO’s workforce.
(5) Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.

With respect to the first focus area, the Strategy document states that:
“As appropriate, the USPTO will advocate for the development of balanced and sound judicial precedents and legislation that promote both AI innovation and respect for IP rights, while not unnecessarily constraining future AI innovation. For example, the USPTO would advocate for judicial positions, consistent with existing legal precedent, that would encourage innovation with respect to issues including AI-generated prior art and AI-assisted inventions.”

The USPTO Strategy document also notes that the rapid advancement of AI technologies could not only impact patent-related policy, including inventorship, subject-matter eligibility, obviousness, enablement, and written description, but also affect “the volume and character of submitted applications.” Some of these topics, as noted below, have been addressed via guidance released by the USPTO this past year.
AI has the potential to not only transform the tools used by Examiners to examine patent applications, but also to redefine the inventive process itself as well as the framework by which inventions are evaluated. For patent holders and attorneys, the release of this strategy signals the USPTO’s commitment to fostering an ecosystem where AI advancements can thrive responsibly, driving innovation and protecting intellectual property.
This announcement caps an active 12-month period for the Office with respect to AI policy and guidance. On February 13, 2024, the Office published inventorship guidance for AI-assisted inventions followed by updated patent eligibility guidance for AI inventions on July 17, 2024. The release of the USPTO’s official AI Strategy plan, along with the prior guidance, is responsive to and in alignment with the Biden-Harris Administration’s October 2023 Executive Order 14110 on the safe and secure development and use of AI. Given that the AI Strategy plan was released during the final week of the Biden-Harris Administration, the degree to which it is implemented will depend on the Trump-Vance Administration. Expect further notices and guidance regarding these topics as this transition occurs.

States Ring in the New Year with Proposed AI Legislation

As we enter 2025, the rapid growth of artificial intelligence (AI) presents both transformative opportunities and pressing legal challenges, particularly in the workplace.
Employers must navigate an increasingly complex regulatory landscape to ensure compliance and avoid liability. With several states proposing AI regulations that would impact hiring practices and other employment decisions, it is critical for employers to stay ahead of these developments.
New York
New York’s proposed legislation, which if passed would become effective January 1, 2027, would provide guardrails for New York employers implementing AI to assist in hiring, promoting, or making other decisions pertaining to employment opportunities. Unlike New York City Local Law 144, which covers only certain employment decisions, the New York Artificial Intelligence Consumer Protection Act (“NY AICPA”), A 768, takes a risk-based approach to AI regulation, much like that of Colorado’s SB 24-205. The NY AICPA would specifically regulate all “consequential decisions” made by AI, including those having a “material legal or similarly significant effect” on any “employment or employment opportunity.” The bill would impose compliance obligations on “developers” and “deployers” of high-risk AI decision systems.
If passed, NY AICPA would require developers to:

Use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. This would include undertaking bias and governance audits by an independent third-party auditor qualified by the State’s Attorney General.
Make available to deployers and other relevant developers documentation describing the intended uses and benefits, the known harmful or inappropriate uses, the governance parameters, the training data, and the expected outputs of the AI system.
Publish a statement summarizing the types of high-risk AI decision systems it has developed or modified, currently makes available to others, and how it manages risks of algorithmic discrimination.

NY AICPA imposes similar requirements on deployers (which would include employers using AI systems to aid in employment decision-making). Additionally, deployers must:

Implement a risk management policy and program to govern the use of high-risk AI decision systems, which would be evaluated based on NIST’s current AI Risk Management Framework or a similar risk management framework; the size and complexity of the deployer; the nature and scope of the AI deployed; and the sensitivity and volume of data processed in connection with the AI system (a sketch of one way to structure such a policy appears below).
Complete an impact assessment, at least annually and within 90 days after an intentional and substantial modification, of the AI system.
Publish on its website a statement summarizing the types of high-risk AI decision systems used by the deployer; how the deployer manages risks of algorithmic discrimination; and the nature, source, and extent of the information collected and used by the deployer.
When using the AI system to make, or be a substantial factor in making, a consequential decision concerning an individual, (i) notify the consumer of the use of the AI system; (ii) provide the consumer with a statement disclosing the purpose of the AI system and nature of the consequential decision, contact information for the deployer, a plain-language description of the AI system, and where to access the website statement summarizing its AI use.
If the deployer uses the AI system to make an adverse consequential decision, disclose to the consumer the principal reason for reaching that decision, and provide an opportunity to correct any “incorrect personal data” that the AI system processed in making the decision and an opportunity to appeal the decision.

Deployers/employers, however, can contract with developers to bear many of these compliance obligations if certain conditions are met. 
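For reference in building the risk management policy noted in the list above, the NIST AI Risk Management Framework organizes activities under four functions: Govern, Map, Measure, and Manage. The skeleton below is a hypothetical illustration of how a deployer might structure a policy record around those functions; the field names and entries are assumptions for illustration only, not language from the bill or from NIST.

```python
# Hypothetical skeleton (illustrative only) of a deployer's AI risk management
# policy record, organized by the four functions of the NIST AI Risk
# Management Framework: Govern, Map, Measure, Manage.

from dataclasses import dataclass, field

@dataclass
class RiskManagementPolicy:
    system_name: str
    govern: list[str] = field(default_factory=list)   # accountability, roles, oversight cadence
    map: list[str] = field(default_factory=list)      # intended use, context, affected groups
    measure: list[str] = field(default_factory=list)  # metrics, bias testing, monitoring
    manage: list[str] = field(default_factory=list)   # mitigation, incident response, review

# Example record for a hypothetical hiring-tool deployment.
policy = RiskManagementPolicy(
    system_name="resume-screening-tool",
    govern=["designate an accountable owner", "set an annual review by leadership"],
    map=["document the intended use in hiring", "identify affected applicant groups"],
    measure=["compare selection rates by group", "log model versions and updates"],
    manage=["define corrective actions for flagged bias", "maintain a vendor escalation path"],
)

if __name__ == "__main__":
    print(policy)
```

Organizing the policy this way can make it easier, in a later impact assessment or audit, to show which control addresses which obligation.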
The impact assessments required by NY AICPA would analyze the reasonably foreseeable risks of algorithmic discrimination and identify steps to mitigate these risks. These audits would specifically evaluate whether the AI system disproportionately affects certain groups based on protected characteristics. If the audit identifies biases in the AI, the employer would have to engage in corrective actions, including training the system to recognize and avoid discriminatory patterns. If the AI system plays a significant role in making an employment decision, such as hiring or firing, employers must be prepared to justify the decision and offer an employee the opportunity to appeal the decision, among other things. 
The MIT Technology Review also reports that New York Assemblymember Alex Bores has drafted a yet-to-be-released Responsible AI Safety and Education Act (“RAISE Act”), inspired by an unsuccessful California bill (SB 1047), that would require developers to establish safety plans and assessment models for AI systems. From an employment perspective, the RAISE Act would shield whistleblowers in AI companies who share information about a problematic AI model from retaliation. If it follows the approach of SB 1047, the RAISE Act may require covered entities to submit a statement of compliance to the state’s Attorney General within 30 days of use of relevant AI systems.
Also pending in New York are Senate Bill S7623A and Assembly Bill A9315. Both bills would require employers to conduct impact assessments and provide written notice to employees when such systems are used. If passed, both laws would specifically limit how employers may use, and act upon, employee data collected via AI systems and monitoring.
Massachusetts
If passed, Massachusetts’ proposed Artificial Intelligence Accountability and Consumer Protection Act (“MA AIACPA”), HD 396, also would regulate high-risk AI systems. MA AIACPA imposes similar obligations on developers and deployers, including the requirements of maintaining risk management programs and conducting impact assessments. 
Deployers, including employers, must notify consumers when an AI system materially influences a consequential decision. As part of this notification, employers are required to provide a statement on the purpose of the AI system, an explanation of how AI influenced the decisions, and a process to appeal the decision.
Any corporation operating in the state that uses AI to target specific consumer groups or influence behavior must disclose the methods, purposes, and context in which the AI is used, the ways in which the AI systems are designed to influence consumer behavior, and the details of any third-party entities involved. This public corporate disclosure statement must be available on the website and included in any terms and conditions provided to consumers prior to significant interaction with an AI system. Specifically, corporations must notify individuals when AI targets or materially influences their decisions, and when using algorithms to determine pricing, eligibility, or access to services.
New Mexico
New Mexico’s proposed Artificial Intelligence Act, HB 60, also takes the risk-based approach to AI regulation. Like the bills in New York and Massachusetts, the New Mexico Artificial Intelligence Act contains requirements for both developers and deployers, including the maintenance of a risk management policy addressing the known or reasonably foreseeable risk of algorithmic discrimination, conducting impact assessments at regular intervals, and publishing a notice on their website summarizing the AI systems used. If it passes, the Artificial Intelligence Act will become effective July 1, 2026.
Virginia
The Virginia High Risk Artificial Intelligence Developer and Deployer Act, HB 2094, would create operating standards for developers and deployers of high-risk AI systems. Designed to protect against the risks of algorithmic discrimination, these operating standards largely track the proposed legislation in other states. If passed, the act will go into effect on July 1, 2026.
Texas
In the final days of 2024, Texas introduced the Texas Responsible AI Governance Act (TRAIGA), which, like the bills in other states, would regulate the use of AI by requiring: (1) the creation of a risk identification and management policy; (2) semi-annual impact assessments; (3) disclosure and analysis of risk; (4) a description of transparency measures; and (5) human oversight in certain instances. The Texas bill would take effect on September 1, 2025.
Connecticut
Connecticut’s S.B. 2, while currently stalled, is expected to be re-introduced in 2025. If passed into law, Connecticut employers would need to implement protocols to protect against algorithmic discrimination, conduct impact assessments, and notify employees. Employers that use off-the-shelf AI would not have to ensure the AI product is non-discriminatory, as long as the product is used as intended.
What Employers Should Do Now
As can be seen from the similarities in proposed legislation across the country, a common theme has developed with respect to AI regulation: developers and deployers must implement an AI governance plan aimed at identifying and reducing the risk of algorithmic discrimination and ensuring ongoing monitoring of the AI system or tools. Although these bills are still pending, employers should commence development of comprehensive AI governance strategies. This proactive approach not only ensures regulatory readiness but also demonstrates an organization’s commitment to ethical and responsible AI use, which are important considerations for stakeholders and enforcement agencies alike.