California AG Issues Legal Advisories on the Application of California Law to the Use of AI

On January 13, 2025, California Attorney General Rob Bonta issued two legal advisories on the use of AI, including in the healthcare context. The first legal advisory (“AI Advisory”) advises consumers and entities about their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws with respect to the use of AI, while the second (“Healthcare AI Advisory”) provides guidance specific to healthcare entities about their obligations under California law regarding the use of AI.
The AI Advisory notes that businesses have obligations with respect to their use of AI under existing California law, including the California Consumer Privacy Act of 2018, the California Invasion of Privacy Act, the Student Online Personal Information Protection Act and the Confidentiality of Medical Information Act.
The AI Advisory also notes the applicability of recently passed AI laws (with effective dates in 2025 and 2026) to businesses’ use of AI, including laws providing:

disclosure requirements for businesses (e.g., regarding training data used in AI models, AI-generated telemarketing, detection tools for content created by generative AI);
contractual and consent requirements relating to the unauthorized use of likeness in the entertainment industry and other contexts;
disclosure and content removal requirements relating to the use of AI in election and campaign materials;
prohibition of and reporting requirements related to exploitative uses of AI (i.e., child pornography, nonconsensual pornography using deepfake technology, sexually explicit digital identity theft); and
supervision requirements for use of AI tools in healthcare settings.

The Healthcare AI Advisory provides guidance specific to healthcare providers, insurers, vendors, investors and other healthcare entities about their obligations with respect to their use of AI under California law, including:

health consumer protection laws (e.g., prohibition on unlawful, unfair or fraudulent business acts or practices; professional licensing standards and other prohibitions relating to the practice of medicine by non-human entities; requirements relating to management of health insurance);
anti-discrimination laws (e.g., requirements relating to protected classifications); and
patient privacy and autonomy laws (e.g., use and disclosure of patient data, confidentiality of patient data, patient consent, patient rights).

The Healthcare AI Advisory emphasizes the importance of taking proactive steps to comply with existing California law, even as additional AI laws and regulations are anticipated, given the potential risk of harm to patients, healthcare systems and public health.

House Bipartisan Task Force on Artificial Intelligence Report

In February 2024, the House of Representatives launched a bipartisan Task Force on Artificial Intelligence (AI). The group was tasked with studying and providing guidance on ways the United States can continue to lead in AI and fully capitalize on the benefits it offers while mitigating the risks associated with this rapidly emerging technology. On December 17, 2024, after nearly a year of holding hearings and meeting with industry leaders and experts, the group released the long-awaited Bipartisan House Task Force Report on Artificial Intelligence. This robust report touches on how the technology affects almost every industry, ranging from rural agricultural communities to energy and the financial sector, to name just a few. It is clear that the AI policy and regulatory space will continue to evolve while remaining front and center for both Congress and the new administration as lawmakers, regulators, and businesses continue to grapple with this technology.
The 274-page report highlights “America’s leadership in its approach to responsible AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.” Specifically, it outlines the Task Force’s key findings and recommendations for Congress to legislate in over a dozen different sectors. The Task Force co-chairs, Representative Jay Obernolte (R-CA) and Representative Ted Lieu (D-CA), called the report a “roadmap for Congress to follow to both safeguard consumers and foster continued US investment and innovation in AI,” and a “starting point to tackle pressing issues involving artificial intelligence.” 
There was a high level of bipartisan work on AI in the 118th Congress, and although most of the legislation in this area did not end up becoming law, the working group report provides insight into what legislators may do this year and which industries may be of particular focus. Our team continues to monitor legislation, Congressional hearings, and the latest developments writ large in these industries as we transition into the 119th Congress. See below for a sector-by-sector breakdown of a number of findings and recommendations from the report.
Data Privacy
The report’s section on data privacy discusses advanced AI systems’ need to collect huge amounts of data, the significant risks this creates for the unauthorized use of consumers’ personal data, the current state of US consumer privacy protection laws, and recommendations to address these issues. 
It begins with a discussion of AI systems’ need for “large quantities of data from multiple diverse sources” to perform at an optimal level. Companies collect and license this data in a variety of ways, including collecting data from their own users, scraping data from the internet, or some combination of these and other methods. Further, some companies collect, package, and sell scraped data “while others release open-source data sets.” These collection methods raise their own set of issues. For example, according to the report, many websites follow “a voluntary standard” in stating that they should not be scraped, but those requests are ignored and litigation ensues. The report also notes that some companies “are updating their privacy policies in order to permit the use of user data to train AI models” without otherwise informing users that their data is being used for this purpose; the European Union and Federal Trade Commission have challenged this practice. In response, the report notes, “some companies are turning to privacy-enhanced technologies, which seek to protect the privacy and confidentiality of data when sharing it.” They also are looking at “synthetic data.”
In turn, the report discusses the types of harms that consumers frequently experience when their personal and sensitive data is shared intentionally or unintentionally without their authorization. The list includes physical, economic, emotional, reputational, discrimination, and autonomy harms.
The report follows with a discussion of the current state of US consumer privacy protection laws. It kicks off with a familiar tune: “Currently, there is no comprehensive US federal data privacy and security law.” It notes that there are several sector-specific federal privacy laws, such as those intended to protect health data, financial data, and children’s data, but, as has become clear from this year’s Congressional debate, even these laws need to be updated. It also notes that 19 states have adopted state privacy laws, but their standards vary. This suggests that, as in the case of state data breach laws, the result is that they have “created a patchwork of rules and regulations with many drawbacks.” This has caused confusion among consumers and resulted in increased costs and lawsuits for businesses. It concludes with the statement that “Federal legislation that preempts state data privacy laws has advantages and disadvantages.” The report outlines three Key Findings: (1) “AI has the potential to exacerbate privacy harms;” (2) “Americans have limited recourse for many privacy harms;” and (3) “Federal privacy laws could potentially augment state laws.”
Based on its findings, the report recommends that Congress should: (1) help “in facilitating access to representative data sets in privacy-enhanced ways” and “support partnerships to improve the design of AI systems” and (2) ensure that US privacy laws are “technology neutral” and “can address the most salient privacy concerns with respect to the training and use of advanced AI systems.”
National Security 
The report highlights both the potential benefits of emerging technologies to US defense capabilities, as well as the risks, especially if the United States is outpaced by its adversaries in development. The report discusses the status and successes of current AI programs at the Department of Defense (DOD), the Army, and the Navy. The report categorizes issues facing development of AI in the national security arena into technical and nontechnical impediments. The technical impediments include increased data usage, infrastructure/compute power, attacks on algorithms and models, and talent acquisition, especially when competing with the private sector in the workforce. The report also identifies perceived institutional challenges facing DOD, saying “acquisition professionals, senior leaders, and warfighters often hesitate to adopt new, innovative technologies and their associated risk of failure. DOD must shift this mindset to one more accepting of failure when testing and integrating AI and other innovative technologies.” The nontechnical challenges identified in the report revolve around third-party development of AI and the inability of the United States to control systems it does not create. The report notes that advancements in AI are driven primarily by the private sector and encourages DOD to capitalize on that innovation, including through more timely procurement of AI solutions at scale with nontraditional defense contractors.
Chief among the report’s findings and recommendations is a call to Congress to explore ways that the US national security apparatus can “safely adopt and harness the benefits of AI” and to use its oversight powers to home in on AI activities for national security. Other findings focus on the need for advanced cloud access, the value of AI in contested environments, and the ability of AI to manage DOD business processes. The additional recommendations were to expand AI training at DOD, continue oversight of autonomous weapons policies, and support international cooperation on AI through the Political Declaration on Responsible Military Use of AI. The report indicates that Congress will be paying much more attention to the development and deployment of AI in the national security arena going forward, and now is the time for impacted stakeholders to engage on this issue.
Education and the Workforce
The report also highlights the role of AI technologies in education and the promise and challenges they pose for the workforce. The report recognizes that, despite the worldwide demand for science, technology, engineering, and mathematics (STEM) workers, the United States has a significant gap in the talent needed to research, develop, and deploy AI technologies. As a result, the report found that training and educating US learners on AI topics will be critical to continuing US leadership in AI technology. The report notes that training future generations of talent in AI-related fields needs to start with AI and STEM education. Digital literacy has expanded to encompass new literacies, including media, computer, data, and now AI literacy, but resources for teaching AI literacy remain a challenge.
US leadership in AI will require growing the pool of trained AI practitioners, including people with skills in researching, developing, and incorporating AI techniques. The report notes that this will likely require expanding workforce pathways beyond traditional educational routes and developing a new understanding of the AI workforce, including its demographic makeup, changes in the workforce over time, employment gaps, and the penetration of AI-related jobs across sectors. A critical aspect of understanding the AI workforce will be having good data. US leadership in AI will also require public-private partnerships as a means to bolster the AI workforce, including collaborations among educational institutions, government, and industry to address market needs and emerging technologies.
While the automation of human jobs is not new, using AI to automate tasks across industries has the potential to displace jobs that involve repetitive or predictable tasks. In this regard, the report notes that while AI may displace some jobs, it will augment existing jobs and create new ones. Such new jobs will inevitably require more advanced skills, such as AI system design, maintenance, and oversight. Other jobs, however, may require less advanced skills. The report adds that harnessing the benefits of AI systems will require a workforce capable of integrating these systems into their daily jobs. It also highlights several existing programs for workforce development, which could be updated to address some of these challenges.
Overall, the report found that AI is increasingly used in the workplace by both employers and employees. US AI leadership would be strengthened by a more skilled technical workforce. Fostering domestic AI talent and continued US leadership will require significant improvements in basic STEM education and training. AI adoption requires AI literacy and resources for educators.
Based on the above, the report recommends the following:

Invest in K-12 STEM and AI education and broaden participation.
Bolster US AI skills by providing needed AI resources.
Develop a full understanding of the AI workforce in the United States.
Facilitate public-private partnerships to bolster the AI workforce.
Develop regional expertise when supporting government-university-industry partnerships.
Broaden pathways to the AI workforce for all Americans.
Support the standardization of work roles, job categories, tasks, skill sets, and competencies for AI-related jobs.
Evaluate existing workforce development programs.
Promote AI literacy across the United States.
Empower US educators with AI training and resources.
Support National Science Foundation curricula development.
Monitor the interaction of labor laws and worker protections with AI adoption.

Energy Usage and Data Centers
AI has the power to modernize our energy sector, strengthen our economy, and bolster our national security, but only if the grid can support it. As the report details, electrical demand is predicted to grow over the next five years as data centers, among other major energy users, continue to come online. If demand from these technologies outpaces new power capacity, it can “cause supply constraints and raise energy prices, creating challenges for electrical grid reliability and affordable electricity.” While data centers take only a few years to construct, new power plants and transmission infrastructure can take a decade or more to complete. To meet growing electrical demand and support US leadership in AI, the report recommends the following:

Support and increase federal investments in scientific research that enables innovations in AI hardware, algorithmic efficiency, energy technology development, and energy infrastructure.
Strengthen efforts to track and project AI data center power usage.
Create new standards, metrics, and a taxonomy of definitions for communicating relevant energy use and efficiency metrics.
Ensure that AI and the energy grid are a part of broader discussions about grid modernization and security.
Ensure that the costs of new infrastructure are borne primarily by those customers who receive the associated benefits.
Promote broader adoption of AI to enhance energy infrastructure, energy production, and energy efficiency.

Health Care
The report highlights that AI technologies have the potential to improve multiple aspects of health care research, diagnosis, and care delivery. The report provides an overview of AI’s use to date and its promise in the health care system, including with regard to drug, medical device, and software development, as well as in diagnostics and biomedical research, clinical decision-making, population health management, and health care administration. The report also addresses payers of health care services, both with respect to coverage of AI-enabled services and devices and with respect to payers’ own use of AI tools in the health insurance industry.
The report notes that the evolution of AI in health care has raised new policy issues and challenges. This includes issues involving data availability, utility, and quality, as the data required to train AI systems must exist, be of high quality, and be able to be transferred and combined. It also involves issues concerning interoperability and transparency: AI-enabled tools must be able to integrate with health care systems, including electronic health record (EHR) systems, and they need to be transparent enough for providers and other users to understand how an AI model makes decisions. Data-related risks also include the potential for bias, which can be introduced during development or emerge as the system is deployed. Finally, there is a lack of legal and ethical guidance regarding accountability when AI produces incorrect diagnoses or recommendations.
Overall, the report found that AI’s use in health care can potentially reduce administrative burdens and speed up drug development and clinical diagnosis. When used appropriately, these uses of AI could lead to increased efficiency, better patient care, and improved health outcomes. The report also found that the lack of standards for medical data and algorithms impedes system interoperability and data sharing. The report notes that if AI tools cannot easily connect with all relevant medical systems, their adoption and use could be impeded.
Based on the above, the report recommends the following:

Encourage the practices needed to ensure AI in health care is safe, transparent, and effective.
Maintain robust support for health care research related to AI.
Create incentives and guidance to encourage risk management of AI technologies in health care across various deployment conditions to support AI adoption and improve privacy, enhance security, and prevent disparate health outcomes.
Support the development of standards for liability related to AI issues.
Support appropriate payment mechanisms without stifling innovation.

Financial Services
With respect to financial services, the report emphasizes that AI has been used within the financial services system for decades, by industry and financial regulators alike. Key examples of use cases include fraud detection, underwriting, debt collection, customer onboarding, real estate, investment research, property management, customer service, and regulatory compliance, among other things. The report also notes that AI presents both significant risks and opportunities to the financial system, so it is critical to be thoughtful when considering and crafting regulatory and legislative frameworks in order to protect consumers and the integrity of the financial system while not stifling technological innovation. As such, the report states that lawmakers should adopt a principles-based approach that is agnostic to technological advances, rather than a technology-based approach, in order to preserve the longevity of the regulatory ecosystem as technology evolves over time, particularly given the rapid rate at which AI technology is advancing. Importantly, the report notes that small financial institutions may be at a significant disadvantage with respect to the adoption of AI, given a lack of sufficient resources to leverage AI at scale, and states that regulators and lawmakers must ensure that their policies do not inadvertently favor larger financial institutions or limit the ability of smaller institutions to compete or enter the market. Moreover, the report stresses the need to maintain relevant consumer and investor protections as AI is utilized, particularly with respect to data privacy, discrimination, and predatory practices.
A Multi-Branch Approach to AI/Next Steps
The Task Force recognizes that AI policy will not fall strictly under the purview of Congress. Co-chair Obernolte shared that he has met with David Sacks, President Trump’s “AI Czar,” as well as members of the transition team to discuss what is in the report. 
We will be closely following how both the administration and Congress act on AI in 2025, and we are confident that no industry will be left untouched.
 
Vivian K. Bridges, Lauren E. Hamma, and Abby Dinegar contributed to this article.

USPTO Announces New Effort to Promote AI and Emerging Technologies

The U.S. Patent and Trademark Office (USPTO) recently announced an official Artificial Intelligence Strategy that outlines how the Office plans to address the promise and challenges of artificial intelligence (AI) in its internal operations as well as in the development of intellectual property (IP) policy. According to information provided in the newly released document, annual filings of AI-related patent applications have increased more than two-fold since 2002 and are up 33% since 2018. Additionally, AI has permeated a wide range of technology sectors, with AI-related patent applications appearing in 60% of all the technology subclasses used by the USPTO in 2023.
The initiative outlined in the AI Strategy document seeks to support responsible and inclusive AI innovation, implement AI in furtherance of the USPTO’s mission, and maintain the U.S.’s competitive edge in global innovation. At the same time, USPTO officials say the new AI Strategy mitigates risks and fosters responsible use of artificial intelligence.
Specifically, the AI Strategy document provides a roadmap designed to enhance the agency’s efforts in promoting AI innovation within its operations and the broader intellectual property sector, through five key focus areas:
(1) Advance the development of IP policies that promote inclusive AI innovation and creativity.
(2) Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development.
(3) Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
(4) Develop AI expertise within the USPTO’s workforce.
(5) Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.

With respect to the first focus area, the Strategy document states that:
“As appropriate, the USPTO will advocate for the development of balanced and sound judicial precedents and legislation that promote both AI innovation and respect for IP rights, while not unnecessarily constraining future AI innovation. For example, the USPTO would advocate for judicial positions, consistent with existing legal precedent, that would encourage innovation with respect to issues including AI-generated prior art and AI-assisted inventions.”

The USPTO Strategy document also notes that the rapid advancement of AI technologies could not only impact patent-related policy, including inventorship, subject-matter eligibility, obviousness, enablement, and written description, but also affect “the volume and character of submitted applications.” Some of these topics, as noted below, have been addressed via guidance released by the USPTO this past year.
AI has the potential to not only transform the tools used by Examiners to examine patent applications, but also to redefine the inventive process itself as well as the framework by which inventions are evaluated. For patent holders and attorneys, the release of this strategy signals the USPTO’s commitment to fostering an ecosystem where AI advancements can thrive responsibly, driving innovation and protecting intellectual property.
This announcement caps an active 12-month period for the Office with respect to AI policy and guidance. On February 13, 2024, the Office published inventorship guidance for AI-assisted inventions followed by updated patent eligibility guidance for AI inventions on July 17, 2024. The release of the USPTO’s official AI Strategy plan, along with the prior guidance, is responsive to and in alignment with the Biden-Harris Administration’s October 2023 Executive Order 14110 on the safe and secure development and use of AI. Given that the AI Strategy plan was released during the final week of the Biden-Harris Administration, the degree to which it is implemented will depend on the Trump-Vance Administration. Expect further notices and guidance regarding these topics as this transition occurs.

States Ring in the New Year with Proposed AI Legislation

As we enter 2025, the rapid growth of artificial intelligence (AI) presents both transformative opportunities and pressing legal challenges, particularly in the workplace.
Employers must navigate an increasingly complex regulatory landscape to ensure compliance and avoid liability. With several states proposing AI regulations that would impact hiring practices and other employment decisions, it is critical for employers to stay ahead of these developments.
New York
New York’s proposed legislation, which if passed would become effective January 1, 2027, would provide guardrails for New York employers implementing AI to assist in hiring, promoting, or making other decisions pertaining to employment opportunities. Unlike New York City Local Law 144, which covers only certain employment decisions, the New York Artificial Intelligence Consumer Protection Act (“NY AICPA”), A 768, takes a risk-based approach to AI regulation, much like that of Colorado’s SB 24-205. The NY AICPA would specifically regulate all “consequential decisions” made by AI, including those having a “material legal or similarly significant effect” on any “employment or employment opportunity.” The bill imposes compliance obligations on “developers” and “deployers” of high-risk AI decision systems.
If passed, NY AICPA would require developers to:

Use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. This would include undertaking bias and governance audits by an independent third-party auditor qualified by the State’s Attorney General.
Make available to deployers and other relevant developers documentation describing the intended uses and benefits, the known harmful or inappropriate uses, the governance parameters, the training data, and the expected outputs of the AI system.
Publish a statement summarizing the types of high-risk AI decision systems it has developed or modified and currently makes available to others, and how it manages the risks of algorithmic discrimination.

NY AICPA imposes similar requirements on deployers (which would include employers using AI systems to aid in employment decision-making). Additionally, deployers must:

Implement a risk management policy and program to govern the use of high-risk AI decision systems, which will be evaluated based on: NIST’s current AI Risk Management Framework or a similar risk management framework; the size and complexity of the deployer; the nature and scope of the AI deployed; and the sensitivity and volume of data processed in connection with the AI system.
Complete an impact assessment, at least annually and within 90 days after an intentional and substantial modification, of the AI system.
Publish on its website a statement summarizing the types of high-risk AI decision systems used by the deployer; how the deployer manages risks of algorithmic discrimination; and the nature, source, and extent of the information collected and used by the deployer.
When using the AI system to make, or be a substantial factor in making, a consequential decision concerning an individual, (i) notify the consumer of the use of the AI system; (ii) provide the consumer with a statement disclosing the purpose of the AI system and nature of the consequential decision, contact information for the deployer, a plain-language description of the AI system, and where to access the website statement summarizing its AI use.
If the deployer uses the AI system to make an adverse consequential decision, disclose to the consumer the principal reason for reaching that decision, and provide an opportunity to correct any “incorrect personal data” that the AI system processed in making the decision and an opportunity to appeal the decision.

Deployers/employers, however, can contract with developers to bear many of these compliance obligations if certain conditions are met. 
The impact assessments required by NY AICPA would analyze the reasonably foreseeable risks of algorithmic discrimination and identify steps to mitigate these risks. These audits would specifically evaluate whether the AI system disproportionately affects certain groups based on protected characteristics. If the audit identifies biases in the AI, the employer would have to engage in corrective actions, including training the system to recognize and avoid discriminatory patterns. If the AI system plays a significant role in making an employment decision, such as hiring or firing, employers must be prepared to justify the decision and offer an employee the opportunity to appeal the decision, among other things. 
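For context, the sketch below shows one simple way such a bias audit might measure whether an AI hiring tool disproportionately affects certain groups, using selection rates by group and a disparate-impact ratio. This is an illustration under stated assumptions only; NY AICPA does not prescribe any particular metric, threshold, or methodology, and the groups and figures are hypothetical.

```python
# Illustrative sketch only: a simple disparate-impact check of the kind a bias
# audit might run. NY AICPA does not prescribe this metric; the groups, data,
# and review threshold below are hypothetical.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest (closer to 1.0 is more even)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: group A selected 40 of 100, group B selected 20 of 100
decisions = ([("A", True)] * 40 + [("A", False)] * 60 +
             [("B", True)] * 20 + [("B", False)] * 80)
rates = selection_rates(decisions)
print(rates)                          # {'A': 0.4, 'B': 0.2}
print(disparate_impact_ratio(rates))  # 0.5, which would flag the tool for closer review
```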
The MIT Technology Review also reports that New York Assemblymember Alex Bores has drafted a yet-to-be-released Responsible AI Safety and Education Act (“RAISE Act”), inspired by an unsuccessful California bill (SB 1047), that would require developers to establish safety plans and assessment models for AI systems. From an employment perspective, the RAISE Act would shield whistleblowers at AI companies from retaliation when they share information about a problematic AI model. If it follows in similar fashion to SB 1047, the RAISE Act may require covered entities to submit a statement of compliance to the state’s Attorney General within 30 days of use of relevant AI systems.
Also pending in New York state are Senate Bill S7623A and Assembly Bill A9315. Both bills would require employers to conduct impact assessments and provide written notice to employees when such systems are used. If passed, both laws would specifically limit employers’ use of, and the consequences that may flow from, employee data collected via AI systems and monitoring.
Massachusetts
If passed, Massachusetts’ proposed Artificial Intelligence Accountability and Consumer Protection Act (“MA AIACPA”), HD 396, also would regulate high-risk AI systems. MA AIACPA imposes similar obligations on developers and deployers, including the requirements of maintaining risk management programs and conducting impact assessments. 
Deployers, including employers, must notify consumers when an AI system materially influences a consequential decision. As part of this notification, employers are required to provide a statement on the purpose of the AI system, an explanation of how AI influenced the decisions, and a process to appeal the decision.
Any corporation operating in the state that uses AI to target specific consumer groups or influence behavior must disclose the methods, purposes, and context in which the AI is used, the ways in which the AI systems are designed to influence consumer behavior, and the details of any third-party entities involved. This public corporate disclosure statement must be available on the website and included in any terms and conditions provided to consumers prior to significant interaction with an AI system. Specifically, corporations must notify individuals when AI targets or materially influences their decisions, and when using algorithms to determine pricing, eligibility, or access to services.
New Mexico
New Mexico’s proposed Artificial Intelligence Act, HB 60, also takes the risk-based approach to AI regulation. Like the bills in New York and Massachusetts, the New Mexico Artificial Intelligence Act contains requirements for both developers and deployers, including the maintenance of a risk management policy addressing the known or reasonably foreseeable risk of algorithmic discrimination, conducting impact assessments at regular intervals, and publishing a notice on their website summarizing the AI systems used. If it passes, the Artificial Intelligence Act will become effective July 1, 2026.
Virginia
The Virginia High Risk Artificial Intelligence Developer and Deployer Act, HB 2094, would create operating standards for developers and deployers of high-risk AI systems. Designed to protect against the risks of algorithmic discrimination, these operating standards largely track the proposed legislation in other states. If passed, the act will go into effect on July 1, 2026.
Texas
In the final days of 2024, Texas introduced the Texas Responsible AI Governance Act (TRAIGA), which, like the proposed legislation in other states, would regulate the use of AI by requiring: (1) the creation of a risk identification and management policy; (2) semi-annual impact assessments; (3) disclosure and analysis of risk; (4) a description of transparency measures; and (5) human oversight in certain instances. The Texas bill would take effect on September 1, 2025.
Connecticut
Connecticut’s S.B. 2, while currently stalled, is expected to be re-introduced in 2025. If passed into law, Connecticut employers would need to implement protocols to protect against algorithmic discrimination, conduct impact assessments, and notify employees. Employers that use off-the-shelf AI would not have to ensure the AI product is non-discriminatory, as long as the product is used as intended.
What Employers Should Do Now
As can be seen from the similarities in proposed legislation across the country, a common theme has developed with respect to AI regulation: developers and deployers must implement an AI governance plan aimed at identifying and reducing the risk of algorithmic discrimination and ensuring ongoing monitoring of the AI systems or tools. Although these bills are still pending, employers should commence development of comprehensive AI governance strategies. This proactive approach not only ensures regulatory readiness but also demonstrates an organization’s commitment to ethical and responsible AI use, an important consideration for stakeholders and enforcement agencies alike.

2023 AI Executive Order Revoked

On January 20, 2025, President Donald Trump signed an executive order rescinding the 2023 directive issued by former President Joe Biden on artificial intelligence (AI). Biden’s order outlined extensive measures aimed at guiding the development and use of AI technologies, including the establishment of chief AI officers in major federal agencies and frameworks for tackling ethical and security risks. This revocation signals a major policy change, transitioning away from the federal oversight put in place by the previous administration.
The move to revoke Biden’s executive order has led to a climate of regulatory uncertainty for companies operating in AI-driven fields. In the absence of a unified federal framework, businesses could encounter various challenges, such as an inconsistent regulatory landscape as states and international organizations intervene, increased risks related to AI ethics and data privacy, and unfair competition among companies that implement differing standards for AI development and deployment.
Looking Forward
In light of this shift, companies are encouraged to adopt proactive measures to navigate the evolving environment. To uphold trust and accountability, it is essential to bolster internal governance by creating or improving ethical guidelines concerning AI usage. Organizations should also invest in compliance by monitoring state, international, and industry-specific regulations to align with new standards like Colorado’s Artificial Intelligence Act and the EU’s AI Act.
Additionally, staying informed about possible federal policy changes and legislative efforts is crucial, as further announcements may signal new directions in AI governance. Collaborating with industry groups and standards organizations can help shape voluntary guidelines and best practices, while robust risk management frameworks will be essential to mitigate issues such as bias, cybersecurity threats, and liability concerns.
To navigate this evolving landscape, organizations should consider taking the following steps now:

Strengthen Internal Governance: Develop or enhance internal AI policies and ethical guidelines to promote responsible and legally compliant AI use, even in the absence of federal mandates.
Invest in Compliance: Stay updated on state, international, and industry-specific AI regulations that could impact operations. Proactively align practices with emerging standards such as Colorado’s Artificial Intelligence Act and the EU’s AI Act.
Monitor Federal Developments: Keep a close eye on further announcements or legislative actions from Congress and federal agencies that could signal new directions in AI policy and regulation.
Engage in Industry Collaboration: Collaborate with industry groups and standards organizations to help influence voluntary AI standards and best practices.
Focus on Risk Management: Establish strong risk assessment frameworks to identify and address potential AI-related risks, including biases, cybersecurity threats, legal compliance, and liability concerns.

President Trump’s decision reflects a preference for less regulation, increasing the responsibility on the private sector to ensure ethical and safe AI usage. Companies need to navigate an uncertain regulatory landscape while innovating responsibly. As circumstances change, businesses must stay alert and flexible to uphold their competitive advantage and public trust.

U.S. Treasury Department’s Final Rule on Outbound Investment Takes Effect

On January 2, 2025, the U.S. Department of the Treasury’s Final Rule on outbound investment screening became effective. The Final Rule implements Executive Order 14105 issued by former President Biden on August 9, 2023, and aims to protect U.S. national security by restricting covered U.S. investments in certain advanced technology sectors in countries of concern. Covered transactions with a completion date on or after January 2, 2025, are subject to the Final Rule, including the prohibition and notification requirements, as applicable.
The Final Rule targets technologies and products in the semiconductor and microelectronics, quantum information technologies, and artificial intelligence (AI) sectors that may impact U.S. national security. It prohibits certain transactions and requires notification of certain other transactions in those technologies and products. The Final Rule has two primary components:

Notifiable Transactions: A requirement that notification of certain covered transactions involving both a U.S. person and a “covered foreign person” (including but not limited to a person of a country of concern engaged in “covered activities” related to certain technologies and products) be provided to the Treasury Department. A U.S. person subject to the notification requirement must file on Treasury’s Outbound Investment Security Program website by specified deadlines. The Final Rule specifies the detailed information and certification required in the notification and imposes a 10-year record retention period for the filing and supporting information.
Prohibited Transactions: A prohibition on certain U.S. person investments in a covered foreign person that is engaged in a more sensitive sub-set of activities involving identified technologies and products. A U.S. person is required to take all reasonable steps to prohibit and prevent its controlled foreign entity from undertaking a transaction that would be a prohibited transaction if undertaken by a U.S. person. The Final Rule contains a list of factors that the Treasury Department would consider in determining whether the relevant U.S. person took all reasonable steps.

The Final Rule focuses on investments in “countries of concern,” which currently include only the People’s Republic of China, including Hong Kong and Macau. The Final Rule targets U.S. investments in Chinese companies involved in the following three sensitive technology sectors: semiconductor and microelectronics, quantum information technologies, and artificial intelligence. The Final Rule sets forth the prohibited and notifiable transactions in each of the three sectors:
Semiconductors and Microelectronics

Prohibited: Covered transactions relating to certain electronic design automation software, fabrication or advanced packaging tools, advanced packaging techniques, and the design and fabrication of certain advanced integrated circuits and supercomputers.
Notifiable: Covered transactions relating to the design, fabrication and packaging of integrated circuits not covered by the prohibited transactions.

Quantum Information Technologies

All Prohibited: Covered transactions involving the development of quantum computers and production of critical components, the development or production of certain quantum sensing platforms, and the development or production of quantum networking and quantum communication systems.

Artificial Intelligence (AI) Systems

Prohibited:

Covered transactions relating to AI systems designed exclusively for or intended to be used for military, government intelligence or mass surveillance end uses.
Covered transactions relating to development of any AI system that is trained using a quantity of computing power meeting certain technical specifications and/or using primarily biological sequence data.

Notifiable: Covered transactions involving AI systems designed or intended to be used for cybersecurity applications, digital forensics tools, penetration testing tools, or control of robotic systems, or that are trained using a quantity of computing power meeting certain technical specifications.

The Final Rule specifically defines the key terms “country of concern,” “U.S. person,” “controlled foreign entity,” “covered activity,” “covered foreign person,” “knowledge” and “covered transaction” and other related terms and sets forth the prohibitions and notification requirements in line with the national security objectives stated in the Executive Order.  The Final Rule also provides a list of transactions that are excepted from such requirements.
U.S. investors intending to invest in China, particularly in the sensitive sectors set forth above, should carefully review the Final Rule and conduct robust due diligence to determine whether a proposed transaction would be covered by the Final Rule (either prohibited or notifiable) before undertaking any such transaction. 
Any person subject to U.S. jurisdiction may face substantial civil and/or criminal penalties for violation or attempted violation of the Final Rule, including civil fines of up to $368,137 per violation (adjusted annually for inflation) or twice the amount of the transaction, whichever is greater, and/or criminal penalties up to $1 million or 20 years in prison for willful violations.  In addition, the Secretary of the Treasury can take any authorized action to nullify, void, or otherwise require divestment of any prohibited transaction.
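For illustration only, here is a minimal sketch of how that civil penalty ceiling would be computed for a hypothetical covered transaction; the $368,137 figure is the inflation-adjusted cap cited above and changes annually, and this is not legal advice or an official calculator.

```python
# Illustrative sketch only: the civil fine ceiling described above is the greater
# of the inflation-adjusted statutory cap or twice the transaction amount.
def civil_penalty_ceiling(transaction_amount: float, statutory_cap: float = 368_137.0) -> float:
    """Return the maximum civil fine per violation as described in the text above."""
    return max(statutory_cap, 2 * transaction_amount)

# Hypothetical $1,000,000 covered transaction: twice the amount ($2,000,000)
# exceeds the statutory cap, so it sets the ceiling.
print(civil_penalty_ceiling(1_000_000))  # 2000000.0
```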

OR AG Issues Guidance Regarding OR State Laws and AI

On December 24, 2024, the Oregon Attorney General published AI guidance, “What you should know about how Oregon’s laws may affect your company’s use of Artificial Intelligence” (the “Guidance”), that clarifies how existing Oregon consumer protection, privacy and anti-discrimination laws apply to AI tools. Through various examples, the Guidance highlights key themes such as privacy, accountability and transparency, and provides insight into “core concerns,” including bias and discrimination.
Consumer Protection – Oregon’s Unlawful Trade Practice Act (“UTPA”)
The Guidance emphasizes that misrepresentations, even when they are not directly made to the consumer, may be actionable under the UTPA, and an AI developer or deployer may be “liable to downstream consumers for the harm its products cause.” The Guidance provides a non-exhaustive list of examples that may constitute violations of the UTPA, such as:

failing to disclose any known material defect or nonconformity when delivering an AI product;
misrepresenting that an AI product has characteristics, uses, benefits or qualities that it does not have;
using AI to misrepresent that real estate, goods or services have certain characteristics, uses, benefits or qualities (e.g., a developer or deployer using a chatbot while falsely representing that it is human);
using AI to make false or misleading representations about price reductions (e.g., using AI generated ads or emails indicating “limited time” or “flash sale” when a similar discount is offered year-round);
using AI to set excessively high prices during an emergency;
using an AI-generated voice as part of a robocall campaign to misrepresent or falsify certain information, such as the caller’s identity and the purpose of the call; and
leveraging AI to use unconscionable tactics regarding the sale, rental or disposal of real estate, goods or services, or collecting or enforcing an obligation (e.g., knowingly taking advantage of a consumer’s ignorance or knowingly permitting a consumer to enter into a transaction that does not materially benefit them).

Data Privacy – Oregon Consumer Protection Act (“OCPA”)
In addition, the Guidance notes that developers, suppliers and users of AI may be subject to OCPA, given that generative AI systems ingest a significant amount of words, images and other content that often consists of personal data. Key takeaways from the Guidance regarding OCPA include:

developers that use personal data to train AI systems must clearly disclose that they do so in an accessible and clear privacy notice;
if personal data includes any categories of sensitive data, entities must first obtain explicit consent from consumers before using the data to develop or train AI models;
if the developer purchases or uses another company’s data for model training, the developer may be considered a “controller” under OCPA, and therefore must comply with the same standards as the company that initially collected the data;
data suppliers and developers are prohibited from “retroactively or passively” altering privacy notices or terms of use to legitimize the use of previously collected personal data to train AI models, and instead are required to obtain affirmative consent for any secondary or new uses of that data;
developers and users of AI must provide a mechanism for consumers to withdraw previously given consent (and if the consent is revoked, stop processing the data within 15 days of receiving the revocation);
entities subject to OCPA must consider how to account for specific consumer rights when using AI models, including a consumer’s right to (1) opt-out of the use of profiling in decisions that have legal or similarly significant effects (e.g., housing, education or lending) and (2) request the deletion of their personal data; and
in connection with OCPA’s requirement to conduct data protection assessments for certain processing activities, due to the complexity of generative AI models and proprietary data and algorithms, entities “should be aware that feeding consumer data into AI models and processing it in connection with these models likely poses heightened risks to consumers.”

Data Security – Oregon Consumer Information Protection Act
The Guidance clarifies that AI developers (as well as their data suppliers and users) that “own, license, maintain, store, manage, collect, acquire or otherwise possess” personal information also must comply with the Oregon Consumer Information Protection Act, which requires businesses to safeguard personal information and implement an information security program that meets specific requirements. The Guidance also notes that to the extent there is a security breach, AI developers, data suppliers and users may be required to notify consumers and the Oregon Attorney General.
Anti-Discrimination – Oregon Equality Act
The Guidance explains that AI systems that “utilize discretionary inputs or produce biased outcomes that harm individuals based on protected characteristics” may trigger the Oregon Equality Act. The law prohibits discrimination based on race, color, religion, sex, sexual orientation, gender identity, national origin, marital status, age or disability, including in connection with housing and public accommodations. The Guidance also includes an illustrative example regarding how the law applies to the use of AI. Specifically, the Guidance notes that a rental management company’s use of an AI mortgage approval system that consistently denies loans to qualified applicants based on certain neighborhoods or ethnic backgrounds because the AI system was trained on historically biased data may be considered a violation of the law.

Watch Out, Employers: Using Smart Devices in the Workplace May Not Be So Smart

What does the EEOC have to do with smart watches, rings, glasses, helmets and other devices that track bodily movement and other data? These devices, known as “wearables,” can track location, brain activity, heart rate, and other mental or physical information about the wearer, which has led some employers to require their employees to wear company-issued wearables. While wearables may provide useful data, the EEOC recently warned employers to watch out for the dangers associated with them. A summary of the EEOC’s reported risks is provided below. You can find the full guidance here.
What to watch out for
Per the EEOC’s guidance, there are three categories of risk: collecting information, using information, and reasonable accommodations.
1. Collecting Information – Among other things, wearables collect information related to employees’ physical or mental condition (e.g., heart rate and blood pressure). The EEOC warned that collecting this type of information may pose risks under the Americans with Disabilities Act.
The EEOC considers tracking this sort of information as the equivalent of a disability-related inquiry, or even a medical examination under the ADA. Both inquiries and medical examinations for all employees (not just those with disabilities) are limited to situations where the inquiry or exam is job-related and consistent with business necessity or otherwise permitted by the ADA. The ADA allows inquiries and examinations for employees in the following circumstances:

When a federal, safety-related law or regulation allows for the inquiry or exam,
For certain public-safety related positions (e.g., police or firefighters), or
If the inquiry or exam is voluntary and part of the employer’s health program that is reasonably designed to promote health or prevent disease.

Outside of these three exceptions, the ADA prohibits disability-related inquiries and medical examinations. Also, if you are tracking this information, keep it confidential, just like you would any other medical information.
2. Using Information – Even if collection is permitted, employers must use caution when determining how to use the information. If an employer uses the information in a way that adversely affects employees due to a protected status, it could trigger anti-discrimination laws. A few cautionary examples from the guidance:

Using heart rate or other information to infer an employee is pregnant, then taking adverse action against her
Relying on wearable data, which is less accurate for individuals with dark skin, and making an adverse decision based on that data
Tracking an employee’s location, particularly when they are on a break or off work, and asking questions about their visits to medical facilities, which could elicit genetic information
Analyzing heart rate data to infer or predict menopause and refusing to promote the employee because of sex, age, and/or disability
Increased tracking of employees who have previously reported allegations of discrimination or other protected activity

Employers need to be cautious about policies regarding mandated-wearable use. Requiring some, but not all, employees to wear these devices may trigger risks of discrimination under Title VII. If you plan to use these devices, you need human oversight, and the employees monitoring the data must understand the devices’ flaws, imperfections in the data, and potential ways the information could be misused.
3. Reasonable Accommodations – Even if an employer’s mandated-wearable requirement meets one of the above-listed exceptions, you may need to make reasonable accommodations if an employee meets the requirements for a religious exception or based on pregnancy or disability.
Takeaways
Technology in the workplace is ever-changing, and you need to stay informed about potential issues before you decide to use it. Do you need this information? If so, do you need it on all of your employees? Remember that if you don’t know about an employee’s protected status, you are less likely to be accused of basing a decision on it.
Before you implement (or continue using) mandated wearables, meet with your employment lawyer to work out a plan for implementation, follow-up, and continued policy monitoring for these devices. Also, check out our prior blog on AI in the workforce for additional tips on responsible use of technology in the workplace.

2025 Labor and Employment Outlook for Manufacturers: Employer-Friendly Skies on the Horizon

As we look ahead to 2025, several important labor and employment law changes, planned and potential, are on the horizon. With President Trump set to return to the Oval Office on January 20, 2025, labor and employment law priorities at the federal level are expected to change significantly. Meanwhile, state legislatures remain active in enacting new laws that will impact the labor and employment law landscape for manufacturers. Below are a few key issues likely to impact manufacturers in 2025.
Minimum Wage for Non-Exempt Employees and Salary Threshold for Exempt Employees
While President Biden called for an increase in the federal minimum wage (currently $7.25 per hour) to $15 per hour during his presidency, that did not occur; the minimum wage rate has remained unchanged since 2009. During President-elect Trump’s recent campaign, he signaled an openness to raising the federal minimum wage. However, any forthcoming increase will likely be substantially less than the Biden administration’s goal. Similarly, it is unlikely that the incoming administration will seek to revive the Biden administration’s increased “white collar” overtime exemption salary threshold, which a federal judge recently struck down. Nonetheless, manufacturers should remain current on federal and state minimum wage rates and salary thresholds.
Independent Contractor v. Employee Classification Enforcement
It is possible that the incoming administration may undo the Biden administration’s efforts to make it more difficult for manufacturers to classify workers as independent contractors, thereby simplifying wage and hour compliance for manufacturers under the Fair Labor Standards Act (FLSA). Further, the Trump administration may not prioritize this issue from an enforcement perspective. Regardless, manufacturers should continue to ensure compliance with more stringent state and local laws and guidance regarding worker classification.
Status of Equal Employment Opportunity and Diversity, Equity, Inclusion, & Belonging (DEIB) Programs and Policies
As seen during Trump’s first presidency, the Equal Employment Opportunity Commission (EEOC) under the incoming administration may take aim at governmental and corporate diversity, equity, inclusion, and belonging (DEIB) initiatives in employment, focusing on equality as opposed to equity. Some entities are preemptively rolling back their DEIB programs and practices regarding recruiting, hiring, promotions, and similar efforts in anticipation of the new administration’s position. The Trump administration may also change protections for LGBTQ+ workers. At the state and local levels, a continued expansion of protected statuses is expected. Manufacturers should be aware of these developments and ensure that their handbooks and policies comply with the state and local laws where their employees are working.
Workers’ Right to Organize and the National Labor Relations Board (NLRB)
Under the incoming Trump administration, the National Labor Relations Board (NLRB) may revert to more employer-friendly policies aimed at ensuring companies have rights regarding union organizing and similar activities, as seen during the first Trump administration. We also anticipate that the incoming General Counsel of the NLRB will rescind the memoranda issued by the current NLRB General Counsel, which implemented a pro-labor policy by expanding the scope of available remedies for unfair labor practices and restricting permissible non-compete agreements, among other key efforts to support union-organizing activity. The new NLRB may return to using more balanced standards and rulings when analyzing employer policies and confidentiality and non-disparagement provisions. Whether unionized or union-free, manufacturers may be impacted by these changes at the NLRB and beyond and should be aware of these developments.
Artificial Intelligence (AI)
While manufacturers continue to turn towards artificial intelligence (AI) and algorithm-based technologies for recruiting, hiring, and other employment needs, there may be developments in legislation at the state and federal levels. At the federal level, it is possible that the incoming administration could approach the issue of AI from a self-governance perspective, meaning refraining from legislating around the use of AI in employment and, instead, relying on employers to monitor their use of AI in recruiting, hiring, etc. AI tools could be a key focus for state and local legislatures in 2025. Manufacturers should ensure that their deployment and use of AI tools in employment comply with federal, state, and local laws.
Non-Compete Legislation
The Federal Trade Commission’s (FTC) final rule banning non-compete agreements did not go into effect as planned in 2024, and the FTC will likely abandon its efforts to revive the final rule once the incoming administration takes office. We do not anticipate further legislative or regulatory efforts at the federal level during the second Trump administration. However, at the state level, we expect to see more states and localities enact laws banning or restricting the scope of non-compete agreements, including based on position and salary, thereby challenging manufacturer efforts to protect their business interests and proprietary information and defend against unfair competition.

FDA Issues New Recommendations on Use of AI for Medical Devices, Drugs, and Biologics

In its most recent effort to keep pace with advancing technology, the US Food and Drug Administration (FDA) recently issued two draft guidances on the use of artificial intelligence (AI) in the context of drugs, biologics, and medical devices.

Medical Device Guidance
The first draft guidance is entitled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” The guidance provides an overview of the type of documentation and information that companies will need to submit to the FDA to obtain medical device regulatory approval — part of the so-called “marketing submission” process. Among other things, the FDA advises that such documentation and information should include:

A description of the device inputs and outputs, including whether inputs are entered manually or automatically.
A description of how AI is used to achieve the device’s intended purpose.
A description of the device’s intended users, their characteristics, and the level and type of training they are expected to have and/or receive.
A description of the intended use environment (e.g., clinical setting, home setting).
A description of the degree of automation that the device provides in comparison to the workflow for the current standard of care.
A comprehensive risk assessment and risk management plan.
Data management information, including how data were collected, limitations of the dataset, and an explanation of how the data are representative of the intended use population.
A cybersecurity assessment, particularly focusing on those risks that may be unique to AI.

Guidance for Drugs and Biological Products
The second draft guidance issued this month is entitled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” It addresses the use of AI models to produce information or data supporting regulatory decisions about the safety, effectiveness, or quality of drugs and biologics. To that end, the FDA recommends the following seven-step process to establish and assess the credibility of an AI model output for a specific context of use (COU), based on model risk:

Step 1: Define the question of interest that will be addressed by the AI model.
Step 2: Define the COU for the AI model.
Step 3: Assess the AI model risk.
Step 4: Develop a plan to establish the credibility of AI model output within the COU.
Step 5: Execute the plan.
Step 6: Document the results of the credibility assessment plan and discuss deviations from the plan.
Step 7: Determine the adequacy of the AI model for the COU.

Each of these steps is discussed in detail in the draft guidance, with examples provided. The FDA states that this process will provide a “risk-based credibility assessment framework” to help manufacturers and other interested parties plan, gather, organize, and document information establishing the credibility of AI model outputs when a model is used to produce information or data intended to support regulatory decision-making.
Conclusion
The new guidances from the FDA provide further indication that the agency is closely scrutinizing the use of AI in and in connection with FDA-regulated medical devices, drugs, and biological products. To reduce the risk of unnecessary regulatory delays, companies seeking approval of FDA-regulated products should carefully review their regulatory submissions to ensure they align with the new AI guidance documents.

Continued FTC Crackdown on False Product Reviews

Consumer protection wins again! The Federal Trade Commission (FTC) announced a final order settling its complaint against Rytr LLC, the provider of an artificial intelligence (AI) writing assistant that the FTC alleged was capable of producing detailed and specific false product reviews.
The FTC further alleged that Rytr subscribers used the service to generate product reviews potentially containing false information, deceiving potential consumers who sought to use the reviews to make purchasing decisions. The final order settling the complaint, which was published on December 18, 2024, bars Rytr from engaging in similar illegal conduct in the future and prohibits the company from advertising, marketing, promoting, offering for sale, or selling any services “dedicated to or promoted as generating consumer reviews or testimonials.”
The decision highlights increased scrutiny against AI tools that can be used to generate false and deceptive content, which may mislead consumers. AI developers should prioritize transparency in how AI-generated content is created and used and ensure that AI services comply with advertising and consumer protection laws. The decision also reflects the need for AI developers to balance innovation with ensuring their innovations do not harm consumers.

Out with a Bang: President Biden Ends Final Week in Office with Three AI Actions — AI: The Washington Report

President Biden’s final week in office included three AI actions — a new rule on chip and AI model export controls, an executive order on AI infrastructure and data centers, and an executive order on cybersecurity.
On Monday, the Department of Commerce issued a rule on responsible AI diffusion that limits chip and AI model exports to certain countries of concern. The rule is particularly aimed at curbing US AI technology exports to China and includes exceptions for US allies.
On Tuesday, President Biden signed an executive order (EO) on AI infrastructure, which directs agencies to lease federal sites for the development of large-scale AI data centers.
On Thursday, Biden signed an EO on cybersecurity, which directs the federal government to strengthen its cybersecurity systems and implement more rigorous requirements for software providers and other third-party contractors.
The actions come just days before President-elect Trump begins his second term. Yet, it remains an open question whether President Trump, who has previously supported chip export controls and data center investments, will keep these actions in place or undo them.  

In its final week, the Biden administration issued three final actions on AI, capping off an administration that took the first steps toward crafting a government response to AI. On Monday, the Biden administration announced a rule on responsible AI diffusion through chip and AI model export controls, which limit such exports to certain foreign countries. On Tuesday, President Biden signed an Executive Order (EO) on Advancing United States Leadership in Artificial Intelligence Infrastructure, which directs agencies to lease federal sites for the development of AI data centers. And on Thursday, Biden signed an Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity, which directs the federal government to strengthen its cybersecurity operations.
The new AI actions come just days before President-elect Trump takes the White House. What Trump decides to do with Biden’s new and old AI actions, as we discuss below, may provide the first indication of the direction of his second administration’s approach to AI.
Rule on Responsible Diffusion of Advanced AI Technology
On Monday, the Department of Commerce’s Bureau of Industry and Security announced a sweeping rule on export controls on chips and AI models, which requires licenses for exports of the most advanced chips and AI models. The rule aims to allow US companies to export advanced chips and AI models to global allies while also preventing the diffusion of those technologies, either directly or through an intermediary, into countries of concern, including China and Russia.
“To enhance U.S. national security and economic strength, it is essential that we do not offshore [AI] and that the world’s AI runs on American rails,” according to a White House fact sheet. “It is important to work with AI companies and foreign governments to put in place critical security and trust standards as they build out their AI ecosystems.”
The rule divides countries into three categories, with different levels of export controls and licensing requirements for each category based on their risk level:

Eighteen (18) close allies can receive a license exception. Close allies are “jurisdictions with robust technology protection regimes and technology ecosystems aligned with the national security and foreign policy interests of the United States.” They include Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, Netherlands, New Zealand, Norway, South Korea, Spain, Sweden, Taiwan, and the United Kingdom.
Exports to countries of concern, including China and Russia, require a license, and a “presumption of denial” will apply to license applications involving these countries.
All other countries may apply for a license, and “license applications will be reviewed under a presumption of approval.” However, after a certain number of chips are exported, additional restrictions will apply to these countries.

The rule’s export controls fall into four categories depending on the country, its security standards, and the types of chips being exported.

Orders of chips with collective computational power of up to 1,700 advanced GPUs “do not require a license and do not count against national chip caps.”
Entities headquartered in close allies can obtain “Universal Verified End User” (UVEU) status by meeting high security and trust standards. With this status, these entities “can then place up to 7% of their global AI computational capacity in countries around the world — likely amounting to hundreds of thousands of chips.”
Entities not headquartered in a country of concern can obtain “National Verified End User” status by meeting the same high security and trust standards, “enabling them to purchase computational power equivalent to up to 320,000 advanced GPUs over the next two years.”
Entities not headquartered in a close ally and without VEU status “can still purchase large amounts of computational power, up to the equivalent of 50,000 advanced GPUs per country.”

The rule also includes specific export restrictions and licensing requirements for AI models.

Advanced Closed-Weight AI Models: A license is required to export any closed-weight AI model — “i.e., a model with weights that are not published” — that has been trained on more than 10²⁶ computational operations. Applications for these licenses will be reviewed under a presumption of denial policy “to ensure that the licensing process consistently accounts for the risks associated with the most advanced AI models.”
Open-Weight AI Models: The rule does “not [impose] controls on the model weights of open-weight models,” the most advanced of which “are currently less powerful than the most advanced closed-weight models.”

The new chip export controls build on previous export controls from 2022 and 2023, which we previously covered.
Executive Order on AI Infrastructure
On Tuesday, Biden signed an Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. The EO directs the Department of Defense and Department of Energy to lease federal sites to the private sector for the development of gigawatt-scale AI data centers that adhere to certain clean energy standards.
“These efforts also will help position America to lead the world in clean energy deployment… This renewed partnership between the government and industry will ensure that the United States will continue to lead the age of AI,” President Biden said in a statement.
The EO requires the Secretary of Defense and Secretary of Energy to identify three sites for AI data centers by February 28, 2025. Developers that build on these sites “will be required to bring online sufficient clean energy generation resources to match the full electricity needs of their data centers, consistent with applicable law.”
The EO also directs agencies “to expedite the processing of permits and approvals required for the construction and operation of AI infrastructure on Federal sites.” The Department of Energy will work to develop and upgrade transmission lines around the new sites and “facilitate [the] interconnection of AI infrastructure to the electric grid.”
Private developers of AI data centers on federal sites are also subject to numerous lease obligations, including paying for the full cost of building and maintaining AI infrastructure and data centers, adhering to lab security and labor standards, and procuring certain clean energy generation resources.
Executive Order on Cybersecurity
On Thursday, President Biden signed an Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity. The EO directs the federal government to strengthen the cybersecurity of its federal systems and adopt more rigorous security and transparency standards for software providers and other third-party contractors. It directs various agencies — with some deadlines as soon as 30 days from the EO’s issuance — to evaluate their cybersecurity systems, launch cybersecurity pilot programs, and implement strengthened cybersecurity practices, including for communication and identity management systems.
The EO also aims to integrate AI into government cybersecurity operations. The EO directs the Secretary of Energy to launch a pilot program “on the use of AI to enhance the cyber defense of critical infrastructure in the energy sector.” Within 150 days of the EO, various agencies shall also “prioritize funding for their respective programs that encourage the development of large-scale, labeled datasets needed to make progress on cyber defense research.” Also, within 150 days of the EO, various agencies shall pursue research on a number of AI topics, including “human-AI interaction methods to assist defensive cyber analysis” and “methods for designing secure AI systems.”
The Fate of President Biden’s AI Actions Under a Trump Administration?
It remains an open question whether Biden’s new AI infrastructure EO, cybersecurity EO, and chip export control rule will survive intact, be modified, or be eliminated under the Trump administration, which begins on Monday. What Trump decides to do with the new export control rule, in particular, may signal the direction of his administration’s approach to AI. Trump may keep the export controls, given his stated commitment to winning the AI race against China, or he may eliminate or scale them back out of concern that they overly burden US AI innovation and business.