2023 AI Executive Order Revoked

On January 20, 2025, President Donald Trump signed an executive order rescinding the 2023 directive issued by former President Joe Biden on artificial intelligence (AI). Biden’s order outlined extensive measures aimed at guiding the development and use of AI technologies, including the establishment of chief AI officers in major federal agencies and frameworks for tackling ethical and security risks. This revocation signals a major policy change, transitioning away from the federal oversight put in place by the previous administration.
The move to revoke Biden’s executive order has led to a climate of regulatory uncertainty for companies operating in AI-driven fields. In the absence of a unified federal framework, businesses could encounter various challenges, such as an inconsistent regulatory landscape as states and international organizations intervene, increased risks related to AI ethics and data privacy, and unfair competition among companies that implement differing standards for AI development and deployment.
Looking Forward
With federal oversight receding, responsibility for trustworthy and accountable AI now rests more heavily with companies themselves.
To navigate this evolving landscape, organizations should consider taking the following steps now:

Strengthen Internal Governance: Develop or enhance internal AI policies and ethical guidelines to promote responsible and legally compliant AI use, even in the absence of federal mandates.
Invest in Compliance: Stay updated on state, international, and industry-specific AI regulations that could impact operations. Proactively align practices with emerging standards such as Colorado’s Artificial Intelligence Act and the EU’s AI Act.
Monitor Federal Developments: Keep a close eye on further announcements or legislative actions from Congress and federal agencies that could signal new directions in AI policy and regulation.
Engage in Industry Collaboration: Collaborate with industry groups and standards organizations to help influence voluntary AI standards and best practices.
Focus on Risk Management: Establish strong risk assessment frameworks to identify and address potential AI-related risks, including biases, cybersecurity threats, legal compliance, and liability concerns.

President Trump’s decision reflects a preference for lighter-touch regulation, placing greater responsibility on the private sector to ensure ethical and safe AI use. Companies must navigate an uncertain regulatory landscape while innovating responsibly. As circumstances change, businesses must stay alert and flexible to maintain their competitive advantage and public trust.

U.S. Treasury Department’s Final Rule on Outbound Investment Takes Effect

On January 2, 2025, the U.S. Department of the Treasury’s Final Rule on outbound investment screening became effective. The Final Rule implements Executive Order 14105 issued by former President Biden on August 9, 2023, and aims to protect U.S. national security by restricting covered U.S. investments in certain advanced technology sectors in countries of concern. Covered transactions with a completion date on or after January 2, 2025, are subject to the Final Rule, including the prohibition and notification requirements, as applicable.
The Final Rule targets technologies and products in the semiconductor and microelectronics, quantum information technologies, and artificial intelligence (AI) sectors that may impact U.S. national security. It prohibits certain transactions and requires notification of certain other transactions in those technologies and products. The Final Rule has two primary components:

Notifiable Transactions: A requirement that notification of certain covered transactions involving both a U.S. person and a “covered foreign person” (including, but not limited to, a person of a country of concern engaged in “covered activities” related to certain technologies and products) be provided to the Treasury Department. A U.S. person subject to the notification requirement must file on Treasury’s Outbound Investment Security Program website by specified deadlines. The Final Rule specifies the detailed information and certifications required in the notification and imposes a 10-year record retention period for filings and supporting information.
Prohibited Transactions: A prohibition on certain U.S. person investments in a covered foreign person that is engaged in a more sensitive subset of activities involving identified technologies and products. A U.S. person is required to take all reasonable steps to prohibit and prevent its controlled foreign entity from undertaking a transaction that would be a prohibited transaction if undertaken by a U.S. person. The Final Rule contains a list of factors that the Treasury Department will consider in determining whether the relevant U.S. person took all reasonable steps.

The Final Rule focuses on investments in “countries of concern,” which currently include only the People’s Republic of China, including Hong Kong and Macau. The Final Rule targets U.S. investments in Chinese companies involved in three sensitive technology sectors: semiconductors and microelectronics, quantum information technologies, and artificial intelligence. The Final Rule sets forth prohibited and notifiable transactions in each of these sectors:
Semiconductors and Microelectronics

Prohibited: Covered transactions relating to certain electronic design automation software, fabrication or advanced packaging tools, advanced packaging techniques, and the design and fabrication of certain advanced integrated circuits and supercomputers.
Notifiable: Covered transactions relating to the design, fabrication and packaging of integrated circuits not covered by the prohibited transactions.

Quantum Information Technologies

All Prohibited: Covered transactions involving the development of quantum computers and production of critical components, the development or production of certain quantum sensing platforms, and the development or production of quantum networking and quantum communication systems.

Artificial Intelligence (AI) Systems

Prohibited:

Covered transactions relating to AI systems designed exclusively for or intended to be used for military, government intelligence or mass surveillance end uses.
Covered transactions relating to development of any AI system that is trained using a quantity of computing power meeting certain technical specifications and/or using primarily biological sequence data.

Notifiable: Covered transactions involving AI systems designed or intended to be used for cybersecurity applications, digital forensics tools, penetration testing tools, or the control of robotic systems, or that are trained using a quantity of computing power meeting certain technical specifications.

The Final Rule specifically defines the key terms “country of concern,” “U.S. person,” “controlled foreign entity,” “covered activity,” “covered foreign person,” “knowledge” and “covered transaction,” among other related terms, and sets forth the prohibitions and notification requirements in line with the national security objectives stated in the Executive Order. The Final Rule also provides a list of transactions that are excepted from these requirements.
U.S. investors intending to invest in China, particularly in the sensitive sectors set forth above, should carefully review the Final Rule and conduct robust due diligence to determine whether a proposed transaction would be covered by the Final Rule (either prohibited or notifiable) before undertaking any such transaction. 
Any person subject to U.S. jurisdiction may face substantial civil and/or criminal penalties for violating or attempting to violate the Final Rule, including civil fines of up to $368,137 per violation (adjusted annually for inflation) or twice the amount of the transaction, whichever is greater, and criminal penalties of up to $1 million in fines or 20 years in prison for willful violations. In addition, the Secretary of the Treasury can take any authorized action to nullify, void, or otherwise require divestment of any prohibited transaction.
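As a worked illustration of how the greater-of civil fine ceiling operates (a sketch only; the function name and the example transaction amounts are hypothetical, and the statutory cap is adjusted annually for inflation):

```python
def max_civil_penalty(transaction_amount: int, statutory_cap: int = 368_137) -> int:
    """Civil fine ceiling under the Final Rule: the greater of the statutory
    cap (adjusted annually for inflation) or twice the transaction amount."""
    return max(statutory_cap, 2 * transaction_amount)

# For a $100,000 transaction the statutory cap governs; for a $1,000,000
# transaction, twice the transaction amount exceeds the cap.
print(max_civil_penalty(100_000))    # 368137
print(max_civil_penalty(1_000_000))  # 2000000
```

In practice, the per-violation exposure scales with deal size once the transaction exceeds roughly half the statutory cap.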

Oregon AG Issues Guidance Regarding Oregon State Laws and AI

On December 24, 2024, the Oregon Attorney General published AI guidance, “What you should know about how Oregon’s laws may affect your company’s use of Artificial Intelligence” (the “Guidance”), which clarifies how existing Oregon consumer protection, privacy and anti-discrimination laws apply to AI tools. Through various examples, the Guidance highlights key themes such as privacy, accountability and transparency, and provides insight into “core concerns,” including bias and discrimination.
Consumer Protection – Oregon’s Unlawful Trade Practice Act (“UTPA”)
The Guidance emphasizes that misrepresentations, even when they are not directly made to the consumer, may be actionable under the UTPA, and an AI developer or deployer may be “liable to downstream consumers for the harm its products cause.” The Guidance provides a non-exhaustive list of examples that may constitute violations of the UTPA, such as:

failing to disclose any known material defect or nonconformity when delivering an AI product;
misrepresenting that an AI product has characteristics, uses, benefits or qualities that it does not have;
using AI to misrepresent that real estate, goods or services have certain characteristics, uses, benefits or qualities (e.g., a developer or deployer using a chatbot while falsely representing that it is human);
using AI to make false or misleading representations about price reductions (e.g., using AI generated ads or emails indicating “limited time” or “flash sale” when a similar discount is offered year-round);
using AI to set excessively high prices during an emergency;
using an AI-generated voice as part of a robocall campaign to misrepresent or falsify certain information, such as the caller’s identity and the purpose of the call; and
leveraging AI to use unconscionable tactics regarding the sale, rental or disposal of real estate, goods or services, or collecting or enforcing an obligation (e.g., knowingly taking advantage of a consumer’s ignorance or knowingly permitting a consumer to enter into a transaction that does not materially benefit them).

Data Privacy – Oregon Consumer Protection Act (“OCPA”)
In addition, the Guidance notes that developers, suppliers and users of AI may be subject to OCPA, given that generative AI systems ingest significant quantities of words, images and other content that often contain personal data. Key takeaways from the Guidance regarding OCPA include:

developers that use personal data to train AI systems must disclose that practice in an accessible and clear privacy notice;
if personal data includes any categories of sensitive data, entities must first obtain explicit consent from consumers before using the data to develop or train AI models;
if a developer purchases or uses another company’s data for model training, the developer may be considered a “controller” under OCPA, and therefore must comply with the same standards as the company that initially collected the data;
data suppliers and developers are prohibited from “retroactively or passively” altering privacy notices or terms of use to legitimize the use of previously collected personal data to train AI models, and instead are required to obtain affirmative consent for any secondary or new uses of that data;
developers and users of AI must provide a mechanism for consumers to withdraw previously given consent (and, if consent is revoked, stop processing the data within 15 days of receiving the revocation);
entities subject to OCPA must consider how to account for specific consumer rights when using AI models, including a consumer’s right to (1) opt-out of the use of profiling in decisions that have legal or similarly significant effects (e.g., housing, education or lending) and (2) request the deletion of their personal data; and
in connection with OCPA’s requirement to conduct data protection assessments for certain processing activities, and given the complexity of generative AI models and their proprietary data and algorithms, entities “should be aware that feeding consumer data into AI models and processing it in connection with these models likely poses heightened risks to consumers.”

Data Security – Oregon Consumer Information Protection Act
The Guidance clarifies that AI developers (as well as their data suppliers and users) that “own, license, maintain, store, manage, collect, acquire or otherwise possess” personal information also must comply with the Oregon Consumer Information Protection Act, which requires businesses to safeguard personal information and implement an information security program that meets specific requirements. The Guidance also notes that to the extent there is a security breach, AI developers, data suppliers and users may be required to notify consumers and the Oregon Attorney General.
Anti-Discrimination – Oregon Equality Act
The Guidance explains that AI systems that “utilize discretionary inputs or produce biased outcomes that harm individuals based on protected characteristics” may trigger the Oregon Equality Act. The law prohibits discrimination based on race, color, religion, sex, sexual orientation, gender identity, national origin, marital status, age or disability, including in connection with housing and public accommodations. The Guidance also includes an illustrative example of how the law applies to the use of AI: a rental management company’s use of an AI mortgage approval system that consistently denies loans to qualified applicants from certain neighborhoods or ethnic backgrounds, because the system was trained on historically biased data, may be considered a violation of the law.

Watch Out, Employers: Using Smart Devices in the Workplace May Not Be So Smart

What does the EEOC have to do with smart watches, rings, glasses, helmets and other devices that track bodily movement and other data? These devices, known as “wearables,” can track location, brain activity, heart rate, and other mental or physical information about the wearer, which has led some employers to require their employees to wear company-issued wearables. While wearables may provide useful data, the EEOC recently warned employers to watch out for the dangers associated with them. A summary of the risks the EEOC identified appears below. You can find the full guidance here.
What to watch out for
Per the EEOC’s guidance, there are three categories of risk: collecting information, using information, and reasonable accommodations.
1. Collecting Information – Among other things, wearables collect information related to employees’ physical or mental condition (e.g., heart rate and blood pressure). The EEOC warned that collecting this type of information may pose risks under the Americans with Disabilities Act.
The EEOC considers tracking this sort of information the equivalent of a disability-related inquiry, or even a medical examination, under the ADA. Such inquiries and medical examinations, for all employees (not just those with disabilities), are limited to situations where the inquiry or exam is job-related and consistent with business necessity or otherwise permitted by the ADA. The ADA allows inquiries and examinations for employees in the following circumstances:

When a federal, safety-related law or regulation allows for the inquiry or exam,
For certain public-safety related positions (e.g., police or firefighters), or
If the inquiry or exam is voluntary and part of the employer’s health program that is reasonably designed to promote health or prevent disease.

Outside of these three exceptions, the ADA prohibits disability-related inquiries and medical examinations. Also, if you are tracking this information, keep it confidential, just like you would any other medical information.
2. Using Information – Even if collection is permitted, employers must use caution when determining how to use the information. If an employer uses the information in a way that adversely affects employees due to a protected status, it could trigger anti-discrimination laws. A few cautionary examples from the guidance:

Using heart rate or other information to infer an employee is pregnant, then taking adverse action against her
Relying on wearable data, which is less accurate for individuals with dark skin, and making an adverse decision based on that data
Tracking an employee’s location, particularly when they are on a break or off work, and asking questions about their visits to medical facilities, which could elicit genetic information
Analyzing heart rate data to infer or predict menopause and refusing to promote the employee because of sex, age, and/or disability
Increased tracking of employees who have previously reported allegations of discrimination or other protected activity

Employers need to be cautious about policies regarding mandated-wearable use. Requiring some, but not all, employees to wear these devices may trigger risks of discrimination under Title VII. If you plan to use these devices, you need human oversight and the employees monitoring the data must understand the device flaws, imperfections in the data, and potential ways of misusing the information.
3. Reasonable Accommodations – Even if an employer’s mandated-wearable requirement meets one of the above-listed exceptions, you may need to make reasonable accommodations if an employee meets the requirements for a religious exception or based on pregnancy or disability.
Takeaways
Technology in the workplace is ever-changing, and you need to stay informed about potential issues before you decide to use it. Do you need this information? If so, do you need it on all of your employees? Remember that if you don’t know about an employee’s protected status, you are less likely to be accused of basing a decision on it.
Before you implement (or continue using) mandated wearables, meet with your employment lawyer to work out a plan for implementation, follow-up, and continued policy monitoring for these devices. Also, check out our prior blog on AI in the workforce for additional tips on responsible use of technology in the workplace.

2025 Labor and Employment Outlook for Manufacturers: Employer-Friendly Skies on the Horizon

As we look ahead to 2025, several important labor and employment law changes, planned and potential, are on the horizon. With President Trump set to return to the Oval Office on January 20, 2025, labor and employment law priorities at the federal level are expected to change significantly. Meanwhile, state legislatures remain active in enacting new laws that will impact the labor and employment law landscape for manufacturers. Below are a few key issues likely to impact manufacturers in 2025.
Minimum Wage for Non-Exempt Employees and Salary Threshold for Exempt Employees
President Biden called for an increase to the federal minimum wage (currently $7.25 per hour) to $15 per hour during his presidency, but that increase did not occur; the minimum wage rate has remained unchanged since 2009. During President-elect Trump’s recent campaign, he signaled an openness to raising the federal minimum wage, though any forthcoming increase will likely be substantially less than the Biden administration’s goal. Similarly, it is unlikely that the incoming administration will seek to revive the Biden administration’s increased “white collar” overtime exemption salary threshold, which a federal judge recently struck down. Nonetheless, manufacturers should remain current on federal and state minimum wage rates and salary thresholds.
Independent Contractor v. Employee Classification Enforcement
It is possible that the incoming administration may undo the Biden administration’s efforts to make it more difficult for manufacturers to classify workers as independent contractors, thereby simplifying wage and hour compliance for manufacturers under the Fair Labor Standards Act (FLSA). Further, the Trump administration may not prioritize this issue from an enforcement perspective. Regardless, manufacturers should continue to ensure compliance with more stringent state and local laws and guidance regarding worker classification.
Status of Equal Employment Opportunity and Diversity, Equity, Inclusion, & Belonging (DEIB) Programs and Policies
As seen during Trump’s first presidency, the Equal Employment Opportunity Commission (EEOC) under the incoming administration may take aim at governmental and corporate diversity, equity, inclusion, and belonging (DEIB) initiatives in employment, focusing on equality as opposed to equity. Some entities are preemptively rolling back their DEIB programs and practices regarding recruiting, hiring, promotions, and similar efforts in anticipation of the new administration’s position. The Trump administration may also change protections for LGBTQ+ workers. At the state and local levels, a continued expansion of protected statuses is expected. Manufacturers should be aware of these developments and ensure that their handbooks and policies comply with the state and local laws where their employees work.
Workers’ Right to Organize and the National Labor Relations Board (NLRB)
Under the incoming Trump administration, the National Labor Relations Board (NLRB) may revert to more employer-friendly policies aimed at ensuring companies have rights regarding union organizing and similar activities, as seen during the first Trump administration. We also anticipate that the incoming General Counsel of the NLRB will rescind the memoranda issued by the current NLRB General Counsel, which implemented a pro-labor policy by expanding the scope of available remedies for unfair labor practices and restricting permissible non-compete agreements, among other key efforts to support union-organizing activity. The new NLRB may return to using more balanced standards and rulings when analyzing employer policies and confidentiality and non-disparagement provisions. Whether unionized or union-free, manufacturers may be impacted by these changes at the NLRB and beyond and should be aware of these developments.
Artificial Intelligence (AI)
While manufacturers continue to turn towards artificial intelligence (AI) and algorithm-based technologies for recruiting, hiring, and other employment needs, there may be developments in legislation at the state and federal levels. At the federal level, it is possible that the incoming administration could approach the issue of AI from a self-governance perspective, meaning refraining from legislating around the use of AI in employment and, instead, relying on employers to monitor their use of AI in recruiting, hiring, etc. AI tools could be a key focus for state and local legislatures in 2025. Manufacturers should ensure that their deployment and use of AI tools in employment comply with federal, state, and local laws.
Non-Compete Legislation
The Federal Trade Commission’s (FTC) final rule banning non-compete agreements did not go into effect as planned in 2024, and the FTC will likely abandon its efforts to revive the final rule once the incoming administration takes office. We do not anticipate further legislative or regulatory efforts at the federal level during the second Trump administration. However, at the state level, we expect to see more states and localities enact laws banning or restricting the scope of non-compete agreements, including based on position and salary, thereby challenging manufacturer efforts to protect their business interests and proprietary information and defend against unfair competition.

FDA Issues New Recommendations on Use of AI for Medical Devices, Drugs, and Biologics

In its most recent effort to keep pace with advancing technology, the US Food and Drug Administration (FDA) recently issued two draft guidances on the use of artificial intelligence (AI) in the context of drugs, biologics, and medical devices.

Medical Device Guidance
The first draft guidance is entitled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” The guidance provides an overview of the type of documentation and information that companies will need to submit to the FDA to obtain medical device regulatory approval — part of the so-called “marketing submission” process. Among other things, the FDA advises that such documentation and information should include:

A description of the device inputs and outputs, including whether inputs are entered manually or automatically.
A description of how AI is used to achieve the device’s intended purpose.
A description of the device’s intended users, their characteristics, and the level and type of training they are expected to have and/or receive.
A description of the intended use environment (e.g., clinical setting, home setting).
A description of the degree of automation that the device provides in comparison to the workflow for the current standard of care.
A comprehensive risk assessment and risk management plan.
Data management information, including how data were collected, limitations of the dataset, and an explanation of how the data are representative of the intended use population.
A cybersecurity assessment, particularly focusing on those risks that may be unique to AI.

Guidance for Drugs and Biological Products
The second draft guidance issued this month is entitled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” The guidance addresses considerations for the use of AI to support regulatory decision-making for drugs and biologics. Specifically, the draft guidance discusses the use of AI models to produce information or data to support regulatory decision-making regarding safety, effectiveness, or quality for these products. To that end, the FDA recommends utilizing the following seven-step process to establish and assess the credibility of an AI model output for a specific context of use (COU) based on model risk:

Step 1: Define the question of interest that will be addressed by the AI model.
Step 2: Define the COU for the AI model.
Step 3: Assess the AI model risk.
Step 4: Develop a plan to establish the credibility of AI model output within the COU.
Step 5: Execute the plan.
Step 6: Document the results of the credibility assessment plan and discuss deviations from the plan.
Step 7: Determine the adequacy of the AI model for the COU.

Each of these steps is discussed in detail in the draft guidance, with examples provided. The FDA states that this will provide a “risk-based credibility assessment framework” that will help manufacturers and other interested parties plan, gather, organize, and document information to establish the credibility of AI model outputs when the model is used to produce information or data intended to support regulatory decision-making.
Conclusion
The new guidances from the FDA provide further indication that the agency is closely scrutinizing the use of AI in and in connection with FDA-regulated medical devices, drugs, and biological products. To reduce the risk of unnecessary regulatory delays, companies seeking approval of FDA-regulated products should carefully review their regulatory submissions to ensure they align with the new AI guidance documents.

Continued FTC Crackdown on False Product Reviews

Consumer protection wins again! The Federal Trade Commission (FTC) announced a final order settling its complaint against Rytr LLC, the maker of an artificial intelligence (AI) writing assistant capable of producing detailed and specific false product reviews.
The FTC alleged that Rytr subscribers used the service to generate product reviews potentially containing false information, deceiving potential consumers who sought to use the reviews to make purchasing decisions. The final order settling the complaint, which was published on December 18, 2024, bars Rytr from engaging in similar illegal conduct in the future and prohibits the company from advertising, marketing, promoting, offering for sale, or selling any services “dedicated to or promoted as generating consumer reviews or testimonials.”
The decision highlights increased scrutiny against AI tools that can be used to generate false and deceptive content, which may mislead consumers. AI developers should prioritize transparency in how AI-generated content is created and used and ensure that AI services comply with advertising and consumer protection laws. The decision also reflects the need for AI developers to balance innovation with ensuring their innovations do not harm consumers.

Out with a Bang: President Biden Ends Final Week in Office with Three AI Actions — AI: The Washington Report

President Biden’s final week in office included three AI actions — a new rule on chip and AI model export controls, an executive order on AI infrastructure and data centers, and an executive order on cybersecurity.
On Monday, the Department of Commerce issued a rule on responsible AI diffusion limiting chip and AI model exports made to certain countries of concern. The rule is particularly aimed at curbing US AI technology exports to China and includes exceptions for US allies.
On Tuesday, President Biden signed an executive order (EO) on AI infrastructure, which directs agencies to lease federal sites for the development of large-scale AI data centers.
On Thursday, Biden signed an EO on cybersecurity, which directs the federal government to strengthen its cybersecurity systems and implement more rigorous requirements for software providers and other third-party contractors.
The actions come just days before President-elect Trump begins his second term. Yet, it remains an open question whether President Trump, who has previously supported chip export controls and data center investments, will keep these actions in place or undo them.  

In its final week, the Biden administration issued three final actions on AI, capping off an administration that took the first steps toward crafting a government response to AI. On Monday, the Biden administration announced a rule on responsible AI diffusion through chip and AI model export controls, which limits such exports to certain foreign countries. On Tuesday, President Biden signed an Executive Order (EO) on Advancing United States Leadership in Artificial Intelligence Infrastructure, which directs agencies to lease federal sites for the development of AI data centers. And on Thursday, Biden signed an Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity, which directs the federal government to strengthen its cybersecurity operations.
The new AI actions come just days before President-elect Trump takes the White House. What Trump decides to do with Biden’s new and old AI actions, as we discuss below, may provide the first indication of the direction of his second administration’s approach to AI.
Rule on Responsible Diffusion of Advanced AI Technology
On Monday, the Department of Commerce’s Bureau of Industry and Security announced a sweeping rule on export controls on chips and AI models, which requires licenses for exports of the most advanced chips and AI models. The rule aims to allow US companies to export advanced chips and AI models to global allies while also preventing the diffusion of those technologies, either directly or through an intermediary, into countries of concern, including China and Russia.
“To enhance U.S. national security and economic strength, it is essential that we do not offshore [AI] and that the world’s AI runs on American rails,” according to a White House fact sheet. “It is important to work with AI companies and foreign governments to put in place critical security and trust standards as they build out their AI ecosystems.”
The rule divides countries into three categories, with different levels of export controls and licensing requirements for each category based on their risk level:

Eighteen (18) close allies can receive a license exception. Close allies are “jurisdictions with robust technology protection regimes and technology ecosystems aligned with the national security and foreign policy interests of the United States.” They include Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, Netherlands, New Zealand, Norway, South Korea, Spain, Sweden, Taiwan, and the United Kingdom.
Countries of concern, including China and Russia, must receive a license to export chips. A “presumption of denial” will apply to license applications from these countries.
All other countries are allowed to apply for a license, and “license applications will be reviewed under a presumption of approval.” But after a certain number of chips are exported, certain restrictions will apply for these countries.

The rule’s export controls fall into four categories depending on the country, its security standards, and the types of chips being exported.

Chip orders with collective computation power of up to roughly 1,700 advanced GPUs “do not require a license and do not count against national chip caps.”
Entities headquartered in close allies can obtain “Universal Verified End User” (UVEU) status by meeting high security and trust standards. With this status, these entities “can then place up to 7% of their global AI computational capacity in countries around the world — likely amounting to hundreds of thousands of chips.”
Entities not headquartered in a country of concern can obtain “National Verified End User” status by meeting the same high security and trust standards, “enabling them to purchase computational power equivalent to up to 320,000 advanced GPUs over the next two years.”
Entities not headquartered in a close ally and without VEU status “can still purchase large amounts of computational power, up to the equivalent of 50,000 advanced GPUs per country.”

The rule also includes specific export restrictions and licensing requirements for AI models.

Advanced Closed-Weight AI Models: A license is required to export any closed-weight AI model —“i.e., a model with weights that are not published” — that has been trained on more than 10^26 computational operations. Applications for these licenses will be reviewed under a presumption of denial policy “to ensure that the licensing process consistently accounts for the risks associated with the most advanced AI models.”
Open-Weight AI Models: The rule does “not [impose] controls on the model weights of open-weight models,” the most advanced of which “are currently less powerful than the most advanced closed-weight models.”

The new chip export controls build on previous export controls from 2022 and 2023, which we previously covered.
Executive Order on AI Infrastructure
On Tuesday, Biden signed an Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. The EO directs the Department of Defense and Department of Energy to lease federal sites to the private sector for the development of gigawatt-scale AI data centers that adhere to certain clean energy standards.
“These efforts also will help position America to lead the world in clean energy deployment… This renewed partnership between the government and industry will ensure that the United States will continue to lead the age of AI,” President Biden said in a statement.
The EO requires the Secretary of Defense and Secretary of Energy to identify three sites for AI data centers by February 28, 2025. Developers that build on these sites “will be required to bring online sufficient clean energy generation resources to match the full electricity needs of their data centers, consistent with applicable law.”
The EO also directs agencies “to expedite the processing of permits and approvals required for the construction and operation of AI infrastructure on Federal sites.” The Department of Energy will work to develop and upgrade transmission lines around the new sites and “facilitate [the] interconnection of AI infrastructure to the electric grid.”
Private developers of AI data centers on federal sites are also subject to numerous lease obligations, including paying for the full cost of building and maintaining AI infrastructure and data centers, adhering to lab security and labor standards, and procuring certain clean energy generation resources.
Executive Order on Cybersecurity
On Thursday, President Biden signed an Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity. The EO directs the federal government to strengthen the cybersecurity of its federal systems and adopt more rigorous security and transparency standards for software providers and other third-party contractors. It directs various agencies — with some deadlines as soon as 30 days from the EO’s issuance — to evaluate their cybersecurity systems, launch cybersecurity pilot programs, and implement strengthened cybersecurity practices, including for communication and identity management systems.
The EO also aims to integrate AI into government cybersecurity operations. The EO directs the Secretary of Energy to launch a pilot program “on the use of AI to enhance the cyber defense of critical infrastructure in the energy sector.” Within 150 days of the EO, various agencies shall also “prioritize funding for their respective programs that encourage the development of large-scale, labeled datasets needed to make progress on cyber defense research.” Also, within 150 days of the EO, various agencies shall pursue research on a number of AI topics, including “human-AI interaction methods to assist defensive cyber analysis” and “methods for designing secure AI systems.”
The Fate of President Biden’s AI Actions Under a Trump Administration?
It remains an open question whether Biden’s new AI infrastructure EO, cybersecurity EO, and chip export control rule will survive intact, be modified, or be eliminated under the Trump administration, which begins on Monday. What Trump decides to do with the new export control rule, in particular, may signal the direction of his administration’s approach to AI. Trump may keep the export controls due to his stated commitment to win the AI race against China, or he may get rid of them or tone them down out of concerns that they overly burden US AI innovation and business.

New Jersey Guidance on AI: Employers Must Comply With State Anti-Discrimination Standards

On January 9, 2025, New Jersey Attorney General Matthew J. Platkin and the Division on Civil Rights issued guidance stating that New Jersey’s anti-discrimination law applies to artificial intelligence. Specifically, the New Jersey Law Against Discrimination (“LAD”) applies to algorithmic discrimination – discrimination that results from the use of automated decision-making tools – the same way it has long applied to other forms of discriminatory conduct.
In a statement accompanying the guidance, the Attorney General explained that while “technological innovation . . . has the potential to revolutionize key industries . . . it is also critically important that the needs of our state’s diverse communities are considered as these new technologies are deployed.” This move is part of a growing trend among states to address and mitigate the risks of potential algorithmic discrimination resulting from employers’ use of AI systems.
LAD’s Prohibition of Algorithmic Discrimination
The guidance explains that the term “automated decision-making tool” refers to any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process. Automated decision-making tools can incorporate technologies such as generative AI, machine-learning models, traditional statistical tools, and decision trees.
The guidance makes clear that under the LAD, discrimination is prohibited regardless of whether it is caused by automated decision-making tools or human actions. The LAD’s broad purpose is to eliminate discrimination, and it doesn’t distinguish between the mechanisms used to discriminate. This means that employers will still be held accountable under the LAD for discriminatory practices, even if those practices rely on automated systems. An employer can violate the LAD even if it has no intent to discriminate, and even if a third party was responsible for developing the automated decision-making tool. Essentially, claims of algorithmic discrimination are assessed the same way as other discrimination claims under the LAD.
The LAD prohibits algorithmic discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. The LAD also prohibits algorithmic discrimination when it precludes or impedes the provision of reasonable accommodations, or of modifications to policies, procedures, or physical structures to ensure accessibility for people based on their disability, religion, pregnancy, or breastfeeding status.
Unlike the New York City law, which restricts employers’ ability to use automated employment decision tools in hiring and promotion decisions within New York City and requires employers to perform a bias audit of such tools to assess potential disparate impact on sex, race, and ethnicity, the LAD contains no audit requirement. However, the Attorney General’s guidance does recognize that “algorithmic bias” can occur in the use of automated decision-making tools and recommends various steps employers can take to identify and eliminate such bias, such as:

implementing quality control measures for any data used in designing, training, and deploying the tool;
conducting impact assessments;
having pre- and post-deployment bias audits performed by independent parties;
providing notice of their use of an automated decision-making tool;
involving people impacted by their use of a tool in the development of the tool; and
purposely attacking the tools to search for flaws.

This new guidance highlights the need for employers to exercise caution when using artificial intelligence and to thoroughly assess any automated decision-making tools they intend to implement. 
Tamy Dawli is a law clerk and contributed to this article.

Colorado Attorney General Announces Adoption of Amendments to Colorado Privacy Act Rules + Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement

Colorado Adopts Amendments to CPA Rules
The Colorado Attorney General announced the adoption of amendments to the Colorado Privacy Act (“CPA”) rules, which become effective on January 30, 2025. The amended rules provide enhanced protections for the processing of biometric data as well as the processing of minors’ online activities. Specifically, companies must develop and implement a written biometric data policy, implement appropriate security measures for biometric data, provide notice of the collection and processing of biometric data, obtain employee consent for the processing of biometric data, and provide a right of access to such data. With respect to minors, the amendments require entities to obtain consent before using any system design feature intended to significantly increase a known minor’s use of an online service, and to update their data protection assessments to address processing that presents heightened risks to minors. Entities already subject to the CPA should carefully review whether they have heightened obligations for the processing of employee biometric data, a category of data previously exempt from the scope of the CPA.
Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement
A proposed settlement in the Clearview AI Illinois Biometric Information Privacy Act (“BIPA”) litigation is facing opposition from 22 states and the District of Columbia. The states’ attorneys general argue that the settlement, which received preliminary approval in June 2024, lacks meaningful injunctive relief and gives plaintiffs an unusual financial stake in Clearview AI. The settlement would grant the class of consumers a 23 percent stake in Clearview AI, potentially worth $52 million, based on a September 2023 valuation. Alternatively, the class could opt for 17 percent of the company’s revenue through September 2027. The AGs contend the settlement doesn’t adequately address consumer privacy concerns and that the proposed 39 percent attorney fee award is excessive. Clearview AI has filed a motion to dismiss the states’ opposition, arguing it was submitted after the deadline for objections. A judge will consider granting final approval for the settlement at a hearing scheduled on January 30, 2025.

The BR International Trade Report: January 2025

Recent Developments
President Biden blocks Nippon Steel’s acquisition of US Steel. On January 3, President Biden announced that he would block the $15 billion sale of U.S. Steel to Japan’s Nippon Steel, citing national security concerns. President Biden’s decision came after the Committee on Foreign Investment in the United States (“CFIUS”) reportedly deadlocked in its review of the transaction and referred the matter to the President. U.S. Steel and Nippon Steel condemned the President’s action in a joint statement, arguing it marked “a clear violation of due process and the law governing CFIUS,” and on January 6 filed suit challenging the measure. 
Canadian Prime Minister Justin Trudeau announces his resignation as party leader and prime minister. On January 6, Prime Minister Trudeau, who has served as the Liberal Party leader since 2013 and prime minister since 2015, declared his intention to “resign as party leader, as prime minister, after the party selects its next leader through a robust, nationwide, competitive process.” Governor General Mary Simon suspended, or prorogued, the Canadian Parliament until March 24 to allow the Liberal Party time to select its new leader—who will replace Trudeau as prime minister leading up to the general elections, which must be held by October 20. Separately, details have begun to leak of the potential Canadian retaliation against President-elect Trump’s threatened tariffs on Canadian goods. This retaliation could include tariffs on certain steel, ceramics, plastics, and orange juice. 
U.S. Department of Commerce announces new export controls for AI chips. On January 13, the U.S. Department of Commerce’s Bureau of Industry and Security (“BIS”) issued a new interim final rule in an effort to keep advanced artificial intelligence (“AI”) chips from foreign adversaries. The interim final rule seeks to implement a three-tiered system of export restrictions. Under the new rule, (i) certain allied countries would face no new restrictions, (ii) non-allied countries would face certain restrictions, and (iii) U.S. adversaries would face almost absolute restrictions. BIS followed up with another rule on January 15 imposing heightened export controls for foundries and packaging companies exporting advanced chips, with exceptions for exports to an approved list of chip designers and for chips packaged by certain approved outsourced semiconductor assembly and test services (“OSAT”) companies.
Biden Administration imposes sanctions against Russia’s energy sector in parting blow. On January 10, the U.S. Department of the Treasury (“Treasury”) issued determinations authorizing the imposition of sanctions against any person operating in Russia’s energy sector and prohibiting U.S. persons from supplying petroleum services to Russia, and designated two oil majors—Gazprom Neft and Surgutneftegas—among others.
BIS issues final ICTS rule on connected vehicle imports and begins review of drone supply chain. On January 14, BIS issued a final rule under the Information and Communications Technology and Services (“ICTS”) supply chain regulations prohibiting the import of certain connected vehicles and connected vehicle hardware, capping a rulemaking process that started in March 2024. The rules, which will have a significant impact on the auto industry supply chain, will apply in certain cases to model year 2027 and in certain other cases to model year 2029. (See our alert on BIS’s proposed rule from September 2024.) Meanwhile, BIS launched an ICTS review on January 2 into the potential risk associated with Chinese and Russian involvement in the supply chains of unmanned aircraft systems, issuing an Advance Notice of Proposed Rulemaking.
China implicated in cyberattack on the U.S. Treasury. In December, a China state-sponsored Advanced Persistent Threat (“APT”) actor hacked Treasury, using a stolen key. Reports suggest that the attack targeted Treasury’s Office of Foreign Assets Control (“OFAC”), which administers U.S. sanctions programs, among other elements of Treasury. Initial reporting indicated that only unclassified documents were accessed by the hackers, although the extent of the attack is still largely unknown. The Chinese government has denied involvement.
United Kingdom joins the Comprehensive and Progressive Agreement for Trans-Pacific Partnership. On December 15, the United Kingdom officially joined the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (“CPTPP”)—a trade agreement between Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, and Vietnam—nearly four years after submitting its 2021 application. The United Kingdom is the first non-founding country to join the CPTPP.
Fallout of failed presidential martial law declaration continues in South Korea. South Korea continues to face unrest after last month’s short-lived declaration of martial law by President Yoon Suk Yeol, which led to his December 14 impeachment and January 15 arrest by anti-corruption investigators. On December 27, the National Assembly also impeached Prime Minister Han Duk-soo, who had been serving as acting president for the two weeks following Yoon’s impeachment. Finance Minister Choi Sang-mok now serves as acting president, and faces calls from South Korean investigators to order the presidential security service to comply with a warrant for President Yoon’s arrest.
Office of the U.S. Trade Representative initiates investigation into legacy chips from China. In late December, U.S. Trade Representative (“USTR”) Katherine Tai announced a new Section 301 investigation “regarding China’s acts, policies, and practices related to the targeting of the semiconductor industry for dominance.” The USTR will focus its initial investigation on “legacy chips,” which are integral to the U.S. manufacturing economy. The USTR began accepting written comments and requests to appear at the hearing on January 6. The public hearing is scheduled for March 11-12. 
President-elect Donald Trump eyes the Panama Canal and Greenland. At the December 2024 annual conference for Turning Point USA, President-elect Donald Trump criticized Panama’s management of the Panama Canal, indicating that the United States should reclaim control due to “exorbitant prices” charged to American shipping and naval vessels and Chinese influence in the Canal Zone. Panamanian President José Raúl Mulino rejected Trump’s claims, stating “[t]he canal is Panamanian and belongs to Panamanians. There’s no possibility of opening any kind of conversation around this reality.” President-elect Trump also has sought to revive his 2019 proposal to purchase Greenland from Denmark, emphasizing its strategic position in the Arctic and untapped natural resources. In response, Greenland’s Prime Minister Mute Egede stated that Greenland is not for sale, but would “work with the U.S.—yesterday, today, and tomorrow.”
Nicolás Maduro sworn in for third presidential term, despite disputed election results. On January 10, Nicolás Maduro Moros was inaugurated for another six-year term as president of Venezuela, despite evidence he lost the election to opposition candidate Edmundo González Urrutia. González, recognized by the Biden Administration as the president-elect of Venezuela, met with President Biden in the White House on January 6. In response to Maduro’s inauguration, the United States announced new sanctions programs against Maduro associates and extended the 2023 designation of Venezuela for Temporary Protected Status by 18 months.
U.S. Department of Defense designates more entities on Chinese Military Companies list. In its annual update of the Chinese Military Companies list (“CMC list”), the Department of Defense (“DoD”) added dozens of Chinese companies to the list, including well-known technology, AI, and battery companies, bringing the total number of CMC List entities to 134. Beginning in June 2026, DoD is prohibited from dealing with the newly designated companies.
European Union and China consider summit to mend ties. On January 14, European Council President António Costa and Chinese President Xi Jinping spoke via phone call, reportedly agreeing to host a summit on May 6, 2025—the 50th anniversary of EU-China diplomatic relations. The conversation comes just days before the inauguration of President-elect Donald Trump, who has threatened additional tariffs on Chinese goods and pushed the European Union to further decouple from China. Despite Beijing’s and Brussels’s willingness to meet, China-EU trade tensions remain high, highlighted by the European Commission’s October decision to impose duties of up to 35% on Chinese-made electric vehicles.

President Biden Issues Second Cybersecurity Executive Order

In light of recent cyberattacks targeting the federal government and United States supply chains, President Biden’s administration has released an Executive Order (the “Order”) in an attempt to modernize and enhance the federal government’s cybersecurity posture, as well as introduce and expand upon new or existing requirements imposed on third-party suppliers to federal agencies.
To the extent that the mandates set forth in this Order remain in place after President-elect Donald Trump takes office, third-party vendors and suppliers that contract with the federal government will need to ensure compliance with new or updated cybersecurity standards in order to remain eligible to contract with federal agencies. With that said, even if this Executive Order does not survive into the next administration, it still provides general guidance on cybersecurity best practices. While some of these practices may not be novel to the cybersecurity industry, the Order serves as yet another guidance document for companies on what constitutes “reasonable security.”
Below is a high-level, non-exhaustive summary of some of the key highlights in the Executive Order. Please note that the mandates would take effect on different dates in accordance with the time frames discussed in the Order.
Federal Government’s Latest Attempt to Modernize its Cybersecurity Posture
The Executive Order underscores the importance of modernizing the federal government’s cybersecurity infrastructure to defend against cyber campaigns by foreign adversaries targeting the government.
One of the ways in which the new Order attempts to do this is by directing federal agencies to implement “strong identity authentication and encryption” across communications transmitted via the internet, including email, voice and video conferencing, and instant messaging.
In addition, as federal agencies have improved their cyber defenses, adversaries have targeted the weak links in agency supply chains and the products and services upon which the government relies. In light of this pervasive threat, the Executive Order places a strong emphasis on the need for federal agencies to integrate cybersecurity supply chain risk management programs into enterprise-wide risk management by requiring those agencies, via the Office of Management and Budget (OMB), to (i) comply with the guidance in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-161 (Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations), and (ii) provide annual updates to OMB on their compliance efforts with respect to the same. The OMB’s requirements will address the integration of cybersecurity into the acquisition lifecycle through acquisition planning, source selection, responsibility determination, security compliance evaluation, contract administration, and performance evaluation.
The Executive Order also addresses the potential to use artificial intelligence (AI) to defend against cyberattacks by increasing the government’s ability to quickly identify new vulnerabilities and automate cyber defenses. Specifically, the Order directs certain agencies to prioritize research on topics related to AI and cyber defense, which include: (i) human-AI interaction methods to assist with defensive cyber analysis; (ii) security of AI coding assistance and the security of AI-generated code; (iii) methods for designing secure AI systems; and (iv) methods for prevention, response, remediation, and recovery from cyber incidents involving AI systems.
Beyond using modern technology to defend against increasing cyber threats, the Executive Order aims to centralize the government’s cybersecurity governance by expanding the Cybersecurity and Infrastructure Security Agency’s (CISA) role as the lead agency overseeing federal civilian agencies’ cybersecurity programs.
Enhancing and Expanding Upon Requirements Imposed on Third-Party Vendors of Federal Agencies
In addition to requiring federal agencies to adjust their cybersecurity posture, the Executive Order also aims to ensure that third-party vendors of federal agencies undertake various measures intended to help protect the federal government and critical infrastructure systems from malicious cyberattacks and to strengthen United States supply chains.
Third-Party Software Providers and Secure Software Development Practices
Part of the latest Executive Order focuses on transparency and deployment of secure software that meets standards set forth in the Biden administration’s first cybersecurity executive order, Executive Order 14028, which was issued in May 2021. Under that order, suppliers are required to attest that they adhere to secure software development practices, a requirement spurred by the Russian hackers who infected an update of the widely used SolarWinds Orion software to penetrate the networks of federal agencies. Insecure software remains a challenge for both providers and users, and it continues to leave the federal government and critical infrastructure systems vulnerable to malicious cyber incidents. This was recently illustrated by several attacks, including the 2024 exploitation of a vulnerability in a popular file transfer application used by multiple federal agencies.
Against this backdrop, the newly released Executive Order sets forth more robust attestation requirements for software providers that support critical government services and pushes for enhanced transparency by publicizing when these providers have submitted their attestations so that others can know what software meets the secure standards. In a similar vein, the new Order also aims to provide federal agencies with a coordinated set of practical and effective security practices to require when they procure software by calling for (i) updates to certain frameworks established by NIST that are adhered to by federal agencies – such as NIST SP 800-218 (Secure Software Development Framework) (SSDF) – for the secure development and delivery of software, (ii) the issuance of new requirements by OMB that derive from NIST’s updated SSDF to apply to federal agencies’ use of third-party software, and (iii) potential revisions to CISA’s Secure Software Development Attestation to conform to OMB’s requirements.
Vendors of Consumer Internet-of-Things (IoT) Products and U.S. Cyber Trust Mark Label
To further protect the supply chain, the Executive Order recognizes the risks federal agencies face when purchasing IoT products. To address these risks, the Order requires the development of additional requirements for contracts with consumer IoT providers. Consumer IoT providers contracting with federal agencies will have to (i) comply with the minimum cybersecurity practices outlined by NIST, and (ii) carry United States Cyber Trust Mark labeling on their products. The Cyber Trust Mark labeling initiative was announced by the White House on January 7, 2025, and will require consumer IoT products to pass a U.S. cybersecurity audit in order to legally display the mark on advertising and packaging.
Cloud Service Providers
The Executive Order also requires the development of new guidelines for cloud service providers, which is unsurprising in light of the recent cyberattack on the U.S. Treasury Department, in which a sophisticated Chinese hacking group known as Silk Typhoon stole a digital key from BeyondTrust Inc.—a third-party service provider for the Treasury Department—and used it to access unclassified information maintained on Treasury Department user workstations. The breach utilized a technique known as token theft. Authentication tokens are designed to enhance security by allowing users to stay logged in without repeated password entry. However, if compromised, these tokens enable attackers to impersonate legitimate users, granting unauthorized access to sensitive systems.
While this incident is likely not the impetus behind the updated guidelines for cloud service providers, it underscores the importance of auditing third-party vendor security practices and taking measures to reduce the lifespan of tokens so as to limit their usefulness if stolen. These new guidelines under the Executive Order would mandate multifactor authentication, complex passwords, and storing cryptographic keys using hardware security keys for cloud service providers of federal agencies.
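The token-lifespan mitigation described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not drawn from the Order or from any real system, and the names and demo signing key are invented: a signed token carries an expiry timestamp, so even a stolen copy with a valid signature stops working once its short lifespan lapses.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative only; real systems use managed keys

def issue_token(user: str, ttl_seconds: float) -> str:
    """Issue a signed token that expires after a short time-to-live."""
    payload = json.dumps({"sub": user, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    # base64url contains no ".", so "." safely separates payload from signature
    return (base64.urlsafe_b64encode(payload) + b"." +
            base64.urlsafe_b64encode(sig)).decode()

def verify_token(token: str) -> bool:
    """Accept a token only if its signature is valid AND it has not expired."""
    payload_b64, sig_b64 = token.encode().split(b".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig_b64), expected):
        return False  # forged or tampered token
    return json.loads(payload)["exp"] > time.time()  # expired tokens rejected

fresh = issue_token("analyst", ttl_seconds=60)
assert verify_token(fresh)       # a fresh token is accepted
stale = issue_token("analyst", ttl_seconds=-1)  # simulates a long-stolen copy
assert not verify_token(stale)   # valid signature, but expired: useless to a thief
```

The expiry check is the point of the sketch: an attacker who exfiltrates a token can only impersonate the user until the token lapses, which is why the guidelines emphasize reducing token lifespans alongside multifactor authentication and hardware-backed key storage.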
Key Takeaways
Although the fate of the Executive Order is uncertain with an incoming administration, organizations that contract with the federal government should closely monitor any developments as they will have to adhere to the new or enhanced cybersecurity requirements set out in the Order.
In addition, even if this Executive Order gets revoked by the incoming administration, organizations should not miss the opportunity to evaluate whether their cybersecurity programs comply with industry standard guidelines, such as NIST, as well as general best practices.