Continued FTC Crackdown on False Product Reviews

Consumer protection wins again! The Federal Trade Commission (FTC) announced a final order settling its complaint against Rytr LLC, the maker of an artificial intelligence (AI) writing assistant capable of producing detailed, specific, and false product reviews.
The FTC further alleged that Rytr subscribers used the service to generate product reviews potentially containing false information, deceiving consumers who relied on the reviews to make purchasing decisions. The final order settling the complaint, published on December 18, 2024, bars Rytr from engaging in similar illegal conduct in the future and prohibits the company from advertising, marketing, promoting, offering for sale, or selling any services “dedicated to or promoted as generating consumer reviews or testimonials.”
The decision highlights increased scrutiny of AI tools that can be used to generate false and deceptive content that may mislead consumers. AI developers should prioritize transparency in how AI-generated content is created and used and ensure that AI services comply with advertising and consumer protection laws. The decision also reflects the need for AI developers to balance innovation with ensuring their products do not harm consumers.

Out with a Bang: President Biden Ends Final Week in Office with Three AI Actions — AI: The Washington Report

President Biden’s final week in office included three AI actions — a new rule on chip and AI model export controls, an executive order on AI infrastructure and data centers, and an executive order on cybersecurity.
On Monday, the Department of Commerce issued a rule on responsible AI diffusion that limits chip and AI model exports to certain countries of concern. The rule is particularly aimed at curbing US AI technology exports to China and includes exceptions for US allies.
On Tuesday, President Biden signed an executive order (EO) on AI infrastructure, which directs agencies to lease federal sites for the development of large-scale AI data centers.
On Thursday, Biden signed an EO on cybersecurity, which directs the federal government to strengthen its cybersecurity systems and implement more rigorous requirements for software providers and other third-party contractors.
The actions come just days before President-elect Trump begins his second term. It remains an open question whether Trump, who has previously supported chip export controls and data center investments, will keep these actions in place or undo them.

In its final week, the Biden administration issued three final actions on AI, capping off an administration that took the first steps toward crafting a government response to AI. On Monday, the Biden administration announced a rule on responsible AI diffusion through chip and AI model export controls, which limits such exports to certain foreign countries. On Tuesday, President Biden signed an Executive Order (EO) on Advancing United States Leadership in Artificial Intelligence Infrastructure, which directs agencies to lease federal sites for the development of AI data centers. And on Thursday, Biden signed an Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity, which directs the federal government to strengthen its cybersecurity operations.
The new AI actions come just days before President-elect Trump takes the White House. What Trump decides to do with Biden’s new and old AI actions, as we discuss below, may provide the first indication of the direction of his second administration’s approach to AI.
Rule on Responsible Diffusion of Advanced AI Technology
On Monday, the Department of Commerce’s Bureau of Industry and Security announced a sweeping rule on export controls on chips and AI models, which requires licenses for exports of the most advanced chips and AI models. The rule aims to allow US companies to export advanced chips and AI models to global allies while also preventing the diffusion of those technologies, either directly or through an intermediary, into countries of concern, including China and Russia.
“To enhance U.S. national security and economic strength, it is essential that we do not offshore [AI] and that the world’s AI runs on American rails,” according to a White House fact sheet. “It is important to work with AI companies and foreign governments to put in place critical security and trust standards as they build out their AI ecosystems.”
The rule divides countries into three categories, with different levels of export controls and licensing requirements for each category based on their risk level:

Eighteen (18) close allies can receive a license exception. Close allies are “jurisdictions with robust technology protection regimes and technology ecosystems aligned with the national security and foreign policy interests of the United States.” They are Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, the Netherlands, New Zealand, Norway, South Korea, Spain, Sweden, Taiwan, and the United Kingdom.
Exports of chips to countries of concern, including China and Russia, require a license, and a “presumption of denial” will apply to those license applications.
Exports to all other countries also require a license, and “license applications will be reviewed under a presumption of approval.” But after a certain number of chips are exported to such a country, additional restrictions will apply.
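
To make the three-tier structure above concrete, here is a toy Python sketch of the presumptive licensing posture by destination. The country sets are copied from this summary and simplified; this illustrates the tiering described here, not the rule’s actual legal test.

```python
# Toy illustration of the three-tier structure described above.
# Country lists are simplified from this summary, not the rule itself.

CLOSE_ALLIES = {
    "Australia", "Belgium", "Canada", "Denmark", "Finland", "France",
    "Germany", "Ireland", "Italy", "Japan", "Netherlands", "New Zealand",
    "Norway", "South Korea", "Spain", "Sweden", "Taiwan", "United Kingdom",
}
COUNTRIES_OF_CONCERN = {"China", "Russia"}  # non-exhaustive

def license_posture(destination: str) -> str:
    """Return the presumptive licensing posture for a chip export."""
    if destination in CLOSE_ALLIES:
        return "eligible for license exception"
    if destination in COUNTRIES_OF_CONCERN:
        return "license required; presumption of denial"
    return "license required; presumption of approval, subject to chip caps"

for country in ("Japan", "Russia", "Brazil"):
    print(f"{country}: {license_posture(country)}")
```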

The rule’s export controls fall into four categories depending on the country, its security standards, and the types of chips being exported.

Orders for chips with collective computation power of up to roughly 1,700 advanced GPUs “do not require a license and do not count against national chip caps.”
Entities headquartered in close allies can obtain “Universal Verified End User” (UVEU) status by meeting high security and trust standards. With this status, these entities “can then place up to 7% of their global AI computational capacity in countries around the world — likely amounting to hundreds of thousands of chips.”
Entities not headquartered in a country of concern can obtain “National Verified End User” status by meeting the same high security and trust standards, “enabling them to purchase computational power equivalent to up to 320,000 advanced GPUs over the next two years.”
Entities not headquartered in a close ally and without VEU status “can still purchase large amounts of computational power, up to the equivalent of 50,000 advanced GPUs per country.”

The rule also includes specific export restrictions and licensing requirements for AI models.

Advanced Closed-Weight AI Models: A license is required to export any closed-weight AI model — “i.e., a model with weights that are not published” — “that has been trained on more than 10^26 computational operations.” Applications for these licenses will be reviewed under a presumption of denial policy “to ensure that the licensing process consistently accounts for the risks associated with the most advanced AI models.” (For a sense of the 10^26 threshold’s scale, see the sketch after this list.)
Open-Weight AI Models: The rule does “not [impose] controls on the model weights of open-weight models,” the most advanced of which “are currently less powerful than the most advanced closed-weight models.”
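
For a rough sense of what the 10^26-operation threshold means, a common heuristic estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies that approximation to two hypothetical model configurations; the model sizes, and the heuristic itself, are illustrative assumptions and not part of the rule.

```python
# Back-of-the-envelope check against the 10^26-operation licensing
# threshold, using the common approximation:
#   training FLOPs ~= 6 * parameters * training tokens
# The model configurations below are hypothetical.

THRESHOLD = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

for name, params, tokens in [
    ("70B params, 15T tokens", 70e9, 15e12),
    ("1.8T params, 15T tokens", 1.8e12, 15e12),
]:
    flops = training_flops(params, tokens)
    status = "above threshold (license required)" if flops > THRESHOLD else "below threshold"
    print(f"{name}: ~{flops:.1e} ops -> {status}")
```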

The new chip export controls build on previous export controls from 2022 and 2023, which we previously covered.
Executive Order on AI Infrastructure
On Tuesday, Biden signed an Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. The EO directs the Department of Defense and Department of Energy to lease federal sites to the private sector for the development of gigawatt-scale AI data centers that adhere to certain clean energy standards.
“These efforts also will help position America to lead the world in clean energy deployment… This renewed partnership between the government and industry will ensure that the United States will continue to lead the age of AI,” President Biden said in a statement.
The EO requires the Secretary of Defense and Secretary of Energy to identify three sites for AI data centers by February 28, 2025. Developers that build on these sites “will be required to bring online sufficient clean energy generation resources to match the full electricity needs of their data centers, consistent with applicable law.”
The EO also directs agencies “to expedite the processing of permits and approvals required for the construction and operation of AI infrastructure on Federal sites.” The Department of Energy will work to develop and upgrade transmission lines around the new sites and “facilitate [the] interconnection of AI infrastructure to the electric grid.”
Private developers of AI data centers on federal sites are also subject to numerous lease obligations, including paying for the full cost of building and maintaining AI infrastructure and data centers, adhering to lab security and labor standards, and procuring certain clean energy generation resources.
Executive Order on Cybersecurity
On Thursday, President Biden signed an Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity. The EO directs the federal government to strengthen the cybersecurity of its federal systems and adopt more rigorous security and transparency standards for software providers and other third-party contractors. It directs various agencies — with some deadlines as soon as 30 days from the EO’s issuance — to evaluate their cybersecurity systems, launch cybersecurity pilot programs, and implement strengthened cybersecurity practices, including for communication and identity management systems.
The EO also aims to integrate AI into government cybersecurity operations. The EO directs the Secretary of Energy to launch a pilot program “on the use of AI to enhance the cyber defense of critical infrastructure in the energy sector.” Within 150 days of the EO, various agencies shall also “prioritize funding for their respective programs that encourage the development of large-scale, labeled datasets needed to make progress on cyber defense research.” Also, within 150 days of the EO, various agencies shall pursue research on a number of AI topics, including “human-AI interaction methods to assist defensive cyber analysis” and “methods for designing secure AI systems.”
The Fate of President Biden’s AI Actions Under a Trump Administration?
It remains an open question whether Biden’s new AI infrastructure EO, cybersecurity EO, and chip export control rule will survive intact, be modified, or be eliminated under the Trump administration, which begins on Monday. What Trump decides to do with the new export control rule, in particular, may signal the direction of his administration’s approach to AI. Trump may keep the export controls due to his stated commitment to win the AI race against China, or he may get rid of them or tone them down out of concerns that they overly burden US AI innovation and business.

New Jersey Guidance on AI: Employers Must Comply With State Anti-Discrimination Standards

On January 9, 2025, New Jersey Attorney General Matthew J. Platkin and the Division on Civil Rights issued guidance stating that New Jersey’s anti-discrimination law applies to artificial intelligence. Specifically, the New Jersey Law Against Discrimination (“LAD”) applies to algorithmic discrimination – discrimination that results from the use of automated decision-making tools – the same way it has long applied to other forms of discriminatory conduct.
In a statement accompanying the guidance, the Attorney General explained that while “technological innovation . . . has the potential to revolutionize key industries . . . it is also critically important that the needs of our state’s diverse communities are considered as these new technologies are deployed.” This move is part of a growing trend among states to address and mitigate the risks of potential algorithmic discrimination resulting from employers’ use of AI systems.
LAD’s Prohibition of Algorithmic Discrimination
The guidance explains that the term “automated decision-making tool” refers to any technological tool, including, but not limited to, a software tool, system, or process, that is used to automate all or part of the human decision-making process. Automated decision-making tools can incorporate technologies such as generative AI, machine-learning models, traditional statistical tools, and decision trees.
The guidance makes clear that under the LAD, discrimination is prohibited regardless of whether it is caused by automated decision-making tools or human actions. The LAD’s broad purpose is to eliminate discrimination, and it doesn’t distinguish between the mechanisms used to discriminate. This means that employers will still be held accountable under the LAD for discriminatory practices, even if those practices rely on automated systems. An employer can violate the LAD even if it has no intent to discriminate, and even if a third party was responsible for developing the automated decision-making tool. Essentially, claims of algorithmic discrimination are assessed the same way as other discrimination claims under the LAD.
The LAD prohibits algorithmic discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. The LAD also prohibits algorithmic discrimination when it precludes or impedes the provision of reasonable accommodations, or of modifications to policies, procedures, or physical structures to ensure accessibility for people based on their disability, religion, pregnancy, or breastfeeding status.
New York City’s law restricts employers’ ability to use automated employment decision tools in hiring and promotion decisions within New York City and requires employers to perform a bias audit of such tools to assess potential disparate impact based on sex, race, and ethnicity; the LAD contains no such audit requirement. However, the Attorney General’s guidance does recognize that “algorithmic bias” can occur in the use of automated decision-making tools and recommends various steps employers can take to identify and eliminate such bias, such as:

implementing quality control measures for any data used in designing, training, and deploying the tool;
conducting impact assessments;
having pre- and post-deployment bias audits performed by independent parties (a simple example of such an audit screen is sketched after this list);
providing notice of their use of an automated decision-making tool;
involving people impacted by their use of a tool in the development of the tool; and
purposely attacking the tools to search for flaws.
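
As one concrete example of what a pre-deployment bias audit might screen for, the sketch below applies the widely used four-fifths (80%) rule for disparate impact to hypothetical selection data. The data, group labels, and threshold are illustrative assumptions, not requirements drawn from the guidance.

```python
# Minimal sketch of the "four-fifths rule" disparate-impact screen that a
# bias audit of an automated hiring tool might apply. Data is hypothetical.

selections = {  # group -> (applicants, selected)
    "group_a": (200, 90),
    "group_b": (180, 54),
}

rates = {group: sel / total for group, (total, sel) in selections.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    status = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")
```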

This new guidance highlights the need for employers to exercise caution when using artificial intelligence and to thoroughly assess any automated decision-making tools they intend to implement. 
Tamy Dawli is a law clerk and contributed to this article.

Colorado Attorney General Announces Adoption of Amendments to Colorado Privacy Act Rules + Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement

Colorado Adopts Amendments to CPA Rules
The Colorado Attorney General announced the adoption of amendments to the Colorado Privacy Act (“CPA”) rules, which become effective on January 30, 2025. The rules provide enhanced protections for the processing of biometric data, as well as for the processing of the online activities of minors. Specifically, companies must develop and implement a written biometric data policy, implement appropriate security measures for biometric data, provide notice of the collection and processing of biometric data, obtain employee consent for the processing of biometric data, and provide a right of access to such data. With respect to minors, the amendments require entities to obtain consent before using any system design feature designed to significantly increase a known minor’s use of an online service, and to update their Data Protection Assessments to address processing that presents heightened risks to minors. Entities already subject to the CPA should carefully review whether they have heightened obligations for the processing of employee biometric data, a category of data previously exempt from the scope of the CPA.
Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement
A proposed settlement in the Clearview AI Illinois Biometric Information Privacy Act (“BIPA”) litigation is facing opposition from 22 states and the District of Columbia. The states’ Attorneys General argue that the settlement, which received preliminary approval in June 2024, lacks meaningful injunctive relief and offers plaintiffs an unusual financial stake in Clearview AI. The settlement would grant the class of consumers a 23 percent stake in Clearview AI, potentially worth $52 million based on a September 2023 valuation. Alternatively, the class could opt for 17 percent of the company’s revenue through September 2027. The AGs contend that the settlement doesn’t adequately address consumer privacy concerns and that the proposed 39 percent attorney fee award is excessive. Clearview AI has filed a motion to dismiss the states’ opposition, arguing it was submitted after the deadline for objections. A judge will consider granting final approval of the settlement at a hearing scheduled for January 30, 2025.

The BR International Trade Report: January 2025

Recent Developments
President Biden blocks Nippon Steel’s acquisition of US Steel. On January 3, President Biden announced that he would block the $15 billion sale of U.S. Steel to Japan’s Nippon Steel, citing national security concerns. President Biden’s decision came after the Committee on Foreign Investment in the United States (“CFIUS”) reportedly deadlocked in its review of the transaction and referred the matter to the President. U.S. Steel and Nippon Steel condemned the President’s action in a joint statement, arguing it marked “a clear violation of due process and the law governing CFIUS,” and on January 6 filed suit challenging the measure. 
Canadian Prime Minister Justin Trudeau announces his resignation as party leader and prime minister. On January 6, Prime Minister Trudeau, who has served as the Liberal Party leader since 2013 and prime minister since 2015, declared his intention to “resign as party leader, as prime minister, after the party selects its next leader through a robust, nationwide, competitive process.” Governor General Mary Simon suspended, or prorogued, the Canadian Parliament until March 24 to allow the Liberal Party time to select its new leader—who will replace Trudeau as prime minister leading up to the general elections, which must be held by October 20. Separately, details have begun to leak of the potential Canadian retaliation against President-elect Trump’s threatened tariffs on Canadian goods. This retaliation could include tariffs on certain steel, ceramics, plastics, and orange juice. 
U.S. Department of Commerce announces new export controls for AI chips. On January 13, the U.S. Department of Commerce’s Bureau of Industry and Security (“BIS”) issued a new interim final rule in an effort to keep advanced artificial intelligence (“AI”) chips from foreign adversaries. The interim final rule seeks to implement a three-tiered system of export restrictions. Under the new rule, (i) certain allied countries would face no new restrictions, (ii) non-allied countries would face certain restrictions, and (iii) U.S. adversaries would face almost absolute restrictions. BIS followed up with another rule on January 15 imposing heightened export controls for foundries and packaging companies exporting advanced chips, with exceptions for exports to an approved list of chip designers and for chips packaged by certain approved outsourced semiconductor assembly and test services (“OSAT”) companies.
Biden Administration imposes sanctions against Russia’s energy sector in parting blow. On January 10, the U.S. Department of the Treasury (“Treasury”) issued determinations authorizing the imposition of sanctions against any person operating in Russia’s energy sector and prohibiting U.S. persons from supplying petroleum services to Russia, and designated two oil majors—Gazprom Neft and Surgutneftegas—among others.
BIS issues final ICTS rule on connected vehicle imports and begins review of drone supply chain. On January 14, BIS issued a final rule under the Information and Communications Technology and Services (“ICTS”) supply chain regulations prohibiting the import of certain connected vehicles and connected vehicle hardware, capping a rulemaking process that started in March 2024. The rules, which will have a significant impact on the auto industry supply chain, will apply in certain cases to model year 2027 and in certain other cases to model year 2029. (See our alert on BIS’s proposed rule from September 2024.) Meanwhile, BIS launched an ICTS review on January 2 into the potential risk associated with Chinese and Russian involvement in the supply chains of unmanned aircraft systems, issuing an Advance Notice of Proposed Rulemaking.
China implicated in cyberattack on the U.S. Treasury. In December, a Chinese state-sponsored Advanced Persistent Threat (“APT”) actor hacked Treasury using a stolen key. Reports suggest the attack targeted Treasury’s Office of Foreign Assets Control (“OFAC”), which administers U.S. sanctions programs, among other elements of Treasury. Initial reporting indicated that hackers accessed only unclassified documents, although the extent of the attack is still largely unknown. The Chinese government has denied involvement.
United Kingdom joins the Comprehensive and Progressive Agreement for Trans-Pacific Partnership. On December 15, the United Kingdom officially joined the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (“CPTPP”)—a trade agreement between Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, and Vietnam—nearly four years after submitting its 2021 application. The United Kingdom is the first non-founding country to join the CPTPP.
Fallout of failed presidential martial law declaration continues in South Korea. South Korea continues to face unrest after last month’s short-lived declaration of martial law by President Yoon Suk Yeol, which led to his December 14 impeachment and January 15 arrest by anti-corruption investigators. On December 27, the National Assembly also impeached Prime Minister Han Duck-soo, who had been serving as acting president in the two weeks following Yoon’s impeachment. Finance Minister Choi Sang-mok now serves as acting president and faces calls from South Korean investigators to order the presidential security service to comply with a warrant for President Yoon’s arrest.
Office of the U.S. Trade Representative initiates investigation into legacy chips from China. In late December, U.S. Trade Representative (“USTR”) Katherine Tai announced a new Section 301 investigation “regarding China’s acts, policies, and practices related to the targeting of the semiconductor industry for dominance.” The USTR will focus its initial investigation on “legacy chips,” which are integral to the U.S. manufacturing economy. The USTR began accepting written comments and requests to appear at the hearing on January 6. The public hearing is scheduled for March 11-12. 
President-elect Donald Trump eyes the Panama Canal and Greenland. At the December 2024 annual conference for Turning Point USA, President-elect Donald Trump criticized Panama’s management of the Panama Canal, indicating that the United States should reclaim control due to “exorbitant prices” to American shipping and naval vessels and Chinese influence in the Canal Zone. Panamanian President José Raúl Mulino rejected Trump’s claims, stating “[t]he canal is Panamanian and belongs to Panamanians. There’s no possibility of opening any kind of conversation around this reality.” President-elect Trump also has sought to revive his 2019 proposal to purchase Greenland from Denmark, emphasizing its strategic position in the Arctic and untapped natural resources. In response, Greenland’s Prime Minister Múte Egede stated that Greenland is not for sale, but would “work with the U.S.—yesterday, today, and tomorrow.”
Nicolás Maduro sworn in for third presidential term, despite disputed election results. On January 10, Nicolás Maduro Moros was inaugurated for another six-year term as president of Venezuela, despite evidence he lost the election to opposition candidate Edmundo González Urrutia. González, recognized by the Biden Administration as the president-elect of Venezuela, met with President Biden in the White House on January 6. In response to Maduro’s inauguration, the United States announced new sanctions programs against Maduro associates and extended the 2023 designation of Venezuela for Temporary Protected Status by 18 months.
U.S. Department of Defense designates more entities on Chinese Military Companies list. In its annual update of the Chinese Military Companies list (“CMC list”), the Department of Defense (“DoD”) added dozens of Chinese companies to the list, including well-known technology, AI, and battery companies, bringing the total number of CMC List entities to 134. Beginning in June 2026, DoD is prohibited from dealing with the newly designated companies.
European Union and China consider summit to mend ties. On January 14, European Council President António Costa and Chinese President Xi Jinping spoke via phone call, reportedly agreeing to host a summit on May 6, 2025—the 50th anniversary of EU-China diplomatic relations. The conversation comes just days before the inauguration of President-elect Donald Trump, who has threatened additional tariffs on Chinese goods and pushed the European Union to further decouple from China. Despite Beijing’s and Brussels’s willingness to meet, China-EU trade tensions remain high, highlighted by the European Commission’s October decision to impose duties of up to 35% on Chinese-made electric vehicles.

President Biden Issues Second Cybersecurity Executive Order

In light of recent cyberattacks targeting the federal government and United States supply chains, President Biden’s administration has released an Executive Order (the “Order”) in an attempt to modernize and enhance the federal government’s cybersecurity posture, as well as introduce and expand upon new or existing requirements imposed on third-party suppliers to federal agencies.
To the extent that the mandates set forth in this Order remain in place after President-elect Donald Trump takes office, third-party vendors and suppliers that contract with the federal government will need to ensure compliance with new or updated cybersecurity standards in order to remain eligible to contract with federal agencies. Even if the Order does not survive into the next administration, it still provides general guidance on best practices for cybersecurity. While some of these practices may not be novel to the cybersecurity industry, the Order serves as yet another guidance document for companies on what constitutes “reasonable security.”
Below is a high-level, non-exhaustive summary of some of the key highlights in the Executive Order. Please note that the mandates would take effect on different dates in accordance with the time frames discussed in the Order.
Federal Government’s Latest Attempt to Modernize its Cybersecurity Posture
The Executive Order underscores the importance of modernizing the federal government’s cybersecurity infrastructure to defend against cyber campaigns by foreign adversaries targeting the government.
One of the ways in which the new Order attempts to do this is by directing federal agencies to implement “strong identity authentication and encryption” across communications transmitted via the internet, including email, voice and video conferencing, and instant messaging.
In addition, as federal agencies have improved their cyber defenses, adversaries have targeted the weak links in agency supply chains and the products and services upon which the government relies. In light of this pervasive threat, the Executive Order places a strong emphasis on the need for federal agencies to integrate cybersecurity supply chain risk management programs into enterprise-wide risk management by requiring those agencies, via the Office of Management and Budget (OMB), to (i) comply with the guidance in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-161 (Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations), and (ii) provide annual updates to OMB on their compliance efforts with respect to the same. The OMB’s requirements will address the integration of cybersecurity into the acquisition lifecycle through acquisition planning, source selection, responsibility determination, security compliance evaluation, contract administration, and performance evaluation.
The Executive Order also addresses the potential to use artificial intelligence (AI) to defend against cyberattacks by increasing the government’s ability to quickly identify new vulnerabilities and automate cyber defenses. Specifically, the Order directs certain agencies to prioritize research on topics related to AI and cyber defense, which include: (i) human-AI interaction methods to assist with defensive cyber analysis; (ii) security of AI coding assistance and the security of AI-generated code; (iii) methods for designing secure AI systems; and (iv) methods for prevention, response, remediation, and recovery from cyber incidents involving AI systems.
Beyond using modern technology to defend against increasing cyber threats, the Executive Order aims to centralize the government’s cybersecurity governance by expanding the Cybersecurity and Infrastructure Security Agency’s (CISA) role as the lead agency overseeing federal civilian agencies’ cybersecurity programs.
Enhancing and Expanding Upon Requirements Imposed on Third-Party Vendors of Federal Agencies
In addition to requiring federal agencies to adjust their cybersecurity posture, the Executive Order also aims to ensure that third-party vendors of federal agencies undertake various measures intended to protect the federal government and critical infrastructure systems from malicious cyberattacks and to strengthen United States supply chains.
Third-Party Software Providers and Secure Software Development Practices
Part of the latest Executive Order focuses on transparency and deployment of secure software that meets standards set forth in the Biden administration’s first cybersecurity Executive Order 14028, issued in May 2021. Under that Order, suppliers are required to attest that they adhere to secure software development practices, in language spurred by Russian hackers who infected an update of the widely used SolarWinds Orion software to penetrate the networks of federal agencies. Because insecure software remains a challenge for both providers and users, the federal government and critical infrastructure systems remain vulnerable to malicious cyber incidents. This was recently illustrated by several attacks, including the 2024 exploitation of a vulnerability in a popular file transfer application used by multiple federal agencies.
Against this backdrop, the newly released Executive Order sets forth more robust attestation requirements for software providers that support critical government services and pushes for enhanced transparency by publicizing when these providers have submitted their attestations so that others can know what software meets the secure standards. In a similar vein, the new Order also aims to provide federal agencies with a coordinated set of practical and effective security practices to require when they procure software by calling for (i) updates to certain frameworks established by NIST that are adhered to by federal agencies – such as NIST SP 800-218 (Secure Software Development Framework) (SSDF) – for the secure development and delivery of software, (ii) the issuance of new requirements by OMB that derive from NIST’s updated SSDF to apply to federal agencies’ use of third-party software, and (iii) potential revisions to CISA’s Secure Software Development Attestation to conform to OMB’s requirements.
Vendors of Consumer Internet-of-Things (IoT) Products and U.S. Cyber Trust Mark Label
To further protect the supply chain, the Executive Order recognizes the risks federal agencies face when purchasing IoT products. To address these risks, the Order requires the development of additional requirements for contracts with consumer IoT providers. Consumer IoT providers contracting with federal agencies will have to (i) comply with the minimum cybersecurity practices outlined by NIST, and (ii) carry United States Cyber Trust Mark labeling on their products. The initiative related to Cyber Trust Mark labeling was announced by the White House on January 7, 2025, and will require consumer IoT products to pass a U.S. cybersecurity audit and legally display the mark on advertising and packaging.
Cloud Service Providers
The Executive Order also requires the development of new guidelines for cloud service providers, which is unsurprising in light of the recent cyber attack on the U.S. Treasury Department where a sophisticated Chinese hacking group known as Silk Typhoon stole a digital key from BeyondTrust Inc.—a third-party service provider for the Treasury Department—and used it to access unclassified information maintained on Treasury Department user workstations. The breach utilized a technique known as token theft. Authentication tokens are designed to enhance security by allowing users to stay logged in without repeated password entry. However, if compromised, these tokens enable attackers to impersonate legitimate users, granting unauthorized access to sensitive systems. 
While this incident is likely not the impetus behind the updated guidelines for cloud service providers, it underscores the importance of auditing third-party vendor security practices and taking measures to reduce the lifespan of tokens so as to limit their usefulness if stolen. These new guidelines under the Executive Order would mandate multifactor authentication, complex passwords, and storing cryptographic keys using hardware security keys for cloud service providers of federal agencies.
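
As a minimal illustration of the token-lifespan point above, the sketch below uses the PyJWT library to issue signed tokens that expire after a few minutes, so that a stolen token is only useful for a short window. The secret, lifetime, and claims are illustrative assumptions, not requirements drawn from the Executive Order.

```python
# Minimal sketch: short-lived signed tokens limit how long a stolen token
# can be replayed. The secret, lifetime, and claims are examples only.
import datetime

import jwt  # PyJWT

SECRET = "replace-with-a-managed-key"  # ideally kept in an HSM or hardware key
LIFETIME = datetime.timedelta(minutes=5)

def issue_token(user: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    return jwt.encode({"sub": user, "iat": now, "exp": now + LIFETIME},
                      SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # jwt.decode validates the signature and automatically rejects expired
    # tokens (raising jwt.ExpiredSignatureError).
    return jwt.decode(token, SECRET, algorithms=["HS256"])

print(verify_token(issue_token("alice"))["sub"])  # "alice" while unexpired
```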
Key Takeaways
Although the fate of the Executive Order is uncertain with an incoming administration, organizations that contract with the federal government should closely monitor any developments as they will have to adhere to the new or enhanced cybersecurity requirements set out in the Order.
In addition, even if this Executive Order gets revoked by the incoming administration, organizations should not miss the opportunity to evaluate whether their cybersecurity programs comply with industry standard guidelines, such as NIST, as well as general best practices.

California Attorney General Issues Two Advisories Summarizing Law Applicable to AI

If you are looking for a high-level summary of California laws regulating artificial intelligence (AI), check out the two legal advisories issued by California Attorney General Rob Bonta. The first advisory is directed at consumers and entities about their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws. The second advisory focuses on healthcare entities.
“AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI,” said Attorney General Rob Bonta.
The advisories summarize existing California laws that may apply to entities who develop, sell, or use AI. They also address several new California AI laws that went into effect on January 1, 2025.
The first advisory points to several existing laws, such as California’s Unfair Competition Law and Civil Rights Laws, designed to protect consumers from unfair and fraudulent business practices, anticompetitive harm, discrimination and bias, and abuse of their data.
California’s Unfair Competition Law, for example, protects the state’s residents against unlawful, unfair, or fraudulent business acts or practices. The advisory notes that “AI provides new tools for businesses and consumers alike, and also creates new opportunity to deceive Californians.” Under a similar federal law, the Federal Trade Commission (FTC) recently ordered an online marketer to pay $1 million resulting from allegations concerning deceptive claims that the company’s AI product could make websites compliant with accessibility guidelines. Considering the explosive growth of AI products and services, organizations should be revisiting their procurement and vendor assessment practices to be sure they are appropriately vetting vendors of AI systems.
Additionally, the California Fair Employment and Housing Act (FEHA) protects Californians from harassment or discrimination in employment or housing based on a number of protected characteristics, including sex, race, disability, age, criminal history, and veteran or military status. These FEHA protections extend to uses of AI systems when developed for and used in the workplace. Expect new regulations soon as the California Civil Rights Council continues to mull proposed AI regulations under the FEHA.
Recognizing that “data is the bedrock underlying the massive growth in AI,” the advisory points to the state’s constitutional right to privacy, applicable to both government and private entities, as well as to the California Consumer Privacy Act (CCPA). Of course, California has several other privacy laws that may need to be considered when developing and deploying AI systems – the California Invasion of Privacy Act (CIPA), the Student Online Personal Information Protection Act (SOPIPA), and the Confidentiality of Medical Information Act (CMIA).
Beyond these existing laws, the advisory also summarizes new laws in California directed at AI, including:

Disclosure Requirements for Businesses
Unauthorized Use of Likeness
Use of AI in Election and Campaign Materials
Prohibition and Reporting of Exploitative Uses of AI

The second advisory recounts many of the same risks and concerns about AI as relevant to the healthcare sector. Consumer protection, anti-discrimination, patient privacy, and other concerns are all challenges entities in the healthcare sector face when developing or deploying AI. The advisory provides examples of AI applications in healthcare that may be unlawful; here are two:

Denying health insurance claims using AI or other automated decisionmaking systems in a manner that overrides doctors’ views about necessary treatment.
Using generative AI or other automated decisionmaking tools to draft patient notes, communications, or medical orders that include erroneous or misleading information, including information based on stereotypes relating to race or other protected classifications.

The advisory also addresses data privacy, reminding readers that the state’s CMIA may be more protective in some respects than the popular federal healthcare privacy law, HIPAA. It also discusses recent changes to the CMIA that require providers, electronic health records (EHR) companies, and digital health companies to enable patients to keep their reproductive and sexual health information confidential and separate from the rest of their medical records. These and other requirements need to be taken into account when incorporating AI into EHRs and related applications.
In both advisories, the Attorney General makes clear that in addition to the laws referenced above, other California laws—including tort, public nuisance, environmental and business regulation, and criminal law—apply to AI. In short:
Conduct that is illegal if engaged in without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.
Both advisories provide a helpful summary of laws potentially applicable to AI systems, and can be useful resources when building policies and procedures around the development and/or deployment of AI systems.

New Jersey AG Says Anti-Discrimination Law Covers Algorithmic Discrimination

Last week, New Jersey Attorney General Matthew Platkin announced new guidance that the New Jersey Law Against Discrimination (LAD) applies to algorithmic discrimination, i.e., when automated systems treat people differently or negatively based on protected characteristics. This can happen with algorithms trained on biased data or with systems designed with biases in mind. LAD prohibits discrimination based on a protected characteristic like race, religion, national origin, sex, pregnancy, and gender identity, among other things. According to the guidance, employers, housing providers, and places of public accommodation who make discriminatory decisions using automated decision-making tools, like artificial intelligence (AI), would violate LAD. LAD is not an intent-based statute. Therefore, a party can violate LAD even if it uses an automated decision-maker with no intent to discriminate or uses a discriminatory algorithm developed by a third party. The guidance does not create any new rights or obligations. However, in noting that the law covers automated decision-making, the guidance encourages companies to carefully design, test, and evaluate any AI system they seek to employ to help avoid producing discriminatory impacts.

The CIO-CMO Collaboration: Powering Ethical AI and Customer Engagement

The rapid advancement of artificial intelligence (AI) technologies is reshaping the corporate landscape, offering unparalleled opportunities to enhance customer experiences and streamline operations. At the intersection of this digital transformation lie two key executives—the Chief Information Officer (CIO) and the Chief Marketing Officer (CMO). This dynamic duo, when aligned, can drive ethical AI adoption, ensure compliance, and foster personalized customer engagement powered by innovation and responsibility.
This blog explores how the collaboration between CIOs and CMOs is essential in balancing ethical AI implementations with compelling customer experiences. From data governance to technology infrastructure and cybersecurity, below is a breakdown of the critical aspects of this partnership and why organizations must align these roles to remain competitive in the AI-driven world.
Understanding Ethical AI: Balancing Innovation with Responsibility
Ethical AI isn’t just a buzzword; it’s a guiding principle that ensures AI solutions respect user privacy, avoid bias, and operate transparently. To create meaningful customer experiences while addressing the societal concerns surrounding AI, CIOs and CMOs must collaborate to design AI applications that are both innovative and responsible.
CMOs focus on delivering dynamic, real-time, and personalized interactions to meet rising customer expectations. However, achieving this requires vast amounts of personal data, potentially risking violations of privacy regulations like the General Data Protection Regulation and the California Consumer Privacy Act. Enter the CIO, who ensures the technical infrastructure adheres to these laws while safeguarding the organization’s reputation. Together, the CIO and CMO can strike a delicate balance between leveraging AI for customer engagement and adhering to responsible AI practices.
The Role of Data Governance in AI-Driven Strategies
Data governance is the backbone of ethical AI and compelling customer engagement. CMOs rely on customer data to craft hyper-personalized campaigns, while CIOs are charged with maintaining that data’s security, accuracy, and ethical usage. Without proper governance, organizations risk breaches, regulatory fines, and, perhaps most damagingly, a loss of trust among consumers.
Collaboration between CIOs and CMOs is necessary to establish clear data management protocols; this includes ensuring that all collected data is anonymized as needed, securely stored, and utilized in compliance with emerging AI content labeling regulations. The result is a transparent system that reassures customers and consistently delivers high-quality experiences.
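
As a small illustration of one such protocol, the sketch below pseudonymizes customer identifiers with a keyed HMAC before records enter an analytics pipeline, so campaign data can be joined consistently without exposing raw identities. The key, field names, and record shape are illustrative assumptions.

```python
# Minimal sketch of keyed pseudonymization for marketing analytics data:
# raw customer IDs are replaced with stable HMAC digests so records can be
# joined without exposing the underlying identity. Key and fields are examples.
import hashlib
import hmac

PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(customer_id: str) -> str:
    return hmac.new(PSEUDONYM_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "c-10293", "campaign": "spring-launch", "clicked": True}
safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}
print(safe_record)
```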
Robust Technology Infrastructure for AI-Powered Customer Engagement
For AI to deliver on its promise of customer engagement, organizations require scalable, secure, and agile technology infrastructure. A close alignment between CIOs and CMOs ensures that marketing campaigns are supported by IT systems capable of handling diverse AI workloads.
Platforms driven by machine learning and big data analytics allow marketing teams to create real-time, omnichannel campaigns. Meanwhile, CIOs ensure these platforms integrate seamlessly into the organization’s technology stack without sacrificing security or performance. This partnership allows marketers to focus on innovative strategies while IT supports them with reliable and forward-thinking infrastructure.
Cybersecurity Challenges and the Integrated Approach of CIOs and CMOs
Customer engagement strategies powered by AI rely heavily on consumer trust, but cybersecurity threats lurk around every corner. According to Palo Alto Networks’ predictions, customer data is central to modern marketing initiatives. However, without an early alignment between CIOs and CMOs, the organization is exposed to risks like data breaches, compliance violations, and AI-related controversies.
A proactive collaboration between CIOs and CMOs ensures that potential vulnerabilities are identified and mitigated before they evolve into full-blown crises. Measures such as end-to-end data encryption, regular cybersecurity audits, and robust AI content labeling policies can protect the organization’s digital assets and reputation. This integrated approach enables businesses to foster lasting customer trust in a world of increasingly sophisticated cyber threats.
Case Studies: Successful CIO-CMO Collaborations

Case Study 1: A Retail Giant’s Transformation. One of the world’s largest retail chains successfully transformed its customer experience through CIO-CMO collaboration. The CIO rolled out a scalable AI-driven recommendation engine, while the CMO used this tool to craft personalized shopping experiences. The result? A 35% increase in customer retention within a year and significant growth in lifetime customer value.
Case Study 2: Financial Services Leader. A financial services firm adopted an AI-powered chatbot to enhance its customer service. The CIO ensured compliance with strict financial regulations, while the CMO leveraged customer insights to refine the chatbot’s conversational design. Together, they created a seamless, trustworthy digital service channel that improved customer satisfaction scores by 28%.
These examples reinforce the advantages of partnership. By uniting their expertise, CIOs and CMOs deliver next-generation strategies that drive measurable business outcomes.

Future Trends in AI, Compliance, and Executive Collaboration
The evolving landscape of AI, compliance, and customer engagement is reshaping the roles of CIOs and CMOs. Here are a few trends to watch for in the coming years:

AI Transparency: Regulations will increasingly require companies to disclose how AI models were trained and how customer data is used. Alignment between CIOs and CMOs will be vital in meeting these demands without derailing marketing campaigns.
Hyper-Personalization: Advances in machine learning will allow marketers to offer even more granular personalization, but this will require sophisticated data-centric systems designed by CIOs.
AI Content Labeling: From machine-generated text to synthetic media, organizations must adopt clear labeling practices to distinguish between AI-driven and human-generated content.

By staying ahead of these trends, organizations can cement themselves as leaders in ethical AI and customer engagement.
Forging a Path to Sustainable AI Innovation
The digital transformation of business will continue to deepen the interconnected roles of the CIO and CMO. These two leaders occupy the dual pillars required for success in the AI era—technology prowess and customer-centric creativity. By aligning their goals and strategies early on, they can power ethical AI innovation, ensure compliance, and elevate customer experiences to new heights.

California AG Issues AI-Related Legal Guidelines for Developers and Healthcare Entities

The California Attorney General published two legal advisories this week:

Legal Advisory on the Application of Existing California Laws to Artificial Intelligence
Legal Advisory on the Application of Existing California Law to Artificial Intelligence in Healthcare 

These advisories seek to remind businesses of consumer rights under the California Consumer Privacy Act, as amended by the California Privacy Rights Act (collectively, CCPA), and to advise developers who create, sell, or use artificial intelligence (AI) about their obligations under the CCPA.
Attorney General Rob Bonta said, “California is an economic powerhouse built in large part on technological innovation. And right alongside that economic might is a strong commitment to economic justice, workers’ rights, and competitive markets. We’re not successful in spite of that commitment — we’re successful because of it [. . .] AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI. Companies, including healthcare entities, are responsible for complying with new and existing California laws and must take full accountability for their actions, decisions, and products.” 
Advisory No. 1: Application of Existing California Laws to Artificial Intelligence
This advisory:

Provides an overview of existing California laws (i.e., consumer protection, civil rights, competition, data protection laws, and election misinformation laws) that may apply to companies that develop, sell, or use AI;
Summarizes new California AI laws that went into effect on January 1, 2025, such as:
Disclosure Requirements for Businesses
Unauthorized Use of Likeness
Use of AI in Election and Campaign Materials
Prohibition and Reporting of Exploitative Uses of AI

Advisory No. 2: Application of Existing California Law to Artificial Intelligence in Healthcare 
AI tools are used for tasks such as appointment scheduling, medical risk assessment, and medical diagnosis and treatment decisions. This advisory:

Provides guidance under California law (i.e., consumer protection, civil rights, data privacy, and professional licensing laws) for healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, and use AI and other automated decision systems;
Reminds such entities that AI carries harmful risks and that all AI systems must be tested, validated, and audited for safe, ethical, and lawful use;
Informs such entities that they must be transparent about using patient data to train AI systems and alert patients to how they are using AI to make decisions affecting their health and/or care.

This is yet another example of how issues related to the safe and ethical use of AI will likely be at the forefront for many regulators across many industries.

Biden Administration Releases Executive Order Advancing Artificial Intelligence

Highlights
The Biden administration’s latest executive order represents a transformative step in the U.S. approach to AI, integrating innovation with sustainability and security
Businesses will have an opportunity to align with this strategic vision, contribute to an ecosystem that will sustain U.S. leadership, and encourage economic competitiveness
The principles outlined in the executive order will guide federal agencies to ensure AI infrastructure supports national priorities while fostering innovation, sustainability, and inclusivity

On Jan. 14, 2025, President Biden issued an executive order on advancing the United States’ position as a leader in the creation of artificial intelligence (AI) infrastructure.
AI is a transformative technology with critical implications for national security and economic competitiveness. Recent advancements highlight AI’s growing role in industries and areas including logistics, military capabilities, intelligence analysis, and cybersecurity. Developing AI domestically could be essential in preventing adversaries from exploiting powerful systems, maintaining national security, and avoiding reliance on foreign infrastructure.
The executive order posits that to secure U.S. leadership in AI development, significant private sector investments are needed to build advanced computing clusters, expand energy infrastructure, and establish secure supply chains for critical components. AI’s increasing computational and energy demands necessitate innovative solutions, including advancements in clean energy technologies such as geothermal, solar, wind, and nuclear power.
The executive order notes:
National Security and Leadership

AI infrastructure development should enhance U.S. national security and leadership in AI, including collaboration between the federal government and the private sector; ensuring safeguards for cybersecurity, supply chains, and physical security; and managing risks from future frontier AI capabilities.
The Secretary of State, in coordination with key federal officials and agencies, will create a plan to engage allies and partners in accelerating the global development of trusted AI infrastructure. The plan will focus on advancing collaboration on building trusted AI infrastructure worldwide.

Economic Competitiveness

AI infrastructure should also strengthen U.S. economic competitiveness by fostering a fair, open, and innovative technology ecosystem by supporting small developers, securing reliable supply chains, and ensuring that AI benefits all Americans.

Clean Energy Leadership

The U.S. aims to lead in operating AI data centers powered by clean energy to help ensure that new data center electricity demands do not take clean power away from other end users or increase grid emissions. This involves modernizing energy infrastructure, streamlining permitting processes, and advancing clean energy technologies, ensuring AI infrastructure development aligns with new clean electricity generation.
The Department of Energy, in coordination with other agencies, will expand research and development efforts to improve AI data center efficiency, focusing on building systems, energy use, cooling infrastructure, software, and wastewater heat reuse. A report will be submitted to the president with recommendations for advancing industry-wide efficiency, including innovations like server consolidation, hardware optimization, and power management.
The Secretary of Energy will provide technical assistance to state public utility commissions on rate structures, such as clean transition tariffs, to enable AI infrastructure to use clean energy without raising electricity or water costs unnecessarily.

Cost and Community Considerations

Because building AI in the U.S. requires enormous private-sector investments, the AI infrastructure must be developed without increasing energy costs for consumers and businesses. Companies participating in AI development, clean energy technology, and grid and semiconductor development can work with federal agencies to strategically further these initiatives that align with broader ethical and operational standards.
The Secretaries of Defense and Energy will each identify at least three federally managed sites suitable for leasing to non-federal entities for the construction and operation of frontier AI data centers and clean energy facilities. These sites should aim to be fully permitted for construction by the end of 2025 and operational by the end of 2027.
Priority will be given to locations that 1) have appropriate terrain, land gradients, and soil conditions for AI data centers; 2) minimize adverse impacts on local communities, natural or cultural resources, and protected species; and 3) are near communities seeking to host AI infrastructure, supporting local employment opportunities in design, construction, and operations.

Worker and Community Benefits

AI infrastructure projects should uphold high labor standards, involve close collaboration with affected communities, and prioritize safety and equity, ensuring the broader population benefits from technological innovation.
The Director of the Office of Management and Budget, in consultation with the Council on Environmental Quality, will evaluate best practices for public participation in siting and energy-related infrastructure decisions for AI data centers. Recommendations will be made to the Secretaries of Defense and Energy, who will incorporate these into their decision-making processes to ensure effective governmental engagement and meaningful community input on health, safety, and environmental impacts.
Relevant agencies will prioritize measures to keep electricity costs low for households, consumers, and businesses when implementing AI infrastructure on Federal sites.

Takeaways
The U.S. is committed to enabling the development and operation of AI infrastructure, including data centers, guided by five key principles: 1) national security and leadership; 2) economic competitiveness; 3) clean energy leadership; 4) cost and community considerations; and 5) worker and community benefits.
The Biden administration’s latest initiative aims to foster a competitive technology ecosystem, enable small and large companies to thrive, keep electricity costs low for consumers, and ensure that AI infrastructure development benefits workers and their local communities.

FTC to Hold Hearing on Impersonation Rule Amendment

The Federal Trade Commission (FTC) will hold an informal hearing at 1:00 p.m. EST on January 17 regarding the proposed amendment to its existing impersonation rule.
We first wrote about the proposed changes to the FTC rule in an article in February 2024. The current impersonation rule, which governs only government and business impersonation, first went into effect in April 2024 and is aimed at combating impersonation fraud resulting in part from artificial intelligence (AI)-generated deepfakes. When announcing the rule, the FTC also stated that it was accepting public comments for a supplemental notice of proposed rulemaking aimed at prohibiting impersonation of individuals. In essence, the rule makes it an unfair or deceptive practice to impersonate a government entity, official, or business.
The FTC announced the January hearing date in December 2024. The purpose of the hearing is to address amending the existing rule to include an individual impersonation ban and to allow interested parties an opportunity to provide oral statements. Nine parties are participating in the hearing: the Abundance Institute, Andreessen Horowitz, the Consumer Technology Association, the Software & Information Industry Association, TechFreedom, TechNet, the Electronic Privacy Information Center, the Internet & Television Association, and Truth in Advertising.
While the original announcement of the proposed amendment indicated that the FTC would accept public comments on both a prohibition on individual impersonation and a prohibition on providing scammers with the means and instrumentalities to execute these types of scams, the FTC has decided not to proceed with the proposed means-and-instrumentalities provision at this time. The sole purpose of the January 17 hearing is to “address issues relating to the proposed prohibition on impersonating individuals.” The public is invited to join the hearing live via webcast.