DORA Becomes Applicable in the EU
On January 17, 2025, Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector (“DORA”) becomes applicable in the EU.
DORA intends to strengthen the IT security and operational resiliency of financial entities and to ensure that the financial sector in the EU is able to stay resilient in the event of severe operational disruption. DORA applies to financial entities engaging in activities in the EU. Traditional financial entities, such as banks, investment firms, insurers, and credit institutions, and non-traditional entities, like crypto-asset service providers and crowdfunding platforms, are all within scope.
Financial entities under DORA will be required to comply with new requirements in the areas of (1) risk management, (2) third-party risk management, (3) incident management and reporting, and (4) resilience testing. Key obligations include:
Create and maintain a register of ICT service providers and, on an annual basis, report relevant information from the register to financial authorities.
Comprehensive incident reporting obligations requiring an initial notification within 4 hours of classifying an incident as major, and no later than 24 hours after becoming aware of it. Follow-up notifications will also be required: an intermediate report within 72 hours and a final report within one month. In-scope entities must, without undue delay, notify their clients where a major incident occurs that has a financial impact on their interests. For significant cyber threats, in-scope entities should, where applicable, inform potentially affected clients of any appropriate protection measures the latter may consider taking.
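As a rough illustration of how the notification windows described above interact, the sketch below computes the reporting deadlines for a hypothetical major incident. The function, its field names, and the 30-day approximation of "one month" are our own assumptions for illustration, not anything prescribed by DORA or its technical standards.

```python
from datetime import datetime, timedelta

def reporting_deadlines(aware_at: datetime, classified_at: datetime) -> dict:
    """Illustrative only: deadlines per the windows summarized above.

    The initial notification is due 4 hours after classification as major,
    and in any event no later than 24 hours after becoming aware.
    """
    initial = min(classified_at + timedelta(hours=4),
                  aware_at + timedelta(hours=24))
    return {
        "initial_notification": initial,
        "intermediate_report": classified_at + timedelta(hours=72),
        # "one month" approximated as 30 days for this sketch
        "final_report": classified_at + timedelta(days=30),
    }

# Hypothetical incident: detected at 09:00, classified as major at 15:00.
aware = datetime(2025, 3, 1, 9, 0)
classified = datetime(2025, 3, 1, 15, 0)
deadlines = reporting_deadlines(aware, classified)
```

In this scenario the 4-hour post-classification window (19:00) is earlier than the 24-hour post-awareness backstop, so it controls the initial notification.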
Maintain a sound, comprehensive and well-documented ICT risk management framework. The financial entities’ management bodies should define, approve, oversee and take responsibility for the implementation of the ICT risk management framework. In addition, appropriate audits must be conducted with respect to the ICT risk management framework.
Conduct post-incident reviews after a major ICT-related incident disrupts core activities.
Establish and maintain a sound and comprehensive digital operational resilience testing program.
Clearly allocate, in writing, the rights and obligations of the financial entity when engaging with ICT service providers, including mandatory DORA contractual provisions.
Adopt, and regularly review, a strategy on ICT third-party risk.
In addition to financial entities, ICT service providers providing services to financial entities will also have a level of exposure to DORA. This level of exposure will vary in accordance with how critical the ICT service provider is in the sector. All ICT service providers will be subject to indirect obligations resulting from the requirements that their customers (i.e., in-scope financial entities) will be subject to under DORA (e.g., mandatory contractual provisions). In addition, ICT service providers designated as “critical” will be subject to direct obligations and specific oversight mechanisms under DORA.
Read the full text of DORA.
Breaking News: U.S. Supreme Court Upholds TikTok Ban Law
On January 17, 2025, the Supreme Court of the United States (“SCOTUS”) unanimously upheld the Protecting Americans from Foreign Adversary Controlled Applications Act (the “Act”), which restricts companies from making foreign adversary controlled applications available (i.e., on an app store) and from providing hosting services with respect to such apps. The Act does not apply to covered applications for which a qualified divestiture is executed.
The result of this ruling is that TikTok, an app owned by Chinese company ByteDance that qualifies as a foreign adversary controlled application under the Act, will face a ban when the law enters into effect on January 19, 2025. For TikTok to continue operating in the United States in compliance with the Act, ByteDance must sell the U.S. arm of the company such that it is no longer controlled by a company in a foreign adversary country. In the absence of a divestiture, U.S. companies that make the app available or provide hosting services for the app will face enforcement under the Act.
It remains to be seen how the Act will be enforced in light of the upcoming changes to the U.S. administration. TikTok has 170 million users in the United States.
Bondi and Bessent Affirm Support for Whistleblowers in Confirmation Hearings
During their Senate confirmation hearings, both Pam Bondi, nominee for Attorney General, and Scott Bessent, nominee for Treasury secretary, affirmed their support for whistleblowers in response to questions from Senator Chuck Grassley.
Bondi Promises to Defend Constitutionality of False Claims Act
During Bondi’s confirmation hearing on January 15, Senator Grassley asked if she believed that the False Claims Act is constitutional and if she would commit to continuing the Department of Justice’s defense of its constitutionality. Grassley spoke about how, thanks in large part to “patriotic whistleblowers,” the False Claims Act has resulted in over $78 billion in collections for the government since 1986.
“I would defend the constitutionality of course of the False Claims Act,” Bondi stated. “The False Claims Act is so important, especially by what you said with whistleblowers.”
The constitutionality of the False Claims Act’s qui tam provisions has faced challenges recently. In September, the U.S. District Court for the Middle District of Florida ruled that the qui tam provisions are unconstitutional because they violate the Appointments Clause of Article II. The court ruled that by filing a qui tam lawsuit alleging Medicare fraud, whistleblower Clarissa Zafirov was granted “core executive power” without any “proper appointment under the Constitution.”
The U.S. government is urging the Eleventh Circuit to reverse the district judge’s “outlier ruling,” noting in a brief that “other than the district court here, every court to have addressed the constitutionality of the False Claims Act’s qui tam provisions has upheld them.”
Under the False Claims Act’s qui tam provisions, individuals may file lawsuits alleging government contracting fraud on behalf of the United States. The government then has the ability to intervene and take over the case, intervene and dismiss the case, or not intervene and let the whistleblower proceed with the suit. In successful qui tam cases, regardless of whether the government intervenes, whistleblowers are eligible to receive between 15 and 30% of the settlement or judgment.
Since the False Claims Act’s qui tam provisions were amended in 1986, the government has recovered over $78 billion, with more than $55 billion stemming from qui tam whistleblower suits. Striking down the constitutionality of qui tam would thus cripple the most important law protecting taxpayer funds from fraud.
Bessent States He Will Support IRS Whistleblower Program
During Bessent’s confirmation hearing on January 16, Senator Grassley brought up the importance of the Internal Revenue Service (IRS) Whistleblower Program, noting that since it was established in 2006, “it’s brought $6 billion back into the federal treasury.”
“This program could raise billions more if the IRS would use it to its full potential,” Grassley stated. “So I hope I can count on you, if you’re confirmed, to be supportive of this whistleblower program and work to ensure its full use to its full potential.”
“Senator Grassley, we are in complete alignment on this program,” Bessent said in response.
Through the IRS Whistleblower Program, qualified whistleblowers, individuals who voluntarily provide original information that leads to a successful IRS action, are eligible to receive monetary awards of 15-30% of the money collected thanks to their disclosure.
The program, which revolutionized tax enforcement by incentivizing insiders to come forward and disclose hard-to-detect misconduct, has struggled in recent years as delays have grown and payouts to whistleblowers have dropped. While recent administrative reforms have strengthened the program, advocates believe that it has even more potential.
Transferring U.S. Data Overseas? Consider Whether the DOJ’s Bulk Data Regulations or PADFA May Apply to Your Organization
Though attempts to pass comprehensive federal consumer privacy legislation again stalled in 2024, efforts targeted at addressing national security-related privacy concerns had more success. Along with the Protecting Americans from Foreign Adversary Controlled Applications Act, Congress passed the Protecting Americans’ Data from Foreign Adversaries Act (“PADFA”) as part of a sweeping foreign aid bill, which was subsequently signed into law by President Biden on April 23, 2024. PADFA, which went into effect on June 24, 2024, followed President Biden’s February 2024 Executive Order 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern” (“EO”), under which the Department of Justice was directed to establish and implement regulations (initially reported by SPB here). The DOJ’s rulemaking process, which began in the fall of 2024, culminated in the issuance of a final rule (“Bulk Data Regs”) on December 27, 2024, and publication of the same in the Federal Register on January 4, 2025. The Bulk Data Regs largely become effective 90 days after publication in the Federal Register, on April 4, 2025, with certain provisions going into effect 270 days following publication.
Below, we provide a discussion of various key aspects of PADFA and the Bulk Data Regs and key considerations to bear in mind, including with respect to the scope of application, covered data, service provider/vendor transfers, security requirements, downstream transfer and diligence obligations, and important exemptions provided under each. Further below, we provide a table for handy reference with select definitions and information from each legal regime.
At first blush, given their focus on national security and sensitive data, PADFA and the Bulk Data Regs would appear to apply to a limited slice of companies in the U.S. that do business with certain foreign adversaries or countries of concern, or persons or companies related to them. Upon a deeper look, however, these regimes define “sensitive” data extremely broadly and could apply to any U.S. business transferring data overseas (the Bulk Data Regs in particular), including multi-national companies that transfer data between and among affiliated companies throughout the world. As a result, U.S.-based and multi-national companies that do business or transfer U.S. data overseas, whether to adversarial countries like China and Russia or elsewhere, should carefully review PADFA and the Bulk Data Regs to understand whether and to what extent these legal regimes may apply to their organizations.
If you have any questions, please reach out to the author or your SPB relationship partner.
Scope of Application
PADFA only applies to “data brokers” that transfer “personally identifiable sensitive data” to certain foreign adversaries or persons located in, or controlled by, foreign adversaries, namely China, Russia, Iran, and North Korea. The Bulk Data Regs potentially apply to any U.S. entity that transfers “government-related data” or “bulk” “sensitive personal data” overseas, including other than to countries of concern or “covered persons.” A “covered person” under the Bulk Data Regs includes a foreign entity that is 50% owned, directly or indirectly, by an entity that is organized/chartered under the laws of, or has a principal place of business in, a country of concern. (This definition is broader and more nuanced but squarely covers entities that are majority-owned by individuals/entities in China or other countries of concern. See table below.) The Bulk Data Regs’ countries of concern consist of China (incl. Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela.
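The aggregate ownership test behind the “covered person” concept summarized above can be sketched as a simple check. The data model, country codes, and function below are illustrative assumptions of ours, not terms or mechanics defined in the Bulk Data Regs, which apply a more nuanced, multi-pronged definition (see the table below).

```python
# Countries of concern per the Bulk Data Regs summary above
# (China incl. Hong Kong and Macau, Russia, Iran, North Korea, Cuba, Venezuela).
COUNTRIES_OF_CONCERN = {"CN", "HK", "MO", "RU", "IR", "KP", "CU", "VE"}

def is_covered_by_ownership(owners: list[tuple[str, float]]) -> bool:
    """owners: (owner's country code, fractional stake) pairs.

    Returns True if owners from countries of concern hold 50% or more
    in the aggregate -- one prong of the "covered person" definition.
    """
    total = sum(stake for country, stake in owners
                if country in COUNTRIES_OF_CONCERN)
    return total >= 0.50

# Two Chinese entities holding 30% and 25% trip the aggregate test,
# even though neither stake is 50% on its own.
print(is_covered_by_ownership([("CN", 0.30), ("CN", 0.25), ("US", 0.45)]))
```

Note that the real definition also reaches indirect ownership, entities organized in or headquartered in a country of concern, certain individuals, and persons designated by the Attorney General, none of which this sketch attempts to model.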
Under PADFA, “data broker” is provided a similar definition to those found under U.S. state data broker laws, covering entities that collect and sell or otherwise make available data regarding individuals from whom the entity did not collect directly (see the table below for the definition). On the other hand, the Bulk Data Regs’ concept of “data brokerage” focuses on the lack of a direct relationship between the data subject and the entity receiving the data from the U.S. entity.
Covered Data
Both PADFA and the Bulk Data Regs are incredibly far-reaching when it comes to their respective covered data definitions, providing “sensitive” data terms that are much broader than those found in state consumer privacy laws. As to the Bulk Data Regs’ “sensitive personal data,” certain thresholds must be met (e.g., 1,000 devices for precise geolocation data, 100,000 individuals for “covered identifiers”) to invoke its requirements, which may serve to exclude from its scope companies making incidental transfers of certain sensitive data. However, it is worth noting that the definition of “bulk” is somewhat contrary to the common notion of the term since, for some types of data, the threshold is quite low (e.g., 1,000 data subjects for precise geolocation, biometric, and human ‘omic data). In any event, the thresholds may not help companies in data-intensive industries, such as advertising technology, avoid the reach of the Bulk Data Regs. The Bulk Data Regs’ thresholds do not apply to “government-related data,” such that any transfer of such data to countries of concern or covered persons falls within their scope.
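The threshold logic discussed above can be illustrated with a small lookup. The category keys below are our own shorthand for the data categories in the rule's "bulk" definition (reproduced in the table further below), and the figures reflect the thresholds as described in this alert; this is an illustrative sketch, not a compliance tool.

```python
# "Bulk" thresholds as summarized above; "more than" the listed count,
# aggregated over the preceding 12 months. Category names are our own
# shorthand, not defined terms from the Bulk Data Regs.
BULK_THRESHOLDS = {
    "human_omic": 1_000,            # U.S. persons
    "human_genomic": 100,           # U.S. persons (lower sub-threshold)
    "biometric": 1_000,             # U.S. persons
    "precise_geolocation": 1_000,   # counted in U.S. devices
    "personal_health": 10_000,      # U.S. persons
    "personal_financial": 10_000,   # U.S. persons
    "covered_identifiers": 100_000, # U.S. persons
}

def meets_bulk_threshold(category: str, count: int) -> bool:
    """True if a 12-month aggregate count for a category is 'bulk'."""
    return count > BULK_THRESHOLDS[category]

print(meets_bulk_threshold("precise_geolocation", 1_500))   # True
print(meets_bulk_threshold("covered_identifiers", 50_000))  # False
```

As the low device-count threshold suggests, an ad-tech or mobile-app business handling precise geolocation data could cross into "bulk" territory with a comparatively small user base.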
Transfers to Service Providers and Vendors; Security Requirements
PADFA exempts transfers to “service providers” from its scope of restricted transfers. The definition of “service provider” includes entities that would typically qualify as service providers and processors under other legal schemes, namely entities that receive data from or on behalf of a data controller and that collect, process, or transfer data on behalf of, and at the direction of, the data controller (provided that the data controller is not a foreign adversary country or controlled by a foreign adversary country) (see the service provider definition in the “exemptions” section of the table below).
The Bulk Data Regs explicitly prohibit transfers of certain data made pursuant to “vendor agreements,” subject to an exemption where the U.S. entity imposes specific security requirements on the vendor. Notably, this exemption does not apply to transfers of bulk “human ‘omic data”. The security requirements exemption also applies to covered data transactions involving employment agreements and investment agreements. The applicable security requirements were promulgated in parallel by the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”), which is part of the U.S. Department of Homeland Security. PADFA does not require entities to impose security requirements on service providers.
Downstream Transfer and Diligence Obligations
In addition to the restrictions on certain transfers to countries of concern and covered persons, the Bulk Data Regs require U.S. entities to contractually restrict “foreign person”-recipients of covered data in “data brokerage” transactions from transferring such data to countries of concern and covered persons, and to implement a diligence and reporting program for violations of recipients’ obligations. As a result, this aspect of the Bulk Data Regs may impose compliance obligations, including ongoing diligence on overseas data transfers, on a broad swath of U.S. entities, even if they do not do business with countries of concern or covered persons. PADFA does not impose similar obligations.
Exemptions
The Bulk Data Regs provide a number of exemptions for various transactions or transfers, including those related to official business of the U.S. government; transactions “ordinarily incident to and part of the provision of financial services”; corporate group transactions; “Transactions required or authorized by Federal law or international agreements, or necessary for compliance with Federal law”; investment agreements subject to a CFIUS action; and transactions “ordinarily incident to and part of the provision of telecommunications services”. While many of these exemptions may necessitate a deeper look for various companies, U.S. companies that are subsidiaries of companies in China or other countries of concern, or U.S. companies that otherwise have affiliates in such countries, should carefully consider the corporate group transaction exemption. This provision exempts from much of the regulations’ scope data transactions between U.S. entities and subsidiaries or affiliates located in or otherwise subject to the ownership, direction, jurisdiction, or control of a country of concern and that are ordinarily incident to and part of administrative or ancillary business operations (including HR, payroll and other corporate financial activities, sharing data with advisors for regulatory compliance, business travel, employee benefits, and employee communications).
PADFA does not have similar exemptions, though there are a number of activities that exclude entities from the definition of data broker, including transfers to service providers as discussed above, as well as data-level exemptions such as those for certain publicly available information. These are laid out further in the table below.
Key Concepts and Definitions
Bulk Data Regs
PADFA
Prohibited Activities
The Bulk Data Regs make it illegal to knowingly engage in a covered data transaction involving data brokerage with a country of concern or covered person.
Covered Data Transaction
A covered data transaction is any transaction that involves any access by a country of concern or covered person to any government-related data or bulk U.S. sensitive personal data and that involves:
(1) Data brokerage;
(2) A vendor agreement;
(3) An employment agreement; or
(4) An investment agreement.
Under PADFA, it is unlawful for a data broker to sell, license, rent, trade, transfer, release, disclose, provide access to, or otherwise make available personally identifiable sensitive data of a United States individual to: (1) any foreign adversary country; or (2) any entity that is controlled by a foreign adversary.
Data broker definition
“Data brokerage” means the sale of data, licensing of access to data, or similar commercial transactions, excluding an employment agreement, investment agreement, or a vendor agreement, involving the transfer of data from any person (the provider) to any other person (the recipient), where the recipient did not collect or process the data directly from the individuals linked or linkable to the collected or processed data. (There is no definition of “data broker.”)
A “data broker” is defined as an entity that, for valuable consideration, sells, licenses, rents, trades, transfers, releases, discloses, provides access to, or otherwise makes available data of United States individuals that the entity did not collect directly from such individuals to another entity that is not acting as a service provider.
Covered Data
The Bulk Data Regs regulate covered transactions involving government-related data and bulk sensitive personal data.
Government-Related Data
(1) Any precise geolocation data, regardless of volume, for any location enumerated on the “Government-Related Location Data List” in the Bulk Data Regs.
(2) Any sensitive personal data, regardless of volume, that a transacting party markets as linked or linkable to current or recent former employees or contractors, or former senior officials, of the United States Government, including the military and Intelligence Community.
Sensitive Personal Data
The term sensitive personal data means covered personal identifiers, precise geolocation data, biometric identifiers, human ‘omic data, personal health data, personal financial data, or any combination thereof.
Covered Personal Identifiers
The term covered personal identifiers means any listed identifier: (1) In combination with any other listed identifier; or (2) In combination with other data that is disclosed by a transacting party pursuant to the transaction such that the listed identifier is linked or linkable to other listed identifiers or to other sensitive personal data.
Exclusion. The term covered personal identifiers excludes: (1) Demographic or contact data that is linked only to other demographic or contact data (such as first and last name, birthplace, ZIP code, residential street or postal address, phone number, and email address and similar public account identifiers); and (2) A network-based identifier, account-authentication data, or call-detail data that is linked only to other network-based identifier, account-authentication data, or call-detail data as necessary for the provision of telecommunications, networking, or similar service.
Listed Identifier
The term listed identifier means any piece of data in any of the following data fields:
(a) Full or truncated government identification or account number (such as a Social Security number, driver’s license or State identification number, passport number, or Alien Registration Number);
(b) Full financial account numbers or personal identification numbers associated with a financial institution or financial-services company;
(c) Device-based or hardware-based identifier (such as International Mobile Equipment Identity (“IMEI”), Media Access Control (“MAC”) address, or Subscriber Identity Module (“SIM”) card number);
(d) Demographic or contact data (such as first and last name, birth date, birthplace, ZIP code, residential street or postal address, phone number, email address, or similar public account identifiers);
(e) Advertising identifier (such as Google Advertising ID, Apple ID for Advertisers, or other mobile advertising ID (“MAID”));
(f) Account-authentication data (such as account username, account password, or an answer to security questions);
(g) Network-based identifier (such as Internet Protocol (“IP”) address or cookie data); or
(h) Call-detail data (such as Customer Proprietary Network Information (“CPNI”)).
Personal Financial Data
The term personal financial data means data about an individual’s credit, charge, or debit card, or bank account, including purchases and payment history; data in a bank, credit, or other financial statement, including assets, liabilities, debts, or trades in a securities portfolio; or data in a credit report or in a “consumer report” (as defined in 15 U.S.C. 1681a(d)).
Personal Health Data
The term personal health data means health information that indicates, reveals, or describes the past, present, or future physical or mental health or condition of an individual; the provision of healthcare to an individual; or the past, present, or future payment for the provision of healthcare to an individual. This term includes basic physical measurements and health attributes (such as bodily functions, height and weight, vital signs, symptoms, and allergies); social, psychological, behavioral, and medical diagnostic, intervention, and treatment history; test results; logs of exercise habits; immunization data; data on reproductive and sexual health; and data on the use or purchase of prescribed medications.
Human ‘Omic Data
The term human ‘omic data means human genomic data, human epigenomic data, human proteomic data, and human transcriptomic data, but excludes pathogen-specific data embedded in human ‘omic data sets.
Bulk
The term bulk means any amount of sensitive personal data that meets or exceeds the following thresholds at any point in the preceding 12 months, whether through a single covered data transaction or aggregated across covered data transactions involving the same U.S. person and the same foreign person or covered person:
(a) Human ‘omic data collected about or maintained on more than 1,000 U.S. persons, or, in the case of human genomic data, more than 100 U.S. persons;
(b) Biometric identifiers collected about or maintained on more than 1,000 U.S. persons;
(c) Precise geolocation data collected about or maintained on more than 1,000 U.S. devices;
(d) Personal health data collected about or maintained on more than 10,000 U.S. persons;
(e) Personal financial data collected about or maintained on more than 10,000 U.S. persons;
(f) Covered personal identifiers collected about or maintained on more than 100,000 U.S. persons; or
(g) Combined data, meaning any collection or set of data that contains more than one of the categories in paragraphs (a) through (f) of this section, or that contains any listed identifier linked to categories in paragraphs (a) through (e) of this section, where any individual data type meets the threshold number of persons or devices collected or maintained in the aggregate for the lowest number of U.S. persons or U.S. devices in that category of data.
Exclusions
The term sensitive personal data, and each of the categories of sensitive personal data, excludes:
(1) Public or nonpublic data that does not relate to an individual, including such data that meets the definition of a “trade secret” (as defined in 18 U.S.C. 1839(3)) or “proprietary information” (as defined in 50 U.S.C. 1708(d)(7));
(2) Data that is, at the time of the transaction, lawfully available to the public from a Federal, State, or local government record (such as court records) or in widely distributed media (such as sources that are generally available to the public through unrestricted and open-access repositories);
(3) Personal communications; and
(4) Information or informational materials and ordinarily associated metadata or metadata reasonably necessary to enable the transmission or dissemination of such information or informational materials.
Personally Identifiable Sensitive Data
The term “personally identifiable sensitive data” means any sensitive data that identifies or is linked or reasonably linkable, alone or in combination with other data, to an individual or a device that identifies or is linked or reasonably linkable to an individual. This is much broader than the Bulk Data Regs’ corresponding concept, in part because it does not require a certain volume of data.
Sensitive Data
The term “sensitive data” includes the following:
(A) A government-issued identifier, such as a Social Security number, passport number, or driver’s license number.
(B) Any information that describes or reveals the past, present, or future physical health, mental health, disability, diagnosis, or healthcare condition or treatment of an individual.
(C) A financial account number, debit card number, credit card number, or information that describes or reveals the income level or bank account balances of an individual.
(D) Biometric information.
(E) Genetic information.
(F) Precise geolocation information.
(G) An individual’s private communications such as voicemails, emails, texts, direct messages, mail, voice communications, and video communications, or information identifying the parties to such communications or pertaining to the transmission of such communications, including telephone numbers called, telephone numbers from which calls were placed, the time calls were made, call duration, and location information of the parties to the call.
(H) Account or device log-in credentials, or security or access codes for an account or device.
(I) Information identifying the sexual behavior of an individual.
(J) Calendar information, address book information, phone or text logs, photos, audio recordings, or videos, maintained for private use by an individual, regardless of whether such information is stored on the individual’s device or is accessible from that device and is backed up in a separate location.
(K) A photograph, film, video recording, or other similar medium that shows the naked or undergarment-clad private area of an individual.
(L) Information revealing the video content requested or selected by an individual.
(M) Information about an individual under the age of 17.
(O) Information identifying an individual’s online activities over time and across websites or online services.
(P) Information that reveals the status of an individual as a member of the Armed Forces.
(Q) Any other data that a data broker sells, licenses, rents, trades, transfers, releases, discloses, provides access to, or otherwise makes available to a foreign adversary country, or entity that is controlled by a foreign adversary, for the purpose of identifying the types of data listed in subparagraphs (A) through (P).
Covered data recipients
The term covered person means:
(1) A foreign person that is an entity that is 50% or more owned, directly or indirectly, individually or in the aggregate, by one or more countries of concern or persons described in paragraph (a)(2) of this section; or that is organized or chartered under the laws of, or has its principal place of business in, a country of concern;
(2) A foreign person that is an entity that is 50% or more owned, directly or indirectly, individually or in the aggregate, by one or more persons described in paragraphs (a)(1), (3), (4), or (5) of this section;
(3) A foreign person that is an individual who is an employee or contractor of a country of concern or of an entity described in paragraphs (a)(1), (2), or (5) of this section;
(4) A foreign person that is an individual who is primarily a resident in the territorial jurisdiction of a country of concern; or
(5) Any person, wherever located, determined by the Attorney General: (i) To be, to have been, or to be likely to become owned or controlled by or subject to the jurisdiction or direction of a country of concern or covered person; (ii) To act, to have acted or purported to act, or to be likely to act for or on behalf of a country of concern or covered person; or (iii) To have knowingly caused or directed, or to be likely to knowingly cause or direct a violation of this part.
Countries of concern = China (incl. Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela.
“Person” means an individual or entity.
“Foreign person” means any person that is not a U.S. person.
“U.S. person” means any United States citizen, national, or lawful permanent resident; any individual admitted to the United States as a refugee under 8 U.S.C. 1157 or granted asylum under 8 U.S.C. 1158; any entity organized solely under the laws of the United States or any jurisdiction within the United States (including foreign branches); or any person in the United States.
“Foreign adversary” = China, Russia, Iran, and North Korea.
The term “controlled by a foreign adversary” means, with respect to an individual or entity, that such individual or entity is– (A) a foreign person that is domiciled in, is headquartered in, has its principal place of business in, or is organized under the laws of a foreign adversary country; (B) an entity with respect to which a foreign person or combination of foreign persons described in subparagraph (A) directly or indirectly own at least a 20 percent stake; or (C) a person subject to the direction or control of a foreign person or entity described in subparagraph (A) or (B).
Notable Exemptions
The Bulk Data Regs provide a number of exemptions:
• Personal communications;
• Information or informational materials;
• Travel;
• Official business of the U.S. government;
• Transactions “ordinarily incident to and part of the provision of financial services”;
• Corporate group transactions;
• “Transactions required or authorized by Federal law or international agreements, or necessary for compliance with Federal law”;
• Investment agreements subject to a CFIUS action;
• Transactions “ordinarily incident to and part of the provision of telecommunications services”;
• “Drug, biological product, and medical device authorizations”; and
• “Other clinical investigations and post-marketing surveillance data.”
(B) Exclusion.–The term “data broker” does not include an entity to the extent such entity–(i) is transmitting data of a United States individual, including communications of such an individual, at the request or direction of such individual, (ii) is providing, maintaining, or offering a product or service with respect to which personally identifiable sensitive data, or access to such data, is not the product or service; (iii) is reporting or publishing news or information that concerns local, national, or international events or other matters of public interest; (iv) is reporting, publishing, or otherwise making available news or information that is available to the general public–(I) including information from–(aa) a book, magazine, telephone book, or online directory; (bb) a motion picture; (cc) a television, internet, or radio program; (dd) the news media; or (ee) an internet site that is available to the general public on an unrestricted basis; and (II) not including an obscene visual depiction (as such term is used in section 1460 of title 18, United States Code); or (v) is acting as a service provider.
(8) Service provider.–The term “service provider” means an entity that– (A) collects, processes, or transfers data on behalf of, and at the direction of– (i) an individual or entity that is not a foreign adversary country or controlled by a foreign adversary; or (ii) a Federal, State, Tribal, territorial, or local government entity; and (B) receives data from or on behalf of an individual or entity described in subparagraph (A)(i) or a Federal, State, Tribal, territorial, or local government entity.
Enforcement and Penalties
The Bulk Data Regs are enforced by the Department of Justice and allow for the imposition of both civil and criminal penalties.
Current maximum civil penalties are not to exceed the greater of $368,136 or an amount that is twice the amount of the transaction that is the basis of the violation with respect to which the penalty is imposed.
Potential criminal fines and imprisonment are available for willful violations of the regulations. In particular, a willful violation may result in a fine of up to $1,000,000, imprisonment of not more than 20 years, or both.
A violation of [PADFA] shall be treated as a violation of a rule defining an unfair or a deceptive act or practice under section 18(a)(1)(B) of the Federal Trade Commission Act (15 U.S.C. 57a(a)(1)(B)).
The Federal Trade Commission is provided with enforcement authority under PADFA. Remedies for violation of Section 18(a)(1)(B) of the FTC Act include civil penalties of up to $50,120 per violation and various forms of equitable relief (e.g., disgorgement, injunctions, etc.).
Colorado Attorney General Announces Adoption of Amendments to Colorado Privacy Act Rules + Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement
Colorado Adopts Amendments to CPA Rules
The Colorado Attorney General announced the adoption of amendments to the Colorado Privacy Act (“CPA”) rules. The rules will become effective on January 30, 2025. The rules provide enhanced protections for the processing of biometric data as well as the processing of the online activities of minors. Specifically, companies must develop and implement a written biometric data policy, implement appropriate security measures regarding biometric data, provide notice of the collection and processing of biometric data, obtain employee consent for the processing of biometric data, and provide a right of access to such data. In the context of minors, the amendments require that entities obtain consent before using any system design feature designed to significantly increase a known minor’s use of an online service, and that they update their Data Protection Assessments to address processing that presents heightened risks to minors. Entities already subject to the CPA should carefully review whether they may have heightened obligations for the processing of employee biometric data, a category of data previously exempt from the scope of the CPA.
Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement
A proposed settlement in the Clearview AI Illinois Biometric Information Privacy Act (“BIPA”) litigation is facing opposition from 22 states and the District of Columbia. The Attorneys General of each state argue that the settlement, which received preliminary approval in June 2024, lacks meaningful injunctive relief and offers an unusual financial stake in Clearview AI to plaintiffs. The settlement would grant the class of consumers a 23 percent stake in Clearview AI, potentially worth $52 million, based on a September 2023 valuation. Alternatively, the class could opt for 17 percent of the company’s revenue through September 2027. The AGs contend that the settlement does not adequately address consumer privacy concerns and that the proposed 39 percent attorney fee award is excessive. Clearview AI has filed a motion to dismiss the states’ opposition, arguing it was submitted after the deadline for objections. A judge will consider granting final approval for the settlement at a hearing scheduled on January 30, 2025.
The BR International Trade Report: January 2025
Recent Developments
President Biden blocks Nippon Steel’s acquisition of US Steel. On January 3, President Biden announced that he would block the $15 billion sale of U.S. Steel to Japan’s Nippon Steel, citing national security concerns. President Biden’s decision came after the Committee on Foreign Investment in the United States (“CFIUS”) reportedly deadlocked in its review of the transaction and referred the matter to the President. U.S. Steel and Nippon Steel condemned the President’s action in a joint statement, arguing it marked “a clear violation of due process and the law governing CFIUS,” and on January 6 filed suit challenging the measure.
Canadian Prime Minister Justin Trudeau announces his resignation as party leader and prime minister. On January 6, Prime Minister Trudeau, who has served as the Liberal Party leader since 2013 and prime minister since 2015, declared his intention to “resign as party leader, as prime minister, after the party selects its next leader through a robust, nationwide, competitive process.” Governor General Mary Simon suspended, or prorogued, the Canadian Parliament until March 24 to allow the Liberal Party time to select its new leader—who will replace Trudeau as prime minister leading up to the general elections, which must be held by October 20. Separately, details have begun to leak of the potential Canadian retaliation against President-elect Trump’s threatened tariffs on Canadian goods. This retaliation could include tariffs on certain steel, ceramics, plastics, and orange juice.
U.S. Department of Commerce announces new export controls for AI chips. On January 13, the U.S. Department of Commerce’s Bureau of Industry and Security (“BIS”) issued a new interim final rule in an effort to keep advanced artificial intelligence (“AI”) chips from foreign adversaries. The interim final rule seeks to implement a three-tiered system of export restrictions. Under the new rule, (i) certain allied countries would face no new restrictions, (ii) non-allied countries would face certain restrictions, and (iii) U.S. adversaries would face almost absolute restrictions. BIS followed up with another rule on January 15 imposing heightened export controls for foundries and packaging companies exporting advanced chips, with exceptions for exports to an approved list of chip designers and for chips packaged by certain approved outsourced semiconductor assembly and test services (“OSAT”) companies.
Biden Administration imposes sanctions against Russia’s energy sector in parting blow. On January 10, the U.S. Department of the Treasury (“Treasury”) issued determinations authorizing the imposition of sanctions against any person operating in Russia’s energy sector and prohibiting U.S. persons from supplying petroleum services to Russia, and designated two oil majors—Gazprom Neft and Surgutneftegas—among others.
BIS issues final ICTS rule on connected vehicle imports and begins review of drone supply chain. On January 14, BIS issued a final rule under the Information and Communications Technology and Services (“ICTS”) supply chain regulations prohibiting the import of certain connected vehicles and connected vehicle hardware, capping a rulemaking process that started in March 2024. The rules, which will have a significant impact on the auto industry supply chain, will apply in certain cases to model year 2027 and in certain other cases to model year 2029. (See our alert on BIS’s proposed rule from September 2024.) Meanwhile, BIS launched an ICTS review on January 2 into the potential risk associated with Chinese and Russian involvement in the supply chains of unmanned aircraft systems, issuing an Advance Notice of Proposed Rulemaking.
China implicated in cyberattack on the U.S. Treasury. In December, a China state-sponsored Advanced Persistent Threat (“APT”) actor hacked Treasury, using a stolen key. Reports suggest that the attack targeted Treasury’s Office of Foreign Assets Control (“OFAC”), which administers U.S. sanctions programs, among other elements of Treasury. Initial reporting indicated that only unclassified documents were accessed by hackers, although the extent of the attack is still largely unknown. The Chinese government has denied involvement.
United Kingdom joins the Comprehensive and Progressive Agreement for Trans-Pacific Partnership. On December 15, the United Kingdom officially joined the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (“CPTPP”)—a trade agreement between Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, and Vietnam—nearly four years after submitting its 2021 application. The United Kingdom is the first non-founding country to join the CPTPP.
Fallout of failed presidential martial law declaration continues in South Korea. South Korea continues to face unrest after last month’s short-lived declaration of martial law by President Yoon Suk Yeol, which led to his December 14 impeachment and January 15 arrest by anti-corruption investigators. On December 27, the National Assembly also impeached Prime Minister Han Duck-soo, who had been serving as acting president for the two weeks following Yoon’s impeachment. Finance Minister Choi Sang-mok now serves as acting president, and faces calls from South Korean investigators to order the presidential security service to comply with a warrant for President Yoon’s arrest.
Office of the U.S. Trade Representative initiates investigation into legacy chips from China. In late December, U.S. Trade Representative (“USTR”) Katherine Tai announced a new Section 301 investigation “regarding China’s acts, policies, and practices related to the targeting of the semiconductor industry for dominance.” The USTR will focus its initial investigation on “legacy chips,” which are integral to the U.S. manufacturing economy. The USTR began accepting written comments and requests to appear at the hearing on January 6. The public hearing is scheduled for March 11-12.
President-elect Donald Trump eyes the Panama Canal and Greenland. At the December 2024 annual conference for Turning Point USA, President-elect Donald Trump criticized Panama’s management of the Panama Canal, indicating that the United States should reclaim control due to “exorbitant prices” to American shipping and naval vessels and Chinese influence in the Canal Zone. Panamanian President José Raúl Mulino rejected Trump’s claims, stating “[t]he canal is Panamanian and belongs to Panamanians. There’s no possibility of opening any kind of conversation around this reality.” President-elect Trump also has sought to revive his 2019 proposal to purchase Greenland from Denmark, emphasizing its strategic position in the Arctic and untapped natural resources. In response, Greenland’s Prime Minister Mute Egede stated that Greenland is not for sale, but would “work with the U.S.—yesterday, today, and tomorrow.”
Nicolás Maduro sworn in for third presidential term, despite disputed election results. On January 10, Nicolás Maduro Moros was inaugurated for another six-year term as president of Venezuela, despite evidence he lost the election to opposition candidate Edmundo González Urrutia. González, recognized by the Biden Administration as the president-elect of Venezuela, met with President Biden in the White House on January 6. In response to Maduro’s inauguration, the United States announced new sanctions programs against Maduro associates and extended the 2023 designation of Venezuela for Temporary Protected Status by 18 months.
U.S. Department of Defense designates more entities on Chinese Military Companies list. In its annual update of the Chinese Military Companies list (“CMC List”), the Department of Defense (“DoD”) added dozens of Chinese companies to the list, including well-known technology, AI, and battery companies, bringing the total number of CMC List entities to 134. Beginning in June 2026, DoD is prohibited from dealing with the newly designated companies.
European Union and China consider summit to mend ties. On January 14, European Council President António Costa and Chinese President Xi Jinping spoke via phone call, reportedly agreeing to host a summit on May 6, 2025—the 50th anniversary of EU-China diplomatic relations. The conversation comes just days before the inauguration of President-elect Donald Trump, who has threatened additional tariffs on Chinese goods and pushed the European Union to further decouple from China. Despite Beijing’s and Brussels’s willingness to meet, China-EU trade tensions remain high, highlighted by the European Commission’s October decision to impose duties of up to 35% on Chinese-made electric vehicles.
President Biden Issues Second Cybersecurity Executive Order
In light of recent cyberattacks targeting the federal government and United States supply chains, President Biden’s administration has released an Executive Order (the “Order”) in an attempt to modernize and enhance the federal government’s cybersecurity posture, as well as introduce and expand upon new or existing requirements imposed on third-party suppliers to federal agencies.
To the extent that the mandates set forth in this Order remain in place after President-elect Donald Trump takes office, third-party vendors and suppliers that contract with the federal government will need to ensure compliance with new or updated cybersecurity standards in order to remain eligible to contract with federal agencies. That said, even if this Executive Order is not carried forward by the next administration, it still provides general guidance on best practices for cybersecurity. While some of these practices may not be novel to the cybersecurity industry, the Order serves as yet another guidance document for companies on what constitutes “reasonable security.”
Below is a high-level, non-exhaustive summary of some of the key highlights in the Executive Order. Please note that the mandates would take effect on different dates in accordance with the time frames discussed in the Order.
Federal Government’s Latest Attempt to Modernize its Cybersecurity Posture
The Executive Order underscores the importance of modernizing the federal government’s cybersecurity infrastructure to defend against cyber campaigns by foreign adversaries targeting the government.
One of the ways in which the new Order attempts to do this is by directing federal agencies to implement “strong identity authentication and encryption” across communications transmitted via the internet, including email, voice and video conferencing, and instant messaging.
In addition, as federal agencies have improved their cyber defenses, adversaries have targeted the weak links in agency supply chains and the products and services upon which the government relies. In light of this pervasive threat, the Executive Order places a strong emphasis on the need for federal agencies to integrate cybersecurity supply chain risk management programs into enterprise-wide risk management by requiring those agencies, via the Office of Management and Budget (OMB), to (i) comply with the guidance in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-161 (Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations), and (ii) provide annual updates to OMB on their compliance efforts with respect to the same. The OMB’s requirements will address the integration of cybersecurity into the acquisition lifecycle through acquisition planning, source selection, responsibility determination, security compliance evaluation, contract administration, and performance evaluation.
The Executive Order also addresses the potential to use artificial intelligence (AI) to defend against cyberattacks by increasing the government’s ability to quickly identify new vulnerabilities and automate cyber defenses. Specifically, the Order directs certain agencies to prioritize research on topics related to AI and cyber defense, which include: (i) human-AI interaction methods to assist with defensive cyber analysis; (ii) security of AI coding assistance and the security of AI-generated code; (iii) methods for designing secure AI systems; and (iv) methods for prevention, response, remediation, and recovery from cyber incidents involving AI systems.
Beyond using modern technology to defend against increasing cyber threats, the Executive Order aims to centralize the government’s cybersecurity governance by expanding the Cybersecurity and Infrastructure Security Agency’s (CISA) role as the lead agency overseeing federal civilian agencies’ cybersecurity programs.
Enhancing and Expanding Upon Requirements Imposed on Third-Party Vendors of Federal Agencies
In addition to requiring federal agencies to adjust their cybersecurity posture, the Executive Order also aims to ensure that third-party vendors of federal agencies undertake various measures intended to protect the federal government and critical infrastructure systems from malicious cyberattacks and to strengthen United States supply chains.
Third-Party Software Providers and Secure Software Development Practices
Part of the latest Executive Order focuses on transparency and deployment of secure software that meets standards set forth in the Biden administration’s first cybersecurity Executive Order 14028, which was issued in May 2021. Under that Order, suppliers are required to attest that they adhere to secure software development practices, in language spurred by Russian hackers who infected an update of the widely used SolarWinds Orion software to penetrate the networks of federal agencies. Given that insecure software remains a challenge for both providers and users, it has continued to make the federal government and critical infrastructure systems vulnerable to additional malicious cyber incidents. This was recently illustrated by several attacks, including the 2024 exploitation of a vulnerability in a popular file transfer application used by multiple federal agencies.
Against this backdrop, the newly released Executive Order sets forth more robust attestation requirements for software providers that support critical government services and pushes for enhanced transparency by publicizing when these providers have submitted their attestations so that others can know what software meets the secure standards. In a similar vein, the new Order also aims to provide federal agencies with a coordinated set of practical and effective security practices to require when they procure software by calling for (i) updates to certain frameworks established by NIST that are adhered to by federal agencies – such as NIST SP 800-218 (Secure Software Development Framework) (SSDF) – for the secure development and delivery of software, (ii) the issuance of new requirements by OMB that derive from NIST’s updated SSDF to apply to federal agencies’ use of third-party software, and (iii) potential revisions to CISA’s Secure Software Development Attestation to conform to OMB’s requirements.
Vendors of Consumer Internet-of-Things (IoT) Products and U.S. Cyber Trust Mark Label
To further protect the supply chain, the Executive Order recognizes the risks federal agencies face when purchasing IoT products. To address these risks, the Order requires the development of additional requirements for contracts with consumer IoT providers. Consumer IoT providers contracting with federal agencies will have to (i) comply with the minimum cybersecurity practices outlined by NIST, and (ii) carry United States Cyber Trust Mark labeling on their products. The initiative related to Cyber Trust Mark labeling was announced by the White House on January 7, 2025, and will require consumer IoT products to pass a U.S. cybersecurity audit in order to legally display the mark on advertising and packaging.
Cloud Service Providers
The Executive Order also requires the development of new guidelines for cloud service providers, which is unsurprising in light of the recent cyberattack on the U.S. Treasury Department where a sophisticated Chinese hacking group known as Silk Typhoon stole a digital key from BeyondTrust Inc.—a third-party service provider for the Treasury Department—and used it to access unclassified information maintained on Treasury Department user workstations. The breach utilized a technique known as token theft. Authentication tokens are designed to enhance security by allowing users to stay logged in without repeated password entry. However, if compromised, these tokens enable attackers to impersonate legitimate users, granting unauthorized access to sensitive systems.
While this incident is likely not the impetus behind the updated guidelines for cloud service providers, it underscores the importance of auditing third-party vendor security practices and taking measures to reduce the lifespan of tokens so as to limit their usefulness if stolen. These new guidelines under the Executive Order would mandate multifactor authentication, complex passwords, and storing cryptographic keys using hardware security keys for cloud service providers of federal agencies.
Key Takeaways
Although the fate of the Executive Order is uncertain with an incoming administration, organizations that contract with the federal government should closely monitor any developments as they will have to adhere to the new or enhanced cybersecurity requirements set out in the Order.
In addition, even if this Executive Order is revoked by the incoming administration, organizations should not miss the opportunity to evaluate whether their cybersecurity programs comply with industry-standard guidelines, such as NIST’s, as well as general best practices.
DEA Tightens Buprenorphine Telemedicine Prescribing Rules
The Drug Enforcement Administration (DEA) and the U.S. Department of Health & Human Services (HHS) just finalized their March 2023 proposed rule regarding telemedicine prescribing of buprenorphine. The final rule, effective February 17, 2025, allows DEA‑registered practitioners to prescribe Schedule III-V controlled substances approved for the treatment of opioid use disorder (OUD) (currently, buprenorphine) through audio-video visits and through audio-only visits in specific circumstances after certain requirements are met. Although these practices are currently allowed under the COVID-era telemedicine prescribing flexibilities through the end of 2025, the final rule introduces additional requirements for these prescriptions.
Requirements of the Final Rule
PDMP Check
Before prescribing a Schedule III-V controlled substance approved by the U.S. Food & Drug Administration (FDA) to treat OUD via telemedicine (currently limited to buprenorphine), DEA-registered practitioners must review the prescription drug monitoring program (PDMP) database of the state in which the patient is located at the time of the encounter.
Scope of Review: Practitioners must check PDMP data for any controlled substances issued to the patient within the past year. If less than a year of data is available, practitioners must review the entire available period.
Initial Prescription:
After reviewing the PDMP data and documenting the review, practitioners may issue an initial six-month supply of buprenorphine, which may be divided across several prescriptions, totaling six calendar months.
If the PDMP data is not available but the attempt to access it is documented, practitioners may prescribe only a seven-day supply of buprenorphine. Practitioners must continue to check the PDMP database to issue subsequent prescriptions. If, after checking, the PDMP remains unavailable and access attempts are documented, practitioners may prescribe subsequent seven-day supplies, up to the six-month limit.
Follow-Up Prescriptions
After the initial six-month supply, practitioners may issue additional prescriptions if they either:
Conduct an in-person medical exam; or
Meet one of the seven narrow exceptions under the Ryan Haight Act (discussed below) for telemedicine practitioners.
Once an in-person medical exam has been conducted, the practitioner and patient are no longer considered to be engaged in the practice of telemedicine, and the obligations outlined in the final rule will no longer apply.
Pharmacist Verification
Before dispensing these prescriptions, pharmacists must verify the identity of the patient using one of the following:
A state government-issued ID;
A federal government-issued ID; or
Other acceptable documentation, such as a paycheck, bank or credit card statement, utility bill, tax bill, or voter registration card.
A Brief History
The rules stem from the Ryan Haight Act, which amended the Controlled Substances Act to restrict practitioners from prescribing controlled substances unless the practitioner conducts an in-person examination of the patient. The Ryan Haight Act (at 21 U.S.C. § 802(54)) outlines seven exceptions under which practitioners may prescribe controlled substances via telemedicine without an in-person exam. The fifth exception involves practitioners who have obtained the long-awaited special registration. (Stay tuned for our discussion on the DEA’s proposed rule establishing a special registration.) The seventh exception involves other circumstances specified by regulation.
During the COVID-19 Public Health Emergency (PHE), the DEA issued letters on March 25, 2020, and March 31, 2020, granting temporary exceptions to the Ryan Haight Act and its implementing rules that enabled DEA-registered practitioners to prescribe controlled substances without an in-person exam and with a DEA registration in only one state. These telemedicine flexibilities enabled practitioners to prescribe Schedule II-V controlled substances through audio-video visits and audio-only visits. Audio-only visits are permitted if the practitioner has the capability to use audio-video, but the patient is either unable to use video or does not consent to it.
In March 2023, in anticipation of the PHE ending, the DEA issued a proposed rule regarding the expansion of telemedicine prescribing of buprenorphine, which received significant criticism from stakeholders. In response, the DEA quickly rescinded the proposed rule and extended the COVID-era flexibilities in May 2023. The flexibilities were subsequently extended in October 2023 and November 2024 and are now set to expire on December 31, 2025. (For more details, see our previous discussions on the DEA’s proposed rules for telemedicine prescribing of controlled substances and the first, second, and third temporary rules extending COVID-era flexibilities.) Now, in an effort to not lose ground on the expansion of telemedicine prescribing of buprenorphine, especially if the telemedicine flexibilities expire with the incoming Trump administration, the DEA and HHS have revised and finalized their proposed buprenorphine rule.
Comparing the Proposed and Final Rules
The final rule introduces several changes to the proposed rule, some of which are described below:
Supply Limitation: The initial 30-day prescription supply limitation via audio-only was increased to a six-month supply.
In-Person Medical Evaluation: The requirement to have an in-person medical evaluation, with three options for conducting it, to prescribe more than the initial supply of buprenorphine was removed.
Recordkeeping: The detailed recordkeeping requirements for each prescription a practitioner issues through a telemedicine encounter, such as whether the encounter was conducted via audio-video or audio-only, were removed.
PDMP Review: Although reviewing the PDMP database of the state in which the patient is located at the time of the encounter is still required, the specifications and recordkeeping requirements for the review were changed.
The DEA and HHS state that these changes are likely to address and alleviate many of the concerns raised by commenters, acknowledging that some of the previously proposed requirements would have placed undue burdens on both patients and practitioners.
Conclusion
We anticipate that many stakeholders will be dissatisfied with the final rule, particularly with the six-month duration for an initial supply, which may still be too short, and the nationwide PDMP check requirement, which is overly burdensome given the absence of a nationwide PDMP database — a burden the DEA continues to underestimate.
If the COVID-era telemedicine prescribing flexibilities expire without further extension, the final rule offers protection for prescribing buprenorphine to treat OUD. However, that protection is contingent on establishing a legitimate special registration process, which the DEA has yet to propose or implement. Given the uncertainty surrounding the incoming Trump administration’s priorities and its views on telemedicine prescribing of controlled substances, it is unclear whether the final rule will be withdrawn or left as-is. There is also uncertainty about whether the telemedicine prescribing flexibilities will expire after 2025.
FTC Finalizes Long-Awaited Updates to Children’s Privacy Rule
On January 16, 2025, the FTC announced the issuance of updates to the FTC’s Children’s Online Privacy Protection Rule (the “Rule”), which implements the federal Children’s Online Privacy Protection Act of 1998 (“COPPA”). The updates to the Rule come more than five years after the FTC initiated a rule review. The Commission vote on the Rule was 5-0, with various Commissioners filing separate statements. The updated Rule, which will be published in the Federal Register, contains several significant changes, but also stops short of the version proposed by the FTC in January 2024. The Rule will go into effect 60 days after its publication in the Federal Register; most entities subject to the Rule will have one year after publication to comply.
Key updates to the Rule include:
Requirement to obtain opt-in consent for targeted advertising to children and other disclosures of children’s personal information to third parties: The Rule will require operators of child-directed websites or online services to obtain separate verifiable parental consent before disclosing children’s personal information to third parties. According to a statement filed by outgoing FTC Chair Lina Khan, this means that operators will be prohibited from selling children’s personal information or disclosing it for targeted advertising purposes unless parents separately agree and opt in to these uses.
Limits on data retention: The Rule will prevent operators from retaining children’s personal information for longer than reasonably necessary to fulfill the specific documented purposes for which the data was collected. Operators also must maintain a written data retention policy that (1) details the specific business need for retaining children’s personal information and (2) sets forth a timeline for deleting this data. Operators may not retain children’s personal information indefinitely.
Changes to key definitions: The Rule also makes several changes to the definitions that govern its application. For example, the definition of “personal information” now includes biometric identifiers that can be used for the automated or semi-automated recognition of a child (e.g., fingerprints, handprints, retina patterns, iris patterns, genetic data – including a DNA sequence, voiceprints, gait patterns, facial templates, or faceprints). In addition, the factors the Commission will take into account in considering whether a website or service is “directed to children” will be expanded to include marketing or promotional materials or plans, representations to consumers or third parties, reviews by users or third parties and the ages of users on similar websites or services.
Increased Safe Harbor transparency: FTC-approved COPPA Safe Harbor programs will be required to identify in their annual reports to the Commission each operator subject to the self-regulatory program (“subject operator”) and all approved websites or online services, as well as any subject operator that left the program during the time period covered by the annual report. The Safe Harbor programs also must outline their business models in greater detail and provide copies of each consumer complaint related to a member’s violation of the program’s guidelines. In addition, Safe Harbor programs must publicly post a list of all current subject operators and, for each such operator, list each certified website or online service.
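The data-retention obligation above (a written policy tying each category of children’s data to a documented purpose and a deletion timeline) can be operationalized in code. The following is a minimal, hypothetical sketch; the purpose names, retention periods, and record schema are illustrative assumptions, not anything prescribed by the Rule.

```python
from datetime import date, timedelta

# Hypothetical retention schedule: documented purpose -> maximum retention period.
# An operator's actual schedule would come from its written retention policy.
RETENTION_POLICY = {
    "account_management": timedelta(days=365),
    "parental_consent_records": timedelta(days=730),
}

def records_due_for_deletion(records, today=None):
    """Return records held past the documented retention period for their purpose."""
    today = today or date.today()
    overdue = []
    for rec in records:
        limit = RETENTION_POLICY.get(rec["purpose"])
        if limit is not None and today - rec["collected_on"] > limit:
            overdue.append(rec)
    return overdue

# Illustrative records: record 1 is past the one-year limit, record 2 is not.
sample = [
    {"id": 1, "purpose": "account_management", "collected_on": date(2023, 1, 1)},
    {"id": 2, "purpose": "account_management", "collected_on": date(2025, 1, 1)},
]
print(records_due_for_deletion(sample, today=date(2025, 3, 1)))
```

A scheduled job built on a check like this is one way an operator might demonstrate that children’s data is not retained indefinitely and that deletion follows the timeline in its written policy.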
Importantly, the Rule is notable for what it does not contain.
No EdTech changes: Despite having proposed imposing a wide range of obligations on EdTech companies, the Rule avoids incorporating any education-related requirements. According to the FTC, because the Department of Education has indicated its intention to update its FERPA regulations (34 C.F.R. 99), the Commission sought to avoid changing COPPA in any way that might conflict with the DOE’s eventual amendments. Instead, the Commission states it will continue to enforce COPPA in the EdTech context consistent with its existing guidance.
No coverage of user engagement techniques: The Rule does not incorporate the proposal to require parental notification and consent for the collection of data used to encourage or prompt children’s prolonged use of a website or online service. The Commission indicated that, after reviewing the public comments, it believes the proposed use restriction “was overly broad and would constrain beneficial prompts and notifications.” The FTC cautioned, however, that it nevertheless may pursue enforcement under Section 5 of the FTC Act in appropriate cases to address unfair or deceptive acts or practices encouraging prolonged use of websites and online services that increase risks of harm to children.
Personalization and contextual advertising still exempted: The Rule does not limit the “support for the internal operations” exemption under COPPA to exclude operator-driven personalization or contextual advertising.
No need to tie personal information collected to specific uses: The Rule will not require that operators correlate each data element collected online from children to the particular use(s) of such data element.
In voting in support of the revised Rule, incoming FTC Chair Andrew Ferguson filed a separate statement expressing what he termed “serious problems” with the Rule, which he called “the result of the outgoing administration’s irresponsible rush to issue last-minute rules.” Ferguson would have required the Rule to clarify instances in which an operator’s addition of third parties to whom they provide children’s personal information would trigger a need for updated notice and refreshed consent. He also took issue with the prohibition on indefinite retention of children’s personal information, predicting that it “is likely to generate outcomes hostile to users.” Finally, he indicated his belief that the FTC missed an opportunity to make clear the Rule is not an obstacle to the use of children’s personal information solely for the purpose of age verification.
New Jersey AG Says Anti-Discrimination Law Covers Algorithmic Discrimination
Last week, New Jersey Attorney General Matthew Platkin announced new guidance that the New Jersey Law Against Discrimination (LAD) applies to algorithmic discrimination, i.e., when automated systems treat people differently or negatively based on protected characteristics. This can happen when algorithms are trained on biased data or when systems are designed in ways that embed bias. LAD prohibits discrimination based on a protected characteristic such as race, religion, national origin, sex, pregnancy, and gender identity, among other things. According to the guidance, employers, housing providers, and places of public accommodation who make discriminatory decisions using automated decision-making tools, like artificial intelligence (AI), would violate LAD. LAD is not an intent-based statute. Therefore, a party can violate LAD even if it uses an automated decision-making tool with no intent to discriminate or uses a discriminatory algorithm developed by a third party. The guidance does not create any new rights or obligations. However, in noting that the law covers automated decision-making, the guidance encourages companies to carefully design, test, and evaluate any AI system they seek to employ to help avoid producing discriminatory impacts.
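One common way companies screen automated decision tools for the kind of disparate impact the guidance warns about is the “four-fifths rule,” a long-standing heuristic (not a legal test) that flags a group whose selection rate falls below 80% of the highest group’s rate. The sketch below is a minimal, hypothetical illustration; the group labels and data are invented, and passing such a screen does not establish LAD compliance.

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected_bool) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(outcomes):
    """Flag each group whose selection rate is below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

# Hypothetical hiring-tool outcomes: group A selected at 60%, group B at 40%.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 40 + [("B", False)] * 60)
print(four_fifths_flags(decisions))  # group B: 0.40 / 0.60 < 0.8, so B is flagged
```

A screen like this is only a first pass; a flagged result would typically prompt deeper statistical and legal review of the tool and its training data.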
The CIO-CMO Collaboration: Powering Ethical AI and Customer Engagement
The rapid advancement of artificial intelligence (AI) technologies is reshaping the corporate landscape, offering unparalleled opportunities to enhance customer experiences and streamline operations. At the intersection of this digital transformation lie two key executives—the Chief Information Officer (CIO) and the Chief Marketing Officer (CMO). This dynamic duo, when aligned, can drive ethical AI adoption, ensure compliance, and foster personalized customer engagement powered by innovation and responsibility.
This blog explores how the collaboration between CIOs and CMOs is essential in balancing ethical AI implementations with compelling customer experiences. From data governance to technology infrastructure and cybersecurity, below is a breakdown of the critical aspects of this partnership and why organizations must align these roles to remain competitive in the AI-driven world.
Understanding Ethical AI: Balancing Innovation with Responsibility
Ethical AI isn’t just a buzzword; it’s a guiding principle that ensures AI solutions respect user privacy, avoid bias, and operate transparently. To create meaningful customer experiences while addressing the societal concerns surrounding AI, CIOs and CMOs must collaborate to design AI applications that are innovative and responsible.
CMOs focus on delivering dynamic, real-time, and personalized interactions to meet rising customer expectations. However, achieving this requires vast amounts of personal data, potentially risking violations of privacy regulations like the General Data Protection Regulation and the California Consumer Privacy Act. Enter the CIO, who ensures the technical infrastructure adheres to these laws while safeguarding the organization’s reputation. Together, the CIO and CMO can strike a delicate balance between leveraging AI for customer engagement and adhering to responsible AI practices.
The Role of Data Governance in AI-Driven Strategies
Data governance is the backbone of ethical AI and compelling customer engagement. CMOs rely on customer data to craft hyper-personalized campaigns, while CIOs are charged with maintaining that data’s security, accuracy, and ethical use. Without proper governance, organizations risk breaches, regulatory fines, and, perhaps most damagingly, a loss of trust among consumers.
Collaboration between CIOs and CMOs is necessary to establish clear data management protocols; this includes ensuring that all collected data is anonymized as needed, securely stored, and utilized in compliance with emerging AI content labeling regulations. The result is a transparent system that reassures customers and consistently delivers high-quality experiences.
Robust Technology Infrastructure for AI-Powered Customer Engagement
For AI to deliver on its promise of customer engagement, organizations require scalable, secure, and agile technology infrastructure. A close alignment between CIOs and CMOs ensures that marketing campaigns are supported by IT systems capable of handling diverse AI workloads.
Platforms driven by machine learning and big data analytics allow marketing teams to create real-time, omnichannel campaigns. Meanwhile, CIOs ensure these platforms integrate seamlessly into the organization’s technology stack without sacrificing security or performance. This partnership allows marketers to focus on innovative strategies while IT supports them with reliable and forward-thinking infrastructure.
Cybersecurity Challenges and the Integrated Approach of CIOs and CMOs
Customer engagement strategies powered by AI rely heavily on consumer trust, but cybersecurity threats lurk around every corner. According to Palo Alto Networks’ predictions, customer data is central to modern marketing initiatives. However, without an early alignment between CIOs and CMOs, the organization is exposed to risks like data breaches, compliance violations, and AI-related controversies.
A proactive collaboration between CIOs and CMOs ensures that potential vulnerabilities are identified and mitigated before they evolve into full-blown crises. Measures such as end-to-end data encryption, regular cybersecurity audits, and robust AI content labeling policies can protect the organization’s digital assets and reputation. This integrated approach enables businesses to foster lasting customer trust in a world of increasingly sophisticated cyber threats.
Case Studies: Successful CIO-CMO Collaborations
Case Study 1: A Retail Giant’s Transformation
One of the world’s largest retail chains successfully transformed its customer experience through the CIO-CMO collaboration. The CIO rolled out a scalable AI-driven recommendation engine, while the CMO used this tool to craft personalized shopping experiences. The result? A 35% increase in customer retention within a year and significant growth in lifetime customer value.
Case Study 2: Financial Services Leader
A financial services firm adopted an AI-powered chatbot to enhance its customer service. The CIO ensured compliance with strict financial regulations, while the CMO leveraged customer insights to refine the chatbot’s conversational design. Together, they created a seamless, trustworthy digital service channel that improved customer satisfaction scores by 28%.
These examples reinforce the advantages of partnership. By uniting their expertise, CIOs and CMOs deliver next-generation strategies that drive measurable business outcomes.
Future Trends in AI, Compliance, and Executive Collaboration
The evolving landscape of AI, compliance, and customer engagement is reshaping the roles of CIOs and CMOs. Here are a few trends to watch for in the coming years:
AI Transparency: Regulations will increasingly require companies to disclose how AI models were trained and how customer data is used. Alignment between CIOs and CMOs will be vital in meeting these demands without derailing marketing campaigns.
Hyper-Personalization: Advances in machine learning will allow marketers to offer even more granular personalization, but this will require sophisticated data-centric systems designed by CIOs.
AI Content Labeling: From machine-generated text to synthetic media, organizations must adopt clear labeling practices to distinguish between AI-driven and human-generated content.
By staying ahead of these trends, organizations can cement themselves as leaders in ethical AI and customer engagement.
Forging a Path to Sustainable AI Innovation
The digital transformation of business will continue to deepen the interconnected roles of the CIO and CMO. These two leaders occupy the dual pillars required for success in the AI era—technology prowess and customer-centric creativity. By aligning their goals and strategies early on, they can power ethical AI innovation, ensure compliance, and elevate customer experiences to new heights.
California AG Issues AI-Related Legal Guidelines for Developers and Healthcare Entities
The California Attorney General published two legal advisories this week:
Legal Advisory on the Application of Existing California Laws to Artificial Intelligence
Legal Advisory on the Application of Existing California Law to Artificial Intelligence in Healthcare
These advisories seek to remind businesses of consumer rights under the California Consumer Privacy Act, as amended by the California Privacy Rights Act (collectively, CCPA), and to advise developers who create, sell, or use artificial intelligence (AI) about their obligations under the CCPA.
Attorney General Rob Bonta said, “California is an economic powerhouse built in large part on technological innovation. And right alongside that economic might is a strong commitment to economic justice, workers’ rights, and competitive markets. We’re not successful in spite of that commitment — we’re successful because of it [. . .] AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI. Companies, including healthcare entities, are responsible for complying with new and existing California laws and must take full accountability for their actions, decisions, and products.”
Advisory No. 1: Application of Existing California Laws to Artificial Intelligence
This advisory:
Provides an overview of existing California laws (i.e., consumer protection, civil rights, competition, data protection laws, and election misinformation laws) that may apply to companies that develop, sell, or use AI;
Summarizes the new California AI laws that went into effect on January 1, 2025, addressing topics such as:
Disclosure Requirements for Businesses
Unauthorized Use of Likeness
Use of AI in Election and Campaign Materials
Prohibition and Reporting of Exploitative Uses of AI
Advisory No. 2: Application of Existing California Law to Artificial Intelligence in Healthcare
AI tools are used for tasks such as appointment scheduling, medical risk assessment, and medical diagnosis and treatment decisions. This advisory:
Provides guidance under California law (i.e., consumer protection, civil rights, data privacy, and professional licensing laws) for healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, and use AI and other automated decision systems;
Reminds such entities that AI carries risks of harm and that all AI systems must be tested, validated, and audited for safe, ethical, and lawful use; and
Informs such entities that they must be transparent about using patient data to train AI systems and alert patients to how they are using AI to make decisions affecting their health and/or care.
This is yet another example of how issues related to the safe and ethical use of AI will likely be at the forefront for many regulators across many industries.