Health-e Law Episode 16: Crossroads of Care: Navigating Executive Orders with Jonathan Meyer, former DHS GC and Partner at Sheppard Mullin [Podcast]

Welcome to Health-e Law, Sheppard Mullin’s podcast exploring the fascinating health tech topics and trends of the day. In this episode, Jonathan Meyer, a partner at Sheppard Mullin and Leader of the firm’s National Security Team, joins us again to discuss the early days of the new Trump administration and what might be on the horizon in terms of cybersecurity and data privacy.
What We Discussed in This Episode:

What can we expect from the new administration in relation to cybersecurity and data protection?
How do these concerns translate to healthcare, both in terms of managing our care and protecting our data?
What is Sheppard Mullin’s executive actions tracker, why does it matter, and how can listeners use it?
How is healthcare struggling with privacy and immigration, and how does this impact national security?


Exploring DORA: Potential Implications for EU and UK Businesses

On Jan. 17, 2025, EU Regulation 2022/2554 on digital operational resilience for the financial sector (DORA) became applicable in the EU.
DORA focuses on risk management and operational resilience, with particular emphasis on vendor risk management, incident management and reporting, and resilience testing of key systems.
DORA applies to financial institutions that are authorized to provide financial services in the EU and is designed to strengthen their IT security and operational resiliency.
It is worth noting, particularly for UK financial institutions, that DORA does not apply directly to organizations (including UK organizations) that provide non-regulated services in the EU financial services industry. However, if a UK organization provides any IT-related services to an EU financial institution, it may be classified as an information and communication technology (ICT) third-party service provider under DORA. Depending on the nature of the organization and its services, it could be designated as a critical ICT third-party service provider, in which case it would have direct compliance obligations under DORA (including implementing a comprehensive governance and control framework to manage IT and operational resiliency risk).
As a high-level summary, financial institutions subject to DORA must:

Create and maintain a register of vendors (ICT third-party service providers) and report relevant information from the register to financial authorities annually.
Implement comprehensive security incident reporting obligations, requiring an initial notification within four hours of classifying an incident as major and, in any event, no later than 24 hours after becoming aware of the incident. Follow-up reporting obligations also apply. 
Implement post ICT-related incident reviews after a major ICT-related incident disrupts core activities.
Implement and maintain a sound, comprehensive, and well-documented ICT risk management framework, which must include appropriate audits.
Establish and maintain a sound and comprehensive digital operational resilience testing program, which for critical functions must involve penetration testing.
Clearly allocate, in writing, the financial entity’s rights and obligations when engaging with ICT third-party service providers, including mandatory DORA contractual provisions.
Adopt and maintain a strategy on ICT third-party risk.
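The initial-notification timing described above can be sketched as a small deadline calculator. This is an illustrative sketch only: the function names are invented, the rule is simplified to two fixed offsets, and it is not an official DORA compliance tool.

```python
from datetime import datetime, timedelta

# Illustrative sketch: under DORA, the initial notification for a major ICT
# incident is due within 4 hours of classifying the incident as major and no
# later than 24 hours after becoming aware of it. The earlier limit binds.
# Names and structure are assumptions for illustration only.
def initial_notification_deadline(aware_at: datetime, classified_at: datetime) -> datetime:
    return min(classified_at + timedelta(hours=4),
               aware_at + timedelta(hours=24))

aware = datetime(2025, 3, 1, 9, 0)        # incident detected
classified = datetime(2025, 3, 1, 20, 0)  # classified as major 11 hours later
print(initial_notification_deadline(aware, classified))
# Here the 4-hours-after-classification limit expires first (midnight, March 2).
```

Whichever of the two clocks expires first controls, so late classification does not extend the outer 24-hour limit.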

As discussed above, ICT third-party service providers delivering services to financial entities will also be subject to DORA obligations. The nature of these obligations, and whether the ICT third-party service provider falls directly under DORA, will depend on various factors, including how critical the ICT service provider is to the EU financial services ecosystem, the nature of the functions being supported, and the services being provided. With that said, all ICT third-party service providers will be subject to contractual obligations resulting from the requirement for in-scope financial entities to flow down certain obligations to their service providers under DORA.
In light of the above, UK organizations providing services in the EU should carefully consider whether they fall directly under DORA in their capacity as a financial institution, and/or whether their services may cause them to be considered an ICT third-party service provider.

Light at the End of the Tunnel – Are You Ready for the New California Privacy and Cybersecurity Rules?

After what seems like forever, the most recent (and last?) public comment period for the draft California Consumer Privacy Act (CCPA) regulations finally closed on February 19, 2025. (Read Privacy World coverage here and here.) 
Following an initial public comment period on an earlier draft, the formal comment period for the current version of the proposed CCPA regulations (Proposed Regulations) began on November 22, 2024. The Proposed Regulations include amendments to the existing CCPA regulations and new regulations on automated decision-making technology, profiling, cybersecurity audits, requirements for insurance companies and data practice risk assessments. The California Privacy Protection Agency (CPPA) may either submit a final rulemaking package to the California Office of Administrative Law (OAL, which confirms statutory authority) or modify the Proposed Regulations in response to comments received during the public comment period.
If the CPPA proposes new changes to the Proposed Regulations, a new 15-day comment period follows. During the 15-day period, new comments must relate only to the CPPA’s newly proposed changes. This process repeats until the CPPA submits its final rulemaking package to the OAL. The OAL has up to 30 business days to review and approve the CPPA’s final rulemaking package. Once the OAL approves, the effective date of the Proposed Regulations (Effective Date) is determined by § 11343.4(b)(3) of the California Government Code.
We are hopeful that the CPPA and OAL will issue final regulations by this summer. Once final, some requirements apply as of the Effective Date and others phase-in for up to 24 months after the Effective Date. This means that, even though the CPPA could further modify the Proposed Regulations, the immediate effectiveness of parts of the Proposed Regulations calls for businesses to start their preparations now.
We addressed the notable amendments to the existing CCPA regulations in a prior post. We offer a quick summary of the new requirements and compliance timing, as well as a checklist to help jump-start the compliance process below. All references to section numbers and compliance dates relate to the Proposed Regulations. (Privacy World will consider the requirements for insurance companies in a future post.)
For more detailed guidance on complying with the current CCPA regulations and the Proposed Regulations, Squire Patton Boggs Services Ireland, Limited, Ankura Consulting Group, LLC and Exterro, Inc. have developed assessment templates, checklists and comparison charts that are available for license as non-legal services.[1] 

What Are the 2025 Proposed Regulations? How Do They Compare to Other States’ Obligations?
When Do We Need to Comply With Which Parts of the Proposed Regulations?

 The Proposed Regulations amend the existing regulations and add new requirements covering automated decision-making technology (ADMT), profiling, cybersecurity audits, requirements for insurance companies and data practice risk assessments. 
Many states regulate profiling, but none has done so as robustly as the CPPA proposes. Most states have whole or partial exemptions for insurance businesses. While assessment requirements are included in most of the other state consumer privacy laws, the Proposed Regulations exceed the other states’ requirements in operational and reporting requirements. The cybersecurity audit requirements in the Proposed Regulations are unique to California. Also, only California regulates personal information in business-to-business and human resources contexts, which makes the scope of the Proposed Regulations broader than in other states’ consumer privacy laws. 
Assessment requirements under other state consumer privacy laws include: 
● Virginia – Assessments are required for processing activities conducted or generated after January 1, 2023.
● Colorado, Connecticut and Florida – Assessments are required for processing activities conducted or generated after July 1, 2023. (The Colorado consumer privacy law prescribes detailed requirements on how to conduct and document assessments.)
● New Hampshire, Oregon, Tennessee[2] and Texas – Assessments are required for processing activities created or generated after July 1, 2024.
● Montana – Assessments are required for processing activities that occur on or after January 1, 2025.
● Nebraska – Assessments are required as of the effective date of the law, January 1, 2025.
● New Jersey – Assessments are required for processing activities that involve personal information acquired on or after January 15, 2025.
● Delaware – Assessments are required for processing activities conducted or generated on or after July 1, 2025.
● Minnesota – Assessments are required as of the effective date of the law, July 31, 2025.
● Maryland – Assessments are required for processing activities that occur on or after October 1, 2025.
● Indiana – Assessments are required for processing activities created or generated after December 31, 2025.
● Rhode Island – Assessments are required for processing activities that occur on or after January 1, 2026.
● Kentucky – Assessments are required for processing activities created or generated on or after June 1, 2026. 
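For tracking purposes, the state-by-state trigger dates above can be captured in a simple lookup table. The sketch below is illustrative only: the names are invented, each statute's "on or after" versus "after" boundary is simplified to a single comparison, and the dates should be verified against the statutes themselves.

```python
from datetime import date

# Illustrative lookup of assessment-trigger dates under other state consumer
# privacy laws, as summarized above. Boundary treatment ("after" vs. "on or
# after") varies by statute and is simplified here; verify before relying.
ASSESSMENT_TRIGGER_DATES = {
    "Virginia": date(2023, 1, 1),
    "Colorado": date(2023, 7, 1),
    "Connecticut": date(2023, 7, 1),
    "Florida": date(2023, 7, 1),
    "New Hampshire": date(2024, 7, 1),
    "Oregon": date(2024, 7, 1),
    "Tennessee": date(2024, 7, 1),
    "Texas": date(2024, 7, 1),
    "Montana": date(2025, 1, 1),
    "Nebraska": date(2025, 1, 1),
    "New Jersey": date(2025, 1, 15),
    "Delaware": date(2025, 7, 1),
    "Minnesota": date(2025, 7, 31),
    "Maryland": date(2025, 10, 1),
    "Indiana": date(2025, 12, 31),
    "Rhode Island": date(2026, 1, 1),
    "Kentucky": date(2026, 6, 1),
}

def assessment_required(state: str, activity_date: date) -> bool:
    """True if a processing activity on activity_date falls within the state's
    assessment window, per the simplified table above."""
    trigger = ASSESSMENT_TRIGGER_DATES.get(state)
    return trigger is not None and activity_date >= trigger
```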
Despite some material differences, the Proposed Regulations most closely resemble the assessment requirements in the Colorado Privacy Act; other state consumer privacy laws do not impose assessment requirements in comparable detail. Businesses may wish to consider the assessment requirements in the California or Colorado consumer privacy laws – or even the European Data Protection Board guidelines under GDPR (which seem to have influenced Colorado and California) – as benchmarks.  
REQUIREMENTS THAT APPLY AS OF THE EFFECTIVE DATE 
Notice of Use of ADMT
● A business must provide a “Pre-use Notice” before the business processes a consumer’s personal information (1) using ADMT for a “significant decision” (§ 7200(a)(1)) or for “extensive profiling” (§ 7200(a)(2)), or (2) for training uses of ADMT that is capable of use (a) for a significant decision, (b) to establish individual identity, (c) for physical or biological identification or profiling, or (d) for the generation of a “deepfake.” § 7220(a)(3).
● A business that uses ADMT to make a significant decision adverse to a consumer must provide the consumer with notice of the consumer’s “right to access ADMT” (§ 7001(vv)) as soon as feasibly possible but no later than 15 business days after the date of the adverse significant decision. § 7222(k). 
Request to Opt-Out of ADMT
● If a consumer submits a “request to opt-out of ADMT” before the business has initiated that processing, the business must not initiate the processing. § 7221(m).
● If a consumer submits a request to opt-out of ADMT after the business has initiated that processing, and none of the Opt-out Exceptions (defined below) apply, then the business must cease processing the consumer’s personal information as soon as feasibly possible but no later than 15 business days after the date of receipt of the consumer’s request (the same timing as the opt-out of sale/sharing). § 7221(n).
● Opt-Out Exceptions include security, fraud prevention, and safety; certain admission, acceptance, or hiring decisions; assignment of work and compensation decisions; educational profiling; and a method provided by the business for a consumer to appeal the ADMT decision to a qualified human reviewer who has the authority to overturn the decision (among others). § 7221(b). 
Request to Access ADMT and Right to Appeal ADMT
● No later than 10 business days after receipt of either a “request to access ADMT” (§ 7001(vv)) or a “request to appeal ADMT” (§ 7001(nn)), a business must confirm receipt of the request. (The request to appeal ADMT applies only if the business provides the human appeal right instead of the ADMT opt-out right.) § 7021(a).
● No later than 45 calendar days after receipt of the request, a business must respond to a request to access ADMT and a request to appeal ADMT, subject to a 45-calendar-day extension. § 7021(b).
● These timing requirements are the same as for requests to delete, correct and know. 
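The confirmation and response clocks above can be sketched as a small deadline calculator. The sketch is illustrative only: the function names are invented, and it assumes weekends are the only non-business days (holidays are ignored for simplicity).

```python
from datetime import date, timedelta

# Illustrative deadline calculator for ADMT access/appeal requests under the
# Proposed Regulations: confirm receipt within 10 business days, respond
# within 45 calendar days (90 if the business properly extends). Weekends are
# treated as the only non-business days; names are assumptions, not official.
def add_business_days(start: date, days: int) -> date:
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

def request_deadlines(received: date, extended: bool = False) -> dict:
    return {
        "confirm_by": add_business_days(received, 10),
        "respond_by": received + timedelta(days=90 if extended else 45),
    }
```

Because the confirmation clock runs in business days and the response clock in calendar days, the two deadlines must be tracked separately.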
Evaluation and Policy Requirements for Physical or Biological Identification or Profiling
● When a business uses “physical or biological identification or profiling” (PBIP) (§ 7001(gg)) for a significant decision or for extensive profiling, the business must evaluate the PBIP to ensure that the technology works as intended for the business’s proposed use and does not discriminate based on protected classes. § 7201(a)(1).
● The business also must implement policies, procedures and training to ensure that the PBIP works as intended for the business’s proposed use and does not discriminate based on protected classes. § 7201(a)(2). 
REQUIREMENTS THAT PHASE IN UP TO 24 MONTHS AFTER THE EFFECTIVE DATE 
Cybersecurity Audits
● A business has up to 24 months after the Effective Date to complete its first cybersecurity audit. § 7121(a).
● A cybersecurity audit is required if a business’s processing presents “significant risk to consumers’ security.” § 7120(b). 
Risk Assessments
● For any processing activity requiring an assessment that the business initiated prior to the Effective Date and continues after the Effective Date, the business must conduct and document a risk assessment within 24 months after the Effective Date. § 7155(c).
● For processing activities conducted after the Effective Date, a business has 24 months to make its first submission of the risk assessment materials (compliance certificates and assessment summaries) to the CPPA. § 7157(a)(1).
● A risk assessment is required when a business’s processing “presents significant risk to consumers’ privacy.” § 7150(a).

CHECKLIST
The high-level checklist below is for educational purposes to help you prepare for the Proposed Regulations.
I. Automated Decision-making Technology and Related Processing
Consider whether the following apply if the business is engaging in ADMT or PBIP for a significant decision or extensive profiling:

If using ADMT (i) for a significant decision; (ii) for extensive profiling; or (iii) for training uses of ADMT that is capable of being used for a significant decision, to establish individual identity, for PBIP, or for the generation of a deepfake (§ 7200):

Provide consumers with a Pre-use Notice or a consolidated Pre-use Notice (i.e., a Pre-use Notice that addresses the use of ADMT for multiple purposes, or the use of multiple ADMTs) that meets the content requirements of § 7220(b)-(d). 
Update the business’s privacy policy to provide consumers the new right to opt-out of ADMT.

If using ADMT for (i) a significant decision or (ii) extensive profiling: Update the business’s privacy policy to provide consumers the notice of the right to access ADMT. If ADMT is used solely for training uses of the ADMT, then the business is not required to respond to a request to access ADMT, but the business still must comply with a consumer’s request to know (per § 7204). § 7222(a).
If providing a right to appeal ADMT to a qualified human reviewer instead of a right to opt-out: Update the business’s privacy policy to provide consumers the right to appeal ADMT. § 7221(b)(2).
Establish procedures to (1) confirm the business’s receipt of a request to access ADMT or request to appeal ADMT within 10 business days after receipt of the request and provide information about how the business will process the request, (2) respond to a request to access or appeal ADMT within 45 calendar days, or 90 calendar days if the business properly extends its response period, and (3) provide all information required by § 7222 to consumers who request to access ADMT, which includes the purpose(s) for using ADMT, outputs of ADMT, how the business used outputs and the logic (i.e., operational details) of the ADMT.
Ensure that the business stops processing a consumer’s personal information for ADMT within 15 business days after the date that the consumer’s request to opt-out is received unless an Opt-out Exception applies. (A business must always provide the right to opt-out for use of ADMT for profiling for behavioral advertising or for training uses of ADMT). § 7221(b)(6).
Conduct the required evaluation of PBIP used for a significant decision or for extensive profiling and implement all required policies, procedures, and trainings to ensure that the PBIP works as intended for the business’s proposed use and does not discriminate based on protected classes. (N.b., this evaluation requirement is different from a risk assessment and is not subject to the 24-month phase-in.) § 7201.
Conduct a full risk assessment (see Section II below) if the business uses ADMT for a significant decision or extensive profiling or processes personal information to train ADMT or AI that is “capable of being used” (a) for a significant decision, (b) to establish individual identity, (c) for PBIP, (d) for the generation of a deepfake or (e) for operation of “generative models.” § 7150(b).

II. Risk Assessments

Determine whether a risk assessment is needed because the processing of personal information “presents significant risk to consumers’ privacy” (§ 7150(a) – (b)), which means:

“Selling” or “sharing” personal information
Processing sensitive personal information
Using ADMT for a significant decision
Using ADMT for extensive profiling
Processing personal information to train ADMT or AI for any of the following uses: for a significant decision, PBIP, generation of a deepfake, operation of “generative models,” or to establish individual identity

Conduct and document risk assessments, for processing activities underway as of the Effective Date, within 24 months after the Effective Date. § 7155(c).
Ensure that internal and external stakeholders contribute to or review the risk assessment according to their level of involvement with the data processing. § 7151(a).
Ensure that the risk assessment meets all of the relevant content requirements set forth in § 7152, including:

Purpose(s) for processing consumers’ personal information
Categories of personal information, including sensitive personal information, to be processed and other information about the quality of personal information as discussed in § 7152(a)(2)
Operational elements of the data processing, including the seven elements identified in § 7152(a)(3), such as the ADMT’s “built-in” assumptions, limitations, parameters and other elements of the “logic”
Benefits of the data processing to the business, the consumer, other stakeholders and the public, as well as the negative impacts to consumers’ privacy (consider the nine examples provided in § 7152(a)(5))
Safeguards that the business will implement to address the negative impacts to consumers’ privacy, considering the four examples provided in § 7152(a)(6)(A) and specific questions related to use of ADMT in § 7152(a)(6)(B)
Whether the business will initiate the data processing subject to the risk assessment (i.e., do the benefits outweigh the risks as mitigated by the safeguards?)
All contributors to the risk assessment and dates of review and approval
All additional inquiries related to processing to train ADMT or AI, as per § 7153

Within 24 months after the Effective Date (unless an exemption applies), complete each required risk assessment (“first submission”) and submit, via the CPPA’s website, the first annual certification of conduct and abridged risk assessment (on a form to be provided by the CPPA). § 7157(b).
Prepare to provide, within 10 days after a request from the CPPA or California Attorney General, an unabridged version of each risk assessment due within 24 months after the Effective Date. § 7157(d).
Review and update each risk assessment at least once every three years or when a material change to the data processing is planned. § 7155.

III. Cybersecurity Audits

Determine if the business’s processing of personal information presents significant risk to consumers’ security and complete a cybersecurity audit if the business (i) derived 50% or more of its revenues in the preceding calendar year from selling or sharing California residents’ personal information, or (ii) in the preceding calendar year, had global gross annual revenue of over US$25 million (as adjusted by the CCPA for inflation) and either (a) processed the personal information of 250,000 or more California residents or households, or (b) processed the sensitive personal information of 50,000 or more California residents. § 7120(b).
Complete a cybersecurity audit using a qualified, objective and independent auditor within 24 months after the Effective Date and annually thereafter. § 7121, § 7122.
Ensure that the cybersecurity audit contains the required content. § 7122(d)-(i), § 7123.
Starting two years after the Effective Date, submit to the CPPA each calendar year a written certification that the business completed a compliant cybersecurity audit. § 7121(a), § 7124(a).
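The § 7120(b) applicability thresholds in the first step above can be expressed as a simple check. The sketch is illustrative only: the function and parameter names are invented, and the US$25 million figure is subject to the CCPA's inflation adjustment.

```python
# Illustrative check of whether processing "presents significant risk to
# consumers' security" under § 7120(b) of the Proposed Regulations, which
# triggers the cybersecurity audit requirement. Names are assumptions.
def cybersecurity_audit_required(
    revenue_share_from_selling_or_sharing: float,  # fraction of prior-year revenue
    global_gross_revenue_usd: float,               # preceding calendar year
    ca_consumers_processed: int,                   # CA residents or households
    ca_sensitive_consumers_processed: int,         # CA residents
) -> bool:
    # Prong (i): 50% or more of revenue from selling/sharing CA personal info.
    if revenue_share_from_selling_or_sharing >= 0.5:
        return True
    # Prong (ii): revenue over US$25M (as adjusted for inflation) plus either
    # volume threshold (a) or sensitive-data threshold (b).
    if global_gross_revenue_usd > 25_000_000:
        return (ca_consumers_processed >= 250_000
                or ca_sensitive_consumers_processed >= 50_000)
    return False
```

A business that clears either prong would then work backward from the Effective Date to schedule its first audit within the 24-month window.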

In addition to the compliance steps outlined above, meeting the consumer rights, evaluation and assessment obligations in the Proposed Regulations will also require careful diligence of, and contracting with, technology providers and processors, particularly for recruitment and employment practices, which are the most likely to generate significant decisions and risks. These upcoming requirements remain under the radar of many HR departments.
The authors are grateful for the assistance of Mary Aldrich, Paralegal (New York).

[1] DISCLAIMER — PRIVACY POWERED BY SQUIRE PATTON BOGGS:™ (1) Provided as educational reference material and not legal advice; and (2) There is no attorney-client relationship with Squire Patton Boggs unless a written attorney-client engagement agreement is entered into with Squire Patton Boggs. Use of licensed materials is subject to the terms of the license between the end user and licensor Squire Patton Boggs Services Ireland, Limited, including limiting access and use to the licensee. Consult legal counsel with regard to use of the materials. ©2025 Squire Patton Boggs Services Ireland, Limited. All rights reserved.
While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.
[2] The Tennessee law is effective July 1, 2025, but assessment obligations are for activities commencing July 1, 2024.

Minnesota Court Rules Websites are Public Accommodations under ADA

Joining a number of courts across the country that have ruled similarly, the District Court for the District of Minnesota recently held that the Americans with Disabilities Act’s (ADA) prohibition against discrimination in “places of public accommodation” applies to websites. In Frost v. Lion Brand Yarn Company, the plaintiffs, who are both legally blind, asserted that the functionality of the defendant’s retail website was “limited,” at best, for individuals with vision-related disabilities. The plaintiffs, who have filed numerous similar cases against other national retailers, filed a class action lawsuit in federal court, alleging that the defendant violated Title III of the ADA, as well as state law, by failing to provide its website’s content and services in a manner that is compatible with screen reading aids.

Quick Hits

A federal court in Minnesota recently ruled that the ADA’s “public accommodations” provision applies to websites, aligning with other courts that have made similar decisions.
The plaintiffs who filed the suit claimed that the defendant’s website was not accessible to individuals with vision-related disabilities.
The court rejected the argument that the ADA only applies to physical places of public accommodation, emphasizing the law’s broad evolving nature and denying the defendant’s motion to dismiss the case.

The defendant filed a motion to dismiss the lawsuit, arguing that Title III of the ADA only applies to “places of public accommodation,” and that a website is not a “place.” Although the issue was one of first impression in the Eighth Circuit Court of Appeals, which includes Minnesota, several other circuit courts have addressed it: the Third, Sixth, and Ninth Circuits have ruled that places of public accommodation include only places with physical structures, while the First and Seventh Circuits have ruled that the law is not so limited. Those circuit court decisions all involved insurance benefits, however, rather than websites. In contrast, many federal district courts across the country have addressed the issue specifically as to websites but have issued inconsistent decisions.
The Minnesota District Court analyzed the issue by examining, and rejecting, those cases from other jurisdictions that had held “places of public accommodation” did not include websites. The court concluded that the circuit courts’ “physical structure” requirement was dicta, i.e., not essential to the decisions and therefore not entitled to any deference. Moreover, to the extent that those decisions were on point, the court respectfully disagreed with them for several reasons.
First, the court reasoned, by reading the ADA so narrowly, those courts had failed to consider the law’s broad, remedial nature. Second, the court noted that Congress had not expressly limited the law to places with physical structures. Third, the court rejected any reliance on dictionary definitions of “place,” finding those definitions to be inconclusive. In addition, the court noted that the legislative history of the ADA, which Congress enacted prior to the advent of the internet, indicated that lawmakers intended the act to “adapt to changes in technology.” Finally, the court found it insignificant that Congress has failed to amend the ADA to expressly include websites, noting that the lack of any amendment “could just as easily reflect Congress’ understanding that no amendment was necessary.”
Based on those considerations, the court agreed with those courts that have held that a stand-alone website falls within the meaning of a “place of public accommodation” as defined in Title III of the ADA. Therefore, the court denied the defendant’s motion to dismiss the case.
Key Takeaways
While district court decisions are not binding in other jurisdictions, or even on other district court judges in the same district for that matter, this Minnesota case is an example of what the court described as a “growing number” of district courts that have issued similar holdings. Certainly, the case sends a strong message to businesses whose goods or services are available to online shoppers in Minnesota, regardless of the business’s location: if their websites do not function properly for visually impaired consumers using screen-reader technology, they could be named as defendants in a class action lawsuit, particularly given the litigious nature of certain “serial plaintiffs” in such Title III cases. Businesses that sell their products or services nationwide via websites may want to audit those sites to make sure they function smoothly with such technology, to try to avoid that risk.

FTC Requests Public Comments on Technology Platform Censorship

In one of its first actions after establishing new leadership, the United States Federal Trade Commission has issued a request for public comment “to better understand how technology platforms deny or degrade (such as by ‘demonetizing’ or ‘shadow banning’) users’ access to services based on the content of users’ speech or their political affiliations, including activities that take place outside the platform.” The request for comment is open now and closes on May 21, 2025.
The request, which expressly inquires about policies of platforms including social media, video sharing, ride sharing, event planning, and other internet services, follows an Executive Order (EO) entitled “Restoring Freedom of Speech and Ending Federal Censorship.” The EO responds to concerns that the Biden Administration, during COVID and other political events, pressured popular internet platforms into suppressing content with which it disagreed. In a June 2024 decision, however, the US Supreme Court rejected a legal claim brought by two states and seven individuals alleging that they were banned from social media platforms due to undue governmental influence.
The FTC’s request seeks to resurrect these claims by soliciting evidence that the platforms implemented either express or tacit political judgments in their decisions to restrict users’ access to certain content. Although it seems quite clear that platforms may implement contractual terms of service banning certain kinds of speech, it would be potentially problematic if those platforms changed the rules midstream by surreptitiously engaging in what is commonly called “shadow banning.” Shadow banning refers to the alleged practice of platforms downgrading or restricting access to putatively controversial posts – often without informing the original poster. This can cause the poster, who thinks their post is “live,” to see reduced engagement or reach, which can negatively impact their business prospects.
The truth is that social media platforms have created a new class of entrepreneurs who are compensated based on how many platform visitors see and engage with their original content. The more controversial the content, the more views it is likely to generate – and in the social media algorithms, this will typically cause the content to be elevated. For example, testimony and evidence in the Sandy Hook parents’ lawsuit against Alex Jones purportedly showed that his revenues increased by about 500% after he aired a show claiming that the Sandy Hook massacre was a hoax.
The platform will sell and place advertising alongside the more engaging content, promising advertisers that this will net greater viewership and, presumably, higher conversion rates. The platforms may compensate high-engagement-achieving posters. When a platform bans (either expressly or silently) their posts, however, this can harm their “business” of creating engagement.
Whether shadow banning actually happens is hard to pinpoint. No platform currently admits that it engages in the practice, even though this might be a less destructive way to handle inflammatory content than outright de-platforming. What the FTC is clearly after here, however, is evidence from the field regarding this practice and the entry of orders that would reverse such policies if they exist. Part 1 consists of gathering evidence – a prerequisite to further action. Part 2 may target the platforms.
The EO and FTC request are likely overzealous in their references to “censorship.” Speech bans that violate the First Amendment are typically governmental, not private. What this request seems to focus on, however, are possibly undisclosed restrictions on speech that are nevertheless imposed upon users without warning – a classic “unfair” practice if they actually contradict the contracts embodied in the terms of service or written platform policies. Thus, the FTC is looking more to fairness in contract implementation than to speech per se. The request for comment is likely to generate substantial interest from disgruntled social media entrepreneurs who believe they were subject to unfair actions by the platforms.

U.S. Shifts AI Policy, Calls for AI Action Plan

Highlights

The U.S. has signaled a cautious approach to AI policy and regulation by declining to enter an international agreement and by withdrawing the previous regulatory framework
A new request for information seeks broad input from industry, academia, government, and other stakeholders

The U.S. has taken significant steps to reshape its artificial intelligence (AI) policy landscape. On Jan. 20, 2025, the administration issued an order revoking Executive Order 14110, originally signed on Oct. 30, 2023. This decision marks a substantial shift in AI governance and regulatory approaches. On Feb. 6, 2025, the government issued a request for information (RFI) from a wide variety of industries and stakeholders to solicit input on the development of a comprehensive AI Action Plan that will guide future AI policy.
As part of this initiative, the government is actively seeking input from academia, industry groups, private-sector organizations, and state, local, and tribal governments. These stakeholders are encouraged to share their insights on priority actions and policy directions that should be considered for the AI Action Plan. Interested parties must submit their responses by 11:59 p.m. ET on March 15, 2025.
Executive Order 14110 was designed to establish a broad regulatory framework for AI, emphasizing transparency, accountability, and risk mitigation. The revoked order required organizations engaged in AI development to adhere to specific reporting obligations and public disclosure mandates. The order affected a wide range of stakeholders, including technology companies, AI developers, and data center operators, all of whom had to align with the prescribed compliance measures. With the Jan. 23 Executive Order 14179, organizations must now reassess their compliance obligations and prepare for potential new frameworks that could take the place of the previous Executive Order 14110. 
However, the RFI presents an opportunity to participate in the formation of new AI policies and regulations. The new order and the RFI seek input on AI policies and regulations directed toward maintaining U.S. prominence in AI development. Consequently, potentially burdensome requirements seem unlikely to emerge in the near term.
On the international front, the U.S. administration’s decision not to sign the AI Safety Declaration at the recent AI Action Summit in Paris further avoids potential international barriers to AI development in the U.S. This, together with the issuance of the RFI, seems to signal caution in development of an AI Action Plan that will drive policy through stakeholder engagement and regulatory adjustments.
The AI Action Plan is intended to establish strategic priorities and regulatory guidance for AI development and deployment. It aims to ensure AI safety, foster innovation, and address key security and privacy concerns. The scope of the plan is expected to be broad, covering topics such as AI hardware and chips, data centers, and energy efficiency.
Additional considerations will include AI model development, open-source collaboration, and application governance, as well as explainability, cybersecurity, and AI model assurance. Data privacy and security throughout the AI lifecycle will also be central to discussions, alongside considerations related to AI-related risks, regulatory governance, and national security. Other focal areas include research and development, workforce education, intellectual property protection, and competition policies.
Takeaways
Given these policy indications, organizations should take proactive steps to adapt to, and potentially contribute to, the evolving AI regulatory landscape. It is essential for businesses to remain aware of policy developments and engage in opportunities to help shape forthcoming AI policies. Furthermore, monitoring international AI governance trends will be crucial, as these developments may affect AI operations within the U.S.

Professionally Speaking February 2025

Professionally Speaking explores current topics of interest to general counsel, claims professionals and risk managers for various professional liability lines, including accountants, lawyers, design professionals, insurance brokers and others.
Read the Newsletter Here 

EDGAR Next: The Next Era in Filing

Introduction
On 27 September 2024, the Securities and Exchange Commission (SEC) adopted “EDGAR Next,” a collection of rule and form amendments intended to improve access to, and management of, accounts on the SEC’s filing portal, the Electronic Data Gathering, Analysis, and Retrieval system, or “EDGAR.” The collection of amendments includes amendments to Rules 10 and 11 of Regulation S-T, Form ID, and the EDGAR Filer Manual, Volume I. EDGAR Next is expected to have a disruptive effect on the SEC filing process, but ultimately result in a smoother overall filing system for all electronic filers, including public companies, investment funds, certain shareholders, Section 16 officers and directors, and filing agents. Compliance with EDGAR Next is required by 15 September 2025.
Background: EDGAR “Now”
EDGAR is the current system through which filers submit filings required by various federal securities laws to the SEC. Historically, EDGAR assigned each filer a set of access codes that could be used by different individuals to make submissions on the filer’s behalf. Specifically, filers are assigned central index keys, or CIKs, and a set of login credentials, including a password, passphrase, CIK confirmation code (CCC), and password modification authorization code (PMAC) (the EDGAR Codes). A first-time filer obtains EDGAR Codes by submitting a Form ID application through EDGAR, which sets up their account.
Looking Ahead: EDGAR Next
EDGAR Next aims to enhance the SEC’s investor protection mission by improving EDGAR’s security, enhancing management of EDGAR accounts by filers, and modernizing EDGAR connections.
Accordingly, EDGAR Next seeks to improve security by requiring individual account credentials to log in to EDGAR, allowing identification of the individual making each submission, and employing multifactor authentication. As a practical matter, EDGAR Next will require anyone attempting to act on behalf of a filer to (i) present individual account credentials obtained from Login.gov, a US government sign-in service, and (ii) complete multifactor authentication to access EDGAR accounts and submit filings. EDGAR Next’s access protocols will limit access to a filer’s account to only those individuals directly authorized by the filer and will require such individuals to have their own personal EDGAR accounts.
Additionally, EDGAR Next will continue using CIKs and CCCs but will no longer assign passwords, PMACs, and passphrases. As such, in order to access an EDGAR account, filers, or individuals authorized to file on the filer’s behalf, will need to log in to EDGAR using the credentials obtained from Login.gov, complete multifactor authentication, and enter the filer’s CIK and CCC. 
Per the SEC, EDGAR Next is meant to enhance filers’ ability to manage their EDGAR accounts by requiring filers to authorize at least two individuals (or one if the filer is an individual or single-member company) to manage their accounts on a new EDGAR Filer Management dashboard (the EDGAR Next Dashboard) as “account administrators.” Their duties are as follows:

Manage the filer’s EDGAR account;
Confirm annually on EDGAR that all individuals and entities reflected on the EDGAR Next Dashboard for its EDGAR account are authorized by the filer to act on its behalf, and that all information about the filer on the dashboard is accurate;
Maintain accurate and current information on EDGAR concerning the filer’s account, including but not limited to accurate corporate and contact information; and
Securely maintain information relevant to the ability to access the filer’s EDGAR account, including but not limited to access through any EDGAR Application Programming Interfaces (APIs).

Additionally, EDGAR Next will roll out optional APIs, which will allow filers to make submissions, retrieve information, and perform account management tasks on a machine-to-machine basis. The optional APIs are meant to enhance the efficiency and speed of many filers’ interactions with EDGAR.
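The EDGAR Next filer-management APIs are distinct from the SEC’s existing public data APIs, and their endpoints are not described here. As a rough illustration of machine-to-machine interaction with EDGAR, the minimal Python sketch below uses the SEC’s existing public submissions endpoint (the zero-padded, 10-digit CIK format is part of that API; the SEC asks automated clients to send a descriptive User-Agent header):

```python
import urllib.request


def submissions_url(cik: str) -> str:
    """Build the URL for the SEC's public EDGAR submissions API.

    EDGAR identifies each filer by a central index key (CIK); the JSON
    endpoint expects the CIK zero-padded to 10 digits.
    """
    return f"https://data.sec.gov/submissions/CIK{int(cik):010d}.json"


def fetch_submissions(cik: str, user_agent: str) -> bytes:
    """Retrieve a filer's submission history.

    The SEC requires a descriptive User-Agent (e.g. a name and email
    address) on automated requests.
    """
    req = urllib.request.Request(
        submissions_url(cik), headers={"User-Agent": user_agent}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

For example, `submissions_url("320193")` (Apple Inc.’s CIK) yields `https://data.sec.gov/submissions/CIK0000320193.json`. Note that this illustrates retrieval only; the EDGAR Next submission and account-management APIs will be governed by the access controls described above.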
Key Dates
Adopting Beta: 30 September 2024–19 December 2025
It is never too early to start preparing to comply with EDGAR Next. The “Adopting Beta” launched on 30 September 2024, and will remain live until at least 19 December 2025, giving filers and authorized users ample time to get comfortable with EDGAR Next in a testing environment that is separate from the actual EDGAR system.
EDGAR Next Dashboard: 24 March 2025
The EDGAR Next Dashboard will go live on 24 March 2025 (while still allowing the submission of filings in accordance with the legacy EDGAR filing process until 12 September 2025). Existing filers will obtain access by enrolling in EDGAR Next on the EDGAR Next Dashboard while new filers (and existing filers unable to enroll) must apply for EDGAR access by completing the new amended Form ID (also on the EDGAR Next Dashboard), the application for access to EDGAR. Existing filers or authorized persons will need to use their current EDGAR Codes (those used for the legacy EDGAR system) to enroll in EDGAR Next. 
EDGAR Next Deadlines: 15 September 2025 and 19 December 2025
Beginning 15 September 2025, compliance with EDGAR Next is required in order to submit filings. The legacy EDGAR system will remain available for enrollment purposes until 19 December 2025, after which the legacy EDGAR system will be deactivated altogether. Thus, filers that have not enrolled in EDGAR Next or received access through submission of an amended Form ID by 19 December 2025 must submit a new amended Form ID to request access to their existing accounts.
Key Tips and Takeaways
Below is a checklist of action items for filers and account administrators as they assess and plan their compliance efforts over the coming months. While enrollment does not begin until late March, filers and account administrators are encouraged to prepare well in advance of EDGAR Next’s official launch, including:

Take advantage of the Adopting Beta
Obtain Login.gov credentials
Gather current EDGAR Codes (CIKs / CCCs / Passphrases)
Determine your account administrators, users, and technical administrators
Identify individuals who have beneficial ownership reporting obligations with client entities (Section 16 and Form 144 filings, for example)
Contact financial printer (if applicable)
Review SEC guidance
Coordinate compliance efforts
Enroll on EDGAR Next (once live)

FTC COPPA Updates Provide New Protections for Children

In the waning days of the Biden administration, the FTC published an update to its COPPA Privacy Rule. The status of this update, however, is unclear. The revisions to the rule were posted on the FTC website prior to the Trump administration, but had not yet been published in the Federal Register.
First, Trump’s Presidential Memorandum freezing pending federal regulations means that the rule has not yet been published, and publication is the next step toward its going into effect. Second, and relatedly, the current FTC chair (Ferguson) had expressed concerns about the rule. It is thus likely that it will not be published, at least as currently drafted. As we wait for next steps, for those companies that offer websites directed to or appealing to children, a quick recap. Below are the items that were not of concern to Ferguson (and thus likely to be implemented as is):

Website notice (privacy policy). The content of website notice for those subject to COPPA under the rule as revised will require new content. This includes steps a site takes to make sure persistent identifiers used for operational purposes are not used for behavioral advertising. Additionally, for sites collecting audio files, the privacy policy must indicate how the files are used and deleted.
Verifiable parental consent. The revised rules provide for new methods of parental verification. This includes comparing a parent’s authenticated government ID against their face (using a camera app, for example). It also includes a “dynamic, multiple-choice” question approach, if the questions would be too hard for a child 12 or under to complete. The revision also permits texting for what has been traditionally known as the “email-plus” verification process, which can be used when children’s information is not disclosed. Also added is another “one time use” exception to parental consent. Namely collecting and responding to a question submitted by a child through an audio file.
Security. The new rule will require sites to have a written information security program. This goes beyond the current obligation to have “reasonable measures” in place. The security obligations are detailed, and mirror security obligations that exist under various state data security laws.
Definitions. As revised, the rule will add “biometric identifiers” to the list of personally identifiable information. These are elements like fingerprints or voiceprints that can be used to identify someone. The definition also includes someone’s “gait.” The rule will also include the definition of “mixed audience” site, a term currently used by the FTC in its COPPA FAQs.

Putting it into Practice: While we await the publication of the revised rules, whether in the format that they took before the new administration, or in a revised format, companies that operate websites subject to COPPA can keep in mind the parts of the new rule that were not of concern to Ferguson. These include new content in privacy policies.
 

Waffles, Passports and Trustee Directors – Part Two

Part one of this blog covered the new requirement for company directors (including trustee directors) and persons with significant control to verify their identity with Companies House. They will be able to do this voluntarily from 25 March 2025 (the week during which national cocktail-making day, national cleaning week and international waffle day will be celebrated in the US). This requirement is part of measures introduced under the Economic Crime and Corporate Transparency Act 2023 (ECCTA). But are these measures proportionate? Surely there can’t be that many companies in England and Wales registered for fraudulent purposes?
In 2023, the BBC reported that between June and September of that year alone, over 80 companies had been set up using the residential addresses of unsuspecting people living in the same street in Essex. Experts speculated that these companies had been registered in order to launder money or to take out bank loans before closing down the companies and disappearing.
In another case in March 2024, one individual managed to file 800 false documents at Companies House in a short space of time, which recorded the false satisfaction of charges registered by lenders against a total of 190 different companies. Having an accurate register of charges at Companies House is important because it governs the order of priority of payment of debts and, if a company is in financial difficulties, it influences the route by which administrators are appointed and to whom notice must be given.
Meanwhile, Tax Policy Associates, a not-for-profit company, has published details of its many investigations into fraudulent entities that have been able to set up and use UK registered companies as cover. The investigations it has carried out provide a fascinating insight into the magnitude of the problem. In a few quick steps, Tax Policy Associates demonstrates on its website how it was able to identify a £100 trillion fake company registered at Companies House. It has also highlighted a new scam letter being sent to directors of newly incorporated UK companies from “Company Registry” requiring them to pay a fee, which is one of the ways in which your personal data, published by Companies House, is being used by criminals.
If all this talk of fraud, and the ready availability of personal data filed at Companies House, is making you feel a bit uncomfortable then there is some potentially good news.
An individual whose residential address is or has been used as a registered office address (whether knowingly or unknowingly) can apply to have their residential address suppressed on Companies House records.
From summer 2025, individuals will be able to apply to have their date of birth appearing in documents that were filed before 10 March 2015 suppressed. (Since 10 March 2015, Companies House has only ever published the month and year of birth.) Documents containing personal data, such as directors’ appointment forms, continue to be publicly available even after you have ceased to act as a director of a company.
In a similar vein, from summer 2025, individuals will also be able to request that their business occupation and signature be suppressed in documents appearing at Companies House.
We do not have the detail around this yet, so it may be that the process and costs involved with redacting public documents might prove disproportionate for the majority of people. By way of example, the process for seeking to suppress a residential address involves identifying each document that needs to be redacted, completing a form and paying a £30 fee for each document that you want to get amended. Nor can you submit a subject access request to Companies House asking it to identify all documents that contain your personal data, because Paragraph 5 of Schedule 2 Part 1 of the Data Protection Act 2018 would likely exempt Companies House from this requirement. You would need to do the trawl yourself through a company’s filing history at Companies House.
It is to be hoped that in the not too distant future there will be some sort of AI tool to facilitate this process, so that a single request would result in the redaction of all sensitive personal data from Companies House’s publicly available records. Until then, suppressing personal data published at Companies House may prove a challenge, even once that facility becomes available. If you are interested in pursuing this, or would like further information or assistance, please speak with your usual SPB contact.
So, what will you be doing during the third week in March? Perhaps you will be celebrating the first anniversary of TPR’s general code of practice, which came into force on 28 March 2024. 

EUROPE: National Regulators Announce Digital Operational Resilience Act Reporting Windows

EU national supervisory authorities will collect the Register of Information (ROI) pursuant to the EU’s Digital Operational Resilience Act (DORA) from in-scope financial entities in April 2025, with the reference date set as 31 March 2025. ROIs are reports by in-scope EU financial entities on all contractual arrangements for the use of information and communication technology (ICT) services provided by ICT third-party service providers. The financial entity must differentiate between providers that are not critical and providers that are considered critical/important.
The Irish Central Bank has announced that it will collect the ROIs between 1 and 4 April 2025. The German BaFin has set 11 April 2025 as the deadline. In-scope financial entities across the EU should expect a similar process locally.
Under the Implementing Technical Standards on the Register of Information, information to be collected includes:

Identification of ICT third-party service providers (these must have either a valid LEI code or EU-ID for the files to pass validation);
Detail on the nature of the ICT services provided;
Detail on contractual arrangements;
Risk classification;
Monitoring and oversight mechanisms;
Sub-outsourcing arrangements; and
ICT-related incidents.
The European Supervisory Authorities have provided useful information, available online, on how to prepare to report the ROI. In Ireland, the Central Bank will publish a system guide to submitting the ROI in March 2025. The German BaFin has provided information on its website (in German).
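Because ROI files reportedly fail validation without a valid LEI for each provider, entities may wish to pre-screen their registers before submission. The LEI check-digit scheme (ISO 17442, applying ISO 7064 MOD 97-10) can be verified offline; below is a minimal Python sketch (function names are illustrative, and this checks format only, not whether the LEI is actually registered with GLEIF):

```python
import string

_ALPHANUM = set(string.digits + string.ascii_uppercase)


def lei_is_valid(lei: str) -> bool:
    """Check an LEI's format and ISO 7064 MOD 97-10 check digits.

    An LEI is 20 uppercase alphanumeric characters; letters map to
    10-35 (A=10 ... Z=35), and the resulting number must be congruent
    to 1 modulo 97.
    """
    if len(lei) != 20 or any(c not in _ALPHANUM for c in lei):
        return False
    numeric = "".join(str(int(c, 36)) for c in lei)
    return int(numeric) % 97 == 1


def lei_check_digits(prefix: str) -> str:
    """Compute the two check digits for an 18-character LEI prefix."""
    numeric = "".join(str(int(c, 36)) for c in prefix + "00")
    return f"{98 - int(numeric) % 97:02d}"
```

A registry lookup against GLEIF would still be needed to confirm the LEI exists and is active; this sketch only catches transcription errors before a file is rejected at validation.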

Draft Measures for Personal Information Protection Certification for Cross-Border Data Transfers Released for Public Comment

On January 3, 2025, the Cyberspace Administration of China (the “CAC”) released the Draft Measures for Personal Information Protection Certification for Cross-Border Data Transfers (the “Draft Measures”) for public comment. Following the Implementation Rules for Personal Information Protection Certification (the “Implementation Rules”) and the Cybersecurity Standards Practice Guidelines – Security Certification Specifications for Cross-Border Processing of Personal Information V2.0 (TC260-PG-20222A) in 2022, the Draft Measures provide additional details with respect to key aspects of the certification process, including its applicability, evaluation criteria, implementation process, use of certification results, and post-certification supervision.
Under China’s Personal Information Protection Law (“PIPL”), to transfer personal information (“PI”) abroad in a compliant manner, the relevant data processor must (1) obtain certification; (2) conduct a security assessment; or (3) execute a standard contract in accordance with the requirements of the PIPL. The Draft Measures outline the details of the certification process. The Security Assessment for Cross-Border Data Transfers (effective September 2022) provides guidelines for conducting the security assessment. The Standard Contract for Cross-Border Transfers of Personal Information (effective June 2023) presents the form of the standard contract.
Below is a brief overview of the key provisions of the Draft Measures.
1. When a Data Processor Should Obtain Certification
According to Article 4 of the Draft Measures, if the following conditions are met, a data processor can transfer PI abroad in a compliant manner by obtaining certification:

The data processor is not a critical information infrastructure operator (the “CIIO”);
The data being transferred does not involve important data;
Since January 1 of the current year, the cumulative volume of PI transferred overseas:

exceeds 100,000 individuals but is less than 1 million individuals (excluding sensitive PI); or
involves the sensitive PI of fewer than 10,000 individuals.

A notable addition in the Draft Measures is the explicit inclusion of foreign personal information processors under Article 3(2) of the PIPL as eligible entities for the certification mechanism. This means that when a foreign entity collects PI directly from individuals within China and wants to transfer and store such PI overseas, it can apply for the certification. Specifically, such an entity can authorize a designated representative or establish a specialized entity in China to assist with the certification process.
However, the Draft Measures do not clarify the specific requirements for these designated representatives or specialized entities, such as whether they must be an affiliate of the foreign PI processor.
We have prepared the following table to help a data processor/exporting entity determine which of the three mechanisms it must undergo to remain compliant when transferring PI overseas:
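The threshold logic can also be sketched as a small decision helper. This is a minimal, illustrative sketch: the certification branch encodes the Article 4 conditions quoted above, while the security-assessment and below-threshold branches reflect the CAC’s generally applicable cross-border thresholds, which are an assumption beyond the Draft Measures text; the function name is our own.

```python
def transfer_mechanism(
    is_ciio: bool,
    handles_important_data: bool,
    nonsensitive_pi_individuals: int,
    sensitive_pi_individuals: int,
) -> str:
    """Map a transfer scenario to a PIPL cross-border mechanism.

    Counts are cumulative numbers of individuals since January 1 of the
    current year. Certification eligibility follows Article 4 of the
    Draft Measures; the other branches assume the CAC's generally
    applicable thresholds.
    """
    # CIIOs and important-data transfers always require a security assessment.
    if is_ciio or handles_important_data:
        return "security assessment"
    # Above the Article 4 ceilings, a security assessment is required.
    if nonsensitive_pi_individuals >= 1_000_000 or sensitive_pi_individuals >= 10_000:
        return "security assessment"
    # Within the Article 4 band, certification (or a standard contract)
    # is available.
    if nonsensitive_pi_individuals > 100_000 or sensitive_pi_individuals > 0:
        return "certification or standard contract"
    return "no mechanism required (below thresholds)"
```

For instance, a non-CIIO processor that has transferred non-sensitive PI of 500,000 individuals this year would fall in the certification/standard-contract band, while any transfer of important data would trigger a security assessment.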

2. Certification Standards and Rules
Article 7 of the Draft Measures stipulates that CAC, in coordination with relevant authorities, will formulate standards, technical rules, and assessment procedures for PI protection certification for cross-border data transfers.
According to the Implementation Rules, currently such standards and technical rules include:

Information Security Technology—Personal Information Security Specification (GB/T 35273-2020)
Cybersecurity Standards Practice Guidelines – Security Certification Specifications for Cross-Border Processing of Personal Information V2.0 (TC260-PG-20222A)

3. Key Certification Requirements 
Article 10 of the Draft Measures outlines the key assessment criteria for PI protection certification for cross-border data transfers. These criteria fall into three categories: 

Compliance of Cross-Border PI Transfers – Evaluating whether the transfer of PI aligns with applicable laws and regulations. 
PI Protection Level of Overseas Processors and Recipients – Assessing the data protection capabilities of overseas PI processors and recipients, as well as the legal, policy, and cybersecurity environment in their respective countries or regions. 
Legally Binding Agreements and Organizational Safeguards – Reviewing the legally binding agreements between the PI processor and the overseas recipient, as well as their organizational structure, management systems, and technical measures to ensure PI protection. 

4. Certification Bodies 
Under Article 8 of the Draft Measures, professional certification bodies that meet the required qualifications to conduct PI protection certification for cross-border data transfers must complete a record-filing procedure with CAC. 
Currently, China Cybersecurity Review, Certification and Market Regulation Big Data Center (the “CCRC”) is the only officially recognized PI protection certification body in China. However, as the regulatory framework continues to develop, more certification bodies may become available in the future. 
According to a report issued by CCRC, as of February 2025, CCRC had received over 100 certification applications and had issued PI protection certifications to 7 entities. [i]
The Draft Measures are still open for public comment. We will continue monitoring regulatory developments with respect to the certification mechanism.
FOOTNOTES
[i] https://www.isccc.gov.cn/xwdt/tpxw/12/909546.shtml