CISA and FDA Sound Alarm on Backdoor Cybersecurity Threat with Patient Monitoring Devices

Last week, the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”) and the U.S. Food and Drug Administration (“FDA”) released warnings about an embedded function they found in the firmware of the Contec CMS8000, which is a patient monitoring device used to provide continuous monitoring of a patient’s vital signs, including electrocardiogram, heart rate, temperature, blood oxygen and blood pressure.[1] Healthcare organizations utilizing this device should take immediate action to mitigate the risk of unauthorized access to patient data, to determine whether or not such unauthorized access has already occurred, and to prevent future unauthorized access.
Contec Medical Systems (“Contec”), a global medical device and healthcare solutions company headquartered in China, sells medical equipment used in hospitals and clinics in the United States. The Contec CMS8000 has also been re-labeled and sold by resellers, such as the Epsimed MN-120.
The three cybersecurity vulnerabilities identified by CISA and FDA are:

An unauthorized user may remotely control or modify the Contec CMS8000, and it may not work as intended.
The software on the Contec CMS8000 includes a “backdoor” that could allow an unauthorized actor to compromise the device or the network to which the device is connected.
The Contec CMS8000, once connected to the internet, will transmit the patient data it collects, including personally identifiable information (“PII”) and protected health information (“PHI”), to China.

Mitigation Strategies
Healthcare organizations should take an immediate inventory of their patient monitoring systems and determine whether their enterprise uses any of the impacted devices. Because there is no patch currently available, FDA recommends disabling all remote monitoring functions by unplugging the ethernet cable and disabling Wi-Fi or cellular connections if used. FDA further recommends that the devices in question be used only for local in-person monitoring. Per the FDA, if a healthcare provider needs remote monitoring, a different patient monitoring device from a different manufacturer should be used.
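For organizations that want to check whether a monitoring device has already been communicating with unexpected destinations, a review of firewall or flow logs is one practical starting point. The following is a minimal sketch only, assuming a simple CSV export of flow records and a hypothetical patient-monitor subnet; it is not part of the CISA or FDA guidance and would need to be adapted to the organization's own logging environment.

```python
# Minimal sketch: flag outbound flows from patient-monitor IPs to public destinations.
# The subnet, allow-list, log path, and column names are illustrative assumptions only.
import csv
from ipaddress import ip_address, ip_network

MONITOR_SUBNET = ip_network("10.20.30.0/24")   # hypothetical subnet where patient monitors live
ALLOWED_PUBLIC_DESTINATIONS = set()            # add any sanctioned external endpoints here

def suspicious_flows(log_path):
    """Yield flow rows where a monitor talks to a non-private, non-approved destination."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):          # assumes columns: src_ip, dst_ip, bytes
            src = ip_address(row["src_ip"])
            dst = ip_address(row["dst_ip"])
            if src in MONITOR_SUBNET and not dst.is_private and dst not in ALLOWED_PUBLIC_DESTINATIONS:
                yield row

if __name__ == "__main__":
    for flow in suspicious_flows("firewall_flows.csv"):   # hypothetical export path
        print("Review outbound flow:", flow["src_ip"], "->", flow["dst_ip"], flow["bytes"], "bytes")
```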
Healthcare providers that are not using impacted devices should still take the time to conduct an audit of their patient monitoring and other internet-connected devices to determine the risk of potential security breaches. Organizations should use this opportunity to evaluate, once again, their incident response plans, continue to conduct periodic risk assessments of their technologies, and evaluate whether their policies, procedures, and plans enable them to fulfill cybersecurity requirements.[2]
[1] See CISA, Contec CMS8000 Contains a Backdoor (January 30, 2025); FDA, Cybersecurity Vulnerabilities with Certain Patient Monitors from Contec and Epsimed: FDA Safety Communication (January 30, 2025).
[2] See, e.g., Polsinelli’s discussion of cybersecurity compliance in 2025.

FINRA Facts and Trends: February 2025

Welcome to the latest issue of Bracewell’s FINRA Facts and Trends, a monthly newsletter devoted to condensing and digesting recent FINRA developments in the areas of enforcement, regulation and dispute resolution. We dedicate this month’s issue to FINRA’s 2025 Annual Regulatory Oversight Report. Read about the Report’s findings and observations, below.
FINRA Issues 2025 Regulatory Oversight Report
On January 28, 2025, FINRA published its 80-page 2025 Regulatory Oversight Report (the Report), offering insights and observations on key regulatory topics and emerging risks that firms should consider when evaluating their compliance programs and procedures. Broadly speaking, the Report identifies relevant rules, summarizes noteworthy findings, highlights key considerations for member firms’ compliance programs, and provides helpful and practical considerations as member firms analyze their existing procedures and controls.
The 2025 Report discusses 24 topics relevant to the securities industry. While many of these are perennially important topics, the Report also includes two new sections: third-party risk landscape and extended hours trading. Below, we provide an overview of the Report’s new priorities, together with certain continuing priorities highlighted in the Report.
A FINRA Unscripted podcast episode about the report — featuring Executive Vice President and Head of Member Supervision, Greg Ruppert, Executive Vice President and Head of Market Regulation and Transparency Services, Stephanie Dumont, and Executive Vice President and Head of Enforcement, Bill St. Louis — is available on FINRA’s website.
Newly Identified Priorities

Third-Party Risk Landscape: The most significant addition to the Report is a new top-level section on Third-Party Risk Landscape. Firms’ reliance on third parties for many of their day-to-day functions creates risk, and, as the Report indicates, this new section was prompted by “an increase in cyberattacks and outages at third-party vendors” that firms use.
As the broad heading indicates, the newly added material outlines effective practices and general steps to be taken by firms, including: 

maintaining a list of all third-party vendor-provided services, systems and software components that the firm can leverage to assess the impact on the firm in the event of a cybersecurity incident or technology outage at a third-party vendor (a minimal inventory sketch appears after this list);
adopting supervisory controls and establishing contingency plans in the event of a third-party vendor failure;
affirmatively inquiring whether potential third-party vendors incorporate generative AI into their products or services, and evaluating and reviewing contracts with these third parties to ensure they comply with the firm’s regulatory obligations (e.g., adding contractual language that prohibits firm or customer information from being ingested into the vendor’s open-source generative AI tool);
assessing third-party vendors’ ability to protect sensitive firm and customer non-public information and data;
ensuring that a vendor’s access to a firm’s systems and data is revoked when the relationship ends; and
periodically reviewing third-party vendor tools’ default features and settings.
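As a purely illustrative aid for the first practice above, the sketch below shows one way a firm might structure such a vendor inventory in code. The field names and example values are assumptions for illustration; FINRA does not prescribe any particular format.

```python
# Minimal sketch of a third-party vendor inventory along the lines FINRA describes.
# Field names and the example record are illustrative assumptions, not a FINRA specification.
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorRecord:
    name: str
    services: list[str]                    # services, systems and software components provided
    data_access: list[str]                 # categories of firm/customer data the vendor can reach
    uses_generative_ai: bool = False       # affirmatively confirmed with the vendor
    genai_contract_clause: bool = False    # e.g., prohibits ingestion of firm/customer data
    contingency_plan: str = ""             # fallback if the vendor fails or suffers an outage
    access_revocation_date: date | None = None   # set when the relationship ends
    last_settings_review: date | None = None     # periodic review of default features/settings

inventory = [
    VendorRecord(
        name="ExampleTrade Analytics",     # hypothetical vendor
        services=["order analytics dashboard"],
        data_access=["customer account numbers"],
        uses_generative_ai=True,
        genai_contract_clause=True,
        contingency_plan="fail over to in-house reports",
        last_settings_review=date(2025, 1, 15),
    ),
]

# During a vendor outage or cyber incident, filter the inventory to see which firm functions are affected.
impacted = [v for v in inventory if "order analytics dashboard" in v.services]
```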
 

Extended Hours Trading: In recent years, trading in National Market System stocks and other securities has extended beyond regular trading hours. In its other new section, FINRA reminds firms that offer extended hours trading that they must comply with FINRA Rule 2265, which requires that these firms provide their customers with a risk disclosure statement. Importantly, if a firm allows its customers to participate in extended hours trading online, the firm must be sure to post a risk disclosure statement on the firm’s website “in a clear and conspicuous manner.” In addition to Rule 2265, firms participating in extended hours trading must also comply with FINRA Rule 5310 (Best Execution and Interpositioning) and Rule 3110 (Supervision).
The Report recommends the following best practices to address any perceived risks associated with extended hours trading: 

conducting best execution reviews geared toward evaluating how extended hours orders are handled, routed and executed;
reviewing customer disclosures to ensure they address the risks associated with extended hours trading;
establishing and maintaining supervisory processes designed to address the “unique characteristics or risks” of extended hours trading; and
evaluating the operational readiness and customer support needs during extended hours trading.

Continuing Priorities
In addition to the Report’s new topics, each of the Report’s sections — Financial Crimes Prevention, Firm Operations, Member Firms’ Nexus to Crypto, Communications and Sales, Market Integrity, and Financial Management — places special emphasis on certain continuing priorities that will remain key focus areas for FINRA in 2025:

Reg BI and Form CRS: Reg BI and Form CRS have been perennial areas of focus for FINRA since they first became effective in 2020. The 2025 Report details a number of new findings and observations for each of the four component obligations of Reg BI (Care, Conflict of Interest, Disclosure, and Compliance).
With respect to the Care Obligation, many of FINRA’s latest findings and observations center around firms’ obligations with respect to recommendations of complex or risky products. FINRA reminds firms making such recommendations to consider whether the investments align with the customer’s overall investment profile, and whether the investment would result in concentrations that exceed the firm’s policies or the customer’s risk tolerance, or that represent an inappropriate portion of a retail customer’s liquid net worth.
The primary addition to the Report concerning firms’ Conflict of Interest Obligation is a finding that firms may violate Reg BI by failing to identify all material conflicts of interest that may incentivize an associated person to make a particular recommendation, such as a financial incentive to recommend the opening of an account with the firm’s affiliate, or to invest in securities tied to a company in which the associated person has a personal ownership stake.
The Report also contains a new finding related to the Compliance Obligation, noting that firms must have written policies and procedures that address account recommendations (as distinct from investment recommendations), including transfers of products between brokerage and advisory accounts, rollover recommendations, and potentially fraudulent patterns of account switches by the same associated person.
While the Report contains no new findings or observations related to the Disclosure Obligation, FINRA continues to remind firms of their obligation to provide customers “full and fair” disclosures of all material facts related to the scope of their relationship and any conflicts of interest.
As it relates to Form CRS, the Report’s findings included failures to properly deliver Form CRS and to properly post Form CRS, including failures to post Form CRS on websites maintained by financial professionals who offer the firm’s services through a separate “doing business as” website.
 
Cybersecurity and Cyber-Enabled Fraud: The Report’s section on Cybersecurity and Cyber-Enabled Fraud — titled Cybersecurity and Technology Management in previous years’ reports — includes several important additions in 2025.
Most prominently, the Report highlights the emerging risks associated with quantum computing, a new technology that relies on quantum mechanics to perform functions not possible for more traditional forms of technology. Noting that many financial institutions have recently begun exploring use of quantum computing in their business operations, the Report warns that these technologies could be exploited by threat actors. Among other things, quantum computing has the potential to quickly break current encryption methods utilized by firms in the financial services industry. FINRA recommends that firms considering the use of quantum computers place a particular emphasis on ensuring cybersecurity, third-party vendor management, data governance and supervision.
The Report also discusses a variety of cybersecurity threats and attacks that financial institutions must be prepared to counter. First, the Report observes an increase in the variety, frequency and sophistication of many common threats, including new account fraud, account takeovers, data breaches, imposter sites, and “quishing” (an attack that uses QR codes to redirect victims to phishing URLs). In addition to these more conventional threats, the Report also describes several emerging threats, including: Quasi-Advanced Persistent Threats (Quasi-APTs) (sophisticated cyberattacks intended to gain prolonged network or system access); Generative AI-Enabled Fraud (attacks that make use of emerging generative AI technology to enhance cyber-related crimes); and Cybercrime-as-a-Service (attacks perpetrated by criminals with technical expertise on a for-hire basis, or by selling cyber-attack tools to third parties).
Among the effective practices recommended by FINRA to combat these threats, the Report highlights two new practices: tabletop exercises, in which firms bring internal and external stakeholders together to ensure cyber threats are appropriately identified, mitigated and managed; and limiting lateral movement by subdividing a firm’s networks into various sections (network segmentation), which makes it more difficult for threat actors to gain access to a network in its entirety.
 
Senior Investors and Trusted Contact Persons: FINRA remains keenly focused on preventing the financial exploitation of senior investors. The Report reminds members of their regulatory obligations under FINRA Rule 4512 with respect to “Trusted Contact Persons” (TCPs) and FINRA Rule 2165 (Financial Exploitation of Specified Adults).
FINRA Rule 4512(a)(1)(F) requires FINRA members to make reasonable efforts to obtain the name of and contact information for a TCP for non-institutional customer accounts. A TCP is a person the firm may contact to address possible financial exploitation, to confirm the specifics of the customer’s current contact information, health status, or the identity of any legal guardian, executor, trustee, or holder of a power of attorney, or in connection with other steps permitted by Rule 2165. In particular, Rule 2165 permits firms to place temporary holds on securities transactions and account disbursements if the member reasonably believes that financial exploitation of a Specified Adult has occurred, is occurring, has been attempted, or will be attempted. “Specified Adult” means (A) a natural person age 65 and older; or (B) a natural person age 18 and older who the member reasonably believes has a mental or physical impairment that renders the individual unable to protect his or her own interests.
In the “Findings and Effective Practices” section of the Report, FINRA notes that recent examinations and investigations have focused on firms not making reasonable attempts to obtain the name and contact information of a TCP; not providing written disclosures explaining when a firm may contact a TCP; not developing training policies reasonably designed to ensure compliance with the requirements of Rule 2165; and not retaining records that document the firm’s internal review underlying any decision to place a temporary hold on a transaction.
As for suggested effective practices, the Report recommends, among other things: implementing a process to track whether customer accounts have designated TCPs, establishing specialized groups to handle situations involving elder abuse or diminished capacity, and hosting conferences or participating in industry groups focused on the protection of senior customers.
 
Anti-Money Laundering (AML) and Fraud: FINRA Rule 3310 requires that each member firm develop and implement a written AML program that is approved in writing by senior management and is reasonably designed to achieve and monitor the firm’s compliance with the Bank Secrecy Act and its implementing regulations.
As for effective practices, the Report recommends:

conducting thorough inquiries when customers — particularly the elderly — request an unusually significant amount of funds to be disbursed to a personal bank account;
conducting formal, written AML risk assessments;
incorporating additional methods for verifying customer identities when establishing online accounts;
delegating AML duties to specific business units that are best positioned to monitor and identify suspicious activity; and
establishing an AML training program for personnel that is tailored to the individuals’ roles and responsibilities.
The Report highlights one emerging risk: FINRA has observed an increase in investment fraud committed by those that engage directly with investors. This can include persuading victims to withdraw funds from their accounts as part of a fraudulent scheme. The FBI’s Internet Crime Report notes that “investment fraud is the costliest type of crime tracked by the FBI’s Internet Crime Complaint Center.” To help mitigate this threat, FINRA recommends: monitoring for sudden changes in a customer’s behavior, including withdrawal requests that are out of character for the customer; educating firm personnel that are in contact with customers on how to recognize red flags; and developing clear response plans for when the firm identifies a customer that has been victimized.
 

Private Placements: The Report’s section on private placements does not stray far from previous years’ reports, and primarily re-emphasizes a key area of focus for FINRA’s Enforcement division over the past two years, first highlighted in Regulatory Notice 23-08. As we reported at the time, Regulatory Notice 23-08 reminded member firms of their obligation to conduct a reasonable investigation of private placement investments prior to making any recommendation — including, most particularly, conducting an investigation of the issuer, its management and its business prospects, the assets held or to be acquired by the issuer, and the issuer’s intended use of proceeds from the offering. In its discussion of findings from targeted exams, FINRA further notes that firms fail to satisfy this obligation when, among other things, they do not conduct adequate research into issuers that have a lack of operating history, or where they rely solely on the firm’s past experience with an issuer based on previous offerings. FINRA’s findings offer a reminder to firms to apply scrutiny to all offerings, whether or not the issuer is a known quantity — and to be especially vigilant when an issuer is new to the space.
The Report’s findings also provide another cautionary tale: FINRA warns that firms fail to comply with Reg BI’s care obligation when they take the position that the firm is not making recommendations, even though the firms’ representatives have made communications to customers that include a “call to action” and are individually tailored to the customer. Firms should remain aware that these types of communications are likely to be viewed as investment recommendations, and ensure that they conduct reasonable diligence before making any such communication to a customer.
The Report also discusses an emerging trend concerning firms that have made material misrepresentations and omissions related to recommendations of private placement offerings of pre-IPO securities. As examples, FINRA cites firms that have failed to disclose potential selling compensation, and that have failed to conduct reasonable due diligence to confirm that the issuer actually held or had access to the shares it purported to sell.
 
Manipulative Trading: Member firms are prohibited, pursuant to a series of FINRA Rules, from engaging in impermissible trading practices. The relevant rules include FINRA Rule 2010 (Standards of Commercial Honor and Principles of Trade); FINRA Rule 5230 (Payments Involving Publications that Influence the Market Price of a Security); and FINRA Rule 5210 (Publication of Transactions and Quotations), which FINRA has relied on in pursuing enforcement actions accusing member firms of publicizing or circulating inflated trading activity.
The Report highlights certain recent findings, including firms having inadequate WSPs, not establishing surveillance controls designed to capture manipulative trading, and not establishing and maintaining a surveillance system reasonably designed to monitor for potentially manipulative trading.
 
Communications With the Public: As in previous years, the Report details the content standards prescribed for three categories of firm written communications: correspondence, retail communications and institutional communications. 
The Report also presents findings on an emerging trend: retail communications focused on registered index-linked annuities (RILAs). FINRA’s findings concerning firms’ communications related to RILAs mirror many of the common findings in connection with other types of investments. For example, FINRA has found that firms have failed to adequately explain how RILAs function and the meaning of specialized terms that are specific to RILAs, as well as finding that firms have made inadequate disclosures of the risks, fees and charges associated with RILAs.
The Report also contains a new focus on firms’ communications made through social media and generative AI. In particular, it recommends that firms ensure that communications made with the assistance of generative AI (including chatbot communications used with investors) are appropriately supervised and retained. Similarly, the Report cautions that firms must maintain systems, including WSPs, reasonably designed to supervise communications disseminated on the firm’s behalf by influencers on social media.
The Report’s findings and observations are intended to serve as a guide for member firms to assess their current compliance, supervisory, and risk management programs and note any perceived deficiencies that could result in scrutiny by FINRA. Member firms are encouraged to focus on the findings, observations and effective practices relevant to their respective business models.

Australia’s Proposed Scams Prevention Framework

In response to growing concerns regarding the financial and emotional burden of scams on the community, the Australian government has developed the Scams Prevention Framework Bill 2024 (the Bill). Initially, the Scams Prevention Framework (SPF) will apply to banks, telecommunications providers, and digital platform service providers offering social media, paid search engine advertising or direct messaging services (Regulated Entities). Regulated Entities will be required to comply with obligations set out in the overarching principles (SPF Principles) and sector-specific codes (SPF Codes). Those failing to comply with their obligations under the SPF will be subject to harsh penalties under the new regime.
Why Does Australia Need a SPF?
Australian customers lost AU$2.7 billion in 2023 from scams. Whilst the monetary loss from scams is significant, scams also have nonfinancial impacts on their victims. Scams affect the mental and emotional wellbeing of victims—victims may suffer trauma, anxiety, shame and helplessness. Scams also undermine the trust customers may have in utilising digital services. 
Currently, scam protections are piecemeal, inconsistent or non-existent across the Australian economy. The SPF is an economy-wide initiative which aims to:

Halt the growth in scams;
Safeguard the digital economy; 
Provide consistent protections for customers engaging with Regulated Entities; and
Be responsive and adaptable to the scams environment. 

What is a Scam?
A scam is an attempt to cause loss or harm to an individual or entity through the use of deception. For example, a perpetrator may cause a target to transfer funds into a specified bank account by providing the target with what appears to be a parking fine. However, financial loss caused by illegal cyber activity such as hacking would not be a scam as it does not involve the essential element of deception.
SPF Principles
The Bill sets out six SPF Principles which Regulated Entities must comply with. The SPF Principles will be enforced by the Australian Competition and Consumer Commission (ACCC) as the SPF General Regulator. 
The SPF Principles are outlined in table 1 below.

SPF Principle
Description

1. Governance
Regulated Entities are required to ‘develop and implement governance policies, procedures, metrics and targets to combat scams’. In discharging their obligations under this principle, entities must develop and implement a range of policies and procedures which set out the steps taken to comply with the SPF Principles and SPF Codes. The ACCC is expected to provide guidance on how an entity can ensure compliance with their governance obligations under the SPF.

2. Prevent
Regulated Entities must take reasonable steps to prevent scams on or relating to the service they provide. Such steps should aim to prevent people from using the Regulated Entity’s service to commit a scam, as well as prevent customers from falling victim to a scam. This includes publishing accessible resources which provide customers with information on how to identify scams and minimise their risk of harm.

3. Detect
Regulated Entities must take reasonable steps to detect scams by ‘identifying SPF customers that are, or could be, impacted by a scam in a timely way’. 

4. Report

Where a Regulated Entity has reasonable grounds to suspect that a ‘communication, transaction or other activity on, or relating to their regulated service, is a scam’, it must provide the ACCC with a report of any information relevant to disrupting the scam activity. Such information is referred to as ‘actionable scam intelligence’ in the SPF.
Additionally, if requested by an SPF regulator, an entity will be required to provide a scam report. The appropriate form and content of the report is intended to be detailed in each SPF Code.

5. Disrupt

A Regulated Entity is required to take ‘reasonable steps to disrupt scam activity on or related to its service’. Any such steps must be proportionate to the actionable scam intelligence held by the entity. As an example, for banks, appropriate disruptive activities may include:

Contacting customers to warn them of popular scams;
Introducing confirmation of payee features on electronic banking services; and
Placing a hold on payments directed to an account associated with scam activity to allow the bank time to contact the customer and provide them with information about the suspected scam. 

6. Respond
Regulated Entities are required to implement accessible mechanisms which allow customers to report scams and establish accessible and transparent internal dispute resolution processes to deal with any complaints. Additionally, Regulated Entities must be a member of an external dispute resolution scheme authorised by a Treasury Minister for their sector. The purpose of such an obligation is to provide an independent dispute resolution mechanism for customers whose complaints have not been resolved through initial internal dispute resolution processes, or where the internal dispute resolution outcome is unsatisfactory.

Table 1
What are ‘Reasonable Steps’?
We expect that SPF Codes will provide further clarification regarding what will be considered ‘reasonable steps’ for the purposes of discharging an obligation under the SPF Principles. From the explanatory materials, it is evident that whether reasonable steps have been taken will depend on a range of entity-specific factors including, but not limited to:

The size of the Regulated Entity;
The services of the Regulated Entity;
The Regulated Entity’s customer base; and
The specific types of scam risk faced by the Regulated Entity and their customers.

Disclosure of Information Under the Reporting Principle
As indicated in table 1 above, the SPF reporting principle requires disclosure of information to the SPF regulator. It is clear from the explanatory materials that, to the extent this reporting obligation is inconsistent with a legal duty of confidence owed under any ‘agreement or arrangement’ entered into by the Regulated Entity, the SPF obligation will prevail. However, it is not expressly stated how this obligation will interact with statutory protections of personal information.
The Privacy Act 1988 (Cth) (Privacy Act) imposes obligations regarding the collection, use and disclosure of personal information. Paragraph 6.2(b) of Schedule 1 to the Privacy Act allows an entity to use or disclose information for a purpose other than which it was collected where the use or disclosure is required by an Australian law. Arguably, once the SPF is enacted, disclosure of personal information in accordance with the obligations under the reporting principle will be ‘required by an Australian law’ and therefore not in breach of the Privacy Act. 
Safe Harbour Protection for Disruptive Actions
As noted in table 1, SPF Principle 5 requires entities to take disruptive actions in response to actionable scam intelligence. This may leave Regulated Entities vulnerable to actions for breach of contractual obligations. For example, where a bank places a temporary hold on a transaction, the customer might lodge a complaint for failure to follow payment instructions. To prevent the risk of such liability from deterring entities from taking disruptive actions, the SPF provides a safe harbour protection whereby a Regulated Entity will not be liable in a civil action or proceeding where they have taken action to disrupt scams (including suspected scams) while investigating actionable scam intelligence. 
In order for the safe harbour protection to apply, the following requirements must be met:

The Regulated Entity acted in good faith and in compliance with the SPF;
The disruptive action was reasonable and proportionate to the suspected scam;
The action was taken during the period starting on the day that the information became actionable scam intelligence, and ending when the Regulated Entity identified whether or not the activity was a scam, or after 28 days, whichever was earlier (see the sketch after this list); and
The action was promptly reversed if the Regulated Entity identified the activity was not a scam and it was reasonably practicable to reverse the action.
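As a purely illustrative aid to the third requirement above (the "whichever was earlier" timing rule), the short sketch below computes the end of the relevant period. The Bill does not prescribe any such calculation; this simply illustrates the rule as summarized above.

```python
# Illustrative only: the safe harbour window ends at the earlier of (a) the day the entity
# identified whether the activity was a scam and (b) 28 days after the information became
# actionable scam intelligence.
from datetime import date, timedelta

def safe_harbour_window_end(intelligence_date: date, identification_date: date | None) -> date:
    cutoff = intelligence_date + timedelta(days=28)
    if identification_date is None:          # no determination made yet
        return cutoff
    return min(identification_date, cutoff)

# Example: intelligence received 1 March; scam confirmed 20 March -> window ends 20 March.
print(safe_harbour_window_end(date(2025, 3, 1), date(2025, 3, 20)))   # 2025-03-20
# Example: no determination within 28 days -> window ends 29 March.
print(safe_harbour_window_end(date(2025, 3, 1), None))                # 2025-03-29
```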

The assessment of whether disruptive actions were proportionate will be determined on a case-by-case basis. However, relevant factors may include:

The volume of information received or available;
The source of that information; and
The apparent likelihood that the activity is associated with a scam.

SPF Codes
As a ‘one-size-fits-all’ approach across the entire scams ecosystem is not appropriate, the SPF provides for the creation of sector-specific codes. These SPF Codes will set out ‘detailed obligations’ and ‘consistent minimum standards’ to address scam activity within each regulated sector. The SPF Codes are yet to be released.
It is not clear whether the SPF Codes will interact with other industry codes and, if so, how and which codes will prevail. 
It appears from the explanatory materials that the SPF Codes are intended to impose consistent standards across the regulated sectors. It is unclear whether this will be achieved in practice or whether there will be a disproportionate compliance burden placed on one regulated sector in comparison to other regulated sectors. For example, because banks are often the ultimate sender/receiver of funds, will they face the most significant compliance burden? 
SPF Regulators
The SPF is to be administered and enforced through a multiregulator framework. The ACCC, as the General Regulator, will be responsible for overseeing the SPF provisions across all regulated sectors. In addition, there will be sector-specific regulators responsible for the administration and enforcement of SPF Codes. 
Enforcement
The proposed Bill sets out the maximum penalties for contraventions of the civil penalty provisions of the SPF. 
There are two tiers of contraventions, with a tier 1 contravention attracting a higher maximum penalty in order to reflect that some breaches would ‘be the most egregious and have the most significant impact on customers’. A breach will be categorised based on the SPF Principle contravened as indicated in table 2 below.

Tier 1 Contravention
SPF principle 2: prevent
SPF principle 3: detect
SPF principle 5: disrupt
SPF principle 6: respond

Tier 2 Contravention
An SPF Code
SPF principle 1: governance
SPF principle 4: report

Table 2
In addition to the civil penalty regime, other administrative enforcement tools will be available including:

Infringement notices;
Enforceable undertakings;
Injunctions;
Actions for damages;
Public warning notices;
Remedial directions;
Adverse publicity orders; and
Other punitive and nonpunitive orders.

Key Insights on President Trump’s New AI Executive Order and Policy & Regulatory Implications

On January 23, 2025, President Trump issued a new Executive Order (EO) titled “Removing Barriers to American Leadership in Artificial Intelligence” (Trump EO). This EO replaces President Biden’s Executive Order 14110 of October 30, 2023, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (Biden EO), which was rescinded on January 20, 2025, by Executive Order 14148.
The Trump EO signals a significant shift away from the Biden administration’s emphasis on oversight, risk mitigation and equity toward a framework centered on deregulation and the promotion of AI innovation as a means of maintaining US global dominance.
Key Differences Between the Trump EO and Biden EO
The Trump EO explicitly frames AI development as a matter of national competitiveness and economic strength, prioritizing policies that remove perceived regulatory obstacles to innovation. It criticizes the influence of “engineered social agendas” in AI systems and seeks to ensure that AI technologies remain free from ideological bias. By contrast, the Biden EO focused on responsible AI development, placing significant emphasis on addressing risks such as bias, disinformation and national security vulnerabilities. The Biden EO sought to balance AI’s benefits with its potential harms by establishing safeguards, testing standards and ethical considerations in AI development and deployment.
Another significant shift in policy is the approach to regulation. The Trump EO mandates an immediate review and potential rescission of all policies, directives and regulations established under the Biden EO that could be seen as impediments to AI innovation. The Biden EO, however, introduced a structured oversight framework, including mandatory red-teaming for high-risk AI models, enhanced cybersecurity protocols and monitoring requirements for AI used in critical infrastructure. The Biden administration also directed federal agencies to collaborate in the development of best practices for AI safety and reliability efforts that the Trump EO effectively halts.
The two EOs also diverge in their treatment of workforce development and education. The Biden EO dedicated resources to attracting and training AI talent, expanding visa pathways for skilled workers and promoting public-private partnerships for AI research and development. The Trump EO, however, does not include specific workforce-related provisions. Instead, the Trump EO seems to assume that reducing federal oversight will naturally allow for innovation and talent growth in the private sector.
Priorities for national security are also shifting. The Biden EO mandated extensive interagency cooperation to assess the risks AI poses to critical national security systems, cyberinfrastructure and biosecurity. It required agencies such as the Department of Energy and the Department of Defense to conduct detailed evaluations of potential AI threats, including the misuse of AI for chemical and biological weapon development. The Trump EO aims to streamline AI governance and reduce federal oversight, prioritizing a more flexible regulatory environment and maintaining US AI leadership for national security purposes.
The most pronounced ideological difference between the two executive orders is in their treatment of equity and civil rights. The Biden EO explicitly sought to address discrimination and bias in AI applications, recognizing the potential for AI systems to perpetuate existing inequalities. It incorporated principles of equity and civil rights protection throughout its framework, requiring rigorous oversight of AI’s impact in areas such as hiring, healthcare and law enforcement. Not surprisingly, the Trump EO did not focus on these concerns, reflecting a broader philosophical departure from government intervention in AI ethics and fairness – perhaps considering existing laws that prohibit unlawful discrimination, such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, as sufficient.
The two orders also take fundamentally different approaches to global AI leadership. The Biden EO emphasized the importance of international cooperation, encouraging US engagement with allies and global organizations to establish common AI safety standards and ethical frameworks. The Trump EO, in contrast, appears to adopt a more unilateral stance, asserting US leadership in AI without outlining specific commitments to international collaboration.
Implications for the EU’s AI Act, Global AI and State Legal Frameworks
The Trump administration’s deregulatory approach comes at a time when other jurisdictions, particularly the EU, are moving toward stricter regulatory frameworks for AI. The EU’s Artificial Intelligence Act (EU AI Act), which was adopted by the EU Parliament in March 2024, imposes comprehensive rules on the development and use of AI technologies, with a strong emphasis on safety, transparency, accountability and ethics. By categorizing AI systems based on risk levels, the EU AI Act imposes stringent requirements for high-risk AI systems, including mandatory third-party impact assessments, transparency standards and oversight mechanisms.
The Trump EO’s emphasis on reducing regulatory burdens stands in stark contrast to the EU’s approach, which reflects a precautionary principle that prioritizes societal safeguards over rapid innovation. This divergence could create friction between the US and EU regulatory environments, especially for multinational companies that must navigate both systems. Although the EU AI Act is being criticized as impeding innovation, the lack of explicit ethical safeguards and risk mitigation measures in the Trump EO also could weaken the ability of US companies to compete in European markets, where compliance with the EU AI Act’s rigorous standards is a legal prerequisite for EU market access.
Globally, jurisdictions such as Canada, Japan, the UK and Australia are advancing their own AI policies, many of which align more closely with the EU’s focus on accountability and ethical considerations than with the US’s pro-innovation stance under the Trump administration. For example, Canada’s Artificial Intelligence and Data Act emphasizes transparency and responsible development, while Japan’s AI guidelines promote trustworthy AI principles through multistakeholder engagement. The UK, while taking a less regulated approach than the EU, places a strong emphasis on safety through its AI Safety Institute.
The Trump administration’s decision to rescind the Biden EO and prioritize a “clean slate” for AI policy also may complicate efforts to establish global standards for AI governance. While the EU, the G7 and other multilateral organizations are working to align on key principles such as transparency, fairness and safety, the US’s unilateral focus on deregulation could limit its influence in shaping these global norms. Additionally, the Trump administration’s pivot toward deregulation risks creating a perception that the US prioritizes short-term innovation gains over long-term ethical considerations, potentially alienating allies and partners.
A final consideration is the potential for the Trump EO to widen the gap between federal and state AI regulatory regimes, inasmuch as it presages deregulation of AI at the federal level. Indeed, while the EO signals a federal shift toward prioritizing innovation by reducing regulatory constraints, the precise contours of the new administration’s approach to regulatory enforcement – including on issues like data privacy, competition and consumer protection – will become clearer as newly appointed federal agency leaders begin implementing their agendas. At the same time, states such as Colorado, California and Texas have already enacted AI laws with varying scope and degrees of oversight. As with state consumer privacy laws, increased state-level activity in AI also would likely lead to increased regulatory fragmentation, with states implementing their own rules to address concerns related to high-risk AI applications, transparency and sector-specific oversight.
Thus, in the absence of clear federal guidelines, businesses are left with a growing patchwork of state AI regulations that complicates compliance across multiple jurisdictions. Moreover, if Congress enacts an AI law that prioritizes innovation over risk mitigation, stricter state regulations could face federal preemption. Until then, organizations must closely monitor both federal and state developments to navigate this evolving and increasingly fragmented AI regulatory landscape.
Ultimately, a key test for the Trump administration’s approach to AI is whether it preserves and enhances US leadership in AI or allows China to build a more powerful AI platform. The US approach will undoubtedly drive investment and innovation by US AI companies. But China may be able to arrive at a collaborative engagement with international AI governance initiatives, which would position China strongly as an international leader in AI. Alternatively, is DeepSeek a flash in the pan, a stimulus for US competition or a portent for the future?
Conclusion
Overall, the Trump EO reflects a fundamental shift in US AI policy, prioritizing deregulation and free-market innovation while reducing oversight and ethical safeguards. However, this approach could create challenges for US companies operating in jurisdictions with stricter AI regulations, such as the EU, the UK, Canada and Japan – as well as some of those states in the US that have already enacted their own AI regulatory regimes. The divergence between the US federal government’s pro-innovation strategy and the precautionary regulatory model pursued by the EU and these US states underscores the need for companies operating across these jurisdictions to adopt flexible compliance strategies that account for varying regulatory standards.
Pablo Carrillo also contributed to this article.

Insurance Premium Finance Exemption — Maryland Commercial Finance Disclosure Legislation

Maryland recently introduced Commercial Finance Disclosure Law (“CFDL”) legislation in both the House (HB 693) and Senate (SB 754), following a path of other states with laws requiring consumer-like disclosures in certain commercial loans. Maryland has introduced similar legislation in the past but has not yet garnered sufficient support to reach the Governor’s desk.
This legislative session, the sponsors of these bills have added an additional exemption from the law’s application should it be enacted. The bills include an exemption for, among other types of loan products, commercial financing transactions that are insurance premium finance loans. Insurance premium financing loans are short-term, secured loans that enable businesses to purchase insurance coverage. Businesses of all sizes obtain commercial, property, casualty, and liability insurance policies to mitigate operational risk and to protect their interests and those of their customers. While some businesses may choose to pay insurance premiums in full at the time of purchase, others either do not have sufficient funds to pay the premiums in full up front or prefer to finance the premiums, preserving capital for other uses. The majority of states, including Maryland, regulate insurance premium financing transactions.
This additional CFDL exemption appears appropriate. Insurance premium finance transactions are extensively regulated by the Maryland Department of Insurance and subject to laws that mandate the disclosure of financial terms. (Md. Code Ann., Ins., §§ 23-101 et seq.) Current insurance premium finance law in Maryland requires the disclosure of loan-related information in the insurance premium finance agreement itself, including: (i) the total amount of the premiums under the policies purchased; (ii) the amount of the down payment on the loan; (iii) the principal balance; (iv) the amount of the finance charge; (v) the balance payable by the insured; (vi) the number of installments required, the amount of each installment expressed in dollars, and the due date or period of each installment; (vii) any electronic payment fee; and (viii) prepayment particulars. Substantially similar disclosures contemplated under the proposed CFDL bills are required under existing Maryland law regulating insurance premium finance loans. Imposing CFDL standards on insurance premium finance transactions, when substantially similar disclosures are already required by other Maryland law, appears redundant and unnecessary. Further, application of multiple disclosure laws could potentially present conflicting obligations for insurance premium finance companies, duplicative regulation by multiple administrative departments, and inconsistent information for borrowers when comparing insurance premium finance loans.

Vought’s Transformational First Few Days at the CFPB

Within approximately 48 hours, starting on the evening of Friday, February 7, 2025, the trajectory of the Consumer Financial Protection Bureau (CFPB) was significantly altered. Among other things, a new acting director was installed; a more comprehensive internal pause – one that now explicitly covers supervision, examination, and enforcement activities – was put in place; the new acting director publicly stated that the CFPB would not be seeking any additional funding for the third quarter of the 2025 fiscal year; and the CFPB’s Washington, D.C. headquarters were closed for a week.
President Donald Trump initially tapped newly confirmed Treasury Secretary Scott Bessent to also temporarily replace Rohit Chopra as the director of the CFPB. In a surprising move, exactly one week later, the Wall Street Journal reported that Bessent had been replaced by Russell Vought to serve as acting director. Like Bessent before him, Vought is simultaneously serving two roles in the Trump administration. On Thursday, February 6, 2025, the day before being designated as the acting director of the CFPB, Vought was confirmed by the Senate to lead the Office of Management and Budget (OMB). When Bessent was designated to be the CFPB’s acting director, we explained how the Federal Vacancies Reform Act imposes limits on who can serve in an acting capacity as the director of an executive agency. Like Bessent, Vought now checks the right boxes by having been confirmed by the Senate to lead the OMB.
Although Trump has not yet issued any official statements regarding the appointment, Vought sent an email to all CFPB staff Saturday evening notifying them of the move. That email echoed what Bessent reportedly told the CFPB staff a week earlier, directing an internal pause on most, if not all, activities at the agency. However, Vought’s email goes even further than Bessent’s, purportedly including specific instructions for the CFPB to “cease any pending investigations,” and “[c]ease all supervision and examination activity.” Once again, it will be important to monitor whether these are temporary pauses or whether they signal a more permanent wind-down of the CFPB’s activities.
Next, on the evening of Saturday, February 8, 2025, acting director Vought published a message on X (formerly known as Twitter) summarizing a letter that he sent the same day to Federal Reserve Chairman Jerome Powell regarding the CFPB’s funding. His post explained that “the CFPB will not be taking its next draw of unappropriated funding,” and that the “Bureau’s current balance of $711.6 million is in fact excessive in the current fiscal environment.” His letter to Powell also states that he has “determined that no additional funds are necessary to carry out the authorities of the Bureau for Fiscal Year 2025.”
As a quick refresher, the CFPB’s funding structure is set forth in the Consumer Financial Protection Act. Specifically, the director of the CFPB is to request from the Federal Reserve an “amount determined . . . to be reasonably necessary to carry out the authorities of the Bureau under Federal consumer financial law, taking into account such other sums made available to the Bureau from the preceding year (or quarter of such year).” The CFPB spent a total of $729.4 million in fiscal year 2024, and former director Chopra requested and received $248.9 million and $245.1 million for the first and second quarters of fiscal year 2025, respectively. Vought’s social media post suggests that the $711.6 million the CFPB currently has is enough for it to fulfill its duties for the remainder of fiscal year 2025.
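As a rough, back-of-the-envelope illustration of why the current balance might be viewed as sufficient (assuming, purely for illustration, that spending for the remainder of fiscal year 2025 roughly tracks half of the FY2024 total, which the post itself does not claim), the cited figures compare as follows:

```python
# Back-of-the-envelope comparison using the figures cited above; the half-of-FY2024
# estimate for the remaining two quarters is an illustrative assumption only.
fy2024_total_spend = 729.4          # $ millions
q1_fy2025_draw = 248.9
q2_fy2025_draw = 245.1
current_balance = 711.6

estimated_remaining_need = fy2024_total_spend / 2          # ~364.7 for Q3 + Q4
print(f"Draws received so far in FY2025: {q1_fy2025_draw + q2_fy2025_draw:.1f}M")    # 494.0M
print(f"Estimated need for rest of FY2025: {estimated_remaining_need:.1f}M")
print(f"Balance exceeds that estimate by {current_balance - estimated_remaining_need:.1f}M")
```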
Interestingly, the letter from Vought to Powell also notes that prior CFPB administrations have chosen to maintain a “reserve fund,” though no such fund is required by law. Vought’s letter commits to ceasing that practice and states that “[t]he Bureau’s new leadership will run a substantially more streamlined and efficient bureau . . . and do its part to reduce the federal deficit.”
As if that all weren’t enough, on the evening of Sunday, February 9, 2025, acting director Vought reportedly emailed all CFPB staff again and this time informed them that the agency’s headquarters in Washington, D.C. would be closed for the upcoming week and that all employees should work remotely. This has led many to wonder whether the CFPB’s headquarters are being permanently shuttered, whether the closure is somehow related to the ongoing review of the CFPB that is being conducted by the Department of Government Efficiency (DOGE) or whether there is some other reason to justify a temporary closure. After a hectic weekend filled with CFPB-related developments, the future of the agency is uncertain at best. We will continue to monitor for future developments that impact our clients and report on them here.
Listen to this post

$10.00 CAR INSURANCE?: Quote Wizard Draws Complaint Over Advertisement that Does Not Comport With “Basic Common Sense”

Is this real? 
So Lending Tree hasn’t apologized yet. 
But I am over it.
Unrelated, picked up this odd complaint in Michigan that I thought was interesting.
Apparently Quote Wizard was running ads suggesting they could provide full auto insurance coverage for $10.00.
At least that’s the gist of the complaint I was provided.
The consumer says:
QuoteWizard.com, LLC is running at least 29 illegal advertisements to solicit insurance in the State of Michigan in violation of Michigan Compiled Law (MCL) 500.2003, 500.2005, 500.2005a, 500.2007. The Michigan Insurance Code states that unfair methods of competition and unfair and deceptive acts include the making, publishing, disseminating, circulating, etc. of any assertion with respect to the business of insurance or with respect to any person in the conduct of his insurance business, which is untrue, deceptive or misleading. MCL § 500.2007. The Michigan Insurance Code further prohibits the use of marketing that fails to disclose in a conspicuous manner that its purpose is solicitation of insurance and that contact will be made by an insurance agent or insurance company. MCL § 500.2005a. Quotewizard.com, LLC runs a variety of advertisements on Meta’s Facebook platform. These ads, which I have copied links to view in Meta’s Ad Library, are untrue, deceptive, and misleading. Quotewizard.com, LLC advertises a new insurance rate as ” New Rate $10 Full Coverage”. As a licensed insurance agency in the State of Michigan Quotewizard.com, LLC must follow the law. Based on information, belief, and the application of basic common sense, Quotewizard.com, LLC cannot offer an automobile insurance policy with “full coverage (which in common parlance generally means to include both collision and comprehensive coverage) for $10. If Quotewizard.com, LLC is in fact selling $10 auto insurance policies we have an even bigger problem because based on a search of DIFS website QuoteWizard.com, LLC is not appointed by a single insurance carrier to transact business in the state. Quotewizard.com, LLC appears to be preying on Michigan’s financially venerable [editor’s note: probably means vulnerable] population that can barely afford their car insurance and is trying to entice them to click their advertisement in hopes of financial relief. Instead clicking the advertisement will simply forward you information to dozens of insurance agents that will call you over and over trying to sell you insurance at rates that we would customarily expect to receive not $10. 
Just because a consumer says this is true doesn’t make it true. But the ads library looks pretty legit. So maybe Quote Wizard was knowingly or unknowingly tricking people into visiting its website. Or maybe somebody is submitting false stuff to a Michigan regulator. *Shrug.*
Regardless, I am sharing this because it does raise a pretty important issue for folks buying leads– you need to understand your entire funnel.
If you are accepting clicks–or even inbound calls–from social media ads that contain false content you may end up being pursued by a state agency. (That hasn’t happened here, BTW, just a complaint– but one everyone can learn from.)
And I know Musk may have just killed the CFPB and the feds look unlikely to regulate anyone or anything–at least for a while– but the states can be plenty aggressive. So watch out!

Massachusetts AG Unveils Internal TikTok Documents in Lawsuit Alleging Child Addiction Strategies

On February 3, 2025, the Massachusetts Attorney General revealed information about internal TikTok documents as part of the AG’s lawsuit in Massachusetts state court alleging that TikTok designed its platform to maximize children’s engagement while downplaying associated risks through unfair and deceptive practices prohibited under Massachusetts law. The information, revealed in a less-redacted complaint, highlights internal discussions and strategic choices made by TikTok to increase the time young users spend on the app.
The complaint alleges that TikTok’s internal metrics prioritize children’s engagement, with teenagers offering the highest “Life Time Value” to the company. According to the complaint, internal data showed that in 2020, TikTok had achieved 95% market penetration among U.S. teens aged 13 to 17. A 2019 presentation allegedly stated that the platform’s “ideal user composition” would be for 82% of its users to be under the age of 18.
TikTok executives allegedly were aware of the potential negative effects of its algorithm on children, including sleep disruption and compulsive use. Internal communications cited in the lawsuit include a statement from TikTok’s Head of Child Safety Policy acknowledging that the app’s algorithm keeps children engaged at the expense of other essential activities.
TikTok’s leadership also allegedly blocked proposed changes aimed at reducing compulsive use among minors due to concerns about negative business impacts. One example cited in the complaint involves a proposed “non-personalized feed” that could have mitigated compulsive behaviors but was ultimately rejected.
The complaint also alleges that TikTok misrepresented the effectiveness of its content moderation policies. While the company has publicly claimed high proactive removal rates for harmful content, internal data allegedly shows significant leakage of inappropriate material, including content related to child safety violations.
The Massachusetts case is one of the first to publicly disclose internal TikTok documents related to its user engagement strategies. Its outcome could impact how social media companies design their platforms and address concerns regarding child safety.

Disappearing CFPB? What’s Happened And What’s Next

These are interesting times we live in.
During the transition period between Trump’s election and inauguration, Elon Musk stated that there were “too many duplicative regulatory agencies” and he would “delete” the Consumer Financial Protection Bureau (CFPB).
Well, it looks like that process has begun. And if the CFPB has not been deleted outright, it has been significantly diminished.
On Friday afternoon, Elon Musk tweeted out “CFPB RIP”. Subsequently, people noticed the CFPB’s website and X page went dark.
On Saturday night, Russell Vought, the director of the Office of Management and Budget tweeted that the CFPB will NOT be taking its “next draw of unappropriated funding because it is not ‘reasonably necessary’ to carry out its duties.” And the CFPB’s current balance of over $700 million is “excessive in the current fiscal environment”.
Then on Sunday, Mr. Vought sent an email to all CFPB employees essentially telling them all to go pencils down and that they must get approval from the Chief Legal Officer IN WRITING before performing any work task. The email was signed by Mr. Vought as “Acting Director”.
So, a lot going on. But, what does this mean?
It is helpful to remember that the CFPB is funded through the Federal Reserve without Congressional approval. This funding structure was the basis of the challenge the Supreme Court ruled on last year. The Supreme Court found, by a 7-2 vote, that the CFPB’s funding scheme fell squarely within the definition of a Congressional “appropriation.”
Therefore, since the CFPB is funded by Treasury, Mr. Vought declining to ask for more money from Treasury is the first step to defund the CFPB. However, it would not be that simple to eliminate the CFPB, the Bureau’s power could be significantly limited.
The legal maneuvers required to completely eliminate the CFPB could include a supermajority vote in the Senate, which seems unlikely. But how could a massive cutback in force affect lead generators?
First, does it stop all new enforcement and rulemaking? This is an organization that has filed over 140 enforcement actions in the last 5 years and has over 20 proposed rules in various stages.
Second, what actions will the new administration take regarding prior actions? Does it roll back all guidance, including the guidance around lead generation and pay to play? Does it pause all litigation? This could radically change the playing field for comparison shopping websites and the lenders that rely on them.
These are big questions with many downstream effects in the ecosystem.
My initial suggestion:
Don’t let this change your current business practices.
Like the 1:1 rule being vacated, the cutback of the CFPB doesn’t eliminate existing laws or regulations. And also like the 1:1 rule, don’t be surprised if other parties step into the vacuum created by the CFPB’s suddenly diminished stature.

PFAS and Consumer Class Actions: The New Wave of PFAS Litigation

With a new year has come a new wave of litigation involving PFAS (per- and poly-fluoroalkyl substances), also known as “forever chemicals.” While PFAS litigation up to this point has often involved either claims of personal injury or claims concerning damage to natural resources and municipal water systems, an increasing number of class actions implicating consumer protection laws is emerging. Recently, several large U.S. consumer goods companies have found themselves the targets of class action suits brought by plaintiffs asserting claims of consumer fraud involving PFAS.[1] The allegations in these suits share a common thread: the plaintiffs argue that the presence of PFAS in certain of the companies’ products was never disclosed to consumers. The plaintiffs are therefore seeking to enjoin these companies from making allegedly misleading advertisements or selling these products without proper disclosures in the future.
“Forever Chemicals” Everywhere
These cases result from an increased awareness of PFAS, their wide range of uses and presence in daily life, and their alleged association with negative health and environmental effects. However, it is the widespread use of PFAS that could lead to a significant increase in consumer protection claims, as PFAS can be found in everything from clothing to furniture, pizza boxes and food wrappers, our cellphones, pots and pans, mattress pads, household dust, and even in every drop of rain.
A New Trend
Over the past couple of years, these types of lawsuits have been growing and diversifying in terms of the targeted industries. Since 2022, class action consumer lawsuits involving PFAS have been brought in courts across the country (e.g., New York, New Jersey, Illinois and California) against numerous entities in the cosmetics industry; the food, beverage and packaging industries; apparel companies; and companies dealing in personal and feminine hygiene products.[2]
Furthermore, a recent increase in legislation aimed at eliminating or at least limiting PFAS from consumer products is likely to fuel continued litigation. Over the past few years, more than 20 states, including Maine, Minnesota, and California, have enacted or are in the process of enacting consumer protection legislation addressing PFAS. At the federal level, the Toxic Substances Control Act (“TSCA”) now imposes record-keeping and reporting requirements on companies that manufacture, import, and sell products containing PFAS in the United States.
Although there has been an increase in these types of cases, many of the class actions related to PFAS consumer fraud claims have been dismissed by different courts, often either on the basis of a failure to state a claim when the specific PFAS compound at issue was not identified, or because the complaint did not establish the plaintiff’s reasonable reliance on the alleged deceptive representations.
Despite these dismissals, however, plaintiffs are not being deterred. With each dismissal, the plaintiffs’ bar is adapting and becoming more sophisticated, and new lawsuits pleading more facts and more specific representations concerning PFAS are more likely to survive early dismissal. That has been seen recently in several California cases where the courts denied motions to dismiss on the basis that consumers rely on a manufacturer’s health and wellness statements when making purchasing decisions.[3] Therefore, while early losses for plaintiffs may have initially stemmed the tide of mass PFAS consumer class actions, numerous cases are now working their way through the courts and are far better prepared to withstand defensive challenges.
Conclusion
Considering that many states are in the process of enacting legislation responsive to PFAS, and that there is still no federal law banning the manufacture or sale of consumer products containing PFAS, the increase in related consumer class actions is all but guaranteed to continue. Therefore, those in the consumer goods industries, including their insurers and investment companies, will need to keep a close eye on this emerging trend and potentially put litigation risk mitigation plans in place in the event the trend continues to grow. These mitigation plans could include providing notice of PFAS in product labeling and any marketing strategies to ensure appropriate disclosure, finding suitable substitutes for PFAS where possible, and retaining firms like Blank Rome with significant experience in PFAS matters and defending class action suits. There is still much to be determined, but we will continue monitoring this emerging litigation trend closely.

[1] See Brown v. Cover Girl; Davenport v. L’Oreal; Azman Hussain v. Burger King; Bedson v. Biosteel; Esquibel v. Colgate-Palmolive Co.; Gemma Rivera v. Knix Wear Inc.; Anthony Ray Gonzalez v. Samsung Electronics America, Inc.; Dominique Cavalier and Kiley v. Apple Inc.
[2] See natlawreview.com/article/apples-pfas-consumer-fraud-lawsuit-latest-growing-trend
[3] See id.

Insurtech in 2025: Opportunity and Risk

The explosion in artificial intelligence (AI) capability and applications has increased the potential for industry disruptions. One industry experiencing recent material disruption is about as traditional as it gets: insurance. While some level of disruption in the insurance industry is nothing new, AI has been accelerating more significant changes to industry fundamentals. This is the first advisory in a series exploring the legal risks and strategies surrounding disruptive insurance technologies, particularly those leveraging AI, known as Insurtech.
What is Insurtech?
Insurtech is a broad term encompassing technology applied at every stage of the insurance lifecycle. Cutting-edge technology can be instrumental in advertising, lead generation, sales, underwriting, claims processing and fraud detection, among other functions. Generative AI can assist in client management and retention. Insurtech can augment traditional forms of insurance, such as car and health insurance, and facilitate less traditional forms of insurance, such as parametric insurance or microinsurance at scale.
Legal and Regulatory Risks of Insurtech
As Insurtech continues to evolve, designers, providers and deployers must be aware of the legal and regulatory risks inherent in the use of Insurtech at all stages. These risks are particularly heightened in the insurance world, where vendors and carriers process an enormous amount of personal information in the course of decision-making that impacts individuals’ rights, from advertising to product pricing to coverage decisions. 
The heavy regulation characteristic of the traditional industry is compounded in the Insurtech context, given overlapping regulatory interests in new technology applications. These additional layers of oversight – which in traditional applications may not be as much of a primary concern – include the Federal Trade Commission, state Attorneys General and, in some jurisdictions, state-level privacy regulators.
Building Compliance for Insurtech Solutions
Designing, providing and deploying Insurtech solutions requires a multifaceted, customized approach to position agents, vendors, carriers and indeed any entity in the insurance stack for compliance. Taking early action to build appropriate governance for your Insurtech product or application is critical to building a defensible regulatory position. For entities that have an eye on raising capital, engaging in mergers or acquisitions, or other collaborative marketplace activity, such governance will minimize friction that can impede success.
Additionally, consumers are increasingly attentive to data privacy and AI governance standards. Incorporating proper data privacy and AI governance regimes from day one is not only a forward-thinking business decision to mitigate risk and facilitate success; it is also a market imperative. 
Looking Ahead: Risks and Opportunities in 2025
Over the next few months, we will take a closer look into more discrete risks and opportunities that Insurtech providers and deployers need to keep in mind throughout 2025. Follow along as we explore this exciting area that in recent years has demonstrated enormous potential for continued growth.

Geolocation Takes the Day at Churchill Downs

Like the thoroughbred Rich Strike at the 2022 Kentucky Derby, one category of personal data recently broke from the rear and galloped its way to the forefront of awareness, astonishing the grandstands. You may hold its source in the palm of your hand. It is precise geolocation data1, collected from mobile devices.
The analogy presumes that the grandstands are packed with privacy nerds. For the rest of you, here’s a quick setup: Modern privacy laws2 define personal information very broadly3. Examples are given, including the physical location of an identifiable human being4 (“location” or “geolocation” data). Certain categories of personal info are deemed to be riskier to handle than others5. An increase in the level of risk attributed to precise geolocation data is the topic of this article.
Also presumed is a memory of Rich Strike’s epic victory. Picture a horse making moves like a running back, cutting a path through the field like he’s the only steed with a sense of urgency. Then he’s over the line and like: Whoa, what just happened?
But we’re getting ahead of ourselves. 
Upon entering the gates at post time, geolocation data seemed to merit the same long odds Rich Strike carried (80:1); nothing suggested what was about to transpire. After all, GDPR6 itself (the OG of privacy laws) deemed it to be nothing special.7
Let’s trace its path as it makes its astonishing run. Then we’ll circle back to GDPR and answer the obvious question: did it really (as it appears) fail to back the right horse? (Spoiler alert: the answer is no.) Finally, we’ll explore whether a silver bullet might exist to address the core concern underlying the discussion. (Spoiler alert: the answer is yes.)
A Word About Geolocation Data 
Normally, geolocation data collected from cell phones is used to serve targeted ads to consumers who have consented to the process. The ideal recipient delights in getting a coupon for the precise cup of joe (for example) that happens to be his favorite, just as he happens to pass a store that happens to offer it.8 Yay to that. 
But unfortunately, a sketchier use came to light at about the same time that GDPR was published (2016). It seemed like a niche concern at the time, more of a culture-war skirmish than anything broader. The story appeared in the Rewire News Group, a special interest publication with a narrowly focused readership9: 
Anti-Choice Groups Use Smartphone Surveillance to Target ‘Abortion-Minded Women’ During Clinic Visits.10
It garnered little attention.11 Following in GDPR’s footsteps, the 1.0 version of CCPA12 (2018) mentions “geolocation data” as one example of personal information, but declines to single it out as anything special. 
That changed in 2020 when CCPA 2.0 was adopted.13 Among the amendments, a newly created category of “sensitive personal information” debuted, including a “consumer’s precise geolocation.” However, the added protections afforded were limited.14
The Sprint to Prominence
The day that corresponds (in our analogy) to the sixth furlong at Churchill Downs, and the start of the homestretch, is May 2, 2022. 
That’s when the draft SCOTUS opinion in Dobbs v. Jackson15 was leaked to the press. The very next day, Vice Media published a story entitled Data Broker Is Selling Location Data of People Who Visit Abortion Clinics.16 The article warned of “an increase in vigilante activity or forms of surveillance and harassment against those seeking or providing abortions.”17 A cascade of similar reporting ensued.18
Following the lead of the fourth estate, the other three soon got involved.19 A handful of pro-choice states quickly passed laws restricting the use of geolocation data associated with family planning centers.20
Meanwhile, the Federal Trade Commission entered the fray, deeming certain uses of geolocation data to be unfair.21 In 2022, it floated a novel position: that using precise geolocation data associated with sensitive locations is an invasion of privacy22 prohibited by law.23 By 2024, it had firmed up a list of locations it deemed in the scope of the prohibition, including medical facilities, religious organizations, ethnic/religious group social services, etc. (The full list appears in the table below.) 
Effectively, the FTC consigned “Sensitive Location Data” to the highest rank of sensitivity: personal data so sensitive that even informed consent can’t sanction its processing. Other rule-makers would go even further, proposing to ban the sale of precise geolocation altogether (sensitive or not)24, which brings us to the present day – and to a present-day head-scratcher: 
Are the risks so dire that our hypothetical coffee consumer must be denied the targeted coupon that so delights him?
Circling back to GDPR provides a helpful perspective.
Did GDPR Really Back the Wrong Horse?
GDPR deems certain types of personal data to be sensitive25 including data concerning a person’s health, religion, political affiliation, etc. (The full list appears in the table below.) Location data isn’t included. 
Nevertheless, if and when location data reveals or concerns sensitive data, it transforms into sensitive data ipso facto. 
For example, data that locates a patient within a hospital is sensitive data, because it concerns their health. But data that locates an attending physician within the same hospital is not sensitive data, because it doesn’t.
That’s one difference between GDPR and the FTC rule: the latter deems all location data associated with a Sensitive Location to be sensitive, whereas GDPR deems location data sensitive only if it actually reveals the sensitive data of a consumer.
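For readers who want to see the distinction in concrete terms, here is a minimal, purely illustrative sketch in Python of the two tests described above. The category names and the reveals-sensitive-attribute flag are hypothetical simplifications introduced for illustration only; they are not legal definitions or a compliance tool.

```python
# Illustrative sketch only: contrasting the two approaches described above.
# Category names and inputs are hypothetical simplifications, not legal standards.

FTC_SENSITIVE_LOCATIONS = {
    "medical_facility", "religious_organization", "correctional_facility",
    "labor_union_office", "school_or_childcare", "lgbtq_services",
    "racial_or_ethnic_services", "shelter_or_social_services",
    "political_demonstration", "military_installation",
}

def ftc_sensitive(location_category: str) -> bool:
    # FTC approach: precise geolocation tied to any listed location is treated
    # as Sensitive Location Data, regardless of whose location it is.
    return location_category in FTC_SENSITIVE_LOCATIONS

def gdpr_sensitive(reveals_sensitive_attribute: bool) -> bool:
    # GDPR approach (as described above): location data becomes special-category
    # data only if it actually reveals or concerns a sensitive attribute of the
    # individual, such as a patient's health.
    return reveals_sensitive_attribute

# The hospital example from the text:
print(ftc_sensitive("medical_facility"), gdpr_sensitive(True))   # patient:   True True
print(ftc_sensitive("medical_facility"), gdpr_sensitive(False))  # physician: True False
```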
Here’s another difference:
Even when GDPR deems personal data to be sensitive, it doesn’t prohibit its use altogether. Rather, sensitive data may be used in accordance with a consumer’s explicit consent. 
If that just caused you to raise an eyebrow, you’re probably not alone. GDPR isn’t known for permissive standards. And indeed, there’s a catch. The permissiveness comes at a cost in the form of rigorous duties imposed on businesses wishing to use sensitive data. 
A threshold duty is to check local laws. GDPR hedges on its permissiveness by granting member-state lawmakers the right to raise the bar; to outlaw particular uses of sensitive data altogether (like the FTC did with Sensitive Location Data).26 
Furthermore, it falls to the business to adjudge whether the risks of using the sensitive data outweigh the benefits.27 A formal Data Protection Impact Assessment is required, which is no small feat. Any green light to the use of sensitive data is likely to be closely scrutinized, should it catch the attention of a Supervisory Authority. Businesses must avoid using the rope provided by GDPR to hang themselves with – that’s the takeaway. 
Finally, a heightened standard is likely to govern the validity of any consents purported to authorize the use of sensitive data,28 which brings us to the crux of the matter:
A Crisis of Confidence in Consents
Modern privacy laws set a high bar for what constitutes valid consent. In a nutshell, the person providing it must understand – really and truly – what they’re saying “yes” to.29 
If the high bar is met, targeted ads may properly be served to consenting consumers, assuming any applicable red lines regarding sensitive data are respected.30 No current privacy framework31 rejects this principle. Rather, what’s been called into question, in particular cases, is the proviso – i.e., whether purported consents are valid in the first place.32
Some rule makers are skeptical to the extreme. They would dispense with consent as a legal basis for using location data in targeted advertising altogether. So flawed is the system, in their view, that consumers – for their own protection – must be denied the agency to proffer consent. Sorry coffee lover, no just-in-time coupon for you! 
There are reasons to think that position would go too far.
Why Consent Matters in Principle
Here’s a reality check: the right to privacy is not absolute. Even under GDPR, it must be balanced against other fundamental rights, including the freedom to conduct a business.33 This may be why GDPR stops short of an outright ban on the use of sensitive data, consent notwithstanding. Taken too far, such a ban might infringe on the rights of individuals to determine how their personal data (which they own) may be used, and the rights of businesses to use personal data in accordance with the wishes of consenting adults.
Big Improvements in Managing Consents
A protocol is currently being rolled out by a nonprofit consortium of digital advertising businesses, the IAB Tech Lab.34 Known as the Global Privacy Platform (GPP), it establishes a method for digitally recording a consumer’s consent to the use of their data. The resulting “consent string” attaches to the personal data, accompanying it on its journey through the auction houses of cyberspace. Businesses that receive the data also receive the consent string, so there’s little excuse for exceeding consumer permissions.
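For those curious about the mechanics, here is a minimal sketch of the consent-string idea: permissions are recorded once, encoded into a compact string that travels with the data, and checked by each downstream recipient before use. The field names and encoding below are hypothetical placeholders for illustration; they are not the actual GPP sections or encoding defined by the IAB Tech Lab.

```python
import base64
import json
from dataclasses import dataclass, asdict

@dataclass
class ConsentRecord:
    # Hypothetical fields for illustration; the real GPP defines its own
    # sections and bit-level encoding.
    consumer_id: str
    targeted_ads: bool
    precise_geolocation: bool

def encode_consent(record: ConsentRecord) -> str:
    # Serialize the consumer's permissions into a compact string that can
    # accompany the personal data through the ad-tech supply chain.
    return base64.urlsafe_b64encode(json.dumps(asdict(record)).encode()).decode()

def decode_consent(consent_string: str) -> ConsentRecord:
    return ConsentRecord(**json.loads(base64.urlsafe_b64decode(consent_string)))

def may_use_precise_geolocation(consent_string: str) -> bool:
    # A downstream recipient checks the attached permissions before acting,
    # so there is little excuse for exceeding them.
    consent = decode_consent(consent_string)
    return consent.targeted_ads and consent.precise_geolocation

# Example: the coffee lover consents to geo-targeted coupons.
consent_string = encode_consent(
    ConsentRecord("user-123", targeted_ads=True, precise_geolocation=True)
)
bid_request = {"device_location": (40.7128, -74.0060), "consent": consent_string}
print(may_use_precise_geolocation(bid_request["consent"]))  # True
```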
Universal adoption of the GPP would establish the state-of-the-art in consent management for digital advertising businesses. It would be a significant milestone.
Give Consent a Chance
Thereafter, improvements in the granularity of consent, and the effectiveness of consent management processes, might soon blow our minds. Or so we are led to expect, at this point in history, the dawn of the AI era. Consent-management “copilot” bots nestled in our pockets like Tinkerbell – only a Luddite would doubt it. Or so it seems. 
This is the promised silver bullet: consents so robust and manageable that even the most privacy-conscious consumer might have the confidence to grant them – present company included. 
* * * *
When is Location Data Deemed Sensitive?

FTC: “Sensitive Location Data” is precise geolocation data associated with35:
GDPR: “Location Data” becomes Sensitive Data when it reveals or concerns an individual’s:

FTC: Medical facilities
GDPR: Health

FTC: Religious organizations
GDPR: Religious or philosophical beliefs

FTC: Correctional facilities
GDPR: Data relating to criminality is not Special Category data under Art. 9, but might be effectively bucketed into this column. See Art. 10.

FTC: Labor union offices
GDPR: Trade union membership

FTC: Locations held out to the public as predominantly providing education or childcare services to minors
GDPR: The personal data of children is not Special Category data under Art. 9, but might be effectively bucketed into this column. See Art. 8 and Recital 75.

FTC: Locations held out to the public as predominantly providing services to LGBTQ+ individuals, such as service organizations, bars and nightlife
GDPR: Sex life or orientation

FTC: Locations held out to the public as predominantly providing services based on racial or ethnic origin
GDPR: Racial or ethnic origin

FTC: Locations held out to the public as predominantly providing temporary shelter or social services to the homeless, survivors of domestic violence, refugees, or immigrants
GDPR: No direct corollary, but the ordinary risk assessment required for non-sensitive data may result in adding data about homelessness, etc. to this column. See also the previous row, which may apply to data of refugees and immigrants.

FTC: Locations of public gatherings of individuals during political or social demonstrations, marches, and protests
GDPR: Political opinions

FTC: Military installations, offices, or buildings
GDPR: No direct corollary, but the ordinary risk assessment required for non-sensitive data may result in adding data about military installations, etc. to this column.

FTC: Similar protections are accorded to the location of an individual’s private residence
GDPR: No direct corollary, though the ordinary risk assessment required for non-sensitive data may result in adding domicile data to this column.

1 Typically defined as latitude & longitude coordinates derived from a device such as a cellphone, which place the device at a physical location with an accuracy of