EU General Court Upholds EDPB’s Authority to Order Broader Investigations in Cross-Border Cases

In a landmark judgment delivered on 29 January 2025, the General Court of the European Union affirmed the European Data Protection Board’s (EDPB) authority to require national supervisory authorities to broaden their investigations in cross-border data protection cases.
The case arose from the Irish Data Protection Commission’s (DPC) challenge to three EDPB binding decisions concerning Meta’s data processing practices for Facebook, Instagram, and WhatsApp. The EDPB had instructed the DPC to conduct new investigations into Meta’s processing of special categories of personal data and issue new draft decisions.
The Court’s ruling emphasizes that the EDPB’s power to order broader investigations is subject to specific safeguards. Such orders can only be issued following a “relevant and reasoned objection” from another supervisory authority that demonstrates significant risks to data subjects’ rights. Additionally, these decisions require approval from a two-thirds majority of EDPB members.
Notably, the Court rejected the DPC’s argument that this authority undermines national supervisory authorities’ independence. Instead, it found that the EDPB’s role supports the consistent application of the GDPR across the EU while respecting national authorities’ operational autonomy in conducting investigations.
This decision reinforces the EDPB’s role as a central authority in ensuring comprehensive data protection investigations, particularly in cases involving major tech platforms. It clarifies that while the one-stop-shop mechanism aims for procedural efficiency, this cannot override the fundamental right to data protection.
The ruling sets a significant precedent for future cross-border enforcement actions, emphasizing that national supervisory authorities must be prepared to expand their investigations when legitimate concerns are raised by their counterparts in other EU member states.

EDPB Releases Pseudonymization Guidelines to Enhance GDPR Compliance

On Jan. 16, 2025, the European Data Protection Board (EDPB) published guidelines on the pseudonymization of personal data for public consultation. The Berlin Data Protection Commissioner (BlnBDI) played a leading role in drafting these guidelines (see the German-language BlnBDI press release). The consultation is ongoing, and comments can be submitted until Feb. 28, 2025, via the EDPB form.
Pseudonymization v. Anonymization
The proposed guidelines provide an overview of pseudonymization techniques and their benefits in business. Under the General Data Protection Regulation (GDPR), pseudonymization means processing personal data so that it can’t be attributed to a specific person without the use of additional information. Unlike anonymization, where data can’t be traced back to an individual even with additional information, pseudonymized data is still considered personal and subject to GDPR.
Advantages of Pseudonymization
The guidelines emphasize that the GDPR does not mandate pseudonymization. Nevertheless, using pseudonymization techniques can enhance GDPR compliance and lower data breach risks. It also supports using legitimate interests as a legal basis for data processing and ensures compatibility with original data collection purposes. Accordingly, companies can use pseudonymization to develop privacy-enhancing applications for data use and analysis that appropriately considers the rights of data subjects. This is particularly relevant in data-heavy sectors like finance, human resources, and health care.
Pseudonymization Procedures
According to the guidelines, effective pseudonymization involves three steps: 

1. Transform personal data by removing or replacing identifiers using methods like cryptographic algorithms (e.g., message authentication codes or encryption algorithms) or lookup tables, where pseudonyms are matched with identifiers.
2. Store separately and protect additional information, such as cryptographic keys or lookup tables, needed for subsequent re-identification (“pseudonymization secrets”). Information beyond the controller’s immediate control, which can reasonably be expected to be available to the controller, should be considered when assessing the effectiveness of pseudonymization.
3. Implement technical and organizational measures (TOMs) to safeguard against unauthorized re-identification. TOMs include access restrictions, decentralized storage of pseudonymization secrets, and random generation of pseudonyms.

These measures enhance data security and reduce data breach risks. The guidelines provide practical scenarios to illustrate these procedures.
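As an illustration only (not part of the EDPB guidelines), the first two steps can be sketched in Python using a keyed hash (HMAC) as the pseudonymization method. The function names and the sample record are hypothetical; real deployments would store the secret in a separate, access-controlled system such as a key management service.

```python
import hmac
import hashlib
import secrets

def generate_secret() -> bytes:
    """Randomly generate a pseudonymization secret (step 2: store this
    separately from the pseudonymized dataset, with restricted access)."""
    return secrets.token_bytes(32)

def pseudonymize(identifier: str, secret: bytes) -> str:
    """Step 1: replace a direct identifier with an HMAC-SHA256 pseudonym.
    Without the secret, the pseudonym cannot feasibly be linked back."""
    return hmac.new(secret, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record: only the pseudonymized form leaves the controlled environment.
secret = generate_secret()
record = {"name": "Jane Doe", "diagnosis": "J45"}
pseudonymized_record = {
    "subject": pseudonymize(record["name"], secret),
    "diagnosis": record["diagnosis"],
}
```

A lookup table mapping pseudonyms to identifiers would serve the same purpose where direct re-identification is needed; either way, the key or table is the “pseudonymization secret” that step 3’s technical and organizational measures must protect.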
Outlook
Although not legally binding, the EDPB guidelines often influence courts and regulators. They help interpret the GDPR and guide companies in developing compliant processes for data protection. Companies should view these guidelines as important advice for designing their privacy practices, which can minimize legal risks and support arguments during official audits or legal disputes.
The guidelines assist businesses in balancing data protection with operational needs. Pseudonymization can offer competitive advantages by safeguarding customer data and boosting customer trust through privacy-focused practices.

Loper Bright Strikes Again: Eleventh Circuit Hangs Up on FCC’s One-to-One Consent Rule, Calling the Validity of Other TCPA Rules Into Question

The Eleventh Circuit Court of Appeals recently vacated the Federal Communications Commission’s 2023 “one-to-one consent rule” under the Telephone Consumer Protection Act (TCPA). In Insurance Marketing Coalition, Ltd. v. Federal Communications Commission,1 the Court struck down the order that (1) would have limited businesses’ ability to obtain prior express consent from consumers to a single entity at a time, and (2) would have restricted the scope of such calls to subjects logically and topically related to “interaction that prompted the consent.”2 In particular, the Court held that the FCC exceeded its authority under the plain language of the statute.3 In the wake of the IMC decision, other TCPA regulations may well face the chopping block.
The FCC’s order sought to curtail the practice of “lead generation,” which offers consumers a “one-stop means of comparing [for example] options for health insurance, auto loans, home repairs, and other services.”4 In its ruling, the Court looked to the authority Congress had extended to the FCC through the TCPA. In general, the TCPA prohibits calls made “using any automatic telephone dialing system or an artificial or prerecorded voice” without “the prior express consent of the called party.”5 The statute does not define “prior express consent.”6 Congress gave the FCC authority to “prescribe regulations to implement” the TCPA, and to exempt certain calls from the TCPA’s prohibitions.7 In its 2023 order, the FCC sought to restrict telemarketing calls by imposing the one-to-one consent restriction and the logically-and-topically related restriction.8
Applying the Supreme Court’s decision in Loper Bright Enters. v. Raimondo,9 the Eleventh Circuit ruled that in promulgating the 2023 order, the FCC exceeded its statutory authority. The Court found that the plain meaning of the term “prior express consent” nowhere suggests that a consumer can only give consent to one entity at a time and only for calls that are “logically and topically related” to the consent. Rather, the Court ruled, in the absence of a statutory definition, the common law provides that the elements of “prior express consent” are “permission that is clearly and unmistakably granted by actions or words, oral or written,” given before the challenged call occurs.10 Nothing under the common law restricts businesses from obtaining consent from consumers to receive calls from a variety of entities regarding a variety of subjects in which they are interested.
In the wake of the IMC decision, other TCPA regulations may be ripe for challenge, including the FCC’s 2012 determination that calls introducing telemarketing or solicitations require prior express written consent. For example, in IMC, the Court held that the FCC cannot create requirements for obtaining prior express consent beyond what the plain language of that term will support. And the Court delineated the common law elements of prior express consent, which the Court found can be “granted by actions or words, oral or written.”11 Under this reasoning, the Court held that “the TCPA requires only ‘prior express consent’—not ‘prior express consent’ plus.”12 This reasoning may well support a challenge to the prior express written consent rules. After all, nothing in the plain meaning of the term “prior express consent” requires a writing versus oral consent, and the common law does not appear to support such a distinction. Rather, the requirement of written consent clearly adds to the statutory requirement and for that reason, appears to exceed the FCC’s authority.
Notwithstanding the fact that the FCC imposed the prior express written consent rule more than 10 years ago, another recent decision from the Supreme Court suggests that new entrants to the lead-generation industry have standing to file a challenge. In Corner Post, Inc. v. Board of Governors of Federal Reserve System,13 the Supreme Court ruled that new market entrants impacted by federal rules have standing to challenge those rules within the statutory period that runs from the date of market entry. The firm will continue to follow challenges to the FCC’s rulemaking authority, including any challenges to the prior express written consent rule.
Footnotes

1 No. 24-10277, — F. 4th —, 2025 WL 289152 (11th Cir. Jan. 24, 2025) (IMC decision).
2 See Second Report and Order, In the Matter of Targeting and Eliminating Unlawful Text Messages, Rules and Regs. Implementing the Tel. Consumer Prot. Act of 1991, Advanced Methods to Target and Eliminate Unlawful Robocalls, 38 FCC Rcd. 12247 (2023).
3 FCC orders have perennially exceeded the agency’s authority under the TCPA. For instance, beginning in 2003, the FCC took the position that a predictive dialer––a common tool used by business customer service centers––was an “automatic telephone dialing system,” even if the technology in question did not have the characteristics described in the statutory definition, namely “the capacity (A) to store or produce telephone numbers to be called, using a random or sequential number generator; and (B) to dial such numbers.” 47 U.S.C. § 227(a)(1). The United States Supreme Court threw out the FCC’s flawed rulings in Facebook, Inc. v. Duguid, 592 U.S. 395 (2021).
4 IMC, 2025 WL 289152, at *2.
5 47 U.S.C. § 227(b)(1)(A), (B).
6 See id.
7 47 U.S.C. § 227(b)(2)(B) and (C).
8 The Court observed that since 2012, the FCC has distinguished non-telemarketing calls for which the FCC has required prior express consent from calls that introduce telemarketing or solicitations for which the FCC has required prior express written consent. 47 C.F.R. § 64.1200(a)(2), (3); see also In re Rules and Regs. Implementing the Tel. Consumer Prot. Act of 1991, 27 FCC Rcd. 1830, 1838 (2012). The regulations define “prior express written consent” as an agreement, in writing, bearing the signature of the person called that clearly authorizes the seller to deliver or cause to be delivered to the person called advertisements or telemarketing messages using an automatic telephone dialing system or an artificial or prerecorded voice, and the telephone number to which the signatory authorizes such advertisements or telemarketing messages to be delivered. 47 C.F.R. § 64.1200(f)(9). The written agreement must “include a clear and conspicuous disclosure informing” the signing party that he consents to telemarketing or advertising robocalls and robotexts. 47 C.F.R. § 64.1200(f)(9)(i)(A).
9 603 U.S. 369, 391–92 n.4, 413 (2024).
10 IMC, 2025 WL 289152, at *6.
11 Id.
12 Id.
13 603 U.S. 799 (2024).

Federal Court Applies Antitrust Standard of Per Se Illegality to “Algorithmic Pricing” Case

A federal district court in Seattle recently issued an important antitrust decision on “algorithmic pricing.” Algorithmic pricing refers to the practice in which companies use software to help set prices for their products or services. Sometimes this software will incorporate pricing information shared by companies that may compete in some way. In recent years, both private plaintiffs and the government have filed lawsuits against multifamily property owners, hotel operators, and others, claiming their use of such software to set prices for rentals and rooms is an illegal conspiracy under the antitrust laws. The plaintiffs argue that, even without directly communicating with each other, these companies are essentially engaging in price-fixing by sharing pricing information with the algorithm and knowing that others are doing the same, which allegedly has led to higher prices for consumers. So far, these cases have had mixed outcomes, with at least two being dismissed by courts.
Duffy v. Yardi Systems, Inc.
Previously, courts handling these cases have applied, at the pleadings stage, the “rule-of-reason” standard for reviewing the competitive effects of algorithmic pricing. Under the rule-of-reason standard, a court will examine the algorithm’s actual effects before determining whether the use of the algorithm unreasonably restrains competition. In December, however, the U.S. District Court for the Western District of Washington in Duffy v. Yardi Systems, Inc., No. 2:23-cv-01391-RSL (W.D. Wash.), held that antitrust claims premised on algorithmic pricing should be reviewed under the standard of per se illegality, meaning the practice is assumed to harm competition as a matter of law. Under the per se standard, an antitrust plaintiff need only prove an unlawful agreement and the court will presume that the arrangement harmed competition. This ruling is significant because it departs from prior cases and could ease the burden on plaintiffs in future disputes.
In Yardi, the plaintiffs sued several large, multifamily property owners and their management company, Yardi Systems, Inc., claiming these defendants conspired to share sensitive pricing information and adopt the higher rental prices suggested by Yardi’s software. The court refused to dismiss the case, finding the plaintiffs had plausibly shown an agreement based on the defendants’ alleged “acceptance” of Yardi’s “invitation” to trade sensitive information for the ability to charge increased rents. See Yardi, No. 2:23-cv-01391-RSL, 2024 WL 4980771, at *4 (W.D. Wash. Dec. 4, 2024). The court also found the defendants’ parallel conduct in contracting with Yardi, together with certain “plus factors,” were enough to allege a conspiracy. The key “plus factor” was defendants’ alleged exchange of nonpublic information. The court noted the defendants’ behavior — sharing sensitive data with Yardi — was unusual and suggested they were acting together for mutual benefit.
The court decided the stricter per se rule should apply to algorithmic pricing cases, rather than the rule-of-reason. The court emphasized that “[w]hen a conspiracy consists of a horizontal price-fixing agreement, no further testing or study is needed.” Id. at *8. This decision diverged from an earlier case against a different rental-software company, where the court thought more analysis was needed because the use of algorithms is a “novel” business practice and thus not one that could be condemned as per se illegal without more judicial experience about the practice’s competitive effect. The Yardi case also stands apart from others that have been dismissed, like a prior case involving hotel operators, where there was no claim that the companies pooled their confidential information in the dataset the algorithm used to suggest prices. The court in that case decided that simply using pricing software, without sharing confidential data, did not necessarily mean there was illegal collusion. Future cases may thus depend in part on whether the software uses competitors’ confidential data to set or suggest prices.
It is unclear if other courts will adopt the same strict approach as the Yardi case when dealing with claims involving algorithmic pricing. It is clear, however, that more cases are on the horizon, likely spanning a variety of industries using pricing software.
Regulatory Efforts
Beyond private lawsuits, government agencies and lawmakers also are paying close attention to algorithmic pricing. Last year, for example, the U.S. Department of Justice (DOJ) and a number of state attorneys general sued a different rental-software company. The DOJ also has weighed in on several ongoing cases. Meanwhile, Congress, along with various states and cities, has introduced legislation to regulate algorithmic pricing, with San Francisco and Philadelphia banning the use of algorithms in setting rents. And just last month, the DOJ and Federal Trade Commission raised concerns about algorithmic pricing in a different context — exchanges of information about employee compensation — in the agencies’ new Antitrust Guidelines for Business Activities Affecting Workers. The new guidelines note that “[i]nformation exchanges facilitated by or through a third party (including through an algorithm or other software) that are used to generate wage or other benefit recommendations can be unlawful even if the exchange does not require businesses to strictly adhere to those recommendations.” Expect more legal and legislative action on this front in 2025 and beyond.

New York Governor Signs Privacy and Social Media Bills

On December 21, 2024, New York Governor Kathy Hochul signed a flurry of privacy and social media bills, including:

Senate Bill 895B requires social media platforms that operate in New York to clearly post terms of service (“ToS”), including contact information for users to ask questions about the ToS, the process for flagging content that users believe violates the ToS, and a list of potential actions the social media platform may take against a user or content. The New York Attorney General has authority to enforce the act and may subject violators to penalties of up to $15,000 per day. The act takes effect 180 days after becoming law.
Senate Bill 5703B prohibits the use of social media platforms for debt collection. The act, which took effect immediately upon becoming law, defines a “social media platform” as a “public or semi-public internet-based service or application that has users in New York state” that meets the following criteria:

a substantial function of the service or application is to connect users in order to allow users to interact socially with each other within the service or application. A service or application that provides e-mail or direct messaging services shall not be considered to meet this criterion on the basis of that function alone; and
the service or application allows individuals to: (i) construct a public or semi-public profile for purposes of signing up and using the service or application; (ii) create a list of other users with whom they share a connection within the system; and (iii) create or post content viewable or audible by other users, including, but not limited to, livestreams, on message boards, in chat rooms, or through a landing page or main feed that presents the user with content generated by other users.

Senate Bill 2376B amends relevant laws to add medical and health insurance information to the definitions of identity theft. The act defines “medical information” to mean any information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional. The act defines “health insurance information” to mean an individual’s health insurance policy number or subscriber identification number, any unique identifier used by a health insurer to identify the individual or any information in an individual’s application and claims history, including, but not limited to, appeals history. The act takes effect 90 days after becoming law.
Senate Bill 1759B, which takes effect 60 days after becoming law, requires online dating services to disclose certain information of banned members of the online dating services to New York members of the services who previously received and responded to an on-site message from the banned members. The disclosure must include:

the user name, identification number, or other profile identifier of the banned member;
the fact that the banned member was banned because, in the judgment of the online dating service, the banned member may have been using a false identity or may pose a significant risk of attempting to obtain money from other members through fraudulent means;
that a member should never send money or personal financial information to another member; and
a hyperlink to online information that clearly and conspicuously addresses the subject of how to avoid being defrauded by another member of an online dating service.

Firings at the US Privacy and Civil Liberties Oversight Board and Potential Impact on Transatlantic Data Transfers

President Trump recently fired the three Democrats on the Privacy and Civil Liberties Oversight Board (PCLOB). Since these firings bring the Board to a sub-quorum level, they have the potential to significantly disrupt transatlantic transfers of employee and other personal data from the EU to the US under the EU-US Data Privacy Framework (DPF).
The PCLOB is an independent board tasked with oversight of the US intelligence community. It is a bipartisan board consisting of five members, three of whom represent the president’s political party and two represent the opposing party. The PCLOB’s oversight role was a significant element in the Trans-Atlantic Data Privacy Framework (TADPF) negotiations, helping the US demonstrate its ability to provide an essentially equivalent level of data protection to data transferred from the EU. Without this key element, it is highly likely there will be challenges in the EU to the legality of the TADPF. If the European Court of Justice invalidates the TADPF or the EU Commission annuls it, organizations that certify to the EU-US Data Privacy Framework will be without a mechanism to facilitate transatlantic transfers of personal data to the US. This could potentially impact transfers from the UK and Switzerland as well.
Organizations that rely on their DPF certification for transatlantic data transfers should consider developing a contingency plan to prevent potential disruption to the transfer of essential personal data. Steps to prepare for this possibility include reviewing existing agreements to identify what essential personal data is subject to ongoing transfers and the purpose(s), determining whether EU Standard Contractual Clauses would be an appropriate alternative and, if so, conducting a transfer impact assessment to ensure the transferred data will be subject to reasonable and appropriate safeguards.

FCC Delays Implementation of New Text Message Consent Rules

On January 24, the FCC issued an order postponing the effective date of its one-to-one consent rule. The rule, which would have required companies to obtain individual consent for each marketing partner before sharing customer data, was originally slated to go into effect on January 27, 2025. However, the FCC’s order has put the rule on hold until at least January 26, 2026, unless a court ruling dictates an earlier implementation date.
The delay stems from a legal challenge filed in the Eleventh Circuit Court of Appeals (previously discussed here). The lawsuit argues that the FCC exceeded its statutory authority by requiring individual consent, and that this interpretation conflicts with established understandings of “prior express consent.” The challenge also alleges that the FCC did not adequately consider the economic impact of the rule. Additionally, plaintiffs argued that the rule would “significantly increase the cost of compliance” and “disrupt the insurance marketplace.”
The FCC’s one-to-one consent rule is intended to protect consumers from unwanted telemarketing calls. However, industry critics assert that the rule is unnecessary and would place undue burden on businesses. The Eleventh Circuit Court of Appeals is expected to rule on this challenge in the coming months.
Putting It Into Practice: This delay, as well as the upcoming Eleventh Circuit decision, could significantly impact how financial institutions that rely on telemarketing and data sharing for marketing purposes obtain and manage customer consent. We will continue to monitor this and other one-to-one consent rule litigation for further developments.

5 Trends to Watch: 2025 Futures & Derivatives

Regulatory Evolution in Digital Assets. President Donald Trump’s signing of the executive order “Strengthening American Leadership in Digital Financial Technology” revoked the Biden administration’s directives on digital assets and established a federal policy aimed at promoting the digital asset industry. This will likely lead to increased cryptocurrency trading and the creation of new digital assets. The establishment of more exchanges dedicated to these assets could enhance market accessibility and liquidity. Less restrictive regulations may also attract firms that previously operated overseas to establish a presence onshore.
Integration of Crypto with Traditional Finance. The integration of cryptocurrencies and digital assets with traditional financial instruments is expected to gain momentum. This may be characterized by the introduction of additional crypto-based ETFs and other crypto-based derivative products.
Adoption of Decentralized Finance Protocols in Derivatives Trading. The expansion of digital asset exchanges may drive the adoption of decentralized finance (DeFi) protocols in the trading of futures and derivatives. Traditional exchanges and financial institutions are likely to integrate DeFi solutions to provide innovative derivative products. This adoption may expand access to derivatives markets, allowing a broader range of participants to engage in trading activities while maintaining the security and efficiency offered by blockchain technology.
Tax and Legal Framework Reforms. As mentioned above, President Trump’s executive order suggests a less restrictive government approach to regulating digital assets. More favorable tax treatment of cryptocurrency trades could encourage greater participation from both individual and institutional investors. Additionally, potential reforms in legal frameworks may address existing challenges related to crypto mining, possibly overriding local restrictions to promote growth in this sector.
Harmonization of Global Regulatory Standards. As digital asset firms migrate onshore, there may be a push towards harmonizing global regulatory standards for digitally based futures and derivatives. This could emerge from the need to create a cohesive legal framework that accommodates cross-border trading of digital asset derivatives.

Data Privacy Insights Part 2: The Most Common Types of Data Breaches Businesses Face

As part of Data Privacy Awareness Week, Ward and Smith is spotlighting the most common types of data breaches that businesses encounter.
In Part 1, we explored the industries most vulnerable to cyberattacks, highlighting the specific sectors frequently targeted by cybercriminals. In Part 2, we dive into the most common types of data breaches businesses face and offer actionable strategies to safeguard your organization. By understanding these threats, businesses can take the first step toward mitigating risks and protecting themselves from the costly and damaging consequences of cybersecurity incidents.
Human Error
Human error is at the core of many cybersecurity incidents. According to Infosec, 74% of breaches involve a human element, making education and preventative measures critical.
Phishing Attacks
One of the most common manifestations of human error is phishing. Cybercriminals exploit trust and naivety through deceptive emails that mimic legitimate communications. These emails often trick employees into revealing sensitive information like login credentials or financial data. Businesses can reduce this risk by prioritizing comprehensive employee training to recognize and report phishing attempts.
Stolen Credentials
Closely linked to phishing is the issue of stolen credentials. Weak or reused passwords create openings for hackers to exploit. When an employee’s credentials are compromised, unauthorized access to company systems becomes a reality. Implementing strong password policies and multi-factor authentication (MFA) can significantly reduce this threat.
Ransomware
Ransomware represents an escalation of credential theft and phishing. These attacks encrypt vital business data and demand payment for its release, often causing operational paralysis. They frequently begin with malicious links or attachments. To combat this, businesses should invest in regular data backups and advanced endpoint protection tools.
Insider Threats
While external threats dominate headlines, insider threats—whether intentional or accidental—remain a critical concern. Employees can inadvertently leak data or intentionally sabotage systems. Mitigating this risk requires strong access controls, continuous monitoring, and fostering a culture of accountability.
Misconfigured Systems
Beyond human actions, misconfigured systems represent a technical vulnerability often stemming from human oversight. Improper security settings or cloud storage configurations can expose sensitive data to unauthorized users. Regular audits and vulnerability assessments are essential to identify and fix these issues.
Social Engineering
Building further on human vulnerabilities, social engineering attacks involve manipulation tactics such as impersonation of IT staff or executives. These tactics are designed to extract confidential information or gain unauthorized access to secure systems. Consistent training helps employees detect and resist these threats.
Physical Security Breaches
Cybersecurity measures are incomplete without addressing physical security. The theft or loss of devices like laptops, smartphones, or external drives can lead to unauthorized data access. Encrypting devices and enabling remote wipe capabilities can minimize the impact of such incidents.
Data Loss from Third-Party Vendors
Even with strong internal controls, businesses often depend on third-party vendors, which can introduce additional risks. Ensuring that vendors adhere to stringent data protection standards and conducting thorough due diligence are key steps to minimizing these vulnerabilities.
How Businesses Can Protect Themselves
To combat these threats, businesses should adopt a proactive approach to data security:

Employee Training: Regular training sessions ensure employees can identify and respond to potential threats effectively.
Robust Policies: Develop and enforce data protection policies tailored to your organization’s needs.
Incident Response Plans: Have a comprehensive plan in place to respond to breaches swiftly and minimize damage.
Legal Guidance: Work with legal experts to ensure compliance with data privacy regulations and to create enforceable contracts with third-party vendors.

Data breaches can have devastating consequences, but with the right measures, your organization can stay ahead of these threats. 

Proposed Modernization of the HIPAA Security Rules

The HIPAA Security Rule was originally promulgated over 20 years ago.
While it historically provided an important regulatory floor for securing electronic protected health information, the Security Rule’s lack of prescriptiveness, combined with advances in technology and the evolution of the cybersecurity landscape, increasingly indicates that the HIPAA Security Rule neither reflects cybersecurity best practices nor effectively mitigates the proliferation of cyber risks in today’s interconnected digital world. On December 27, 2024, the HHS Office of Civil Rights (“OCR”) announced a Notice of Proposed Rulemaking, including significant changes to strengthen the HIPAA Security Rule (the “Proposed Rule”). In its announcement, OCR stated that the Proposed Rule seeks to “strengthen cybersecurity by updating the Security Rule’s standards to better address ever-increasing cybersecurity threats to the health care sector.” One key aim of the Proposed Rule is to provide a much clearer roadmap to achieve Security Rule compliance.
The Proposed Rule contains significant textual modifications to the current HIPAA Security Rule. While the actual redline changes may appear daunting, the proposed new requirements are aimed at aligning with current cybersecurity best practices as reflected across risk management frameworks, including NIST’s Cybersecurity Framework. For organizations that have already adopted these best practices, many of the new Proposed Rule requirements will be familiar and, in many cases, will already have been implemented. Indeed, for such organizations, the biggest challenge will be complying with the new administrative requirements, which will involve policy updates, updates to business associate agreements, increased documentation obligations (including mapping requirements), and additional vendor management. For organizations that are still working toward meaningful compliance with the existing HIPAA Security Rule, or that must extend the Rule’s application to new technologies and systems handling ePHI, the Proposed Rule will likely require a significant investment of human and financial resources to meet the new requirements.
Proposed Key Changes to the HIPAA Security Rule
The following is a summary of the proposed key changes to the HIPAA Security Rule:

Removal of the distinction between “Addressable” and “Required” implementation specifications. This change is meant to clarify that implementation of all HIPAA Security Rule specifications is mandatory, not optional.
Development of a technology asset inventory and network map. You cannot protect data unless you know where it resides, who has access to it, and how it flows within and through a network and information systems (including third party systems and applications used by the Covered Entity or Business Associate).
Enhancement of risk analysis requirements to provide more specificity regarding how to conduct a thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of ePHI. Specifically, the risk analysis must consider and document the risks to systems identified in the technology asset inventory.
Mandated incident and disaster response plans. Organizations will be required to have documented contingency plans in place, including a process to restore critical data within 72 hours of a loss. This reflects a broader trend across the data protection landscape toward operational “resiliency,” recognizing that cyberattacks are routinely successful.
Updated access control requirements to better regulate which workforce members have access to certain data and address immediate termination of access when workforce members leave an organization.
Annual written verification that a Covered Entity’s Business Associates have implemented the HIPAA Security Rule.
Implementation of annual HIPAA Security Rule compliance audits.
Adoption of certain Security Controls:

Encryption of ePHI at rest and in transit;
Multi-factor authentication, i.e., verifying a user’s identity with at least two of three factor categories (something the user knows, has, or is), such as a password plus a smart identification card;
Patch management;
Penetration testing every 12 months;
Vulnerability scans every 6 months;
Network segmentation;
Anti-malware protection; and
Back-up and recovery of ePHI.
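To make the multi-factor authentication item above concrete, the sketch below is a hypothetical Python illustration (the names, such as `satisfies_mfa`, are invented for this example; the Proposed Rule itself prescribes no code) of the core idea: a login must present credentials from at least two of the three classic factor categories.

```python
# Hypothetical illustration of the multi-factor authentication concept:
# a user must authenticate with at least two of three factor categories --
# something the user knows (e.g., a password), has (e.g., a smart
# identification card), or is (e.g., a fingerprint).
FACTOR_CATEGORIES = {"knowledge", "possession", "inherence"}


def satisfies_mfa(presented_factors):
    """Return True if the presented credentials span at least two
    distinct factor categories."""
    return len(set(presented_factors) & FACTOR_CATEGORIES) >= 2


# A password (knowledge) plus a smart card (possession) qualifies.
print(satisfies_mfa({"knowledge", "possession"}))  # True

# Two different passwords are still a single category, so this fails.
print(satisfies_mfa({"knowledge"}))  # False
```

The point of the two-category requirement is that compromising one category (e.g., stealing a password) is not enough on its own to impersonate the user.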

Next Steps
The Proposed Rule was published in the Federal Register on January 6, 2025, and the 60-day comment period runs through March 7, 2025. We encourage regulated organizations to assess the impact of the Proposed Rule on their own systems and to consider submitting comments, as the Proposed Rule will likely have substantial implications for the people, processes, and technologies of organizations required to comply.

New TCPA Consent Requirements Out the Window: What Businesses Need to Know

The landscape of prior express written consent under the Telephone Consumer Protection Act (TCPA) has undergone a significant shift over the past 13 months. In a December 2023 order, the Federal Communications Commission (FCC) introduced two key consent requirements under the TCPA, with the changes set to take effect on January 27, 2025. First, the rule limited consent to a single identified seller, prohibiting the common practice of asking a consumer to provide one form of consent covering communications from multiple sellers. Second, the rule required that calls be “logically and topically” associated with the original consent interaction. However, just one business day before the new requirements were set to take effect, the FCC postponed the effective date of the one-to-one consent requirement, and a unanimous three-judge panel of the Eleventh Circuit ruled that the FCC had exceeded its statutory authority under the TCPA.
A Sudden Change in Course
On the afternoon of January 24, 2025, the FCC issued an order delaying the implementation of these new requirements to January 26, 2026, or until further notice following a ruling from the United States Court of Appeals for the Eleventh Circuit. The latter date referenced the fact that the Eleventh Circuit was in the process of reviewing a legal challenge to the new requirements at the time the postponement order was issued.
That decision from the Eleventh Circuit, though, arrived much sooner than expected. Just after the FCC’s order, the Eleventh Circuit issued its ruling in Insurance Marketing Coalition v. FCC, No. 24-10277, striking down both of the FCC’s proposed requirements. The court found that the new rules were inconsistent with the statutory definition of “prior express consent” under the TCPA. More specifically, the court held “the FCC exceeded its statutory authority under the TCPA because the 2023 Order’s ‘prior express consent’ restrictions impermissibly conflict with the ordinary statutory meaning of ‘prior express consent.’”
The critical takeaway from Insurance Marketing Coalition is that the FCC’s one-to-one consent and “logically and topically related” requirements are irreconcilable with the TCPA’s “prior express consent” language. Under this ruling, businesses may continue to obtain consent for multiple sellers to call or text consumers through a single consent form. The court clarified that “all consumers must do to give ‘prior express consent’ to receive a robocall is clearly and unmistakably state, before receiving a robocall, that they are willing to receive the robocall.” According to the ruling, the FCC’s rulemaking exceeded the statutory text and created duties that Congress did not establish.
The FCC could seek rehearing en banc before the full Eleventh Circuit or appeal to the Supreme Court, but the agency’s decision to delay the effective date of the new requirements suggests it may abandon this regulatory effort. The ruling reinforces a broader judicial trend of curbing expansive regulatory interpretations following the Supreme Court’s 2024 decision overturning Chevron deference.
What This Means for Businesses
With the Eleventh Circuit’s decision, the TCPA’s consent requirements revert to their previous state. Prior express written consent consists of an agreement in writing, signed by the recipient, that explicitly authorizes a seller to deliver, or cause to be delivered, advertisements or telemarketing messages via call or text message using an automatic telephone dialing system or artificial or prerecorded voice. The agreement must specify the authorized telephone number and cannot be a condition of purchasing goods or services.
This ruling is particularly impactful for businesses engaged in lead generation and comparison-shopping services. Companies may obtain consent that applies to multiple parties rather than being restricted to one-to-one consent. As a result, consent agreements may once again include language covering the seller “and its affiliates” or “and its marketing partners,” with a hyperlink to a list of the partners covered under the agreement.
A Costly Compliance Dilemma
Many businesses have spent the past year modifying their compliance processes, disclosures, and technology to prepare for the now-defunct one-to-one consent and logical-association requirements. These companies must now decide whether to revert to their previous consent framework or proceed with the newly developed compliance measures. The decision will depend on various factors, including the potential impact of the scrapped regulations on lead generation and conversion rates. In the comparison-shopping and lead generation sectors, businesses may be quick to abandon the stricter consent requirements. However, companies that have already implemented changes to meet the one-to-one consent rule may be able to differentiate the leads they sell: because the disclosure itself identifies the ultimate seller purchasing the lead, the caller has a documented record of consent in the event of future litigation.
What’s Next for TCPA Compliance?
An unresolved issue after the Eleventh Circuit’s ruling is whether additional restrictions on marketing calls — such as the requirement for prior express written consent rather than just prior express consent — could face similar legal challenges. Prior express consent can be established when a consumer voluntarily provides their phone number in a transaction-related interaction, whereas prior express written consent requires a separate signed agreement. If future litigation targets these distinctions, courts may further reshape the TCPA’s regulatory landscape.
The TCPA remains one of the most litigated consumer protection statutes, with statutory damages ranging from $500 to $1,500 per violation. This high-stakes enforcement environment has made compliance a major concern for businesses seeking to engage with consumers through telemarketing and automated calls. The Eleventh Circuit’s ruling provides a temporary reprieve for businesses, but ongoing legal battles could continue to influence the regulatory landscape.
For now, businesses must carefully consider their approach to consent management, balancing compliance risks with operational efficiency. Whether this ruling marks the end of the FCC’s push for stricter TCPA consent requirements remains to be seen.

New York’s Health Information Privacy Act: A Turning Point for Digital Health or a Roadblock to Innovation?

The proposed New York Health Information Privacy Act (NYHIPA), currently awaiting Governor Kathy Hochul’s signature, represents a major step in the state’s approach to protecting personal health data in the digital age. At its core, the bill aims to establish stronger privacy protections and restrict the use and sale of health-related data without explicit user consent. Supporters see it as a necessary evolution of data privacy laws, addressing gaps in federal regulations like HIPAA and responding to growing consumer concerns.
However, while the bill’s intent is clear, its practical implications are far more complex. If enacted, NYHIPA could create significant operational and financial burdens for digital health companies, insurers, and other businesses handling health information. It also raises pressing questions about the future of innovation in health technology, data-driven research, and even the fundamental business models that underpin much of today’s digital healthcare ecosystem. As New York weighs this decision, stakeholders must consider not only the benefits of stronger privacy protections but also the unintended consequences that could hinder the growth of the state’s thriving health tech sector.
In recent years, states across the country have introduced privacy laws that aim to strengthen consumer protections in response to widespread data breaches and growing concerns about corporate data practices. California’s Consumer Privacy Act (CCPA) and Privacy Rights Act (CPRA), Illinois’s Biometric Information Privacy Act (BIPA), and similar laws have set the stage for a complex web of state-led privacy regulations. At the federal level, the Federal Trade Commission (FTC) has intensified its scrutiny of health data practices, issuing warnings and imposing fines on companies that fail to protect consumer privacy.
New York’s legislation stands out because it casts a wide net in defining what constitutes “regulated health information.” Unlike HIPAA, which primarily governs hospitals, insurers, and healthcare providers, NYHIPA extends its scope to include any company that collects health-related data from New York residents. This means that digital health apps, wellness platforms, employers offering health benefits, and even non-traditional healthcare-adjacent businesses could be subject to its requirements. Companies would need to overhaul their data collection and consent practices, develop new compliance systems, and ensure that they are aligned with both state and federal regulations.
While these measures are intended to protect consumers, they also introduce significant challenges. Businesses operating in New York may find themselves facing higher compliance costs, which could be particularly burdensome for startups and mid-sized companies that lack the resources of larger corporations. If companies are forced to invest heavily in compliance, they may pass these costs onto consumers or scale back their services, limiting access to innovative digital health solutions. There is also the risk that companies could choose to leave New York or avoid entering the state altogether, putting New York at a competitive disadvantage in the rapidly growing health tech sector.
Beyond the financial and operational burdens, there is also concern about the unintended consequences this law could have on innovation. Many of the advances in health technology rely on data-driven insights to improve patient outcomes, streamline care coordination, and develop more personalized treatment plans. Overly restrictive regulations may limit the ability of companies to leverage data in ways that could be beneficial to patients and providers alike. If businesses are forced to navigate a regulatory minefield, some may choose to take a more cautious approach, slowing down progress in areas where data-driven innovation could make a meaningful difference.
At the same time, there is no denying that security threats and consumer expectations are changing. Cyberattacks on healthcare systems have become more frequent, with ransomware attacks targeting hospitals and breaches exposing millions of patient records. Consumers are becoming increasingly aware of how their data is being used and are demanding greater control over their personal information. Across the country, there is a growing push for opt-in models and stricter limitations on the use of personally identifiable information. Whether or not NYHIPA becomes law, companies should expect privacy regulations to become stricter in the coming years and take proactive steps to enhance security and transparency.
For businesses, adapting to this new landscape will require a strategic approach. Companies that process health-related data will need to closely examine how they collect, store, and use information. Those that can demonstrate a commitment to privacy and data security may find themselves with a competitive advantage as consumers become more discerning about which platforms they trust. At the same time, industry leaders should engage in policy discussions to ensure that privacy regulations are designed in a way that balances consumer protection with the need for continued innovation.
New York has an opportunity to be a leader in health data privacy, but it must do so without stifling the industry that relies on responsible data use to drive advancements in health care. Governor Hochul’s decision on NYHIPA will set an important precedent for the future of digital health regulation, not just in New York but across the country. If done right, this legislation could serve as a model for balancing privacy protections with business realities. If not, it risks becoming a case study in how regulatory overreach can do more harm than good.