Data, Deals, and Diplomacy, Part III: DOJ Issues National Security Final Rule with New Data Compliance Obligations for Transactions Involving Countries of Concern

On January 8, 2025, the Department of Justice (“DOJ”) published its final rule addressing Executive Order (E.O.) 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.” With the final rule, the DOJ National Security Division’s Foreign Investment Review Section (“FIRS”) defines prohibited and restricted data transactions, and outlines trusted data flows for companies with overseas operations involving countries of concern, including IT infrastructure. The general effect of the rule is to close “front door” access to bulk sensitive personal data on U.S. persons and certain U.S.-government-related data. Until now—or rather, April 8, 2025, when the majority of the rule becomes effective—nefarious actors could procure sensitive data through legitimate business transactions.
We discussed the development of the new regulation in previous blogs (here and here), and the contours of the final rule are largely unchanged from the proposed rule. In this blog, we focus on some key clarifications and updates in the final rule. Then, we turn to what this final rule means for companies with operations in countries of concern and the questions every company with overseas IT infrastructure should be asking to know if these regulations might apply to them.
1. Updates in the Final Rule
There were no big surprises with the final rule, and it remains largely unchanged from the proposed rule. For the uninitiated, the rule prohibits or restricts a subset of covered transactions by U.S. persons involving covered data with covered persons.[1] The definitions of what is covered remain the same—even the bulk thresholds are the same as the proposed rule. However, below we highlight some of the key developments hidden among the minor clarifications and conforming edits.
1.1. Effective Date and Delayed Compliance Date. The rule sets an effective date of April 8, 2025, for every component of the rule except for specified compliance obligations. Those obligations, which include the due diligence and audit requirements of Subpart J and the reporting and recordkeeping requirements of Subpart K, do not require implementation until October 6, 2025. The delayed compliance obligations do not, however, encompass the security requirements for restricted transactions, so the cybersecurity requirements established by CISA should be in place before engaging in any restricted transaction after April 8, 2025.
1.2. Expanded Government-Related Location Data List. The final rule substantially expands the Government-Related Location Data List from the 8 locations in the proposed rule to 736 locations in the final rule. These additional locations consist of commonly known Department of Defense sites and installations, such as bases, camps, posts, stations, yards, centers, or homeport facilities for any ship, ranges, and training areas in the United States and its territories. In its discussion of this list, DOJ acknowledges that it plans to provide this list in a format that would be easy for developers to access and implement (e.g., .csv, .json).
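For developers who will eventually need to consume that list, the sketch below shows what screening a coordinate against a machine-readable version might look like. This is a minimal illustration only: the file name, the column layout (name, lat, lon, radius_miles), and the radius-based matching are assumptions, since DOJ has not yet published the actual format or any matching methodology.

```python
# Hypothetical sketch: screening a coordinate against a machine-readable
# Government-Related Location Data List. The file name, column names, and
# radius-based matching are illustrative assumptions, not DOJ's actual format.
import csv
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def load_locations(path):
    """Load the assumed CSV of listed locations: name, lat, lon, radius_miles."""
    with open(path, newline="") as f:
        return [
            {
                "name": row["name"],
                "lat": float(row["lat"]),
                "lon": float(row["lon"]),
                "radius_miles": float(row["radius_miles"]),
            }
            for row in csv.DictReader(f)
        ]

def matches_listed_location(lat, lon, locations):
    """Return the first listed location whose assumed radius covers the point, else None."""
    for loc in locations:
        if haversine_miles(lat, lon, loc["lat"], loc["lon"]) <= loc["radius_miles"]:
            return loc
    return None

if __name__ == "__main__":
    locations = load_locations("government_related_locations.csv")  # hypothetical file
    hit = matches_listed_location(38.8719, -77.0563, locations)     # example coordinate
    print("Within a listed location:", hit["name"] if hit else "no match")
```

However the actual list is structured, the practical point is the same: once DOJ publishes it in .csv or .json form, screening against it can be automated rather than handled manually.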
1.3. New definition of human ‘omic data. The final rule creates a new sub-definition of “human genomic data” for “human ‘omic data,” which includes human epigenomic data, human proteomic data, and human transcriptomic data. Those three data categories have a bulk threshold of data on more than 1,000 U.S. persons.[2] These new definitions will have an impact on clinical and predictive research, particularly research programs implementing AI.
2. Effects of the Regulation
As Assistant Attorney General Matthew Olsen said last year, this regulation is built like sanctions and export controls and is expected to have “real teeth.” Any U.S. company with operations in the identified countries of concern, particularly with overseas IT infrastructure, will need to have a conversation about whether this regulation will affect their business. Companies need to know and understand the following:

What data the company has or collects that might constitute sensitive personal data and/or Government-related data as defined in the regulations;
What business relationships and transactions allow access to the data;
Who internally has access to the data; and
What security measures are in place to protect that data.

Companies impacted by this regulation will also need to understand how it operates differently from other DOJ regulations and data privacy regulations. Here, DOJ has availed itself of IEEPA penalties, and this regulation operates more like sanctions and export controls. This means the regulation is very compliance-focused, as opposed to the case-by-case approaches of CFIUS or Team Telecom. While corporate compliance is a key component of DOJ strategy, as we have seen with the Civil Cyber Fraud Initiative, DOJ is not shying away from enforcement. Further, the FIRS has developed the skillset and prosecutorial experience for reviewing corporate compliance programs. All of which is to say, companies should take the April 8 and October 6, 2025 deadlines seriously.
Finally, companies should understand how this regulation operates differently from other data-related regulations. Chiefly, this is not a privacy regulation; it is a national security regulation. For that reason, the focus is not on the collection of data, but rather on the subsequent sale and/or accessibility of that data. Also, the scope of covered data is more limited than what companies may have come to expect under state privacy laws. Rather than capture all personally identifiable information (PII), this regulation is concerned with sensitive information; that is to say, information that could be exploitable. However, because the data captured by the regulation is a national security concern, there is no consent exemption, meaning companies cannot have customers opt out of the regulation’s protection.
While the programmatic compliance requirements (i.e., due diligence, auditing, reporting and recordkeeping) are not required until Q4 of this year, the effective date, and beginning of potential enforcement, is right around the corner on April 8. Additionally, companies will still need to implement the CISA security requirements by April 8 if they intend to continue with restricted transactions. Still, companies should not delay in beginning to build out and implement their compliance programs.

FOOTNOTES
[1] For more details, see our Data, Deals, and Diplomacy, Part II blog.
[2] Human genomic data’s bulk threshold remains the same at more than 100 U.S. persons.

YOU CAN’T JUST CALL IT A TCPA VIOLATION: The Court Needs Proof, Not a Vague Complaint!

Greetings TCPAWorld!
I’m back with another case update—this time, it’s all about relentless robocalls, a wrong number, and a lawsuit that didn’t go as planned! Few things are more annoying than a relentless robocall—except maybe realizing you’re being hounded for a debt that isn’t yours. Yikes! That’s where a recent New Jersey case caught my attention. So what’s the scoop? Frato v. Cap. Mgmt. Servs. L.P., Civil Action No. 23-4049 (MAS) (JBD), 2025 U.S. Dist. LEXIS 5454 (D.N.J. Jan. 8, 2025), offers important lessons on what it takes to plead a TCPA violation successfully.
Here we have a Plaintiff who allegedly received 29 unwanted calls from Capital Management Services about a debt—but here’s the catch—the debt wasn’t even his. The calls kept coming despite Plaintiff repeatedly telling them they had the wrong person and despite his number being on the Do Not Call Registry (“DNCR”). Frustrated, Plaintiff took legal action.
But this is where things start to get into the details. The Court, following precedent from Facebook, Inc. v. Duguid, 592 U.S. 395, 398 (2021), reminded us that to prove the use of an automated telephone dialing system (“ATDS”), you need to show the system could “use a random or sequential number generator to either store or produce phone numbers to be called.”
Plaintiff’s Complaint hit a snag because he basically just stated “upon information and belief” that an ATDS was used. That alone doesn’t cut it. As the Court put it, “[A] complaint must do more than simply parrot the definition” of ATDS when bringing a claim under Section 227(a)(1). Frato, 2025 U.S. Dist. LEXIS 5454, at *7. In other words, Plaintiff needed to show something more than just speculation—actual indicators of automation. The Court noted, in Smith v. Pro Custom Solar L.L.C., No. 19-20673 (KM) (ESK), 2021 WL 141336, at *2 (D.N.J. Jan. 15, 2021), that specific facts suggesting ATDS use might include delays before hearing messages, calls ending with beeps, instructions to call 1-800 numbers, unusual phone numbers, or robotic voices.
But here’s the problem—Plaintiff’s Complaint didn’t include any of these telltale signs. In fact, the only real detail supporting his claim of automation appeared in his opposition brief—not the Complaint itself. That’s a major issue. As the Court pointed out, you can’t amend your Complaint through briefing. See Derieux v. FedEx Ground Package Sys., Inc., No. 21-13645 (NLH)(EAP), 2023 U.S. Dist. LEXIS 10033, 2023 WL 349495, at *2 n.2 (D.N.J. Jan. 20, 2023) (collecting cases). The story’s moral is that if it’s not in the Complaint, it doesn’t count.
Next, the Court tackled another interesting claim about prerecorded voices. While Plaintiff claimed he received “scripted voicemails of an impersonal nature,” he also described having actual conversations with representatives. What? That contradiction proved destructive to his claim under Section 227(b)(1)(B). If he was having live conversations, how could the calls be prerecorded? The court wasn’t buying it, and neither would the average Joe just hearing that statement.
Perhaps most intriguingly, the Court shot down Plaintiff’s claims under the TCPA’s implementing regulations, 47 C.F.R. § 64.1200, because—plot twist—debt collection calls aren’t considered “telephone solicitations” under the law. The TCPA defines a solicitation as an attempt to encourage a purchase of goods or services. However, a 2008 FCC ruling clarifies that debt collection calls fall outside those restrictions. See In re Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 23 FCC Rcd. 559, 565 (2008). So, while Plaintiff may have been annoyed by the calls, the law doesn’t treat debt collection the same way it treats telemarketing. That is a key point to remember.
The good news for Plaintiff? Well, the Court dismissed his claims without prejudice, giving him another bite at the apple to plead his case with more specific facts. Frustration alone won’t win a TCPA case; you need solid evidence.
The takeaway? If you bring a TCPA claim, you better come with receipts—because courts aren’t letting cases slide on vague allegations. As the saying goes, if you have the facts on your side, pound the facts; if you have the law on your side, pound the law; but if you have neither, pound the table. Plaintiff tried to pound the table, but the Court wasn’t listening. Will Plaintiff get it right the second time around? We’ll see. Until then, let this be a reminder that when it comes to ATDS lawsuits or any lawsuit for that matter, the details make or break your case.
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!

Luxembourg Modernises the Custody Chain to Accommodate Blockchain Technology

On 31 December 2024, the Luxembourg law of 20 December 2024 amending the existing legislative framework on dematerialised securities (Blockchain IV Act) entered into force. As background, dematerialization of securities is the move from physical stock certificates to electronic bookkeeping: actual stock certificates are removed and retired from circulation in exchange for the electronic recording, and securities are then transferred between securities accounts by book transfer.
While Luxembourg’s existing framework covered some preexisting technologies, the primary focus of the amendments made by the Blockchain IV Act is to integrate new technologies, particularly distributed ledger technology (DLT), into the financial sector to enhance legal security and operational efficiency.
The Blockchain IV Act introduces the concept of a “control agent”, an entity that can manage the issuance of dematerialized securities using DLT, providing an alternative to the existing (traditional) model that relies on a central account keeper and a custody chain. The control agent’s role includes maintaining the issuance account, monitoring the chain of custody of dematerialized securities (while the actual securities accounts can be held with different custodians without any custody relationship with the control agent), and ensuring the reconciliation of issued securities with those held in accounts with the relevant custodians. By contrast, the traditional central account keeper maintains the issuance account and sits at the top of the custody chain.
This new model is optional for issuers and aims to provide more flexibility, security, and transparency for both issuers and investors. The amendments also seek to strengthen Luxembourg’s position as a leading financial centre in the European Union (EU) for the use of DLT in unlisted debt and equity securities issuances. Since 2019, Luxembourg has made a series of changes to the existing legal framework, making available the use of DLT in connection with financial instruments and recognising financial instruments issued using DLT in a growing number of fields as equivalent to traditional financial instruments.
Any credit institution (such as a chartered bank) or investment firm established in Luxembourg or any other EU member state, as well as any operator of a Luxembourg securities settlement system, is eligible to serve as a control agent. The Luxembourg financial sector supervisory authority has been tasked with overseeing the compliance of control agents with the new legal requirements. Overall, the Blockchain IV Act aims to modernize the legal framework for securities in Luxembourg by leveraging DLT and other technological advancements, thereby enhancing the competitiveness and attractiveness of the financial sector while ensuring robust legal protections for market participants.
Tanner Wonnacott also contributed to this article.

RETURN TO NORMALCY: Choice Home Warranty Stuck in TCPA Class Action and it Feels Like Home

In Bradshaw v. CHW Group, 2025 WL 306783 (D.N.J. Jan. 24, 2025), Choice Home Warranty moved to dismiss a complaint leveraging a bunch of weak arguments that seemed doomed to failure–and they were!
First, Defendant argued Plaintiff didn’t allege it called her cell phone. But, of course it did. The Complaint alleged a discussion with the Defendant and then receipt of a call from a person who identified herself as working for Defendant. Yeah that’s… pretty clear. Especially at the pleadings stage when a Court has to assume the Plaintiff is telling the truth.
Next, Defendant claims the calls were not prerecorded. But the message sounded robotic, was a general message and–my goodness–the recording started mid-sentence on the voicemail. Yeah, that argument’s a loser. The Court found the allegations of prerecorded voice usage sufficient.
Third, Defendant argued Plaintiff failed to allege the calls were made without consent. Yet Plaintiff alleged he repeatedly asked Defendant to stop calling. So not sure why Choice Home Warranty thought that doesn’t qualify as revoking any consent that was present– indeed, the fact that its lawyers would even make that argument almost concedes their client wasn’t following the DNC rules. Eesh.
Speaking of which, the allegations here were particularly egregious such that the Court inferred the Defendant didn’t even have an internal DNC policy. Ouch.
The Court also issued a perfunctory denial of the motion to strike that came along with the motion to dismiss.
So there you go, a complete and total rejection of Choice Home Warranty’s pleadings motions– and nothing here even remotely had a chance as far as I could tell. Not sure what they were thinking. But we move on.

MAKE OUR PHONES GREAT AGAIN: R.E.A.C.H. Files Critical Petition Asking FCC to End Rampant Call/SMS Blocking, Labeling, and Registration Abuses by Wireless Carriers and their Partners

Well folks, it’s time to save the telecom world (again).
With the distraction of one-to-one finally behind everybody, we can now focus on the real battle– the blatant censorship and defamation being carried out every day by the nation’s wireless carriers and their cohort of aggregator chums.
People are rightly waking up to the abuses of content-monitoring on social media networks, but they remain largely blind to the far more insidious censorship taking place on the most critical “social” network of all– the nation’s telephone system.
For years now the wireless carriers in this nation–banding together to form a cartel-like organization known as the CTIA–have dictated what Americans are allowed to say to each other over the phone and how they are allowed to communicate.
They have blocked billions of constitutionally-protected and perfectly legal calls/texts simply because they did not like the content of those calls– because they used certain “banned” words like “free” or “debt.”
They have served as judge, jury, and executioner of speech day in, day out.
And the worst part– the vast majority of Americans don’t even know it’s happening.
Oh sure, they may have detected it here and there. Where was that reminder the company said it was going to send out? I know I needed to submit another loan document but I was supposed to receive a text? I thought I had a payment due, but the link to pay by credit card never came through?
Most Americans assume these unfortunate everyday occurrences are just glitches. Network traffic jams or misdirected communications.
No. The truth is far worse.
Messages such as these are commonly blocked or delayed specifically based upon their content– a real-time censorship regime of the highest order operating right beneath our noses.
The carriers answer to no one. The FCC has never provided guidelines in terms of what can be blocked and what can’t be. All that carriers know now is they can use “reasonable analytics” to block “unwanted calls.”
But what does that even mean?
It’s time for the FCC to answer that and give the carriers CLEAR rules of the road for the sorts of calls and texts they can block and what they CANNOT. Specifically, R.E.A.C.H. this morning has asked the FCC to clarify the following:

Clarify and confirm no member of the U.S. telecommunication ecosystem (including the wireless carriers and parties with whom they are in contractual privity) may block, throttle, or limit calls or text, MMS, RCS, SMS or other communications to telephone numbers on the basis of content;
Clarify and confirm no member of the U.S. telecom ecosystem (including the wireless carriers and parties with whom they are in contractual privity) may block, throttle, or limit calls or text, MMS, RCS, SMS or other communications to telephone numbers that were sent consistent with the TCPA’s statutory text and applicable regulation; and
Clarify and confirm any blocking, throttling, or limiting of calls or texts on the basis of content or any blocking, throttling, or limiting of calls or texts that were initiated consistent with the TCPA’s text and any applicable Commission’s rules is presumptively “unreasonable” under the Communications Act.

But call blocking is only half of the problem.
The wireless networks are also talking trash about callers behind their backs.
They label callers “scam” or “spam” or even “likely fraud” many times with ZERO actual indication the call is improper or illegal. I have heard stories of people missing calls from schools, friends, lawyers– even the police!–due to the INSANE mislabeling of callers taking place right now.
And the worst part?
The carriers are likely intentionally over-labeling to drive companies to use their “solutions”– white-label branded caller ID products that make the carriers millions in ill-gotten revenue.
It’s terrible.
Many businesses won’t play the carriers’ little protection-money game so they turn to buying massive quantities of phone numbers to cycle through when one gets mislabeled. The carriers don’t like that and try to stop the practice to make sure they can maximize profits– but it’s only a natural response to the insane mislabeling practices exercised by the carriers themselves.
We need to put a stop to ALL of this.
As such R.E.A.C.H. is also asking the FCC today to prevent any labeling of legal calls. PERIOD.
Last– the biggest problem of all.
TCR– the Campaign Registry.
Every single business and political campaign in the nation that wishes to use a regular phone number to send high-volume text messages has to jump through the shifting and uncertain hoops presented by something called the TCR. Registration requires various disclosures of the types of messages to be sent, content, lists, practices, plans, etc.
A complete blueprint of every SMS program in America.
And guess what?
TCR’s parent is foreign owned.
*head exploding emoji*
Why in the world America would deliver a ready-made model of every SMS strategy deployed by every American business into the hands of a foreign company whose practices cannot be tracked and whose data footprint cannot be traced is a question beyond answer. It is entirely insane–especially when we consider political content is also disclosed.
WHAT ARE WE THINKING?
If TikTok is a threat to America, TCR is triple the threat.
R.E.A.C.H. asks the FCC to look into TCR and evaluate shutting down the entire campaign registration process or, alternatively, requiring the registry to be sold to an American-owned business.
Rather obviously these three asks– stopping call/text blocking, mislabeling, and a registration process that is a threat to national security– are the most important changes needed to preserve and protect our nation’s critical telecommunications infrastructure.
R.E.A.C.H., as an organization, is proud to be the vehicle behind this absolutely necessary movement. But we need your help!
When the FCC issues a notice of public comment we can expect the wireless carriers to fight tooth and nail in a short-sighted effort to preserve the current mess–truthfully, while carriers profit now they stand to lose everything in the long term from these errant practices as businesses move away from the PSTN altogether and toward OTT services– but we need YOUR help to ensure the Commission takes the right action on these items.
We will provide much more information over time. But for now begin cataloging all the ways the current SMS/call-blocking/labeling/registration paradigm is crippling consumers and your businesses.
Let’s put an end to censorship. An end to wide-scale defamation. An end to foreign companies snooping through our SMS practices.
Let’s get smart America.
And let’s save our damn telephone network.
Read the full petition here: REACH Petition to Save the World

5 Key Takeaways | SI’s Downtown ‘Cats Discuss Artificial Intelligence (AI)

Recently, we brought together over 100 alumni and parents of the St. Ignatius College Preparatory community, aka the Downtown (Wild)Cats, to discuss the impact of Artificial Intelligence (AI) on the Bay Area business community.
On a blustery evening in San Francisco, I was joined on a panel by fellow SI alumni Eurie Kim of Forerunner Ventures and Eric Valle of Foundry1 and by my Mintz colleague Terri Shieh-Newton. Thank you to my firm Mintz for hosting us.
There are a few great takeaways from the event:

What makes a company an “AI Company”?  
The panel confirmed that you cannot just put “.ai” at the end of your web domain to be considered an AI company. 
Eurie Kim shared that there are two buckets of AI companies: (i) AI-boosted and (ii) AI-enabled.
Most tech companies in the Bay Area are AI-boosted in some way – it has become table stakes, like a website 25 years ago. The AI-enabled companies are doing things you could not do before, from AI personal assistants (Duckbill) to autonomous driving (Waymo).   
What is the value of AI to our businesses?
In the future, companies using AI to accelerate growth and reduce costs will be infinitely more interesting.
Forerunner, which has successfully invested in direct-to-consumer darlings like Bonobos, Warby Parker, Oura, Away and Chime, is investing in companies using AI to win on quality.
Eurie explained that we do not need more information from companies on the internet; we need the answer. Eurie believes that AI can deliver on the era of personalization in consumer purchasing that we have been talking about for the last decade.
What are the limitations of AI?
The panel discussed the difference between how AI handles simple human problems and complex ones. Right now, AI can replace humans for simple problems, like gathering all of the data you need to make a decision. But AI has struggled to solve more complex human problems, like driving an 18-wheeler from New York to California.
This means that we will need humans using AI to effectively solve complex human problems. Or, as NVIDIA CEO Jensen Huang says, “AI won’t take your job, it’s somebody using AI that will take your job.”
What is one of the most unique uses of AI today? 
Terri Shieh-Newton shared a fascinating use of AI in life sciences called “Digital Twinning”. This is the use of a digital twin for the placebo group in a clinical trial. Terri explained that we would be able to see the effect of a drug being tested without testing it on humans. This reduces the cost and the number of people required to enroll in a clinical trial. It would also have a profound human effect because patients would not be disappointed at the end of the trial to learn that they were taking the placebo and not receiving the treatment.
Why is so much money being invested in AI companies?
Despite the still nascent AI market, a lot of investors are pouring money into building large language models (LLMs) and investing in AI startups. 
Eric Valle noted that early in his career the tech market generally delivered outsized returns to investors, but the maturing market and competition among investors have moderated those returns. AI could be the kind of investment that generates those outsized returns of 20x or more.
Eric also talked about the rise of venture studios like his Foundry1 in AI. Venture studios are a combination of accelerator, incubator and traditional funds, where the fund partners play a direct role in formulating the idea and navigating the fragile early stages. This venture studio model is great for AI because the studio can take small ideas and expand them exponentially – and then raise the substantial amount of money it takes to operationalize an AI company.

Happy Privacy Day: Emerging Issues in Privacy, Cybersecurity, and AI in the Workplace

As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology and the associated implications for data privacy, cybersecurity, and compliance.
We explore here practical use cases raising these issues, highlight key risks, and provide actionable insights for HR professionals and in-house counsel to manage these concerns effectively.
1. Wearables and the Intersection of Privacy, Security, and Disability Law
Wearable devices have a wide range of use cases including interactive training, performance monitoring, and navigation tracking. Wearables such as fitness trackers and smartwatches became more popular in HR and employee benefits departments when they were deployed in wellness programs to monitor employees’ health metrics, promote fitness, and provide a basis for doling out insurance premium incentives. While these tools offer benefits, they also collect sensitive health and other personal data, raising significant privacy and cybersecurity concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and state privacy laws.
Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued guidance emphasizing that data collected through wearables must align with ADA rules. More recently, the EEOC withdrew that guidance in response to an Executive Order issued by President Trump. Still, employers should evaluate their use of wearables and whether they raise ADA issues, such as voluntary use of such devices when collecting confidential medical information, making disability-related inquiries, and using aggregated or anonymized data to prevent discrimination claims.
Beyond ADA compliance, cybersecurity is critical. Wearables often collect sensitive data and transmit it to third-party vendors. Employers must assess these vendors’ data protection practices, including encryption protocols and incident response measures, to mitigate the risk of breaches or unauthorized access.
Practical Tip: Implement robust contracts with third-party vendors, requiring adherence to privacy laws, breach notification, and security standards. Also, ensure clear communication with employees about how their data will be collected, used, and stored.
2. Performance Management Platforms and Employee Monitoring
Platforms like Insightful and similar performance management tools are increasingly being used to monitor employee productivity and/or compliance with applicable law and company policies. These platforms can capture a vast array of data, including screen activity, keystrokes, and time spent on tasks, raising significant privacy concerns.
While such tools may improve efficiency and accountability, they also risk crossing boundaries, particularly when employees are unaware of the extent of monitoring and/or where the employer doesn’t have effective data minimization controls in place. State laws like the California Consumer Privacy Act (CCPA) can place limits on these monitoring practices, particularly if employees have a reasonable expectation of privacy. They also can require additional layers of security safeguards and administration of employee rights with respect to data collected and processed using the platform.
Practical Tip: Before deploying such tools, assess the necessity of data collection, ensure transparency by notifying employees, and restrict data collection to what is strictly necessary for business purposes. Implement policies that balance business needs with employee rights to privacy.
3. AI-Powered Dash Cams in Fleet Management
AI-enabled dash cams, often used for fleet management, combine video, audio, GPS, telematics, and/or biometrics to monitor driver behavior and vehicle performance, among other things. While these tools enhance safety and efficiency, they also present significant privacy and legal risks.
State biometric privacy laws, such as Illinois’s Biometric Information Privacy Act (BIPA) and similar laws in California, Colorado, and Texas, impose stringent requirements on biometric data collection, including obtaining employee consent and implementing robust data security measures. Employers must also assess the cybersecurity vulnerabilities of dash cam providers, given the volume of biometric, location, and other data they may collect.
Practical Tip: Conduct a legal review of biometric data collection practices, train employees on the use of dash cams, and audit vendor security practices to ensure compliance and minimize risk.
4. Assessing Vendor Cybersecurity for Employee Benefits Plans
Third-party vendors play a crucial role in processing data for retirement plans, such as 401(k) plans, as well as health and welfare plans. The Department of Labor (DOL) emphasized in recent guidance the importance of ERISA plan fiduciaries assessing the cybersecurity practices of such service providers.
The DOL’s guidance underscores the need to evaluate vendors’ security measures, incident response plans, and data breach notification practices. Given the sensitive nature of data processed as part of plan administration—such as Social Security numbers, health records, and financial information—failure to vet vendors properly can lead to breaches, lawsuits, and regulatory penalties, including claims for breach of fiduciary duty.
Practical Tip: Conduct regular risk assessments of vendors, incorporate cybersecurity provisions into contracts, and document the due diligence process to demonstrate compliance with fiduciary obligations.
5. Biometrics for Access, Time Management, and Identity Verification
Biometric technology, such as fingerprint or facial recognition systems, is widely used for identity verification, physical access, and timekeeping. While convenient, the collection of biometric data carries significant privacy and cybersecurity risks.
BIPA and similar state laws require employers to obtain written consent, provide clear notices about data usage, and adhere to stringent security protocols. Additionally, biometrics are uniquely sensitive because they cannot be changed if compromised in a breach.
Practical Tip: Minimize reliance on biometric data where possible, ensure compliance with consent and notification requirements, and invest in encryption and secure storage systems for biometric information. Check out our Biometrics White Paper.
6. HIPAA Updates Affecting Group Health Plan Compliance
Recent changes to the HIPAA Privacy Rule, including provisions related to reproductive healthcare, significantly impact group health plans. The proposed HIPAA Security Rule amendments also signal stricter requirements for risk assessments, access controls, and data breach responses.
Employers sponsoring group health plans must stay ahead of these changes by updating their HIPAA policies and Notice of Privacy Practices, training staff, and ensuring that business associate agreements (BAAs) reflect the new requirements.
Practical Tip: Regularly review HIPAA compliance practices and monitor upcoming changes to ensure your group health plan aligns with evolving regulations.
7. Data Breach Notification Laws and Incident Response Plans
Many states have updated their data breach notification laws, lowering notification thresholds, shortening notification timelines, and expanding the definition of personal information. Employers should revise their incident response plans (IRPs) to align with these changes.
Practical Tip: Ensure IRPs reflect updated laws, test them through simulated breach scenarios, and coordinate with legal counsel to prepare for reporting obligations in case of an incident.
8. AI Deployment in Recruiting and Retention
AI tools are transforming HR functions, from recruiting to performance management and retention strategies. However, these tools require vast amounts of personal data to function effectively, increasing privacy and cybersecurity risks.
The EEOC and other regulatory bodies have cautioned against discriminatory impacts of AI, particularly regarding protected characteristics like disability, race, or gender. (As noted above, the EEOC recently withdrew its AI guidance under the ADA and Title VII following an Executive Order by the Trump Administration.) For example, the use of AI in hiring or promotions may trigger compliance obligations under the ADA, Title VII, and state laws.
Practical Tip: Conduct bias audits of AI systems, implement data minimization principles, and ensure compliance with applicable anti-discrimination laws.
9. Employee Use of AI Tools
Moving beyond the HR department, AI tools are fundamentally changing how people work. Tasks that used to require time-intensive manual effort—creating meeting minutes, preparing emails, digesting lengthy documents, creating PowerPoint decks—can now be completed far more efficiently with assistance from AI. The benefits of AI tools are undeniable, but so too are the associated risks. Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will expose themselves to significant regulatory and litigation risk.
Practical Tip: Not all AI tools are created equal—either in terms of the risks they pose or the utility they provide—so an important first step is developing criteria to assess, and then going through the process of assessing, which AI tools to permit employees to use. Equally important is establishing clear ground rules for how employees can use those tools. For instance, what company information are they permitted to use to prompt the tool; what are the processes for ensuring the tool’s output is accurate and consistent with company policies and objectives; and should employee use of AI tools be limited to internal functions or should they also be permitted to use these tools to generate work product for external audiences. 
10. Data Minimization Across the Employee Lifecycle
At the core of many of the above issues is the principle of data minimization. The California Privacy Protection Agency (CPPA) has emphasized that organizations must collect only the data necessary for specific purposes and ensure its secure disposal when no longer needed.
From recruiting to offboarding, HR professionals must assess whether data collection practices align with the principle of data minimization. Overcollection not only heightens privacy risks but also increases exposure in the event of a breach.
Practical Tip: Develop a data inventory mapping employee information from collection to disposal. Regularly review and update policies to limit data retention and enforce secure deletion practices.
Conclusion
The rapid adoption of emerging technologies presents both opportunities and challenges for employers. HR professionals and in-house counsel play a critical role in navigating privacy, cybersecurity, and AI compliance risks while fostering innovation.
By implementing robust policies, conducting regular risk assessments, and prioritizing data minimization, organizations can mitigate legal exposure and build employee trust. This National Privacy Day, take proactive steps to address these issues and position your organization as a leader in privacy and cybersecurity.

The Telephone Consumer Protection Act and Sales Agents: The Dangers of the ‘Canary Trap’

The Telephone Consumer Protection Act (TCPA), 47 U.S.C. § 227, was enacted in 1991 “to protect the privacy interests of residential telephone subscribers,” according to the act’s legislative history. The TCPA provides for a “do-not-call list,” a registry that allows consumers to opt out of receiving unsolicited telemarketing calls. The primary purpose of the do-not-call list is to give individuals a way to limit the number of unwanted sales calls they receive. The TCPA provides consumers with a private right of action.

Quick Hits

The Telephone Consumer Protection Act (TCPA) of 1991 was established to protect residential telephone subscribers’ privacy by allowing them to opt out of unsolicited telemarketing calls through a “do not call list.”
Some individuals exploit the TCPA by using tactics like the “canary trap” to create cycles of alleged violations and file numerous lawsuits, as seen with one person’s filing sixty-eight lawsuits in Michigan since June 2017.
Companies can defend against TCPA lawsuits by clearly defining agency relationships in written agreements, ensuring compliance with the TCPA, and regularly updating and checking their call lists against the National Do Not Call Registry.

The ‘Canary Trap’
The intention behind the TCPA has been undermined by individuals who have made it a full-time job to trap individuals and companies into alleged violations. For example, since June 2017, one individual has filed sixty-eight lawsuits in Michigan alleging TCPA violations. This individual utilizes what he characterizes as a “canary trap,” which he describes as an “investigative technique” by which he provides false personal information and his actual phone number on the TCPA do-not-call list. He then waits to see where the false information reappears. For example, he will give the false information, and an actual do-not-call phone number, to an insurance agent and use the false information to apply for insurance. Then he waits for other insurance agents, using the same false information, to cold call him using his actual do-not-call telephone number. In this way, he creates a cycle of alleged violations and then files a lawsuit.
Key Issue: Vicarious Liability
For most companies, a key issue when defending against this type of lawsuit is that “under federal common-law principles of agency, there is vicarious liability for TCPA violations” (i.e., liability imposed on a company through the actions of its agents), as the Supreme Court of the United States stated in a 2016 decision, Campbell-Ewald Company v. Gomez. In this context, vicarious liability can be established through apparent authority, actual authority, or ratification. (Previously, in Keating v. Peterson’s Nelnet, LLC, the U.S. Court of Appeals for the Sixth Circuit explained in 2015 that the Federal Communications Commission (FCC) had concluded that defendants could be held vicariously liable for TCPA violations under federal common-law agency principles, including actual authority, apparent authority, and ratification.)
“[A]n agent acts with actual authority ‘when, at the time of taking action that has legal consequences for the principal, the agent reasonably believes, in accordance with the principal’s manifestations to the agent, that the principal wishes the agent so to act,’” the Michigan Court of Appeals noted in 2022 in Dobronski v. NPS, Inc., citing the Restatement (Third) of Agency.
Under Section 2.03 of the Restatement (Third) of Agency, “apparent authority” requires that the principal made manifestations to the third party—normally the plaintiff—that created in the third party a reasonable belief that the agent “had authority to act on behalf of the principal.” Finally, under Section 4.01(1) of the Restatement (Third) of Agency, “[r]atification is the affirmance of a prior act done by another, whereby the act is given effect as if done by an agent acting with actual authority.”
Avoiding the ‘Canary Trap’
A key strategy for defending against TCPA lawsuits brought by high-volume litigators alleging agency is to readily demonstrate that an agency relationship does not exist. This can be done by carefully defining the scope of the agency relationship in a written producer’s agreement. Ideally, the agreement will expressly require compliance with the TCPA and state that conduct that violates the TCPA falls outside the agency relationship. The communications between the agent and the customer must make it clear that the agent is not acting as an agent for the company.
In addition, the company may want to require agents to:

check the National Do Not Call Registry every thirty days and remove registered numbers from call lists (a minimal scrubbing sketch follows this list);
obtain express written consent before making an automated or prerecorded call or sending a text message, and keep a record of this consent;
provide opt-out mechanisms for recipients of calls and messages; and
keep records of all compliance efforts.
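As one illustration of the first item above, here is a minimal sketch of scrubbing an internal call list against a registry download. The file formats are assumptions made for illustration (a plain-text export with one number per line, and a call list CSV with a “phone” column); the actual registry distribution format and a company’s internal systems will differ.

```python
# Hypothetical sketch: removing registered numbers from a call list.
# The registry export format (one number per line) and the call list layout
# (a CSV with a "phone" column) are illustrative assumptions.
import csv
import re

def normalize(number):
    """Reduce a phone number to its last 10 digits for comparison."""
    digits = re.sub(r"\D", "", number)
    return digits[-10:]

def load_registry(path):
    """Load registered numbers, one per line (assumed export format)."""
    with open(path) as f:
        return {normalize(line) for line in f if line.strip()}

def scrub_call_list(call_list_path, registry, output_path):
    """Copy the call list, dropping rows whose number appears in the registry."""
    removed = 0
    with open(call_list_path, newline="") as src, open(output_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if normalize(row["phone"]) in registry:
                removed += 1
                continue
            writer.writerow(row)
    return removed

if __name__ == "__main__":
    registry = load_registry("dnc_registry.txt")  # hypothetical registry export
    removed = scrub_call_list("call_list.csv", registry, "call_list_scrubbed.csv")
    print(f"Removed {removed} registered numbers from the call list.")
```

However the scrub is performed, the thirty-day cadence and the results should be documented so that the company can later demonstrate its compliance efforts.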

The constant filing of lawsuits under the TCPA by using canary traps needs to be stopped. Companies can go a long way in advancing this goal by taking reasonable steps to ensure TCPA compliance.

Data Privacy Insights Part 1: North Carolina Ranks High in Cybercrime Complaints

With Data Privacy Awareness Week underway, there’s a renewed focus on the importance of securing data.
The FBI’s Internet Crime Complaint Center (IC3) report sheds light on the growing threat of cybercrime, both nationally and within North Carolina. The state ranks among the top 15 in the U.S. for cybercrime complaints, highlighting significant local challenges.
National Cybercrime Trends
The report paints a grim picture of the national cybercrime landscape, with over 880,000 complaints filed and a staggering $12.5 billion in reported losses in 2023. Among the most common crimes were phishing attacks, non-payment/non-delivery scams, and personal data breaches. Business Email Compromise (BEC) scams and cryptocurrency-related fraud continue to account for a large share of financial losses, highlighting the sophisticated tactics employed by cybercriminals.
Challenges in North Carolina
In North Carolina, the top-reported crimes align with national trends, including phishing, identity theft, and BEC scams. However, the state’s financial losses underscore the disproportionate impact of these crimes on businesses and individuals alike. Notable figures from the report include:

12,282 Complaints Filed: North Carolina accounted for nearly 2% of all complaints nationwide.
$234 Million in Financial Losses: The state ranks 13th in the nation for total losses, reflecting the high stakes of these attacks.

These statistics highlight a pressing issue that demands urgent action from both private and public sectors to address vulnerabilities and reduce risks.
Who Is Being Targeted?
Certain industries and sectors have become prime targets for cyberattacks due to the sensitive data they handle or their operational vulnerabilities. According to the report, these include:

Healthcare: This sector faced a surge in ransomware and database leaks in early 2024, causing disruptions in patient care and financial loss.
Legal Services: Organizations such as law firms and courthouses are targeted for their sensitive client and case data, making them lucrative targets for cybercriminals.
Supply Chains: The interconnected nature of supply chains makes them attractive for disruption and data theft, with downstream effects on multiple businesses.
Engineering and Construction: These industries remained consistent targets through 2023 and 2024, particularly due to their involvement in critical infrastructure projects.
Financial Institutions: Banks and other financial entities are frequent targets due to the valuable financial information they manage, including payment systems and client records.
Governments: Local and state governments face ongoing threats due to their extensive networks and sensitive information, ranging from personal data to national security concerns.
Education: Schools and universities often face cyberattacks aimed at accessing student and faculty data, leading to significant breaches that disrupt learning environments.

Looking Ahead
As cybercrime continues to evolve, it is essential for businesses, individuals, and government agencies to collaborate to enhance their defenses. The IC3 report calls for North Carolina to bolster its security measures to shield its residents and businesses from the growing financial and emotional impacts of cybercrime. Stay tuned for part two, where we’ll explore common types of data breaches and strategies to protect your business.

New York State Legislature Passes Health Data Law to Protect Abortion Rights

On January 21, 2025, the New York legislature passed Senate Bill S929, an act to amend the general business law, in relation to providing for the protection of health information (the “Act”). The Act would provide for the protection of health information and require written consent or a designated necessary purpose for the processing of an individual’s health information. The bill is pending Governor Kathy Hochul’s signature.
The Act prohibits the sale of regulated health information and limits the circumstances in which an entity can lawfully “process” regulated health information, including but not limited to the collection, use, access and monetization of such information. It defines regulated health information to mean “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual,” including location or payment information. Notably, regulated health information does not include deidentified information, or information that “cannot reasonably be used to infer information about, or otherwise be linked to a particular individual, household, or device,” given reasonable technical safeguards.
Entities will still be able to “process” regulated health information in certain circumstances, including when they have received “valid authorization” from an individual to do so. In order for the authorization to be valid, it must satisfy 11 different conditions set forth by the Act. These include that the authorization be made by written or electronic signature; that the individual have the ability to provide or withhold authorization for different categories of processing activities; that the individual have the ability to revoke authorization; and that failure “to provide authorization will not affect the individual’s experience of using the regulated entity’s products or services.” Authorizations must expire within one year of being provided.
The Act provides for other circumstances that allow entities to “process” regulated health information absent “valid authorization” from the individual, including when such information is “strictly necessary” for “providing… a specific product or service requested by [the] individual,” “conducting… internal business operations,” “protecting against… illegal activity,” and “detecting, responding to, or preventing security incidents or threats.”
The Act would take effect one year after it is signed into law. Rules or regulations necessary to implement the Act are authorized to be made immediately following its passage and may be completed before the effective date.
The Act is now awaiting the signature of Governor Kathy Hochul. Governor Hochul’s Office has not yet commented on the bill, but she has been a longtime supporter of abortion access, a position on which she campaigned.

Updated COPPA Rule on Hold?

As we recently reported, the Federal Trade Commission (FTC or Commission) finalized updates to the Children’s Online Privacy Protection Rule (COPPA Final Rule or Rule) on January 16, 2025, and the updates were due to take effect 60 days after publication in the Federal Register. However, an Executive Order issued by President Trump on January 20, 2025, freezes “proposing or issuing any rule or publishing any rule to the Office of the Federal Register until a new agency head appointed or designated by the President reviews and approves the rule.” The Executive Order means that publication of the amended COPPA rule will likely be delayed, and the Rule may still be subject to change.
FTC Commissioner and now FTC Chair Andrew Ferguson may want to use President Trump’s Executive Order to press the pause button on rules in general and to consider whether further clarifications to the COPPA Final Rule are merited. While Chair Ferguson voted in favor of the COPPA Final Rule updates prior to Trump’s 2025 inauguration, he took exception to three “major problems” with the Final Rule that he identified in a concurring statement.

First, Chair Ferguson contended that what constitutes a “material change” to privacy terms, especially as they relate to categories of third parties with whom data is shared, requires clarification, “since not all additions or changes to the identities of third parties should require new (parental) consent.”
Next, he objected to the Rule’s prohibition on keeping personal information collected online indefinitely, arguing that while well-intentioned, the result could be undesirable and incentivize disclosure of lengthy retention times. Section 312.10 of the COPPA Final Rule allows retention of children’s data only for “as long as is reasonably necessary to fulfill the purpose for which the information was collected.” This language is identical to language in the old COPPA Rule. Thus, “as long as the data continued to serve the same important function for which it was collected,” indefinite retention would be appropriate.
Finally, Chair Ferguson stated that “the Commission missed the opportunity to clarify that the Final Rule is not an obstacle to the use of children’s personal information solely for the purpose of age verification.” He argued that “The old COPPA Rule and the Final Rule contain many exceptions to the general prohibition on the unconsented collection of children’s data, and these amendments should have added an exception for the collection of children’s personal information for the sole purpose of age verification, along with a requirement that such information be promptly deleted once that purpose is fulfilled.”

Businesses with an interest in children’s privacy should stay tuned for possible further action.

UK FCA Letter Expresses Concerns About Fund Service Providers

Go-To Guide:

UK Financial Conduct Authority (FCA) highlights concerns about fund service providers in “Dear CEO” letter. 
FCA identifies seven main risk areas, including operational resilience, cyber security, third-party management, and client asset protection. 
Fund managers to review the FCA’s risk areas when conducting due diligence on potential service providers. 
FCA plans to assess fund service providers’ compliance and may use formal intervention powers if necessary.

In late 2024, the United Kingdom’s Financial Conduct Authority (FCA) published a “Dear CEO” letter related to the FCA’s “Custody and Fund Services Supervision Strategy.” The letter shares the FCA’s expectations of UK FCA-authorised firms that act as custodians, depositories, and administrators in the funds sector. Importantly, the letter also highlights some of the regulatory risks and topics fund managers should be reviewing as part of their due diligence before selecting service providers for their funds, irrespective of whether the service provider is an FCA-authorised firm in the UK or is domiciled offshore.
The overwhelming message from the FCA is that fund service providers must have processes and procedures in place to identify risks and implement rules related to the areas of concern detailed below. The FCA will use its powers where necessary and conduct assessments on “a selection of firms” to ensure that firms comply with the requests made in the FCA’s letter. The FCA has also provided a reminder to in-scope firms that they must have performed mapping and testing to provide assurance that they are able to remain within impact tolerances by 31 March 2025.
The FCA has focussed on the following risks in the funds sector, which service providers must be identifying and mitigating. 

1. Operational Resilience

In the Dear CEO letter, the FCA states that it will focus on monitoring funds service providers’ compliance with, and implementation of, existing rules and guidance on building operational resilience. According to existing FCA requirements, authorised fund service providers must have performed mapping and testing by 31 March 2025 to provide assurance that they can remain within impact tolerances for each important business service in severe but plausible scenarios.
Within authorised fund service providers, the FCA is looking for evidence of prompt deployment of incident management plans; prioritisation of important business services to reduce operational and client impact; detailed mapping of delegation by fund service providers in order to understand underlying exposures to the same providers; and processes in place for clear communication with the FCA where required.  

2. Cyber Resilience

The FCA states that some funds service providers’ sub-optimal cyber resilience and security measures pose risks in the funds sector. The FCA notes that it will continue to focus on this as a threat, including (i) how effectively firms manage critical vulnerabilities; (ii) threat detection; (iii) business recovery; (iv) stakeholder communication; and (v) remediation efforts to build resilience.
The letter is clear that fund service providers should ensure that their governing bodies are provided not only with a report of effectiveness of controls, but also with an assessment of the cyber risks present. 

3. Third Party Management

Fund service providers naturally (due to the levels of relevant expertise required) delegate specific roles to third parties. In its letter, the FCA has expressed concern that operational incidents involving third parties remain frequent. Where there is inadequate oversight, the likelihood of such incidents increases.
The FCA plans to assess fund service providers’ oversight, not only of their delegates, but also of those delegates’ delegates, including key material supplier relationships and management.
The FCA expects firms to have effective processes in place to identify, manage, monitor, and report third-party risks, and to perform an assessment on, and mapping of, third-party providers. 

4. Change Management

In its letter, the FCA has noted that with advances in technology (such as automation, artificial intelligence, and distributed ledger technology) and regulatory developments (such as settlement cycle changes), fund service providers must ensure that they are managing changes appropriately in order to maintain market integrity.
The FCA will assess a selection of fund service providers to review their change management frameworks, which involves looking at their overall approach and methodology, including testing, to understand how client and consumer outcomes have been considered.
The FCA has published guidance detailing key areas that contribute to successful change management. In addition, if any major firm initiatives or strategy changes are contemplated, fund service providers are encouraged to engage in early dialogue with the FCA. 

5. Market Integrity

In light of the increased use of sanctions and related complexity, the FCA has stated that it will review the effectiveness of select fund service providers’ systems and controls, governance processes, and resource sufficiency in connection with sanctions regime compliance.
The FCA expects that fund services providers should have effective procedures in place to detect, prevent, and deter financial crime, which should be appropriate and proportionate. Senior management at providers should take clear responsibility for managing and addressing these risks. Firms should have robust internal audit and compliance processes that test the firm’s defences against specific financial crime threats. 

6. Depositary Oversight

The FCA has identified a gap in expectations over the role of depositaries and has noted that, in its view, depositaries “have often demonstrated a less than proactive approach” to their oversight, risk identification, and escalation processes in relation to funds and AIFMs. The FCA will be clarifying its rules for, and expectations of, depositaries.
In its letter, the FCA notes that it expects depositaries to act more proactively in the interests of fund investors. They should provide effective, independent oversight of AIFMs’ operations and funds’ adherence to FCA rules. The FCA also reminds depositaries that they are expected to have processes in place to ensure that they receive the information needed to perform their duties. 

7. Protection of Client Assets

Protection of client assets is a regulatory priority set out in the FCA’s 2024/5 Business Plan. The FCA has identified weaknesses in important areas within fund service providers, including books and records and dependency on legacy IT infrastructure, which is at its end of life and includes high levels of manual processing and controls. The FCA has noted that it will continue to identify weaknesses and use formal intervention powers if necessary.
Takeaways
The FCA’s “Dear CEO” letter is both a warning and a plea for fund service providers to do all that they can to mitigate the risks the FCA has identified. FCA-authorised fund service providers should expect the FCA to write to them later in 2025 seeking their own evaluation of their progress in mitigating those risks.
Importantly, fund managers should, as part of their due diligence in relation to the appointment of fund service providers (irrespective of whether the service provider is in the UK or is based offshore), be exploring how the risks identified by the FCA are being mitigated.