RETURN TO NORMALCY: Choice Home Warranty Stuck in TCPA Class Action and it Feels Like Home
In Bradshaw v. CHW Group, 2025 WL 306783 (D.N.J. Jan. 24, 2025), Choice Home Warranty moved to dismiss a complaint leveraging a bunch of weak arguments that seemed doomed to failure–and they were!
First, Defendant argued Plaintiff didn’t allege it called her cell phone. But, of course it did. The Complaint alleged a discussion with the Defendant and then receipt of a call from a person who identified herself as working for Defendant. Yeah that’s… pretty clear. Especially at the pleadings stage when a Court has to assume the Plaintiff is telling the truth.
Next, Defendant claimed the calls were not prerecorded. But the message sounded robotic, was a generic message and–my goodness–the recording started mid-sentence on the voicemail. Yeah, that argument's a loser. The Court found the allegations of prerecorded voice usage sufficient.
Third, Defendant argued Plaintiff failed to allege the calls were made without consent. Yet Plaintiff alleged she repeatedly asked Defendant to stop calling. So not sure why Choice Home Warranty thought that doesn't qualify as revoking any consent that was present–indeed, the fact that its lawyers would even make that argument almost concedes their client wasn't following the DNC rules. Eesh.
Speaking of which, the allegations here were particularly egregious such that the Court inferred the Defendant didn’t even have an internal DNC policy. Ouch.
The Court also issued a perfunctory denial of the motion to strike that came along with the motion to dismiss.
So there you go, a complete and total rejection of Choice Home Warranty's pleadings motions–and nothing here even remotely had a chance as far as I could tell. Not sure what they were thinking. But we move on.
MAKE OUR PHONES GREAT AGAIN: R.E.A.C.H. Files Critical Petition Asking FCC to End Rampant Call/SMS Blocking, Labeling, and Registration Abuses by Wireless Carriers and their Partners
Well folks, it's time to save the telecom world (again).
With the distraction of one-to-one finally behind everybody we can now focus on the real battle–the blatant censorship and defamation being carried out every day by the nation's wireless carriers and their cohort of aggregator chums.
People are rightly waking up to the abuses of content-monitoring on social media networks but they remain largely blind to the far more insidious censorship taking place on the most critical "social" network of all–the nation's telephone system.
For years now the wireless carriers in this nation–banding together to form a cartel-like organization known as the CTIA–have dictated what Americans are allowed to say to each other over the phone and how they are allowed to communicate.
They have blocked billions of constitutionally-protected and perfectly legal calls/texts simply because they did not like the content of those calls– because they used certain “banned” words like “free” or “debt.”
They have served as judge, jury, and executioner of speech day in, day out.
And the worst part–the vast majority of Americans don't even know it's happening.
Oh sure, they may have detected it here and there. Where was that reminder the company said it was going to send out? I knew I needed to submit another loan document, but wasn't I supposed to receive a text? I thought I had a payment due, but the credit card link never came through?
Most Americans assume these unfortunate everyday occurrences are just glitches. Network traffic jams or misdirected communications.
No. The truth is far worse.
Messages such as these are commonly blocked or delayed specifically based upon their content– a real-time censorship regime of the highest order operating right beneath our noses.
The carriers answer to no one. The FCC has never provided guidelines in terms of what can be blocked and what can’t be. All that carriers know now is they can use “reasonable analytics” to block “unwanted calls.”
But what does that even mean?
It's time for the FCC to answer that and give the carriers CLEAR rules of the road for the sorts of calls and texts they can block and what they CANNOT. Specifically, R.E.A.C.H. this morning asked the FCC to do the following:
Clarify and confirm no member of the U.S. telecommunications ecosystem (including the wireless carriers and parties with whom they are in contractual privity) may block, throttle, or limit calls or text (SMS, MMS, RCS) or other communications to telephone numbers on the basis of content;
Clarify and confirm no member of the U.S. telecom ecosystem (including the wireless carriers and parties with whom they are in contractual privity) may block, throttle, or limit calls or text (SMS, MMS, RCS) or other communications to telephone numbers that were sent consistent with the TCPA's statutory text and applicable regulations; and
Clarify and confirm any blocking, throttling, or limiting of calls or texts on the basis of content, or any blocking, throttling, or limiting of calls or texts that were initiated consistent with the TCPA's text and any applicable Commission rules, is presumptively "unreasonable" under the Communications Act.
But call blocking is only half of the problem.
The wireless networks are also talking trash about callers behind their backs.
They label callers "scam" or "spam" or even "likely fraud," many times with ZERO actual indication the call is improper or illegal. I have heard stories of people missing calls from schools, friends, lawyers–even the police!–due to the INSANE mislabeling of callers taking place right now.
And the worst part?
The carriers are likely intentionally over-labeling to drive companies to use their “solutions”– white-label branded caller ID products that make the carriers millions in ill-gotten revenue.
It's terrible.
Many businesses won't play the carriers' little protection-money game, so they turn to buying massive quantities of phone numbers to cycle through when one gets mislabeled. The carriers don't like that and try to stop the practice to make sure they can maximize profits–but it's only a natural response to the insane mislabeling practices exercised by the carriers themselves.
We need to put a stop to ALL of this.
As such R.E.A.C.H. is also asking the FCC today to prevent any labeling of legal calls. PERIOD.
Last– the biggest problem of all.
TCR– the Campaign Registry.
Every single business and political campaign in the nation that wishes to use a regular phone number to send high-volume text messages has to jump through the shifting and uncertain hoops presented by something called the TCR. Registration requires various disclosures of the types of messages to be sent, content, lists, practices, plans, etc.
A complete blueprint of every SMS program in America.
And guess what?
TCR’s parent is foreign owned.
*head exploding emoji*
Why in the world America would deliver a ready-made model of every SMS strategy deployed by every American business into the hands of a foreign company whose practices cannot be tracked and data footprint cannot be traced is a question beyond answer. It is entirely insane–especially when we consider political content is also disclosed.
WHAT ARE WE THINKING?
If TikTok is a threat to America, TCR is triple the threat.
R.E.A.C.H. asks the FCC to look into TCR and evaluate shutting down the entire campaign registration process or, alternatively, requiring the registry to be sold to an American-owned business.
Rather obviously these three asks– stopping call/text blocking, mislabeling, and a registration process that is a threat to national security– are the most important changes needed to preserve and protect our nation’s critical telecommunications infrastructure.
R.E.A.C.H., as an organization, is proud to be the vehicle behind this absolutely necessary movement. But we need your help!
When the FCC issues a notice of public comment we can expect the wireless carriers to fight tooth and nail in a short-sighted effort to preserve the current mess–truthfully, while carriers profit now, they stand to lose everything in the long term from these errant practices as businesses move away from the PSTN altogether and toward OTT services–but we need YOUR help to ensure the right action is taken by the Commission on these items.
We will provide much more information over time. But for now begin cataloging all the ways the current SMS/call-blocking/labeling/registration paradigm is crippling consumers and your businesses.
Let’s put an end to censorship. An end to wide-scale defamation. An end to foreign companies snooping through our SMS practices.
Let’s get smart America.
And let’s save our damn telephone network.
Read the full petition here: REACH Petition to Save the World
5 Key Takeaways | SI’s Downtown ‘Cats Discuss Artificial Intelligence (AI)
Recently, we brought together over 100 alumni and parents of the St. Ignatius College Preparatory community, aka the Downtown (Wild)Cats, to discuss the impact of Artificial Intelligence (AI) on the Bay Area business community.
On a blustery evening in San Francisco, I was joined on a panel by fellow SI alumni Eurie Kim of Forerunner Ventures and Eric Valle of Foundry1 and by my Mintz colleague Terri Shieh-Newton. Thank you to my firm Mintz for hosting us.
There are a few great takeaways from the event:
What makes a company an “AI Company”?
The panel confirmed that you cannot just put “.ai” at the end of your web domain to be considered an AI company.
Eurie Kim shared that there are two buckets of AI companies (i) AI-boosted and (ii) AI-enabled.
Most tech companies in the Bay Area are AI-boosted in some way – it has become table stakes, like a website 25 years ago. The AI-enabled companies are doing things you could not do before, from AI personal assistants (Duckbill) to autonomous driving (Waymo).
What is the value of AI to our businesses?
In the future, companies that use AI to accelerate growth and reduce costs will be infinitely more interesting.
Forerunner, which has successfully invested in direct-to-consumer darlings like Bonobos, Warby Parker, Oura, Away and Chime, is investing in companies using AI to win on quality.
Eurie explained that we do not need more information from companies on the internet, we need the answer. Eurie believes that AI can deliver on the era of personalization in consumer purchasing that we have been talking about for the last decade.
What are the limitations of AI?
The panel discussed that there is a difference between how AI can handle complex human problems and simple human problems. Right now, AI can replace humans for simple problems, like gathering all of the data you need to make a decision. But, AI has struggled to solve for the more complex human problems, like driving an 18-wheeler from New York to California.
This means that we will need humans using AI to effectively solve complex human problems. Or, as NVIDIA CEO Jensen Huang says, "AI won't take your job, it's somebody using AI that will take your job."
What is one of the most unique uses of AI today?
Terri Shieh-Newton shared a fascinating use of AI in life sciences called "Digital Twinning". This is the use of a digital twin for the placebo group in a clinical trial. Terri explained that we would be able to see the effect of a drug being tested without testing it on humans. This reduces the cost and the number of people required to enroll in a clinical trial. It would also have a profound human effect because patients would not be disappointed at the end of the trial to learn that they were taking the placebo and not receiving the treatment.
Why is so much money being invested in AI companies?
Despite the still nascent AI market, a lot of investors are pouring money into building large language models (LLMs) and investing in AI startups.
Eric Valle noted that early in his career the tech market generally delivered outsized returns to investors, but the maturing market and competition among investors have moderated those returns. AI could be the kind of investment that generates those 20x+ returns again.
Eric also talked about the rise of venture studios like his Foundry1 in AI. Venture studios are a combination of accelerator, incubator and traditional funds, where the fund partners play a direct role in formulating the idea and navigating the fragile early stages. This venture studio model is great for AI because the studio can take small ideas and expand them exponentially – and then raise the substantial amount of money it takes to operationalize an AI company.
Happy Privacy Day: Emerging Issues in Privacy, Cybersecurity, and AI in the Workplace
As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology and the associated implications for data privacy, cybersecurity, and compliance.
We explore here practical use cases raising these issues, highlight key risks, and provide actionable insights for HR professionals and in-house counsel to manage these concerns effectively.
1. Wearables and the Intersection of Privacy, Security, and Disability Law
Wearable devices have a wide range of use cases including interactive training, performance monitoring, and navigation tracking. Wearables such as fitness trackers and smartwatches became more popular in HR and employee benefits departments when they were deployed in wellness programs to monitor employees’ health metrics, promote fitness, and provide a basis for doling out insurance premium incentives. While these tools offer benefits, they also collect sensitive health and other personal data, raising significant privacy and cybersecurity concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and state privacy laws.
Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued guidance emphasizing that data collected through wearables must align with ADA rules. More recently, the EEOC withdrew that guidance in response to an Executive Order issued by President Trump. Still, employers should evaluate their use of wearables for potential ADA issues, such as whether use of the devices is voluntary, whether they collect confidential medical information or involve disability-related inquiries, and whether aggregated or anonymized data is used to prevent discrimination claims.
Beyond ADA compliance, cybersecurity is critical. Wearables often collect sensitive data and transmit it to third-party vendors. Employers must assess these vendors' data protection practices, including encryption protocols and incident response measures, to mitigate the risk of breaches or unauthorized access.
Practical Tip: Implement robust contracts with third-party vendors, requiring adherence to privacy laws, breach notification, and security standards. Also, ensure clear communication with employees about how their data will be collected, used, and stored.
2. Performance Management Platforms and Employee Monitoring
Platforms like Insightful and similar performance management tools are increasingly being used to monitor employee productivity and/or compliance with applicable law and company policies. These platforms can capture a vast array of data, including screen activity, keystrokes, and time spent on tasks, raising significant privacy concerns.
While such tools may improve efficiency and accountability, they also risk crossing boundaries, particularly when employees are unaware of the extent of monitoring and/or where the employer doesn’t have effective data minimization controls in place. State laws like the California Consumer Privacy Act (CCPA) can place limits on these monitoring practices, particularly if employees have a reasonable expectation of privacy. They also can require additional layers of security safeguards and administration of employee rights with respect to data collected and processed using the platform.
Practical Tip: Before deploying such tools, assess the necessity of data collection, ensure transparency by notifying employees, and restrict data collection to what is strictly necessary for business purposes. Implement policies that balance business needs with employee rights to privacy.
3. AI-Powered Dash Cams in Fleet Management
AI-enabled dash cams, often used for fleet management, combine video, audio, GPS, telematics, and/or biometrics to monitor driver behavior and vehicle performance, among other things. While these tools enhance safety and efficiency, they also present significant privacy and legal risks.
State biometric privacy laws, such as Illinois’s Biometric Information Privacy Act (BIPA) and similar laws in California, Colorado, and Texas, impose stringent requirements on biometric data collection, including obtaining employee consent and implementing robust data security measures. Employers must also assess the cybersecurity vulnerabilities of dash cam providers, given the volume of biometric, location, and other data they may collect.
Practical Tip: Conduct a legal review of biometric data collection practices, train employees on the use of dash cams, and audit vendor security practices to ensure compliance and minimize risk.
4. Assessing Vendor Cybersecurity for Employee Benefits Plans
Third-party vendors play a crucial role in processing data for retirement plans, such as 401(k) plans, as well as health and welfare plans. The Department of Labor (DOL) emphasized in recent guidance the importance of ERISA plan fiduciaries' role in assessing the cybersecurity practices of such service providers.
The DOL’s guidance underscores the need to evaluate vendors’ security measures, incident response plans, and data breach notification practices. Given the sensitive nature of data processed as part of plan administration—such as Social Security numbers, health records, and financial information—failure to vet vendors properly can lead to breaches, lawsuits, and regulatory penalties, including claims for breach of fiduciary duty.
Practical Tip: Conduct regular risk assessments of vendors, incorporate cybersecurity provisions into contracts, and document the due diligence process to demonstrate compliance with fiduciary obligations.
5. Biometrics for Access, Time Management, and Identity Verification
Biometric technology, such as fingerprint or facial recognition systems, is widely used for identity verification, physical access, and timekeeping. While convenient, the collection of biometric data carries significant privacy and cybersecurity risks.
BIPA and similar state laws require employers to obtain written consent, provide clear notices about data usage, and adhere to stringent security protocols. Additionally, biometrics are uniquely sensitive because they cannot be changed if compromised in a breach.
Practical Tip: Minimize reliance on biometric data where possible, ensure compliance with consent and notification requirements, and invest in encryption and secure storage systems for biometric information. Check out our Biometrics White Paper.
6. HIPAA Updates Affecting Group Health Plan Compliance
Recent changes to the HIPAA Privacy Rule, including provisions related to reproductive healthcare, significantly impact group health plans. The proposed HIPAA Security Rule amendments also signal stricter requirements for risk assessments, access controls, and data breach responses.
Employers sponsoring group health plans must stay ahead of these changes by updating their HIPAA policies and Notice of Privacy Practices, training staff, and ensuring that business associate agreements (BAAs) reflect the new requirements.
Practical Tip: Regularly review HIPAA compliance practices and monitor upcoming changes to ensure your group health plan aligns with evolving regulations.
7. Data Breach Notification Laws and Incident Response Plans
Many states have updated their data breach notification laws, lowering notification thresholds, shortening notification timelines, and expanding the definition of personal information. Employers should revise their incident response plans (IRPs) to align with these changes.
Practical Tip: Ensure IRPs reflect updated laws, test them through simulated breach scenarios, and coordinate with legal counsel to prepare for reporting obligations in case of an incident.
8. AI Deployment in Recruiting and Retention
AI tools are transforming HR functions, from recruiting to performance management and retention strategies. However, these tools require vast amounts of personal data to function effectively, increasing privacy and cybersecurity risks.
The EEOC and other regulatory bodies have cautioned against discriminatory impacts of AI, particularly regarding protected characteristics like disability, race, or gender. (As noted above, the EEOC recently withdrew its AI guidance under the ADA and Title VII following an Executive Order by the Trump Administration.) For example, the use of AI in hiring or promotions may trigger compliance obligations under the ADA, Title VII, and state laws.
Practical Tip: Conduct bias audits of AI systems, implement data minimization principles, and ensure compliance with applicable anti-discrimination laws.
9. Employee Use of AI Tools
Moving beyond the HR department, AI tools are fundamentally changing how people work. Tasks that used to require time-intensive manual effort—creating meeting minutes, preparing emails, digesting lengthy documents, creating PowerPoint decks—can now be completed far more efficiently with assistance from AI. The benefits of AI tools are undeniable, but so too are the associated risks. Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will expose themselves to significant regulatory and litigation risk.
Practical Tip: Not all AI tools are created equal—either in terms of the risks they pose or the utility they provide—so an important first step is developing criteria to assess, and then going through the process of assessing, which AI tools to permit employees to use. Equally important is establishing clear ground rules for how employees can use those tools. For instance, what company information are they permitted to use to prompt the tool; what are the processes for ensuring the tool's output is accurate and consistent with company policies and objectives; and should employee use of AI tools be limited to internal functions, or should they also be permitted to use these tools to generate work product for external audiences?
10. Data Minimization Across the Employee Lifecycle
At the core of many of the above issues is the principle of data minimization. The California Privacy Protection Agency (CPPA) has emphasized that organizations must collect only the data necessary for specific purposes and ensure its secure disposal when no longer needed.
From recruiting to offboarding, HR professionals must assess whether data collection practices align with the principle of data minimization. Overcollection not only heightens privacy risks but also increases exposure in the event of a breach.
Practical Tip: Develop a data inventory mapping employee information from collection to disposal. Regularly review and update policies to limit data retention and enforce secure deletion practices.
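For teams building such an inventory, the mapping can be as simple as a structured record per data category. The sketch below is purely illustrative (the field names, systems, and 365-day retention period are assumptions, not legal requirements); actual retention schedules should come from counsel and applicable law.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative sketch of a data-inventory entry that tracks employee
# data from collection through scheduled disposal. Field names and the
# retention period are hypothetical examples, not a compliance standard.

@dataclass
class DataInventoryEntry:
    category: str        # e.g., "recruiting resume", "timekeeping biometrics"
    purpose: str         # the specific business purpose for collection
    system: str          # where the data is stored
    collected_on: date   # when collection occurred
    retention_days: int  # policy-driven retention period

    def disposal_due(self) -> date:
        """Date by which the data should be securely deleted."""
        return self.collected_on + timedelta(days=self.retention_days)

entry = DataInventoryEntry(
    category="recruiting resume",
    purpose="evaluate candidate for open role",
    system="applicant tracking system",
    collected_on=date(2025, 1, 1),
    retention_days=365,
)
print(entry.disposal_due())  # 2026-01-01
```

A machine-readable inventory like this makes it straightforward to query which records are past their disposal date and to demonstrate data-minimization practices during an audit.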
Conclusion
The rapid adoption of emerging technologies presents both opportunities and challenges for employers. HR professionals and in-house counsel play a critical role in navigating privacy, cybersecurity, and AI compliance risks while fostering innovation.
By implementing robust policies, conducting regular risk assessments, and prioritizing data minimization, organizations can mitigate legal exposure and build employee trust. This National Privacy Day, take proactive steps to address these issues and position your organization as a leader in privacy and cybersecurity.
The Telephone Consumer Protection Act and Sales Agents: The Dangers of the ‘Canary Trap’
The Telephone Consumer Protection Act (TCPA), 47 U.S.C. § 227, was enacted in 1991 “to protect the privacy interests of residential telephone subscribers,” according to the act’s legislative history. The TCPA provides for a “do-not-call list,” a registry that allows consumers to opt out of receiving unsolicited telemarketing calls. The primary purpose of the do-not-call list is to give individuals a way to limit the number of unwanted sales calls they receive. The TCPA provides consumers with a private right of action.
Quick Hits
The Telephone Consumer Protection Act (TCPA) of 1991 was established to protect residential telephone subscribers’ privacy by allowing them to opt out of unsolicited telemarketing calls through a “do not call list.”
Some individuals exploit the TCPA by using tactics like the “canary trap” to create cycles of alleged violations and file numerous lawsuits, as seen with one person’s filing sixty-eight lawsuits in Michigan since June 2017.
Companies can defend against TCPA lawsuits by clearly defining agency relationships in written agreements, ensuring compliance with the TCPA, and regularly updating and checking their call lists against the National Do Not Call Registry.
The ‘Canary Trap’
The intention behind the TCPA has been undermined by individuals who have made it a full-time job to trap individuals and companies into alleged violations. For example, since June 2017, one individual has filed sixty-eight lawsuits in Michigan alleging TCPA violations. This individual utilizes what he characterizes as a “canary trap,” which he describes as an “investigative technique” by which he provides false personal information and his actual phone number on the TCPA do-not-call list. He then waits to see where the false information reappears. For example, he will give the false information, and an actual do-not-call phone number, to an insurance agent and use the false information to apply for insurance. Then he waits for other insurance agents, using the same false information, to cold call him using his actual do-not-call telephone number. In this way, he creates a cycle of alleged violations and then files a lawsuit.
Key Issue: Vicarious Liability
For most companies, a key issue when defending against this type of lawsuit is that “under federal common-law principles of agency, there is vicarious liability for TCPA violations” (i.e., liability imposed on a company through the actions of its agents), as the Supreme Court of the United States stated in a 2016 decision, Campbell-Ewald Company v. Gomez. In this context, vicarious liability can be established through apparent authority, actual authority, or ratification. (Previously, in Keating v. Peterson’s Nelnet, LLC, the U.S. Court of Appeals for the Sixth Circuit explained in 2015 that the Federal Communications Commission (FCC) had concluded that defendants could be held vicariously liable for TCPA violations under federal common-law agency principles, including actual authority, apparent authority, and ratification.)
“[A]n agent acts with actual authority ‘when, at the time of taking action that has legal consequences for the principal, the agent reasonably believes, in accordance with the principal’s manifestations to the agent, that the principal wishes the agent so to act,’” the Michigan Court of Appeals noted in 2022 in Dobronski v. NPS, Inc., citing the Restatement (Third) of Agency.
Under Section 2.03 of the Restatement (Third) of Agency, “apparent authority” requires that the principal made manifestations to the third party—normally the plaintiff—that created in the third party a reasonable belief that the agent “had authority to act on behalf of the principal.” Finally, under Section 4.01(1) of the Restatement (Third) of Agency, “[r]atification is the affirmance of a prior act done by another, whereby the act is given effect as if done by an agent acting with actual authority.”
Avoiding the ‘Canary Trap’
A key strategy for defending against TCPA lawsuits brought by high-volume litigators alleging agency is to readily demonstrate that an agency relationship does not exist. This can be done by carefully defining the scope of the agency relationship in a written producer’s agreement. Ideally, the agreement will expressly require compliance with the TCPA and state that conduct that violates the TCPA falls outside the agency relationship. The communications between the agent and the customer must make it clear that the agent is not acting as an agent for the company.
In addition, the company may want to require agents to:
check the National Do Not Call Registry every thirty days and remove registered numbers from call lists;
obtain express written consent before making an automated or prerecorded call or sending a text message, and keep a record of this consent;
provide opt-out mechanisms for recipients of calls and messages; and
keep records of all compliance efforts.
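The first item on that list, scrubbing call lists against the registry at least every thirty days, can be enforced programmatically. The sketch below is a minimal illustration under assumed inputs: the DNC numbers are sample data standing in for the National Do Not Call Registry's subscription download, and the function name and staleness check are hypothetical, not a description of any vendor's actual tooling.

```python
from datetime import date, timedelta

# A re-check interval consistent with the thirty-day practice above.
DNC_RECHECK_DAYS = 30

def scrub_call_list(call_list, dnc_numbers, last_checked, today=None):
    """Return numbers safe to call, refusing to run on stale DNC data.

    call_list: phone numbers the agent intends to call
    dnc_numbers: set of registered Do Not Call numbers (sample data here;
                 in practice, sourced from the registry's download files)
    last_checked: date the DNC data was last refreshed
    """
    today = today or date.today()
    if today - last_checked > timedelta(days=DNC_RECHECK_DAYS):
        raise ValueError("DNC data is stale; refresh the registry download first")
    return [number for number in call_list if number not in dnc_numbers]

calls = ["2025550101", "2025550102", "2025550103"]
dnc = {"2025550102"}  # hypothetical registered number
safe = scrub_call_list(calls, dnc, last_checked=date.today())
print(safe)  # ['2025550101', '2025550103']
```

Raising an error on stale data, rather than silently proceeding, mirrors the compliance posture the checklist describes: no outbound calls until the registry check is current.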
The constant filing of lawsuits under the TCPA by using canary traps needs to be stopped. Companies can go a long way in advancing this goal by taking reasonable steps to ensure TCPA compliance.
Data Privacy Insights Part 1: North Carolina Ranks High in Cybercrime Complaints
With Data Privacy Awareness Week underway, there’s a renewed focus on the importance of securing data.
The FBI’s Internet Crime Complaint Center (IC3) report sheds light on the growing threat of cybercrime, both nationally and within North Carolina. The state ranks among the top 15 in the U.S. for cybercrime complaints, highlighting significant local challenges.
National Cybercrime Trends
The report paints a grim picture of the national cybercrime landscape, with over 880,000 complaints filed and a staggering $12.5 billion in reported losses in 2023. Among the most common crimes were phishing attacks, non-payment/non-delivery scams, and personal data breaches. Business Email Compromise (BEC) scams and cryptocurrency-related fraud continue to account for a large share of financial losses, highlighting the sophisticated tactics employed by cybercriminals.
Challenges in North Carolina
In North Carolina, the top-reported crimes align with national trends, including phishing, identity theft, and BEC scams. However, the state’s financial losses underscore the disproportionate impact of these crimes on businesses and individuals alike. Notable figures from the report include:
12,282 Complaints Filed: North Carolina accounted for nearly 2% of all complaints nationwide.
$234 Million in Financial Losses: The state ranks 13th in the nation for total losses, reflecting the high stakes of these attacks.
These statistics highlight a pressing issue that demands urgent action from both private and public sectors to address vulnerabilities and reduce risks.
Who Is Being Targeted?
Certain industries and sectors have become prime targets for cyberattacks due to the sensitive data they handle or their operational vulnerabilities. According to the report, these include:
Healthcare: This sector faced a surge in ransomware and database leaks in early 2024, causing disruptions in patient care and financial loss.
Legal Services: Organizations such as law firms and courthouses are targeted for their sensitive client and case data, making them lucrative targets for cybercriminals.
Supply Chains: The interconnected nature of supply chains makes them attractive for disruption and data theft, with downstream effects on multiple businesses.
Engineering and Construction: These industries remained consistent targets through 2023 and 2024, particularly due to their involvement in critical infrastructure projects.
Financial Institutions: Banks and other financial entities are frequent targets due to the valuable financial information they manage, including payment systems and client records.
Governments: Local and state governments face ongoing threats due to their extensive networks and sensitive information, ranging from personal data to national security concerns.
Education: Schools and universities often face cyberattacks aimed at accessing student and faculty data, leading to significant breaches that disrupt learning environments.
Looking Ahead
As cybercrime continues to evolve, it is essential for businesses, individuals, and government agencies to collaborate to enhance their defenses. The IC3 report calls for North Carolina to bolster its security measures to shield its residents and businesses from the growing financial and emotional impacts of cybercrime. Stay tuned for part two, where we’ll explore common types of data breaches and strategies to protect your business.
New York State Legislature Passes Health Data Law to Protect Abortion Rights
On January 21, 2025, the New York legislature passed Senate Bill S929, an act to amend the general business law to provide for the protection of health information (the “Act”). The Act would require written consent, or a designated necessary purpose, for the processing of an individual’s health information. The bill is pending Governor Kathy Hochul’s signature.
The Act prohibits the sale of regulated health information and limits the circumstances in which an entity can lawfully “process” regulated health information, including but not limited to the collection, use, access and monetization of such information. It defines regulated health information to mean “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual,” including location or payment information. Notably, regulated health information does not include deidentified information, or information that “cannot reasonably be used to infer information about, or otherwise be linked to a particular individual, household, or device,” given reasonable technical safeguards.
Entities will still be able to “process” regulated health information in certain circumstances, including when they have received “valid authorization” from an individual to do so. To be valid, an authorization must satisfy 11 conditions set forth by the Act, including that it is made by written or electronic signature; that the individual can provide or withhold authorization for different categories of processing activities; that the individual can revoke authorization; and that failure “to provide authorization will not affect the individual’s experience of using the regulated entity’s products or services.” Authorizations must expire within one year of being provided.
The Act provides for other circumstances that allow entities to “process” regulated health information absent “valid authorization” from the individual, including when such information is “strictly necessary” for “providing… a specific product or service requested by [the] individual,” “conducting… internal business operations,” “protecting against… illegal activity,” and “detecting, responding to, or preventing security incidents or threats.”
The Act would take effect one year after it is signed into law. Rules or regulations necessary to implement the Act are authorized to be made immediately following its passage and may be completed before the effective date.
Governor Hochul’s Office has not yet commented on the bill, but she has been a longtime supporter of abortion access, a position on which she campaigned.
Updated COPPA Rule on Hold?
As we recently reported, the Federal Trade Commission (FTC or Commission) finalized updates to the Children’s Online Privacy Protection Rule (COPPA Final Rule or Rule) on January 16, 2025, and the updates were due to take effect 60 days after publication in the Federal Register. However, an Executive Order issued by President Trump on January 20, 2025, freezes “proposing or issuing any rule or publishing any rule to the Office of the Federal Register until a new agency head appointed or designated by the President reviews and approves the rule.” The Executive Order means that publication of the amended COPPA rule will likely be delayed, and the Rule may still be subject to change.
FTC Commissioner and now FTC Chair Andrew Ferguson may want to use President Trump’s Executive Order to press the pause button on rules in general and to consider whether further clarifications to the COPPA Final Rule are merited. While Chair Ferguson voted in favor of the COPPA Final Rule updates prior to Trump’s 2025 inauguration, he took exception to three “major problems” with the Final Rule that he identified in a concurring statement.
First, Chair Ferguson contended that what constitutes a “material change” to privacy terms, especially as they relate to categories of third parties with whom data is shared, requires clarification, “since not all additions or changes to the identities of third parties should require new (parental) consent.”
Next, he objected to the Rule’s prohibition on retaining personal information collected online indefinitely, arguing that, while well-intentioned, the prohibition could produce undesirable results and incentivize companies to disclose lengthy retention periods. Section 312.10 of the COPPA Final Rule allows retention of children’s data only for “as long as is reasonably necessary to fulfill the purpose for which the information was collected,” language identical to that in the old COPPA Rule. Thus, in his view, “as long as the data continued to serve the same important function for which it was collected,” indefinite retention would be appropriate.
Finally, Chair Ferguson stated that “the Commission missed the opportunity to clarify that the Final Rule is not an obstacle to the use of children’s personal information solely for the purpose of age verification.” He argued that “The old COPPA Rule and the Final Rule contain many exceptions to the general prohibition on the unconsented collection of children’s data, and these amendments should have added an exception for the collection of children’s personal information for the sole purpose of age verification, along with a requirement that such information be promptly deleted once that purpose is fulfilled.”
Businesses with an interest in children’s privacy should stay tuned for possible further action.
UK FCA Letter Expresses Concerns About Fund Service Providers
Go-To Guide:
UK Financial Conduct Authority (FCA) highlights concerns about fund service providers in “Dear CEO” letter.
FCA identifies seven main risk areas, including operational resilience, cyber security, third-party management, and client asset protection.
Fund managers to review the FCA’s risk areas when conducting due diligence on potential service providers.
FCA plans to assess fund service providers’ compliance and may use formal intervention powers if necessary.
In late 2024, the United Kingdom’s Financial Conduct Authority (FCA) published a “Dear CEO” letter related to the FCA’s “Custody and Fund Services Supervision Strategy.” The letter shares the FCA’s expectations of UK FCA-authorised firms that act as custodians, depositories, and administrators in the funds sector. Importantly, the letter also highlights some of the regulatory risks and topics fund managers should be reviewing as part of their due diligence before selecting service providers for their funds, irrespective of whether the service provider is an FCA-authorised firm in the UK or is domiciled offshore.
The overwhelming message from the FCA is that fund service providers must have processes and procedures in place to identify risks and implement rules related to the areas of concern detailed below. The FCA will use its powers where necessary and conduct assessments on “a selection of firms” to ensure that firms comply with the requests made in the FCA’s letter. The FCA has also reminded in-scope firms that they must have performed mapping and testing to provide assurance that they are able to remain within impact tolerances by 31 March 2025.
The FCA has focussed on the following risks in the funds sector, which service providers must be identifying and mitigating.
1. Operational Resilience
In the Dear CEO letter, the FCA states that it will focus on monitoring fund service providers’ compliance with, and implementation of, existing rules and guidance on building operational resilience. According to existing FCA requirements, authorised fund service providers must have performed mapping and testing by 31 March 2025 to provide assurance that they can remain within impact tolerances for each important business service in severe but plausible scenarios.
Within authorised fund service providers, the FCA is looking for evidence of prompt deployment of incident management plans; prioritisation of important business services to reduce operational and client impact; detailed mapping of delegation by fund service providers in order to understand underlying exposures to the same providers; and processes in place for clear communication with the FCA where required.
2. Cyber Resilience
The FCA states that some fund service providers’ sub-optimal cyber resilience and security measures pose risks in the funds sector. The FCA notes that it will continue to focus on this as a threat, including (i) how effectively firms manage critical vulnerabilities; (ii) threat detection; (iii) business recovery; (iv) stakeholder communication; and (v) remediation efforts to build resilience.
The letter is clear that fund service providers should ensure that their governing bodies are provided not only with a report of effectiveness of controls, but also with an assessment of the cyber risks present.
3. Third Party Management
Fund service providers naturally (due to the levels of relevant expertise required) delegate specific roles to third parties. In its letter, the FCA has expressed concern that operational incidents involving third parties remain frequent. Where there is inadequate oversight, the likelihood of such incidents increases.
The FCA plans to assess fund service providers’ oversight, not only of their delegates, but also of those delegates’ delegates, including key material supplier relationships and management.
The FCA expects firms to have effective processes in place to identify, manage, monitor, and report third-party risks, and to perform an assessment on, and mapping of, third-party providers.
4. Change Management
In its letter, the FCA has noted that with advances in technology (such as automation, artificial intelligence, and distributed ledger technology) and regulatory developments (such as settlement cycle changes), fund service providers must ensure that they are managing changes appropriately in order to maintain market integrity.
The FCA will assess a selection of fund service providers to review their change management frameworks, which involves looking at their overall approach and methodology, including testing, to understand how client and consumer outcomes have been considered.
The FCA has published guidance detailing key areas that contribute to successful change management. In addition, if any major firm initiatives or strategy changes are contemplated, fund service providers are encouraged to engage in early dialogue with the FCA.
5. Market Integrity
In light of the increased use of sanctions and related complexity, the FCA has stated that it will review the effectiveness of select fund service providers’ systems and controls, governance processes, and resource sufficiency in connection with sanctions regime compliance.
The FCA expects fund service providers to have effective procedures in place to detect, prevent, and deter financial crime, which should be appropriate and proportionate. Senior management at providers should take clear responsibility for managing and addressing these risks. Firms should have robust internal audit and compliance processes that test the firm’s defences against specific financial crime threats.
6. Depositary Oversight
The FCA has identified a gap in expectations over the role of depositaries and has noted that, in its view, depositaries “have often demonstrated a less than proactive approach” to their oversight, risk identification, and escalation processes in relation to funds and AIFMs. The FCA will be clarifying its rules for, and expectations of, depositaries.
In its letter, the FCA notes that it expects depositaries to act more proactively in the interests of fund investors. They should provide effective, independent oversight of AIFMs’ operations and funds’ adherence to FCA rules. The FCA also reminds depositaries that they are expected to have processes in place to ensure that they receive the information needed to perform their duties.
7. Protection of Client Assets
Protection of client assets is a regulatory priority set out in the FCA’s 2024/5 Business Plan. The FCA has identified weaknesses in important areas within fund service providers, including books and records and dependency on legacy IT infrastructure, which is at its end of life and includes high levels of manual processing and controls. The FCA has noted that it will continue to identify weaknesses and use formal intervention powers if necessary.
Takeaways
The FCA’s “Dear CEO” letter to fund service providers is both a warning and a plea for them to do all they can to mitigate the risks the FCA has identified. FCA-authorised fund service providers should expect the FCA to write to them later in 2025 seeking their own evaluation of their progress in mitigating those risks.
Importantly, fund managers should, as part of their due diligence in relation to the appointment of fund service providers (irrespective of whether the service provider is in the UK or is based offshore), be exploring how the risks identified by the FCA are being mitigated.
Coast Guard Issues Final Maritime Cybersecurity Rule: Key Requirements and Implementation Timeline
On January 17, the US Coast Guard released its much-anticipated final rule on cybersecurity in the US Marine Transportation System, which establishes mandatory minimum cybersecurity requirements for the maritime sector. The new regulations are effective July 16, 2025, and represent the most significant maritime cybersecurity regulations to date. Affected entities should review their existing policies, identify any gaps or deficiencies, and implement compliance procedures.
Jones Walker’s 2022 Ports and Terminals Cybersecurity Survey data was cited in the final rule, helping to shape some of the new regulations.
I. Scope and Applicability
The primary goal of the final rule is to enhance the cybersecurity of the US Marine Transportation System. The new regulations establish minimum mandatory requirements for US flag vessels, Outer Continental Shelf (OCS) facilities, and facilities subject to the Maritime Transportation Security Act of 2002. The rule aims to address the increasing risks posed by cyber threats due to the growing reliance on interconnected digital systems within the maritime industry. It emphasizes both preventing cyber incidents and preparing to respond to them effectively.
The rule applies to:
a. US flag vessels subject to 33 CFR part 104
33 CFR part 104 applies to:
Cargo vessels greater than 100 gross tons
Commercial passenger vessels certified to carry more than 150 passengers
Offshore Supply Vessels (OSVs)
Mobile Offshore Drilling Units (MODUs)
Towing vessels more than 26 feet long engaged in towing certain dangerous cargo barges
Cruise ships and passenger vessels carrying more than 12 passengers on international voyages
b. Facilities subject to 33 CFR part 105
These facilities are covered by the regulation:
Container terminals
Chemical facilities with waterfront access
Petroleum terminals
Cruise ship terminals
Bulk liquid transfer facilities
LNG/LPG terminals
Barge fleeting facilities handling dangerous cargo
Facilities that receive vessels carrying more than 150 passengers
Marine cargo terminals otherwise subject to the Maritime Transportation Security Act of 2002
c. OCS facilities subject to 33 CFR part 106
These OCS facilities are affected:
Offshore oil and gas production platforms
Offshore drilling rigs
Floating production storage and offloading units (FPSOs)
Deepwater ports
Offshore wind energy facilities
Offshore loading/unloading terminals
II. Core Requirements
The cybersecurity plan must include measures for account security (e.g., automatic account lockout, strong passwords, multifactor authentication), device security (e.g., approved hardware/software lists, disabling executable code), and data security (e.g., secured logging, data encryption). Entities must also create or implement the following:
a. Cybersecurity Officer — Each covered entity must designate a Cybersecurity Officer (CySO) responsible for implementing and maintaining cybersecurity requirements. The rule allows for designation of alternate CySOs and permits one individual to serve multiple vessels or facilities, providing welcome flexibility for operators.
b. Cybersecurity Plans and Assessments — Organizations must develop and maintain the following:
A comprehensive Cybersecurity Plan
A separate Cyber Incident Response Plan
Regular cybersecurity assessments
Plans must be submitted to the Coast Guard for review within 24 months of the rule’s effective date.
c. Training and Exercises — The rule mandates the following:
Cybersecurity training for all personnel using IT/OT systems beginning July 17, 2025
Two cybersecurity drills annually
Regular penetration testing aligned with plan renewal cycles
d. Technical Controls — Required security measures include the following:
Account security controls including multifactor authentication
Device security measures and approved hardware/software lists
Data encryption and secure log management
Network segmentation and monitoring
Supply chain security requirements
III. Implementation Timeline
Key phase-in compliance dates include:
Rule effective date: July 16, 2025
Training requirements begin: July 17, 2025
Initial cybersecurity assessment: Due by July 16, 2027
Cybersecurity Plan submission: Due by July 16, 2027
The Coast Guard is seeking comments on extending implementation periods for the new requirements by two to five years for US flag vessels. Comments are due no later than March 18, 2025. After review of these comments, the Coast Guard may issue a future rule to allow additional time for US flag vessels to implement the new regulations.
IV. Harmonization with Other Requirements
The Coast Guard has worked to align these requirements with other cybersecurity regulations, including the Cybersecurity and Infrastructure Security Agency’s (CISA) Cyber Incident Reporting for Critical Infrastructure Act of 2022 reporting requirements. The rule establishes the National Response Center (NRC) as the primary reporting channel for maritime cyber incidents, simplifying compliance for regulated entities.
V. Some Basic Questions and Answers
What are the mandatory cybersecurity measures outlined in the rule? Owners and operators must implement a range of cybersecurity measures that are based on “cybersecurity performance goals” developed by CISA. This includes vulnerability identification of critical IT and OT systems, addressing known exploited vulnerabilities in those critical systems, and conducting penetration testing in conjunction with renewing the Cybersecurity Plan.
What constitutes a reportable cyber incident, and to whom do I report it? A reportable cyber incident is defined as any incident leading to substantial loss of confidentiality, integrity, or availability of a covered system; to disruption to business operations; to unauthorized access to nonpublic personal information of a large number of individuals; or to operational disruption of critical infrastructure. Such an incident also includes any event that may lead to a “transportation security incident.” Such incidents must be reported to the NRC.
What is the Coast Guard’s approach to compliance and enforcement of this new rule? The rule takes a performance-based approach, meaning that it focuses on outcomes rather than prescribing specific technical solutions, thus providing some flexibility to the entities in meeting the requirements. However, the rule does not specify the methods of enforcement, and the Coast Guard is currently working with policymakers to define the compliance criteria. The Coast Guard will address those questions at upcoming symposiums. Noncompliance with the rule could lead to penalties, legal action, and financial losses.
Is there any flexibility or possibility of waivers in complying with this rule? Yes. After completing a cybersecurity assessment, owners and operators can seek a waiver or an equivalence determination for the requirements, based on the waiver and equivalency provisions of 33 CFR parts 104, 105, and 106. Owners and operators must also notify the Coast Guard of temporary deviations from the requirements.
VI. Key Takeaways
Begin preparation now — the 24-month implementation period will pass quickly given the scope of required changes.
Evaluate current cybersecurity staffing and capabilities against new CySO requirements.
Review existing security measures against the detailed technical requirements.
Plan for increased training and exercise obligations.
Consider whether to comment on the proposed implementation extension for vessels.
Our cross-disciplinary team has extensive experience helping clients navigate complex regulatory requirements. We can assist with:
Gap analysis against new requirements
CySO program development
Cybersecurity Plan creation and review
Training program development
Technical compliance assessment
How to Report Crypto Fraud and Qualify for a CFTC Whistleblower Award
CFTC Whistleblower Program Rewards Whistleblowers for Providing Original Information About Crypto Fraud
Crypto fraud schemes have caused investors to lose more than one billion dollars and have undermined public confidence in the digital asset and cryptocurrency ecosystem. Indeed, the implosion of FTX led to a crypto winter in which the value of digital assets plummeted and several crypto lending firms went bankrupt. Whistleblowers can help the CFTC identify and combat crypto fraud schemes by promptly providing specific and credible information.
Crypto fraud whistleblowers are eligible to receive between 10% and 30% of the monetary sanctions collected in successful enforcement actions. The CFTC has issued more than $390 million in awards to whistleblowers. The largest CFTC whistleblower awards to date are $200 million, $45 million, $30 million, and $10 million. Whistleblower disclosures have enabled the CFTC to bring successful enforcement actions against wrongdoers with orders for more than $3.2 billion in monetary relief.
Whistleblowers who voluntarily provide the CFTC with original information about violations of the Commodity Exchange Act (CEA) that leads the CFTC to bring a successful enforcement action imposing monetary sanctions exceeding $1 million can qualify for a CFTC whistleblower award, paid from CFTC-collected monetary sanctions and from related actions brought by other governmental entities.
Crypto Fraud Schemes that the CFTC Combats
Wash trading of digital currencies or swaps or futures contracts. For example, CLS Global recently pleaded guilty to a fraudulent “wash trading” scheme in which it attempted to manipulate the crypto market by inflating the value of various cryptocurrencies through wash trading – repeatedly buying and selling tokens to make them appear more valuable to investors. In particular, CLS Global used an algorithm that executed self-trades from multiple wallets to make the activity appear to be organic buying and selling.
Pump and dump schemes, such as the CFTC’s action against Adam Todd and four companies he controlled for attempting to manipulate Digitex’s native utility token, DGTX, by allegedly pumping the token’s price through the use of a computerized bot on third-party exchanges he designed to be “always buying more than it was selling” and by filling large over-the-counter orders to purchase DGTX on third-party exchanges.
Pig butchering or relationship confidence schemes in which fraudsters build online relationships with unsuspecting individuals before convincing them to trade crypto assets or foreign currency on fake trading platforms. According to the FBI’s 2023 Cryptocurrency Report, losses from cryptocurrency-related investment fraud schemes reported to the FBI’s Internet Crime Complaint Center totaled $3.96 billion in 2023.
Crypto Ponzi schemes, such as Ikkurty Capital, LLC soliciting more than $40 million from investors by promising to invest the funds in a crypto hedge fund or a carbon offset bond and instead using the funds to pay off previous investors in another crypto hedge fund and investing a small portion in volatile digital tokens. The final order of judgment in that matter imposed over $209 million in monetary sanctions.
Violating rules that protect customer funds and require custodians to segregate and separately account for customer funds. For example, FTX and Alameda Research were required to pay $8.7 billion in restitution and $4 billion in disgorgement for commingling of customer funds, using customer funds to extend a line of credit to an affiliate, investing customer funds in non-permitted investments through an affiliate, and appropriating customer funds for luxury real estate purchases, political contributions, and high-risk, illiquid digital asset industry investments.
Operating an illegal commodity pool. For example, the CFTC obtained an order against Mirror Trading International Proprietary Limited (MTI) requiring it to pay $1.7 billion in restitution and a $1.7 billion civil penalty for failure to comply with commodity pool operator regulations. MTI solicited Bitcoin from investors for participation in an unregistered commodity pool that purportedly traded off-exchange, retail forex through a proprietary “bot” or software program, but in fact MTI misappropriated the Bitcoin it accepted from pool participants.
CFTC Whistleblower Reward Program
Under the CFTC Whistleblower Reward Program, the CFTC will issue rewards to whistleblowers who provide original information that leads to CFTC enforcement actions with total civil penalties in excess of $1 million (see how the CFTC calculates monetary sanctions). A whistleblower may receive an award of between 10% and 30% of the total monetary sanctions collected. Monetary sanctions include restitution, disgorgement, and civil monetary penalties.
Reporting original information about cryptocurrency fraud “leads to” a successful enforcement action if either:
The original information caused the staff to open an investigation, reopen an investigation, or inquire into different conduct as part of a current investigation, and the Commission brought a successful action based in whole or in part on conduct that was the subject of the original information; or
The conduct was already under examination or investigation, and the original information significantly contributed to the success of the action.
In determining a reward percentage, the CFTC considers the particular facts and circumstances of each case. For example, positive factors may include the significance of the information, the level of assistance provided by the whistleblower and the whistleblower’s attorney, and the law enforcement interests at stake.
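As a rough illustration of the award arithmetic described above (a hypothetical sketch only; the actual percentage within the 10%–30% band turns on the case-specific factors the CFTC weighs):

```python
def award_range(total_sanctions: float,
                min_pct: float = 0.10,
                max_pct: float = 0.30,
                threshold: float = 1_000_000):
    """Return the (minimum, maximum) possible award for a given amount of
    collected monetary sanctions, or None if the $1 million threshold is
    not exceeded. Illustrative only: the CFTC sets the actual percentage
    case by case."""
    if total_sanctions <= threshold:
        return None  # below the covered-action threshold, no award
    return (total_sanctions * min_pct, total_sanctions * max_pct)

# A hypothetical $10 million in collected sanctions:
print(award_range(10_000_000))  # (1000000.0, 3000000.0)
```

On these assumptions, a $10 million collection could support an award of between $1 million and $3 million, with the precise figure set by the factors above.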
Awards are paid from the CFTC Customer Protection Fund, which is financed through monetary sanctions collected by the CFTC in any covered judicial or administrative action that are not otherwise distributed, or ordered to be distributed, to victims of a violation of the CEA underlying such action.
Crypto Fraud Whistleblowers Can Report Anonymously to the CFTC
If represented by counsel, a crypto fraud whistleblower may submit a tip anonymously to the CFTC. In certain circumstances, a whistleblower may remain anonymous, even to the CFTC, until an award determination. However, even at the time of a reward, a whistleblower’s identity is not made available to the public.
The confidentiality protections of the CEA require the CFTC not to disclose information that “could reasonably be expected to reveal the identity of the whistleblower.” According to a recent report of the CFTC Whistleblower Office, the Office takes steps to protect whistleblower confidentiality. For example, in a recent fiscal year the Office considered 267 requests to produce documents from the investigation and litigation files of the Enforcement Division and found 16 requests to implicate whistleblower-identifying information. The Office worked with the Enforcement Division to remove whistleblower-identifying information or otherwise take steps to preserve whistleblower confidentiality.
SEC Actions in Review: What Officers and Directors Should Know for 2025
As the regulatory landscape continues to evolve, public company officers and directors must stay abreast of the enforcement priorities and expectations of the Securities and Exchange Commission (SEC). Over the past year, the SEC has brought various enforcement actions that involve the oversight and reporting obligations of management and boards. These cases highlight potential blind spots in corporate compliance programs. This article summarizes recent enforcement actions related to director independence, cybersecurity, insider “shadow” trading, internal investigations, executive compensation, beneficial ownership and insider transaction reports, and artificial intelligence. Despite the change in administration, public company officers and directors should view these as potential areas of continued SEC focus over the upcoming year.
Director Independence
In September 2024, the SEC announced it had settled[1] charges against a director of an NYSE-listed consumer packaged goods company for violating the proxy rules by failing to disclose in his D&O questionnaire his close friendship with an executive officer, an omission that caused the company to falsely list him as an independent director in its proxy statement.[2] This undisclosed relationship included multiple domestic and international paid vacations with the executive.[3] The director also allegedly provided confidential information to the executive about the company’s CEO search and instructed the executive to withhold information about their personal relationship to avoid the impression that the director was biased toward the executive becoming CEO of the company.[4] The director agreed to a civil penalty of $175,000, a five-year officer and director bar, and a permanent injunction from further violations of the proxy rules.
Takeaway: For directors, this case underscores the importance of being “honest, truthful, and forthright”[5] when completing D&O questionnaires and not treating them as mere formalities that are rolled forward from one year to the next. This enforcement action further shows that material misstatements and omissions in the D&O questionnaire can give rise to a direct violation of the proxy disclosure rules against the director for causing a company’s proxy statements to contain false and misleading statements. The determination of independence can be complex. However, directors are not tasked with making that determination themselves; they merely must disclose all relevant facts in their D&O questionnaires, including social relationships with management.
Cybersecurity
In October 2024, the SEC announced settlements with four issuers for misleading disclosures regarding cybersecurity risks and intrusions.[6] These cases stemmed from an ongoing investigation of companies impacted by the two-year-long cyberattack against a software company, which the SEC had charged a year earlier for failing to accurately convey its cybersecurity vulnerabilities and the extent of the cyberattack.[7] Each issuer charged by the SEC in October 2024 utilized this company’s software and discovered the actor likely behind the software company’s breach also had accessed their systems, but according to the SEC, their public disclosures minimized or generalized the cybersecurity incidents. Specifically, two of the issuers failed to disclose the full scope and impact of the cyberattack, including the nation-state nature of the threat actor, the duration of the malicious activities, and in one case[8] the number of compromised files and the large number of customers whose information was accessed, as well as in another case the percentage of code that was compromised.[9] The other two issuers failed to update their risk disclosures in SEC filings and instead framed cybersecurity risks and intrusions as general and not material[10] or in hypothetical terms[11] rather than disclosing the actual malicious activities and their impact on the company.
The SEC charged each issuer with violations of Sections 17(a)(2) and 17(a)(3) of the Securities Act (which prohibit misleading statements or fraud in connection with the offering or sale of securities) and Section 13(a) of the Exchange Act and Rules 13a-1, 13a-11, 13a-13, and 13a-15(a) thereunder (rules governing required filings for public companies, including requirements that such filings include any material information needed to ensure they are not misleading and that companies maintain internal controls and procedures over financial reporting). One of the companies also was charged with disclosure controls and procedures violations. While each issuer received credit for cooperating in the SEC investigation, the settlements included civil penalties ranging from $990,000 to $4 million.
Takeaway: When a cybersecurity breach is identified, the board and management must ensure their company’s disclosures are accurate, current, and tailored to the company’s “particular cybersecurity risks and incidents.”[12] Indeed, the SEC’s cybersecurity disclosure rules, adopted on July 26, 2023, specifically require registrants to, among other things, report on Form 8-K any cybersecurity incident deemed to be material and to disclose on Form 10-K the registrant’s processes for assessing, identifying, and managing material risks from cybersecurity threats, the material impacts of cybersecurity threats and previous incidents, and specific information relating to the role of the board and management in identifying and managing such risks.[13] As the SEC stated, “Downplaying the extent of a material cybersecurity breach is a bad strategy”[14] and, as these cases demonstrate, can subject the company to an enforcement investigation and action. Navigating cybersecurity disclosure obligations, however, especially when the breach is ongoing and its origin and impact are not fully understood, presents unique challenges for issuers. And despite the dissenting opinion of two SEC commissioners in the October 2024 cybersecurity enforcement cases, who believed the omitted details were not material to investors, the board and management must constantly evaluate whether their company’s cybersecurity risk disclosures, as well as the disclosed scope and impact of any material breach, are sufficiently detailed and remain accurate throughout the company’s investigation.
Insider “Shadow” Trading
In April 2024, the SEC won a jury verdict in an insider trading case based on a “shadow” insider trading theory.[15] Shadow trading involves an insider’s misappropriation of confidential information about the insider’s company to trade in securities of another company where there is a sufficient “market connection” between the two companies. In this case, the SEC alleged, and the jury found, the defendant used confidential information about a potential acquisition of the biotech company he worked for to purchase call options in a second biotech company in the belief that its stock price would materially increase after the deal involving his company was publicly announced. What was novel about this case was the lack of a commercial connection between the two companies and the fact that the confidential information did not directly relate to the company whose securities the defendant traded in.[16] The nexus between the two companies that served as the basis for the SEC’s insider trading charges was that they were both operating in a field where viable acquisition candidates were scarce, such that the announcement of the sale of the insider’s company was likely to drive up the stock price of the other company.
Takeaway: Officers and directors should take note of this case and, pending further judicial developments, should refrain from shadow trading when in possession of material non-public information (MNPI). Indeed, corporate insider trading policies and codes of conduct often prohibit trading in the securities of publicly-traded customers, vendors, and other commercial partners when an insider is in possession of MNPI. Further, the SEC’s success in this civil case, together with the existence of criminal penalties for insider trading, creates an additional risk of criminal prosecution. In short, officers and directors should avoid becoming embroiled in allegations of shadow trading, which could be costly to defend, cause reputational damage, and lead to the imposition of significant sanctions.
Internal Investigations
The SEC has made clear that when a company fails to investigate and remediate wrongful conduct, it will hold officers and directors responsible even if they may not have been involved in the underlying violation. And when a board and management take prompt action to investigate, remediate, and self-report, the SEC will “reward [] meaningful cooperation to efficiently promote compliance” in the form of reduced charges and/or sanctions.[17]
In September 2024, the SEC brought unsettled civil fraud charges in federal court against the former CEO, former CFO, and former director and audit committee chair of a bankrupt (formerly Nasdaq-listed) software company for their roles in an alleged scheme that resulted in the company overstating and misrepresenting its revenues in connection with two public stock offerings that raised $33 million.[18] The SEC alleged that while the CEO initiated and directed the fraud, the CFO and director received a complaint from a senior company employee regarding revenue concerns about the main product disclosed in the offering materials, but other than consulting with outside counsel, they failed to investigate the employee’s concerns or correct the potential misstatements. As a result, both signed public filings that contained false and misleading statements and, in connection with the year-end audit, falsely represented to the outside auditors that they had no knowledge of any complaints regarding the company’s financial reporting. The SEC is seeking disgorgement of ill-gotten gains, civil penalties, and officer-and-director bars against each defendant. In its press release, the SEC warned, “This case should send an important signal to gatekeepers like CFOs and audit committee members that the SEC and the investing public expect responsible behavior when critical issues are brought to their attention.”[19]
In stark contrast, in December 2024 the SEC declined to impose a civil penalty in a settled administrative cease-and-desist action against a publicly-traded biotechnology company due to its self-reporting, proactive remediation, and meaningful cooperation.[20] The SEC credited the company’s board for (1) forming an independent special committee, which hired outside counsel to conduct an investigation into two anonymous complaints; (2) adopting the special committee’s remediation recommendations, including appointing an interim CEO, establishing a disclosure committee, and appointing two new independent directors; and (3) self-reporting the results of the internal investigation.[21] The SEC filed separate settled charges against the former CEO and former CFO for misleading investors about the status of FDA reviews of the company’s drug candidates related to a follow-on public offering. Among other sanctions, the CEO and CFO agreed to civil penalties, and the CEO agreed to an officer-and-director bar.[22]
Similarly, in a settled action announced in September 2024, the SEC credited a former publicly-traded technology manufacturer for conducting an internal investigation, self-reporting the investigation results, and implementing remedial measures.[23] Despite the existence of fraudulent conduct by a high-level employee, the SEC charged the issuer with only non-fraud violations of the financial reporting, books and records, and accounting control provisions of the federal securities laws and did not impose any penalty. The SEC explained in its press release that “this kind of response by a corporate entity can lead to significant benefits including, as here, no penalty.”[24] The SEC did bring civil fraud charges against the company’s finance director who perpetrated a fraud related to the company’s financial performance during a three-year period.[25]
Takeaway: When accounting errors or improper conduct are discovered or alleged, a company and its board should take prompt action. Conducting an independent investigation, undertaking prompt remediation, and being transparent with the company’s outside auditors are critical to ensuring accurate disclosures, preventing further errors and misconduct, and mitigating regulatory and legal exposure. Failing to do so will increase business and legal costs, damage the company’s reputation, and expose officers and directors to individual liability. And where appropriate, with the advice of experienced counsel, companies should evaluate the pros and cons of self-reporting, which regulators will credit as a mitigating factor when considering charges, sanctions, and settlements.
Executive Compensation
In December 2024, the SEC announced it had settled charges against an NYSE-listed fashion retail company for failing to disclose within its definitive proxy statements $979,269 worth of executive compensation related to perks and personal benefits provided to a now-former CEO for fiscal years 2019, 2020, and 2021.[26] These unreported personal benefits included expenses associated with the authorized use of chartered aircraft for personal purposes.[27] The company’s failure to disclose these benefits resulted in it underreporting the “All Other Compensation” portion of its then-CEO’s compensation by an average of 94% over the three fiscal years.[28] The SEC charged the company with violations of Sections 13(a) and 14(a) of the Exchange Act and Rules 12b-20, 13a-1, 13a-15(a), 14a-3, and 14a-9 thereunder (which, among other things, prohibit companies from making false or misleading statements in proxy statements).[29] The SEC imposed a cease-and-desist order and declined to impose a civil penalty, in part due to the company’s prompt remediation and self-reporting.[30]
Takeaway: This case underscores the importance of companies having adequate processes, policies, and controls for identifying perks and personal benefits and ensuring they are included in executive compensation disclosures. SEC rules require companies, among other things, to disclose the total value of such benefits provided to named executive officers who receive at least $10,000 worth of such items in a given year. See Item 402 of Regulation S-K. Transparent disclosure not only fulfills a company’s regulatory obligations but also helps maintain public trust. Failing to fully report the perks and personal benefits executives receive can lead to increased government scrutiny, reputational damage, and loss of investor confidence. And when a company falls short, prompt remediation is critical and can result in a reduction of regulatory sanctions.
Beneficial Ownership and Insider Transaction Reports
On September 25, 2024, the SEC announced charges against 23 officers, directors, and major shareholders for violating Sections 16(a), 13(d), and 13(g) of the Exchange Act, which require the reporting of information concerning holdings of, and transactions in, public company stock.[31] In addition, the SEC charged two publicly-traded companies for their failure to report these insiders’ filing delinquencies or for contributing to these insiders’ failures to file.[32] In its press release, the SEC explained the importance of complying with these reporting obligations: “To make informed investment decisions, shareholders rely on, among other things, timely reports about insider holdings and transactions and changes in potential controlling interests.”[33] The settlements included penalties ranging from $10,000 to $200,000 for individuals and $40,000 to $750,000 for companies, totaling more than $3.8 million in penalties.[34] The SEC used data analytics to identify individuals and entities with late required reports.
Takeaway: While it is unusual for the SEC to bring so many actions at once, the “SEC’s enforcement initiatives” are not surprising given the SEC’s continued focus on policing compliance.[35] The SEC continues to send a clear signal to insiders and investors that they need to “commit necessary resources to ensure these reports are filed on time” or risk enforcement action.[36] And as the SEC recently warned, “[T]hese reporting requirements apply irrespective of whether the trades were profitable and regardless of a person’s reasons for the transactions.”[37] For public companies that assist insiders in complying with these filing requirements, the SEC actions further make clear that companies are not immune: they must stay abreast of rule amendments and confirm that their monitoring processes and controls are working effectively to ensure timely reporting.
Artificial Intelligence
The SEC continued its crackdown on “AI-washing” by bringing a settled enforcement action on January 14, 2025, against a restaurant services technology company due to alleged misrepresentations concerning “critical aspects of its flagship artificial intelligence [] product[.]”[38] According to the SEC, AI-washing is a deceptive tactic that consists of promoting a product or a service by overstating the role of artificial intelligence integration.[39] The product at issue in the enforcement action employed AI-assisted speech recognition technology to automate aspects of drive-thru ordering at quick-service restaurants. Among other things, the SEC accused the company of reporting a misleading rate of orders completed without human intervention using the product.[40] The company was charged with violations of Section 17(a)(2) of the Securities Act and Section 13(a) of the Exchange Act.[41] The SEC declined to impose a civil penalty based on the company’s cooperation during the Staff’s investigation and remedial efforts, with the company consenting to a cease-and-desist order.
While this most recent enforcement action against AI-washing led to a cease-and-desist order, the Commission’s enforcement cases in 2024 included steep penalties for violators.[42] In an earlier enforcement action against two investment advisory companies, the SEC levied civil penalties totaling $400,000 for the companies’ false and misleading statements concerning their purported use of artificial intelligence.[43] Specifically, the companies were alleged to have marketed to their clients (and prospective clients) that they were using AI in certain ways when they were not.[44] In the SEC’s press release, Chair Gary Gensler warned, “We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies. . . . Such AI washing hurts investors. . . . [P]ublic issuers making claims about their AI adoption must [] remain vigilant about [] misstatements that may be material to individuals’ investing decisions.”[45]
Takeaway: It is evident that “[a]s more and more people seek out AI-related investment opportunities,” the SEC becomes more and more committed to “polic[ing] the markets against AI-washing[.]”[46] The SEC’s emphasis that any claims regarding AI must be substantiated with accurate information makes it essential for companies integrating AI to have clear and accurate ways to measure and assess their AI-supported products and services. For directors and executives, this means carefully reviewing public disclosures and press releases related to AI technologies to ensure that all AI-related statements are supported by verifiable information. Without this verifiable information, a company opens itself up to significant penalties from enforcement actions brought pursuant to Section 17 of the Securities Act, which may also result in lost trust from shareholders around a company’s AI-related technologies.
Closing
The news for boards and management isn’t all bad; the number of SEC enforcement actions dropped significantly in 2024, and there is reason to believe that this drop may continue into 2025. In 2024, there were 583 SEC enforcement proceedings, compared to between 697 and 862 for each of the prior five years.[47] While the SEC touted record financial remedies for 2024,[48] over half of that amount came from a single case.[49] Signals from the new administration indicate reduced enforcement activity is likely to continue, given the administration’s focus on deregulation and government efficiency, which will likely lead to fewer resources available to the SEC. There also is an expectation that the SEC will avoid “regulation by enforcement” and take a “friendlier” view of certain activities that the outgoing SEC administration sought to rein in, such as with the crypto industry.[50] An additional factor pointing toward changes in enforcement approach is that the SEC is no longer able to try certain cases in administrative proceedings and instead must adjudicate such matters in federal jury trials.[51] This could result in the SEC choosing to pursue fewer actions or lesser sanctions, particularly given that it has historically been less successful in federal courts compared to in-house proceedings.[52] Nonetheless, the SEC’s enforcement actions involving public companies over the past year serve as a reminder to officers and directors of the importance of complying with their duties and obligations and ensuring strong internal controls and reporting practices. Staying ahead of compliance requirements is not just a matter of risk mitigation — it is essential for preserving shareholder trust and corporate integrity.
If you have questions about these and other SEC enforcement actions, contact the authors or your Foley & Lardner attorney.
[1] Typically with settled SEC actions, the settling party neither admits nor denies the SEC’s findings. See 17 CFR § 202.5.
[2] https://www.sec.gov/newsroom/press-releases/2024-161.
[3] See id.
[4] See id.
[5] See id.
[6] https://www.sec.gov/newsroom/press-releases/2024-174.
[7] https://www.sec.gov/newsroom/press-releases/2023-227. In July 2024, most of the SEC’s claims were dismissed; most notably, the court held that charges of internal accounting controls failures do not extend to cybersecurity deficiencies. See https://www.foley.com/insights/publications/2024/08/down-but-not-out-federal-court-curbs-sec-cybersecurity-enforcement-authority/.
[8] See https://www.sec.gov/newsroom/press-releases/2024-174.
[9] See id.
[10] See id.
[11] See id.
[12] Release Nos. 33-10459, 34-82746 (Feb. 21, 2018) (“We expect companies to provide disclosure that is tailored to their particular cybersecurity risks and incidents”).
[13] See Release Nos. 33-11216, 34-97989 (July 26, 2023); see also https://www.foley.com/insights/publications/2023/08/sec-adopts-new-cybersecurity-disclosure-rules/.
[14] https://www.sec.gov/newsroom/press-releases/2024-174.
[15] See https://www.sec.gov/enforcement-litigation/litigation-releases/lr-25970; see also https://www.sec.gov/enforcement-litigation/litigation-releases/lr-25170.
[16] https://www.foley.com/insights/publications/2024/03/sec-v-panuwat-shadow-trading-insider-trading-trial/.
[17] https://www.sec.gov/newsroom/press-releases/2023-234.
[18] https://www.sec.gov/newsroom/press-releases/2024-131.
[19] Id.
[20] https://www.sec.gov/newsroom/press-releases/2024-189.
[21] https://www.sec.gov/files/litigation/admin/2024/33-11332.pdf.
[22] https://www.sec.gov/files/litigation/admin/2024/34-101796.pdf.
[23] https://www.sec.gov/newsroom/press-releases/2024-116.
[24] Id.
[25] Id.
[26] https://www.sec.gov/newsroom/press-releases/2024-203.
[27] Id.
[28] Id.
[29] Id.
[30] Id.
[31] https://www.sec.gov/newsroom/press-releases/2024-148.
[32] Id.
[33] Id.
[34] Id.
[35] https://www.sec.gov/newsroom/press-releases/2023-219 (press release); https://www.sec.gov/files/33-11253-fact-sheet.pdf (fact sheet); https://www.sec.gov/files/rules/final/2023/33-11253.pdf (final rule).
[36] https://www.foley.com/insights/publications/2014/09/sec-charges-insiders-for-violations-of-section-16a/.
[37] https://www.sec.gov/newsroom/press-releases/2024-148.
[38] https://www.sec.gov/enforcement-litigation/administrative-proceedings/33-11352-s.
[39] See https://www.sec.gov/newsroom/speeches-statements/gensler-office-hours-ai-washing-090424.
[40] Id.
[41] Id.
[42] https://www.sec.gov/newsroom/press-releases/2024-36.
[43] Id.
[44] Id.
[45] Id.
[46] See https://www.sec.gov/newsroom/press-releases/2024-70.
[47] https://www.sec.gov/files/fy24-enforcement-statistics.pdf.
[48] https://www.sec.gov/newsroom/press-releases/2024-186.
[49] See https://www.sec.gov/enforcement-litigation/distributions-harmed-investors/sec-v-terraform-labs-pte-ltd-do-hyeong-kwon-no-23-cv-1346-jsr-sdny.
[50] https://www.nytimes.com/2024/12/04/business/trump-sec-paul-atkins.html.
[51] See https://www.foley.com/insights/publications/2024/06/us-supreme-court-rules-sec-securities-fraud-cases-federal-jury/.
[52] Id.