ROSES ARE RED, THE COURT HAD ITS SAY: Online Fax Services Get No TCPA Love
Greetings TCPAWorld!
Happy Valentine’s Day! Whether you’re celebrating with loved ones or enjoying the discounted chocolate tomorrow, one thing’s for sure—online fax providers won’t feel the love from this latest ruling. In a decision highlighting the collision between aging telecommunications laws and modern technology, a Colorado federal court handed down a ruling on the online fax industry that needs to be on your radar. In Astro Companies, LLC v. WestFax Inc., the Court tackled a deceptively simple question: Is an online fax service the same as a traditional fax machine under the law? See ASTRO Co. v. Westfax Inc., Civil Action No. 1:23-cv-02328-SKC-CYC, 2025 U.S. Dist. LEXIS 25629 (D. Colo. Feb. 12, 2025).
Here’s the deal. Astro Companies, an online fax provider, sued WestFax and others for allegedly bombarding its system with junk faxes. Astro claimed this violated the TCPA. But, of course, there was a catch—the TCPA explicitly protects “telephone facsimile machines,” and the court had to decide if Astro’s cloud-based service qualified.
The Court’s answer? A resounding no.
Judge S. Kato Crews dove deep into the statutory language, focusing on how the TCPA defines a “telephone facsimile machine.” While the law allows faxes to be sent from various devices (including computers), it only protects faxes received by actual fax machines. Citing Career Counseling, Inc. v. AmeriFactors Fin. Grp., L.L.C., 91 F.4th 202 (4th Cir. 2024), the Court noted that the law was meant to protect equipment “well understood to be a traditional fax machine.”
But this wasn’t just a case of statutory interpretation—it was a complete rejection of Astro’s legal theory. The Court didn’t just rule against Astro; it dismissed the entire case with prejudice, shutting down any attempt to refile the same claims.
What makes this ruling particularly interesting is how the Court distinguished between a machine and a service. The Judge pointed out that even though Astro’s servers could print faxes, that still wasn’t enough. Black’s Law Dictionary defines a machine as “a device or apparatus consisting of fixed and moving parts that work together to perform some function.” Astro’s cloud-based service, despite its printing capabilities, didn’t fit this definition.
So what about “capacity”? Astro argued that its service still counted under the TCPA because its servers “had the capacity to print.” But the Court made clear that capacity alone isn’t enough—the TCPA requires an actual telephone facsimile machine, not just a system that can eventually print a fax if someone decides to. Astro leaned heavily on Lyngaas v. Curaden AG, 992 F.3d 412 (6th Cir. 2021), but the Court saw a fundamental problem. Lyngaas involved whether a computer receiving an eFax could qualify as a telephone facsimile machine. Astro, however, wasn’t just a recipient—it was an online fax provider acting as an intermediary. That distinction alone made Lyngaas inapplicable.
Furthermore, the Court supported the FCC’s interpretation, significantly weakening Astro’s case. In In re Amerifactors Fin. Grp., L.L.C., 34 FCC Rcd. 11950 (2019), the FCC explained that “a fax received by an online fax service as an electronic message is effectively an email.” Unlike traditional fax machines that automatically print incoming messages (using up paper and ink), online fax services allow users to manage messages like emails—blocking, deleting, or storing them indefinitely.
This distinction highlights the core reason Congress enacted the TCPA. As noted in the 1991 House Committee Report, the law was concerned with two specific problems: 1) shifting the cost of unwanted advertisements to the recipient (through wasted paper and ink), and 2) tying up fax lines, preventing businesses from receiving legitimate communications. H.R. Rep. No. 102-317, at 10 (1991). Neither of those concerns applies to online fax services, where nothing is automatically printed, and no business lines are blocked.
The takeaway? Consider this ruling a tough love letter from the court—if your service functions more like an email inbox than a fax machine, don’t expect the TCPA to be your Valentine.
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!
IN HOT WATER: Louisiana Crawfish Company Sued Over Early-Morning Text Messages
Hi TCPAWorld! The Dame here with an interesting lawsuit against a company from my home state of Louisiana. And I’ll start by admitting that, despite my family being in the seafood industry for five generations, I had no idea you could order live crawfish online and have them delivered straight to your door. A company called Louisiana Crawfish Company does just that. And a few discount offers via text—allegedly sent a little too early—have now landed it in a federal class action lawsuit.
The lawsuit, filed in the Central District of California by plaintiff Mason Ibarra (“Plaintiff”), accuses Louisiana Crawfish Company of violating the TCPA by sending at least 10 unsolicited marketing texts before 8 AM. According to the complaint, texts were sent as early as 6:40 AM, 7:01 AM, and 7:30 AM—times the complaint says the TCPA prohibits for telemarketing communications. Plaintiff is seeking statutory damages of $500 per text, or $1,500 per text if the violations are deemed willful, along with an injunction to prevent further messages.
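To put rough numbers on the exposure: at the complaint’s minimum of 10 texts, statutory damages for the named Plaintiff alone would start at 10 × $500 = $5,000, or 10 × $1,500 = $15,000 if the violations are found willful. And because this is a putative class action, those per-text figures would multiply across every class member’s messages.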
Under the TCPA, businesses can’t make telemarketing calls or texts before 8 AM or after 9 PM (local time for the recipient).
However, my reading of the TCPA is that the call time restrictions apply only to “telephone solicitations”—which means calls made with the recipient’s consent, or to someone with whom the caller has an established business relationship, are not subject to these restrictions. And Plaintiff does not allege that these texts were nonconsensual. Plaintiff alleges only that he “never signed any type of authorization permitting or allowing Defendant to send them telephone solicitations before 8 am or after 9 pm.”
Either way, an easy mistake to avoid. And honestly, even the biggest of companies get caught up in these call time restriction cases.
You can read the entire complaint here: Mason Ibarra v Louisiana Crawfish Company Complaint.
Love to Litigate: Serial Plaintiff Brings Another TCPA Complaint
Hey TCPAWorld!
Roses are red. Violets are blue. Here comes Salaiz bringing another TCPA suit.
This Valentine’s Day, we’re covering a complaint filed against PEOPLE’S LEGAL GROUP INC., a Wyoming-based law firm offering consumer legal services.
In SALAIZ v. PEOPLE’S LEGAL GROUP INC., No. 3:25-CV-00038-DB (W.D. Tex. Feb. 12, 2025), Salaiz (“Plaintiff”) alleges that even though Plaintiff has been listed on the National Do-Not-Call Registry (“DNCR”) for over 30 days, People’s Legal Group Inc. (“Defendant”), through the use of an ATDS, delivered at least two unsolicited calls to Plaintiff’s residential number on November 8, 2024. Plaintiff alleges to have heard the following when answering both calls:
“This is an important reminder from Daisy Young please listen to the following message from telephone number 888-803-7025 hi it’s Daisy Young I know we’ve uh reached out previously about getting some financial help but based on your previous profile we are offering you an amount of nineteen thousand dollars in your case possibly more if you could give me a call back today thank you please call telephone number 888-803-7025 that’s telephone number 888-803-7025.”
Id. at ¶ 26. After calling the Defendant’s alleged number to probe further into its identity, Plaintiff was sent a retainer agreement with Defendant’s name on it, along with a follow-up text that reads:
“Hi Erik this is Lee with Peoples Legal Group. Here is my contact info 949-777-9583 please save this is your contacts.”
Id. at ¶¶ 43-44. Based on these allegations, Plaintiff filed a Complaint in the Western District of Texas asserting Defendant violated the ATDS provisions, 47 U.S.C. § 227(b)(1)(A)(iii) and 47 U.S.C. § 227(b)(1)(B), when Defendant called Plaintiff’s residential phone through the use of an artificial or prerecorded voice. Plaintiff further alleges violations of the DNC provisions, 47 U.S.C. § 227(c)(5) and 47 C.F.R. § 64.1200(c)(2), by delivering telemarketing calls to Plaintiff while Plaintiff was listed on the DNCR. Lastly, Plaintiff claims multiple violations of the Texas Business and Commerce Code: § 305.053, which grants a private right of action for individuals receiving unsolicited telemarketing calls in violation of state law, and § 302.101, for an alleged failure to obtain a Telephone Solicitation Registration Certificate prior to making the calls.
Plaintiff seeks to represent the following two classes:
TCPA Class. All persons in the United States who: (1) from the last 4 years to present (2) Defendant called and/or any entity making calls on behalf of Defendant (3) whose telephone numbers were registered on the Federal Do Not Call registry for more than 30 days at the time.
Texas Subclass. All persons in Texas who: (1) from the last 4 years to present (2) Defendant called and/or any entity making calls on behalf of Defendant (3) whose telephone numbers were registered on the Federal Do Not Call registry for more than 30 days at the time.
Id. at ¶ 62. Repeat litigators are constantly on the hunt for TCPA violations. Tighten up your TCPA compliance so your company’s name isn’t on the next complaint we review.
“Stupidly Rhetorical” Online Posts – Your Employer’s Rights to React (UK)
In these days of fevered and angry social media comment on almost everything, it is always wise for HR to keep its feet anchored firmly on the ground when all that online bile and indignation washes up at the employer’s door. Here to help with that is this week’s Court of Appeal decision in Higgs – v – Farmors School & Others, a case bulging at the seams with KCs (five!) and abstruse legal analysis.
In brief, Ms Higgs worked as an administrator for the School. She was dismissed after expressing on Facebook what a member of the public described as “homophobic and prejudiced views” concerning purported government policy on teaching same-sex relationships and gender identification matters in schools. Farmors was concerned that readers of the posts would conclude that Higgs held homophobic and transphobic views incompatible with her role there, and that this would put its reputation at risk. However, it did not suggest that Higgs had in fact ever brought those views into her work or had allowed them to affect her treatment of any of her colleagues or pupils.
The views which led to Higgs’ posts – a lack of belief in gender fluidity or that someone can change their biological sex, an Old Testament assertion that “divinely-instituted” marriage could be between opposite sexes only and a perceived duty, when unbiblical ideas or ideologies were promoted, to “witness to the world” her own views of “biblical truth” – were accepted at the outset as protected under the Equality Act. Not everyone’s cup of tea, perhaps, but that did not mean that they were unworthy of respect in a democratic society.
Once that was accepted, the Court of Appeal had to consider the law around the manifestation of such beliefs. Article 9.1 of the European Convention on Human Rights grants an absolute right to freedom of thought and religion, while 9.2 limits one’s ability to manifest those beliefs by reference to any restrictions necessary in a democratic society for the protection of the rights and freedoms of others. Article 10.1 confers a right to freedom of expression, but then qualifies it in terms very similar to 9.2.
Paraphrased, therefore, Higgs had a right to hold her beliefs, no question, but only to manifest them to the extent that they did not infringe the rights and freedoms of others. In that regard, the Court noted that this was a high bar, and that those rights and freedoms would not be infringed by expressions of opinion which merely disturb, shock or offend. “Free speech”, said the Court, “includes … the irritating, the contentious, the eccentric, the heretical, the unwelcome and the provocative … freedom only to speak inoffensively is not worth having”.
Of course, free speech is not an employment law concept, whatever your more self-important employees may suggest. Your staff do not have an unfettered right to disturb, shock or offend each other. How should employers apply those principles to online statements which flirt with the Equality Act protections and so risk internal discord with other employees or harm to the reputation of the business? The Court of Appeal said that a balancing act is required to ensure that the restrictions or sanctions which employers impose (here, dismissal) are proportionate to the harm done or likely to be done by those statements. It noted a number of considerations as of particular relevance to that question, as below, but stressed that each case of course depends on its own facts:
Is the company’s unhappiness about the views themselves or the way in which they were expressed? A bold but neutrally-toned statement that this is what I believe is very different from a post crammed with gratuitously offensive hyperbole, spite, insult, incitement to violence or other “egregiously offensive language”. The Court of Appeal drew a distinction (perhaps easier in law than in fact) between that on the one hand and Higgs’ mere “derogatory sneers” and “stupidly rhetorical exaggeration” on the other. The more offensive the manner of the expression of the views (as distinct from the views themselves), the more easily an employer might justify action.
Substantial parts of Higgs’ posts were actually lifted from online comments by others. The Court said that this “does not absolve her of responsibility”, but at the same time that it was still “relevant to the degree of any culpability”. Not for me to say, and greatest of respect and all that, but I disagree. If as an adult you expressly reproduce someone else’s words, top-and-tail them with your own asterisked calls to action, admit in your disciplinary meeting that you meant to give them wider circulation, and decline to take them down when asked, you must surely be treated as if you wrote them. It cannot be correct that you are less culpable for publishing your views in someone else’s offensive words than in your own.
What do the offending words actually mean? The School concluded that readers of the posts might infer homophobic or transphobic views on Higgs’ part, but if we take refuge in semantics, the posts did not strictly say that. “It is necessary to judge an employee’s statements by what they actually say (including any necessary implications) rather than by what some readers might choose illegitimately to read into them”, said the Court, alternatively phrased as “What message would they convey to a reasonable reader?” In principle this has to be right, but the practical consequences are both highlighted and disregarded in the same paragraph of the decision. The Court goes on: “this is particularly important in the current social media climate where messages are often read hastily and sometimes by people who are partisan or even ill-intentioned or … simply succumb to the common human tendency to find in a communication what they expect to find rather than what is actually there”. By extension, that makes it OK if your company risks social media flak, cancellation, press persecution, reputational crucifixion, etc., so long as it is at the hands of people whose opinions are not objectively well founded because they are over-sensitive and under-informed. So when the Court of Popular Opinion despatches the torches-and-pitchforks brigade to your offices, you can just give them a decent lecture on reading their social media feeds more carefully next time, and all will be fine. Really?
Has any reputational or other harm actually been done? We all tend to be a little self-centred around the newsworthiness of our own businesses, but if the hurtful reality is that no one has been the least bit interested in the post or connected it to your company by some weeks later, the transient nature of social media comments must reduce the risk very substantially. Here the School could point to one person only who had complained, but not to any reputational, let alone actual, harm to it.
Did the post relate to the employee’s work, not just in the sense of the employer being identifiable in it, but also in terms of its subject matter? Higgs worked in a school and was complaining in fairly hysterical terms about government education policy, so there was obviously a link. If she had been expressing views on immigration or the conflict in the Middle East instead, the risk of reputational harm to the School would be much harder to establish.
Does the employee show remorse or understanding of the harm which the posts might do, so as to provide some reassurance to the employer that they won’t be repeated? Higgs didn’t, confirming expressly at her disciplinary meeting that she would do pretty much exactly the same again next time.
Is there any argument that the employee’s posts are anything more than their personal view, i.e. in some way representative of the business? That might be a function of their seniority or the use of their job title in the posts, for example.
Weighing up these main factors, the Court of Appeal decided that the School’s decision to dismiss had not been proportionate and therefore that Higgs had been discriminated against on the grounds of her beliefs. While Higgs had not scored well under a number of those factors, the fact remained that there had been no harm done and by the time of the dismissal there was no real reason to think it would be. The Court said in terms that “an employer does not have carte blanche to interfere with an employee’s right to express their beliefs simply because third parties find those beliefs offensive and think the worse of it for employing them”, but as soon as consequential harm is done, that position changes.
Despite first appearances, this case lays down no principle that you can’t be dismissed for saying stupid things on social media; you still can be, provided that the sanction is proportionate to the damage caused or reasonably likely to be caused. Similarly, it does not mean that it is acceptable as a matter of employment law to disturb, offend or shock your colleagues online, particularly in the face of a policy and/or warnings to the contrary. I think that Ms Higgs can regard herself as much blessed here that almost no-one was sufficiently interested in what she wrote to react to it. If Farmors had received any material amount of heat as a result, her poor showing under those factors could well have tipped that balance the other way.
Privacy and Advertising Year in Review 2024: Will Kids and Teens Remain a Focus in 2025?
A new year. A new administration in Washington. While protecting kids and teens is likely to remain an issue that drives legislation, litigation, and policy discussions in 2025, the flurry of Executive Orders issued on day one of the Trump Administration may result in new or changed priorities and some delay in the effective date of the recently updated Children’s Online Privacy Protection Rule (COPPA Final Rule).
We start with a recap of significant actions affecting kids and teens from the beginning of 2024 to the end of the Biden Administration in January 2025 and some early action by the Trump Administration.
Key Actions Affecting Kids and Teens:
FTC Regulation and Reports
The Federal Trade Commission (FTC or Commission) kicked off 2024 with proposed updates to the rule implementing the Children’s Online Privacy Protection Act (COPPA) and issued a COPPA Final Rule in the closing days of the Biden Administration. FTC Commissioner and now Chair Andrew Ferguson identified several areas requiring clarification, and publication of the COPPA Final Rule will likely be delayed due to President Trump’s Executive Order freezing agency action.
The FTC released a highly critical staff report, A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services, in September of 2024. The report, based on responses to orders the FTC issued in 2020 under Section 6(b) of the FTC Act to nine of the largest social media platforms and video streaming services, including TikTok, Amazon, Meta, Discord, and WhatsApp, highlighted the privacy and data security practices of social media and video streaming services and their impacts on children and teens.
Policy debates centered on Artificial Intelligence (AI), and one of the Commission’s final acts was a January 17, 2025, FTC Staff Report on AI Partnerships & Investments 6(b) Study.
The Project 2025 report, Mandate for Leadership 2025: The Conservative Promise, recommends possible FTC reforms and highlights the need for added protections for kids and teens and action to safeguard the rights of parents. The report stresses in particular the inability of minors to enter into contracts.
Litigation and Enforcement: the FTC
On July 9, 2024, chat app maker NGL Labs settled with the FTC and Los Angeles District Attorney after they brought a joint enforcement action against the company and its owners for violations of the COPPA Rule and other federal and state laws.
On January 17, 2025, the FTC announced a $20 million settlement of an enforcement action alleging violations of COPPA and deceptive and unfair marketing practices against the developer of the popular game Genshin Impact. In addition to an allegation that the company collected personal information from children in violation of COPPA, the complaint alleged that the company deceived users about the costs of in-game transactions and odds of obtaining prizes. As a result, the company is required to block children under 16 from making in-game purchases without parental consent.
Federal and State Privacy Legislation
Federal privacy legislation, including the Kids Online Safety Act (KOSA 2.0) and its successor, the Kids Online Safety and Privacy Act (KOSPA), failed to make it through Congress, although 32 state attorneys general (AGs) sent a letter to Congress urging passage of KOSA 2.0 on November 18, 2024.
Last year, Kentucky, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Rhode Island enacted comprehensive privacy laws, and they include provisions affecting children and teens. Those states join California, Colorado, Connecticut, Delaware, Florida, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia.
Litigation and Enforcement: the Courts
Throughout 2024, state attorneys general and private plaintiffs brought litigation targeting social media platforms and streaming services, alleging that they are responsible for mental and physical harm to kids and teens.
Legal challenges to more state social media laws arguing they violate First Amendment rights, among other grounds, were filed or were heard by courts in 2024, and legal action has continued into this year. On February 3, 2025, the tech umbrella group NetChoice filed a complaint in Maryland district court against the Maryland Age-Appropriate Design Code Act (Maryland Kid’s Code or Code), which was enacted on May 9, 2024. The complaint, which is similar to NetChoice’s recent challenges to other state social media laws, alleges that the Code violates the First Amendment by requiring websites “to act as government speech police” and alter their protected editorial functions through a vague and subjective “best interests of children” standard that gives state officials “nearly boundless discretion to restrict speech.” In 2024, NetChoice and its partners successfully obtained injunctions or partial injunctions against social media laws on constitutional grounds in Utah on September 9, 2024, in Ohio on February 12, 2024, and, at the eleventh hour, on December 31, 2024, against California’s Protecting Our Kids from Social Media Addiction Act. NetChoice’s complaint against Mississippi HB 1126 was heard by the U.S. Court of Appeals for the Fifth Circuit on February 2, 2025, but a decision has not yet been published as of the time of this writing.
On August 16, 2024, a panel of the U.S. Court of Appeals for the Ninth Circuit partially affirmed the district court’s opinion that the data privacy impact assessment (DPIA) provisions of the California Age Appropriate Design Code Act (CAADCA) “clearly compel(s) speech by requiring covered businesses to opine on potential harms to children” and are therefore likely unconstitutional. However, the appeals court vacated the rest of the district court’s preliminary injunction “because it is unclear from the record whether the other challenged provisions of the CAADCA facially violate the First Amendment, and it is too early to determine whether the unconstitutional provisions of the CAADCA were likely severable” from the rest of the Act. The panel remanded the case to the district court for further proceedings.
On July 1, 2024, the Supreme Court vacated the lower court rulings on the content moderation provisions of both Texas HB 20 and Florida SB 7072, which the court decided jointly, and sent the cases back to the lower courts for further “fact-intensive” First Amendment analysis.
On January 17, 2025, the U.S. Supreme Court affirmed the judgment of the U.S. Court of Appeals for the D.C. Circuit that upheld a Congressional ban on TikTok due to national security concerns regarding TikTok’s data collection practices and its relationship with a foreign adversary. The Court concluded that the challenged provisions do not violate the petitioners’ First Amendment rights. President Trump has vowed to find a solution so that U.S. users can access the platform.
On January 15, 2025, the U.S. Supreme Court heard an appeal of a Texas law that requires age verification to access porn sites, and it seems likely the Court will uphold the law.
We Forecast:
Efforts to advance a general federal privacy law and added protections for kids and teens will redouble. Indeed, S. 278, the Kids Off Social Media Act (KOSMA), advanced out of the Senate Commerce Committee on February 5, 2025. However, tight margins and the thorny issues of preemption and a private right of action will complicate enactment of general privacy legislation.
States will continue to be actively engaged on privacy and security legislation, and legal challenges on constitutional and other grounds are expected to continue.
Legal challenges to data collection and advertising practices of platforms, streaming services, and social media companies will continue.
The FTC had planned to hold a virtual workshop on February 25, 2025, on design features that “keep kids engaged on digital platforms.” The FTC’s September 26, 2024 announcement outlines topics for discussion, including the positive and negative physical and psychological impacts of design features on youth well-being, but the workshop has been postponed.
Our crystal ball tells us that privacy protection of kids and teens and related questions of responsibility, liability, safety, parental rights, and free speech will continue to drive conversation, legislation, and litigation in 2025 at both the federal and the state level. While the deadline for complying with the new COPPA Rule is likely to slide, businesses will need to implement operational changes to comply with new obligations under the Rule, while remaining aware of the evolving policy landscape and heightened litigation risks.
Global Data Protection Authorities Issue Joint Statement on Artificial Intelligence
On February 11, 2025, the data protection authorities (“DPAs”) of the UK, Ireland, France, South Korea and Australia issued a joint statement on building trustworthy data governance frameworks to encourage development of innovative and privacy-protective artificial intelligence (“AI”) (the “Joint Statement”). In the Joint Statement, the DPAs recognize “the importance of supporting players in the AI ecosystem in their efforts to comply with data protection and privacy rules and help them reconcile innovation with respect for individuals’ rights.”
The Joint Statement refers to the “leading role” DPAs have in “shaping data governance” to address the evolving challenges of AI. Specifically, the Joint Statement indicates that the authorities will commit to:
Foster a shared understanding of lawful grounds for processing personal data in the context of AI training.
Exchange information and establish a shared understanding of proportionate safety measures, to be updated in line with evolving AI data processing activities.
Monitor technical and societal impacts of AI.
Reduce legal uncertainties and create opportunities for innovation where data processing is considered essential for AI.
Strengthen interactions with other authorities to enhance consistency between different regulatory frameworks for AI systems, tools and applications, including those responsible for competition, consumer protection and intellectual property.
Read the full Joint Statement.
Extension vs. Capability – Google Learns the Difference the Hard Way
Earlier this week, frequent CIPAWorld participant Google lost a motion to dismiss based on the use of its Google Cloud Contact Center AI (“GCCCAI”) product. And this case, Ambriz v. Google, LLC, Case No. 23-cv-05437-RFL (N.D. Cal. Feb. 10, 2025), raises some fascinating questions regarding the use of AI in contact centers and more generally.
The GCCCAI product (a prior motion to dismiss in this case was discussed on TCPAWorld) “offers a virtual agent for callers to interact with, and it can also support a human agent, including by: (i) sending the human agent the transcript of the initial interaction with the GCCCAI virtual agent, (ii) acting as a ‘session manager’ who provides the human agent with a real-time transcript, makes article suggestions, and provides step-by-step guidance and ‘smart replies’.” It does all of this without informing consumers that the call is being transcribed and analyzed.
Plaintiffs sued Google under Section 631(a) of the California Penal Code. This provision has three main prohibitions: (i) “intentional wiretapping”, (ii) “willfully attempting to learn the contents or meaning of a communication in transit”, and (iii) “attempting to use or communicate information obtained as a result of engaging in either of the two previous activities”. Plaintiffs in this case claim Google violated (i) and (ii) of the above provisions.
Google’s best argument in this case is that it is not a third party to the communications, because only “unauthorized third-party listeners” can violate Section 631(a). Google argues that it isn’t a third party; it is merely a software provider, like a tape recorder.
The Court disagreed. Recognizing that these cases have split into essentially two distinct branches when it comes to how to treat software-as-a-service providers, the court proceeded to discuss whether the GCCCAI product is an “extension” of the parties or whether it has the “capability” to use the data for its own purposes.
If a software provider has “merely captured the user data and hosted it on its own servers where [one of the parties] could then use data by analyzing” it, the software is generally considered an extension of the parties. It is therefore not a third party and wouldn’t violate CIPA. This is similar to the “tape recorder” example preferred by Google.
Alas for Google, however, the court viewed GCCCAI as “a third-party based on its capability to use user data to its benefit, regardless of whether or not it actually did so.” The Court applied this capability test and found that the Plaintiffs had “adequately alleged that Google ‘has the capability to use the wiretapped data it collects…to improve its AI/ML models.’” Because Google’s own terms of use state that it may do so if its Customer allows it to, the Court inferred that Google had the capacity to do just that.
Google argued that it was contractually prohibited from doing so, but the Court found those prohibitions don’t change the fact that Google has the ability to do so. And that ability is the determining factor. Therefore, the motion to dismiss was denied.
A couple of interesting takeaways from this case:
In a world where every company is throwing AI into its products, it is vital to understand not only WHAT those products are doing with your data, but also what they COULD do with it. The capability to improve their models may be enough under this line of cases to require additional consumer disclosures (see the sketch after these takeaways).
We are all so used to “AI notetakers” on calls, whether Zoom, Teams, or, heaven forbid, Google Meet. What are those notetakers doing with your data? Should you be getting affirmative consent? Potentially. I think it’s a matter of time before someone tests the waters on those notetakers under CIPA.
Spoiler alert: I have reviewed the Terms of Service of some major players in that space. Their Terms say they are going to use your data to train their models. Proceed with caution.
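To make the disclosure point concrete, here is a minimal sketch of what gating AI transcription behind an up-front disclosure might look like in a call flow. It is illustrative only: the names and the telephony hooks (play_prompt, get_keypress) are hypothetical stand-ins, not any vendor’s actual API.

```python
from dataclasses import dataclass

# Hypothetical disclosure text; the actual wording is a question for counsel.
DISCLOSURE = (
    "This call may be transcribed and analyzed by an automated system, and "
    "the data may be used to improve that system. Press 1 to consent, 2 to decline."
)

@dataclass
class CallSession:
    caller_id: str
    transcription_enabled: bool = False  # stays off until consent is captured

def handle_inbound_call(session: CallSession, play_prompt, get_keypress) -> None:
    """Play the disclosure before any capture begins, and enable AI
    transcription only on an affirmative response. play_prompt and
    get_keypress stand in for whatever hooks a telephony platform provides."""
    play_prompt(DISCLOSURE)
    if get_keypress() == "1":
        session.transcription_enabled = True  # consent on record
    # If the caller declines, the call proceeds with transcription and
    # AI analysis switched off.
```

The design choice that matters under this line of cases is sequencing: nothing is captured or analyzed before the disclosure plays and consent is recorded.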
Minnesota AG Publishes Report on the Negative Effects of AI and Social Media Use on Minors
On February 4, 2025, the Minnesota Attorney General published the second volume of a report outlining the negative effects that AI and social media use is having on minors in Minnesota (the “Report”). The Report examines the harms experienced by minors caused by certain design features of these emerging technologies and advocates for legislation that would impose design specifications for such technologies.
Key findings from the Report include:
Minors are experiencing online harassment, bullying and unwanted contact as a result of their use of AI and social media.
Social media and AI platforms are enabling misuse of user information and images.
Lack of default privacy settings in these technologies is resulting in user manipulation and fraud.
Social media and AI platforms are designed to optimize user attention in ways that negatively impact minor users’ wellbeing.
Opt-out options generally have not been effective in addressing these harms.
In the final section of the Report, the Minnesota AG sets forth a number of recommendations to address the identified harms, including:
Develop policies that regulate technology design functions, rather than content published on such technologies.
Prohibit the use of dark patterns that compel certain user behavior (e.g., infinite scroll, auto-play, constant notifications).
Provide users with tools to limit deceptive design features.
Mandate a privacy by default approach for such technologies.
Limit engagement-based optimization algorithms designed to increase time spent on platforms.
Advocate for limited technology use in educational settings.
Thomson Reuters Wins Copyright Case Against Former AI Competitor
Thomson Reuters scored a major victory in one of the first cases dealing with the legality of using copyrighted data to train artificial intelligence (AI) models. In 2020, Thomson Reuters sued the now-defunct AI start-up Ross Intelligence for alleged improper use of Thomson Reuters materials, including case headnotes in its Westlaw search engine, to train its new AI model.
A key issue before the court was whether Ross Intelligence’s use of the headnotes constituted fair use, which permits a person to use portions of another’s work in limited circumstances without infringing the copyright. Courts weigh four factors to determine whether a defendant can successfully invoke the fair use defense: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) how much of the work was copied and whether that was a substantial part of the entire work; and (4) whether the defendant’s use of the work affected its value.
In this case, federal judge Stephanos Bibas determined that each side had two factors in their favor. But the fourth factor, which supported Thomson Reuters, weighed most heavily in his finding that the fair use defense was inapplicable because Ross Intelligence sought to develop a competitive product. Lawsuits against other companies, like OpenAI and Microsoft, are currently pending in courts throughout the country, and decisions in those cases may involve similar questions about the fair use defense. However, Judge Bibas noted that Ross Intelligence’s AI model was not generative and that his decision was based only on Ross’s non-generative AI model. The distinction between the training data and resulting outputs from generative and non-generative AI will likely be key to deciding future cases.
Privacy Tip #431 – DOGE Has Access to Our Personal Information: What You Need to Know
According to a highly critical article recently published by TechCrunch, the Department of Government Efficiency (DOGE), President Trump’s advisory board headed by Elon Musk, has “taken control of top federal departments and datasets” and has access to “sensitive data of millions of Americans and the nation’s closest allies.” The author calls this “the biggest breach of US government data.” He continues, “[w]hether a feat or a coup (which depends entirely on your point of view), a small group of mostly young, private-sector employees from Musk’s businesses and associates — many with no prior government experience — can now view and, in some cases, control the federal government’s most sensitive data on millions of Americans and our closest allies.”
According to USA Today, “The amount of sensitive data that Musk and his team could access is so vast it has historically been off limits to all but a handful of career civil servants.” The article points out that:
If you received a tax refund, Elon Musk could get your Social Security number and even your bank account and routing numbers. Paying off a student loan or a government-backed mortgage? Musk and his aides could dig through that data, too.
If you get a monthly Social Security check, receive Medicaid or other government benefits like SNAP (formerly known as food stamps), or work for the federal government, all of your personal information would be at the Musk team’s fingertips. The same holds true if you’ve been awarded a federal contract or grant.
Private medical history could potentially fall under the scrutiny of Musk and his assistants if your doctor or dentist provides that level of detail to the government when requesting Medicaid reimbursement for the cost of your care.
A federal judge in New York recently issued a preliminary injunction stopping Musk and his software engineers from accessing the data, despite Musk calling the judge “corrupt” on X. USA Today reports that the White House says Musk and his engineers only have “read-only” access to the data, but that is not very comforting from a security standpoint. The Treasury Department has reportedly admitted that one DOGE staffer, a 25-year-old software engineer, had been mistakenly granted “read/write” permission on February 5, 2025. That is just frightening to me as one who works hard to protect my personal information.
TechCrunch reported that data security is not a priority for DOGE.
“For example, a DOGE staffer reportedly used a personal Gmail account to access a government call, and a newly filed lawsuit by federal whistleblowers claims DOGE ordered an unauthorized email server to be connected to the government network, which violates federal privacy law. DOGE staffers are also said to be feeding sensitive data from at least one government department into AI software.”
We all know that Musk loves AI. We are also well aware of the risks of using AI with highly sensitive data, including unauthorized disclosure and the risk that the data resurfaces in model outputs.
All of this has prompted questions about whether this advisory board has proper security clearance to access our data.
Should you be concerned? Absolutely. I understand the goal of cutting costs. But why do these employees have access to our most private information, including our full Social Security numbers and health information? Do they really need that specific data to determine fraud or overspending?
I argue no. A tenet of data security is proper access control: users should have access only to the data needed for their business purposes. DOGE’s unfettered access to our highly sensitive information is not limited to data needed for a specific purpose. The security procedures for accessing the data are in question, and proper security protocols must be followed. According to Senator Ron Wyden of Oregon and Senator Jon Ossoff of Georgia, who is a member of the U.S. Senate Intelligence Committee, this is “a national security risk.” As a privacy and cybersecurity lawyer, I am very concerned. A hearing on an early lawsuit filed to prohibit this unrestricted access is scheduled for tomorrow. We will keep you apprised of developments as they progress.
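For the technically inclined, that least-privilege tenet can be expressed in a few lines. Below is a minimal sketch of a deny-by-default access check; the roles, fields, and policy table are hypothetical, invented purely to illustrate the principle, not a description of any actual government system.

```python
# Deny-by-default, least-privilege check: a role sees only the fields
# (and gets write access) that its documented business purpose requires.
ROLE_POLICY = {
    # e.g., a fraud-analytics role gets aggregate payment fields, read-only,
    # but never full Social Security numbers or bank routing numbers.
    "fraud_analyst": {"fields": {"payment_amount", "payment_date"}, "write": False},
    "benefits_caseworker": {"fields": {"benefit_status", "ssn_last4"}, "write": True},
}

def authorize(role: str, field: str, write: bool = False) -> bool:
    """Allow access only if the role was explicitly granted the field,
    and allow writes only if the role is not read-only."""
    policy = ROLE_POLICY.get(role)
    if policy is None:
        return False  # unknown roles get nothing
    if write and not policy["write"]:
        return False  # read-only roles cannot modify data
    return field in policy["fields"]

assert authorize("fraud_analyst", "payment_amount")
assert not authorize("fraud_analyst", "ssn")                       # never granted
assert not authorize("fraud_analyst", "payment_date", write=True)  # read-only role
```

Measured against even this toy model, a blanket “read/write” grant for a cost-cutting review is exactly the kind of access that should fail the check.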
Criminal Charges Lodged Against Alleged Phobos Ransomware Affiliates
Unfortunately, I’ve had unpleasant dealings with the Phobos ransomware group. My interactions with Phobos have been fodder for a good story when I educate client employees on recent cyber-attacks to prevent them from becoming victims. The story highlights how these ransomware groups, including Phobos, are sophisticated criminal organizations with a managerial hierarchy. They use common slang in their communications and have to get “authority” to negotiate a ransom. It’s a strange world.
Because of my unpleasant dealings with Phobos, I was particularly pleased to see that the Department of Justice (DOJ) recently announced the arrest and extradition of Russian national Evgenii Ptitsyn on charges that he administered the Phobos ransomware variant.
This week, the DOJ unsealed charges against two more Russian nationals, Roman Berezhnoy and Egor Nikolaevich Glebov, who “operated a cybercrime group using the Phobos ransomware that victimized more than 1,000 public and private entities in the United States and around the world and received over $16 million in ransom payments.” They were arrested “as part of a coordinated international disruption of their organization, which includes additional arrests and the technical disruption of the group’s computer infrastructure.” I’m thrilled about this win. People always ask me whether these cyber criminals get caught. Yes, they do. This is proof of how important the Federal Bureau of Investigation (FBI) is in combating international cybercrime, and how effective its partnership with international law enforcement is in catching these pernicious criminals. This is why I firmly believe that we must continue to share information with the FBI to assist with investigations, and why the FBI must be allowed to continue its important work to protect U.S. businesses from cybercrime.
Three States Ban DeepSeek Use on State Devices and Networks
New York, Texas, and Virginia are the first states to ban DeepSeek, the Chinese-owned generative artificial intelligence (AI) application, on state-owned devices and networks.
Texas was first to tackle the problem when it banned state employees from using both DeepSeek and RedNote on January 31, 2025. The Texas ban includes other apps affiliated with the People’s Republic of China, including “Webull, Tiger Brokers, Moomoo[,] and Lemon8.”
According to the Texas Governor’s press release:
“Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. To achieve that mission, I ordered Texas state agencies to ban Chinese government-based AI and social media apps from all state-issued devices. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.”
New York soon followed on February 10, 2025, banning DeepSeek from being downloaded on any devices managed by the New York State Office of Information Technology. According to the New York Governor’s release, “DeepSeek is an AI start-up founded and owned by High-Flyer, a stock trading firm based in the People’s Republic of China. Serious concerns have been raised concerning DeepSeek AI’s connection to foreign government surveillance and censorship, including how DeepSeek can be used to harvest user data and steal technology secrets.” The release further states: “The decision by Governor Hochul to prevent downloads of DeepSeek is consistent with the State’s Acceptable Use of Artificial Intelligence Technologies policy that was established at her direction over a year ago to responsibly evaluate AI systems, better serve New Yorkers, and ensure agencies remain vigilant about protecting against unwanted outcomes.”
The Virginia Governor signed Executive Order 26 on February 11, 2025, “banning the use of China’s DeepSeek AI on state devices and state-run networks.” According to the Governor’s press release, “China’s DeepSeek AI poses a threat to the security and safety of the citizens of the Commonwealth of Virginia…We must continue to take steps to safeguard our operations and information from the Chinese Communist Party. This executive order is an important part of that undertaking.”
The ban “directs that no employee of any agency of the Commonwealth of Virginia shall download or use the DeepSeek AI application on any government-issued devices, including state-issued cell phones, laptops, or other devices capable of connecting to the internet. The Order further prohibits downloading or accessing the DeepSeek AI app on Commonwealth networks.”
These three states determined that the Chinese-owned applications DeepSeek and RedNote pose threats by granting a foreign adversary access to critical infrastructure data. The proactive bans by these states will no doubt be followed by others, much like the state-level TikTok bans that spread until the federal government issued a bipartisan nationwide ban. President Trump has paused that ban, despite the well-documented national security threats posed by the social media platform. Hopefully, more states will follow suit in banning DeepSeek and RedNote. Consumers and employers can take matters into their own hands by not downloading either app and banning them from the workplace. Get ahead of the curve, learn from the TikTok experience, and avoid DeepSeek and RedNote now.