Extension vs. Capability – Google Learns the Difference the Hard Way

Earlier this week, frequent CIPAWorld participant Google lost a motion to dismiss based on the use of its Google Cloud Contact Center AI (“GCCCAI”) product. The case (Ambriz v. Google, LLC, Case No. 23-cv-05437-RFL (N.D. Cal. Feb. 10, 2025)) raises some fascinating questions about the use of AI in contact centers and more generally.
The GCCCAI product (a prior motion to dismiss involving the product was discussed on TCPAWorld) “offers a virtual agent for callers to interact with, and it can also support a human agent, including by: (i) sending the human agent the transcript of the initial interaction [between the caller] and the GCCCAI virtual agent, (ii) acting as a ‘session manager’ who provides the human agent with a real-time transcript, makes article suggestions, and provides step-by-step guidance and ‘smart replies’.” It does all of this without informing consumers that the call is being transcribed and analyzed.
Plaintiffs sued Google under Section 631(a) of the California Penal Code. This provision has three main prohibitions: (i) “intentional wiretapping”, (ii) “willfully attempting to learn the contents or meaning of a communication in transit”, and (iii) “attempting to use or communicate information obtained as a result of engaging in either of the two previous activities”. Plaintiffs claim Google violated prongs (i) and (ii).
Google’s best argument is that it is not a third party to the communications, because only “unauthorized third-party listeners” can violate Section 631(a). Google argues it is merely a software provider, like a tape recorder.
The Court disagreed. Recognizing that there are essentially two distinct branches of these cases when it comes to software-as-a-service providers, the Court proceeded to examine whether the GCCCAI product is an “extension” of the parties or instead has the “capability” to use the data for its own purposes.
If software has “merely captured the user data and hosted it on its own servers where [one of the parties] could then use data by analyzing”, the software is generally considered an extension of the parties. It is therefore not a third party and does not violate CIPA. This is similar to the “tape recorder” example preferred by Google.
Here, however, the Court viewed GCCCAI as “a third-party based on its capability to use user data to its benefit, regardless of whether or not it actually did so.” Applying this capability test, the Court found that the Plaintiffs had “adequately alleged that Google ‘has the capability to use the wiretapped data it collects…to improve its AI/ML models.’” Because Google’s own terms of use state that it may do so if its customer allows, the Court inferred that Google has the capability to do just that.
Google argued that it was contractually prohibited from doing so, but the Court found that those prohibitions do not change the fact that Google has the ability, and that ability is the determining factor. The motion to dismiss was therefore denied.
A couple of interesting takeaways from this case:

In a world where every company is adding AI to its products, it is vital to understand not only WHAT a vendor is doing with your data, but also what it COULD do with it. The capability to improve its models may be enough under this line of cases to require additional consumer disclosures.

We are all so used to “AI notetakers” on calls, whether Zoom, Teams, or, heaven forbid, Google Meet. What are those notetakers doing with your data? Should you be getting affirmative consent? Potentially. I think it’s a matter of time before someone tests the waters on those notetakers under CIPA. 

Spoiler alert: I have reviewed the Terms of Service of some major players in that space. Their Terms say they are going to use your data to train their models. Proceed with caution.

Minnesota AG Publishes Report on the Negative Effects of AI and Social Media Use on Minors

On February 4, 2025, the Minnesota Attorney General published the second volume of a report outlining the negative effects that AI and social media use is having on minors in Minnesota (the “Report”). The Report examines the harms experienced by minors caused by certain design features of these emerging technologies and advocates for legislation that would impose design specifications for such technologies.
Key findings from the Report include:

Minors are experiencing online harassment, bullying and unwanted contact as a result of their use of AI and social media.
Social media and AI platforms are enabling misuse of user information and images.
Lack of default privacy settings in these technologies is resulting in user manipulation and fraud.
Social media and AI platforms are designed to optimize user attention in ways that negatively impact minor users’ wellbeing.
Opt-out options generally have not been effective in addressing these harms.

In the final section of the Report, the Minnesota AG sets forth a number of recommendations to address the identified harms, including:

Develop policies that regulate technology design functions, rather than content published on such technologies.
Prohibit the use of dark patterns that compel certain user behavior (e.g., infinite scroll, auto-play, constant notifications).
Provide users with tools to limit deceptive design features.
Mandate a privacy by default approach for such technologies.
Limit engagement-based optimization algorithms designed to increase time spent on platforms.
Advocate for limited technology use in educational settings.

Thomson Reuters Wins Copyright Case Against Former AI Competitor

Thomson Reuters scored a major victory in one of the first cases dealing with the legality of using copyrighted data to train artificial intelligence (AI) models. In 2020, Thomson Reuters sued the now-defunct AI start-up Ross Intelligence for alleged improper use of Thomson Reuters materials, including case headnotes in its Westlaw search engine, to train its new AI model.
A key issue before the court was whether Ross Intelligence’s use of the headnotes constituted fair use, which permits a person to use portions of another’s work in limited circumstances without infringing the copyright. Courts weigh four factors in evaluating a fair use defense: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) how much of the work was copied and whether that portion was a substantial part of the entire work; and (4) whether the defendant’s use of the work affected its value.
In this case, federal judge Stephanos Bibas determined that each side had two factors in its favor. But the fourth factor, which supported Thomson Reuters, weighed most heavily in his finding that the fair use defense was inapplicable because Ross Intelligence sought to develop a competitive product. Lawsuits against other companies, like OpenAI and Microsoft, are currently pending in courts throughout the country, and decisions in those cases may involve similar questions about the fair use defense. However, Judge Bibas noted that Ross Intelligence’s AI model was not generative and that his decision was based only on Ross’s non-generative AI model. The distinction between the training data and resulting outputs of generative and non-generative AI will likely be key to deciding future cases.

Privacy Tip #431 – DOGE Has Access to Our Personal Information: What You Need to Know

According to a highly critical article recently published by TechCrunch, the Department of Government Efficiency (DOGE), President Trump’s advisory board headed by Elon Musk, has “taken control of top federal departments and datasets” and has access to “sensitive data of millions of Americans and the nation’s closest allies.” The author calls this “the biggest breach of US government data.” He continues, “[w]hether a feat or a coup (which depends entirely on your point of view), a small group of mostly young, private-sector employees from Musk’s businesses and associates — many with no prior government experience — can now view and, in some cases, control the federal government’s most sensitive data on millions of Americans and our closest allies.”
According to USA Today, “The amount of sensitive data that Musk and his team could access is so vast it has historically been off limits to all but a handful of career civil servants.” The article points out that:
If you received a tax refund, Elon Musk could get your Social Security number and even your bank account and routing numbers. Paying off a student loan or a government-backed mortgage? Musk and his aides could dig through that data, too.
If you get a monthly Social Security check, receive Medicaid or other government benefits like SNAP (formerly known as food stamps), or work for the federal government, all of your personal information would be at the Musk team’s fingertips. The same holds true if you’ve been awarded a federal contract or grant.
Private medical history could potentially fall under the scrutiny of Musk and his assistants if your doctor or dentist provides that level of detail to the government when requesting Medicaid reimbursement for the cost of your care.
A federal judge in New York recently issued a preliminary injunction stopping Musk and his software engineers from accessing the data, despite Musk calling the judge “corrupt” on X. USA Today reports that the White House says Musk and his engineers only have “read-only” access to the data, but that is not very comforting from a security standpoint. The Treasury Department has reportedly admitted that one DOGE staffer, a 25-year-old software engineer, had been mistakenly granted “read/write” permission on February 5, 2025. That is just frightening to me as one who works hard to protect my personal information.
TechCrunch reported that data security is not a priority for DOGE.
“For example, a DOGE staffer reportedly used a personal Gmail account to access a government call, and a newly filed lawsuit by federal whistleblowers claims DOGE ordered an unauthorized email server to be connected to the government network, which violates federal privacy law. DOGE staffers are also said to be feeding sensitive data from at least one government department into AI software.”
We all know that Musk loves AI. We are also well aware of the risks of using AI with highly sensitive data, including unauthorized disclosure and the possibility that the data will surface in model outputs.
All of this has prompted questions about whether this advisory board has proper security clearance to access our data.
Should you be concerned? Absolutely. I understand the goal of cutting costs. But why do these employees have access to our most private information, including our full Social Security numbers and health information? Do they really need that specific data to determine fraud or overspending?
I argue no. A tenet of data security is proper access control: users should have access only to the data needed for their business purpose. DOGE’s unfettered access to our highly sensitive information is not limited to data needed for a specific purpose. The security procedures for accessing the data are in question, and proper security protocols must be followed. According to Senator Ron Wyden of Oregon and Senator Jon Ossoff of Georgia, who is a member of the U.S. Senate Intelligence Committee, this is “a national security risk.” As a privacy and cybersecurity lawyer, I am very concerned. A hearing on an early lawsuit filed to prohibit this unrestricted access is scheduled for tomorrow. We will keep you apprised of developments as they progress.

Criminal Charges Lodged Against Alleged Phobos Ransomware Affiliates

Unfortunately, I’ve had unpleasant dealings with the Phobos ransomware group. My interactions with Phobos have been fodder for a good story when I educate client employees on recent cyber-attacks to prevent them from becoming victims. The story highlights how these ransomware groups, including Phobos, are sophisticated criminal organizations with managerial hierarchy. They use common slang in their communications and have to get “authority” to negotiate a ransom. It’s a strange world.
Because of my unpleasant dealings with Phobos, I was particularly pleased to see that the Department of Justice (DOJ) recently announced the arrest and extradition of Russian national Evgenii Ptitsyn on charges that he administered the Phobos ransomware variant.
This week, the DOJ unsealed charges against two more Russian nationals, Roman Berezhnoy and Egor Nikolaevich Glebov, who “operated a cybercrime group using the Phobos ransomware that victimized more than 1,000 public and private entities in the United States and around the world and received over $16 million in ransom payments.” They were arrested “as part of a coordinated international disruption of their organization, which includes additional arrests and the technical disruption of the group’s computer infrastructure.” I’m thrilled about this win. People always ask me whether these cyber criminals get caught. Yes, they do. This is proof of how important the Federal Bureau of Investigation (FBI) is in assisting with international cybercrime, and how effective its partnership with international law enforcement is in catching these pernicious criminals. This is why I firmly believe that we must continue to share information with the FBI to assist with investigations, and why the FBI must be allowed to continue its important work to protect U.S. businesses from cybercrime.

Three States Ban DeepSeek Use on State Devices and Networks

New York, Texas, and Virginia are the first states to ban DeepSeek, the Chinese-owned generative artificial intelligence (AI) application, on state-owned devices and networks.
Texas was first to tackle the problem when it banned state employees from using both DeepSeek and RedNote on January 31, 2025. The Texas ban includes other apps affiliated with the People’s Republic of China, including “Webull, Tiger Brokers, Moomoo[,] and Lemon8.”
According to the Texas Governor’s press release:
“Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. To achieve that mission, I ordered Texas state agencies to ban Chinese government-based AI and social media apps from all state-issued devices. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.” 

New York soon followed on February 10, 2025, banning DeepSeek from being downloaded on any devices managed by the New York State Office of Information Technology. According to the New York Governor’s release, “DeepSeek is an AI start-up founded and owned by High-Flyer, a stock trading firm based in the People’s Republic of China. Serious concerns have been raised concerning DeepSeek AI’s connection to foreign government surveillance and censorship, including how DeepSeek can be used to harvest user data and steal technology secrets.” The release further states: “The decision by Governor Hochul to prevent downloads of DeepSeek is consistent with the State’s Acceptable Use of Artificial Intelligence Technologies policy that was established at her direction over a year ago to responsibly evaluate AI systems, better serve New Yorkers, and ensure agencies remain vigilant about protecting against unwanted outcomes.”
The Virginia Governor signed Executive Order 26 on February 11, 2025, “banning the use of China’s DeepSeek AI on state devices and state-run networks.” According to the Governor’s press release, “China’s DeepSeek AI poses a threat to the security and safety of the citizens of the Commonwealth of Virginia…We must continue to take steps to safeguard our operations and information from the Chinese Communist Party. This executive order is an important part of that undertaking.”
The ban “directs that no employee of any agency of the Commonwealth of Virginia shall download or use the DeepSeek AI application on any government-issued devices, including state-issued cell phones, laptops, or other devices capable of connecting to the internet. The Order further prohibits downloading or accessing the DeepSeek AI app on Commonwealth networks.”
These three states determined that the Chinese-owned applications DeepSeek and RedNote pose threats by granting a foreign adversary access to critical infrastructure data. The proactive ban by these states will no doubt be followed by others, much as happened with TikTok before the federal government enacted a bipartisan nationwide ban. President Trump has paused that ban, despite the well-documented national security threats posed by the social media platform. Hopefully, more states will follow suit in banning DeepSeek and RedNote. Consumers and employers can take matters into their own hands by not downloading either app and banning them from the workplace. Get ahead of the curve, learn from the TikTok experience, and avoid DeepSeek and RedNote now.

Reminder: Data Protection Impact Assessments May Be Required Under New State Privacy Laws

As we settle into 2025, with five additional state privacy laws having gone or about to go into effect, we wanted to put on your radar the obligation to conduct data protection impact assessments (DPIAs). In general, a DPIA should contain:

a systematic description of potential processing operations and the purpose of the processing, including where applicable, the legitimate interest pursued by the controller;
an assessment of the necessity and proportionality of the processing operations in relation to the purpose;
an assessment of the risks to the rights and freedoms of consumers; and
potential measures to address the risks, including safeguards, security measures, and mechanisms to ensure the protection of personal data.

As a reminder, most of the new state privacy laws require businesses to complete DPIAs if they do any of the following:

Cookies and pixels (i.e., browser-based targeted advertising)
Custom and lookalike audiences (i.e., CRM-based targeted advertising)
CAPI (i.e., server-based targeted advertising)
App advertising (i.e., SDK-based targeted advertising)
Find-a-store (i.e., precise geolocation collection)
Other sensitive information collection (e.g., race, ethnicity, health, etc.)
Selling of personal data
Adaptive pricing (i.e., profiling that may cause financial injury)
Collecting credit card numbers (New Jersey privacy statute only)
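For compliance teams triaging their processing activities, the trigger list above can be sketched as a simple checklist function. This is an illustration only; the activity labels and the state-specific handling are assumptions, not statutory text, and real analysis should be done per statute:

```python
# Minimal sketch: flag whether a DPIA is likely required, based on the
# processing activities listed above. Activity keys are illustrative
# labels, not statutory terms.

DPIA_TRIGGERS = {
    "browser_targeted_advertising",   # cookies and pixels
    "crm_targeted_advertising",       # custom and lookalike audiences
    "server_targeted_advertising",    # CAPI
    "sdk_targeted_advertising",       # app advertising
    "precise_geolocation",            # e.g., find-a-store features
    "sensitive_data_collection",      # race, ethnicity, health, etc.
    "sale_of_personal_data",
    "profiling_financial_injury",     # adaptive pricing
}

# New Jersey adds credit card collection as a trigger.
STATE_SPECIFIC_TRIGGERS = {
    "NJ": {"credit_card_collection"},
}

def dpia_required(activities, state=None):
    """Return True if any listed activity triggers a DPIA."""
    triggers = set(DPIA_TRIGGERS)
    triggers |= STATE_SPECIFIC_TRIGGERS.get(state, set())
    return bool(triggers & set(activities))
```

Running a business's activity inventory through a check like this is a quick first pass at spotting where an assessment is owed.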

HHS Security Rule NPRM Proposes Makeover for Administrative Safeguard Compliance for Regulated Entities

In this week’s installment of our blog series on the U.S. Department of Health and Human Services’ (HHS) HIPAA Security Rule updates in its January 6 Notice of Proposed Rulemaking (NPRM), we are exploring the proposed updates to the HIPAA Security Rule’s administrative safeguards requirement (45 C.F.R. § 164.308).
Background
Currently, HIPAA regulated entities must generally implement nine standards for administrative safeguards protecting electronic protected health information (ePHI):

Security Management Process
Assigned Security Responsibility
Workforce Security
Information Access Management
Security Awareness and Training
Security Incident Procedures
Contingency Plan
Evaluation
Business Associate Contracts and Other Arrangements

Entities are already familiar with these requirements and their implementation specifications. The existing requirements either do not identify the specific control methods or technologies to implement or are otherwise “addressable” as opposed to “required” in some circumstances for regulated entities. As noted throughout this series, HHS has proposed removing the distinction between “required” and “addressable” implementation specifications, providing for specific guidelines for implementation with limited exceptions for certain safeguards, as well as introducing new safeguards.
New Administrative Safeguard Requirements
The NPRM proposes updates to the following administrative safeguards: risk analyses, workforce security, and information access management. HHS also introduced a new administrative safeguard, technology inventory management and mapping. These updated or new administrative requirements are summarized here:

Asset Inventory Management – The HIPAA Security Rule does not explicitly mandate a formal asset inventory, but HHS informal guidance and audits suggest that inventorying assets that create, receive, maintain, or transmit ePHI is a critical step in evaluating security risks. The NPRM proposes a new administrative safeguard provision requiring regulated entities to conduct and maintain written inventories of any technological assets (e.g., hardware, software, electronic media, data, etc.) capable of creating, receiving, maintaining, or transmitting ePHI, and to create a network map illustrating the movement of ePHI throughout the organization. HHS would require these inventories and maps to be reviewed and updated at least once every 12 months and when certain events prompt changes in how regulated entities protect ePHI, such as new, or updates to, technological assets; new threats to ePHI; transactions that impact all or part of regulated entities; security incidents; or changes in laws.
Risk Analysis – While conducting a risk analysis has always been a required administrative safeguard, the NPRM proposes more-detailed content specifications around items that need to be addressed in the written risk assessment, including reviewing the technology asset inventory; identifying reasonably anticipated threats and vulnerabilities; documenting security measures, policies and procedures for documenting risks and vulnerabilities to ePHI systems; and making documented “reasonable determinations” of the likelihood and potential impact of each threat and vulnerability identified.
Workforce Security and Information Access Management – The NPRM proposes that, with respect to its ePHI or relevant electronic information systems, regulated entities would need to establish and implement written procedures that (1) determine whether access is appropriate based on a workforce member’s role; (2) authorize access consistent with the Minimum Necessary Rule; and (3) grant and revise access consistent with role-based access policies. Under the NPRM, these administrative safeguard specifications would no longer be “addressable,” as previously classified, meaning these policies and procedures would now be mandatory for regulated entities. In addition, the NPRM develops specific standards for the content and timing of training workforce members on Security Rule compliance, beyond the previous general requirements.
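The role-based access requirement described above maps naturally onto a standard access-control check: each role is granted only the data categories needed for its function, in the spirit of the Minimum Necessary Rule. A minimal sketch, where the role names and data categories are hypothetical examples and not taken from the NPRM:

```python
# Minimal role-based access control sketch for ePHI. Each role's
# permission set lists only the data categories that role needs.
# Roles and categories are hypothetical illustrations.

ROLE_PERMISSIONS = {
    "billing_clerk": {"claims", "insurance_info"},
    "treating_clinician": {"claims", "insurance_info", "clinical_notes", "lab_results"},
    "it_support": set(),  # infrastructure duties, but no ePHI categories
}

def access_allowed(role, data_category):
    """Grant access only if the role's permission set includes the category."""
    return data_category in ROLE_PERMISSIONS.get(role, set())
```

Centralizing grants in a table like this also makes the "grant and revise access" step auditable: revising a role's access is a single edit to its permission set.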

Texas’ Power Transmission Infrastructure: Addressing Growing Demand from Data Centers and Crypto Mining

Texas is facing a rapidly evolving energy landscape, driven in part by the surging power demands of data centers and cryptocurrency mining operations. As the digital economy expands, the state’s existing power transmission infrastructure must adapt to ensure grid reliability, affordability and sustainability. However, the growing demand for electricity raises critical challenges, including the need for additional transmission capacity, grid resilience, and fair cost allocation for new infrastructure investments.
Rising Energy Demand from Data Centers and Crypto Mining
Texas has become a prime location for data centers and cryptocurrency mining operations due to its deregulated energy market, favorable business climate and relatively low electricity costs. Data centers, which support cloud computing, artificial intelligence (AI), and financial transactions, require vast amounts of power, often operating 24/7. Similarly, cryptocurrency mining facilities run continuously, consuming significant amounts of electricity to maintain blockchain networks.
The Electric Reliability Council of Texas (ERCOT) projects that power demand from these industries will grow substantially in the coming years. Consumption of electricity from large flexible loads such as data centers and crypto mining facilities is projected to account for 10% of ERCOT’s total forecasted electricity consumption in 2025.  ERCOT currently expects power demand to nearly double by 2030.  Without strategic infrastructure upgrades, this demand would likely strain the grid, increase congestion and lead to higher electricity prices for consumers.
Challenges with Existing Transmission Infrastructure
Texas operates its own independent power grid, which provides flexibility but also limits its ability to import electricity from neighboring states during periods of high demand. The state’s transmission infrastructure has already faced challenges in keeping up with rapid population growth and extreme weather events.  In 2021, Winter Storm Uri exposed vulnerabilities in the grid, leading to widespread outages and highlighting the need for greater investment in both generation and transmission capacity.
One major challenge is that much of Texas’ renewable energy generation—especially wind and solar—is located in rural areas, far from major load centers like Dallas, Houston and Austin. Without sufficient transmission capacity, this clean energy cannot be efficiently delivered to where it is needed. The addition of high-energy-consuming industries like data centers and crypto mining exacerbates this challenge by increasing congestion on existing transmission lines.
The Need for Additional Transmission Infrastructure
To accommodate the growing energy needs, Texas must significantly expand its high-voltage transmission network. New transmission lines are necessary to:

Relieve Grid Congestion – increasing transmission capacity reduces bottlenecks that can drive up energy prices and cause reliability concerns.
Enhance Grid Resilience – strengthening transmission infrastructure can help prevent widespread outages during extreme weather events.
Support Renewable Integration – more transmission lines will allow Texas to take full advantage of its abundant wind and solar resources by connecting them to high-demand areas.
Ensure Reliability for Data Centers and Crypto Mining – dedicated infrastructure planning can ensure that new energy-intensive operations do not disrupt service for residential and commercial consumers.

The Costs of Transmission Expansion
One of the biggest questions surrounding transmission expansion is funding. Historically, Texas has used a mix of ratepayer contributions, state incentives, and private investments to build and maintain its power infrastructure. There are several potential funding mechanisms for new transmission lines:
Ratepayer Contributions – transmission costs are often passed on to consumers through electricity bills. However, increasing rates to fund expansion may face resistance, especially if residential and small-business customers bear a disproportionate burden of the cost.
ERCOT Transmission Cost Recovery – ERCOT has a cost allocation model that spreads transmission investments across various market participants. This approach ensures that those benefiting from the upgrades contribute to the costs.
Direct Charges on Large Energy Consumers – one potential policy solution is to require data centers and crypto mining companies to pay a larger share of transmission infrastructure costs. Special tariffs or direct infrastructure investment agreements could be established to ensure that these industries contribute fairly.
Public-Private Partnerships – collaboration between the state government, utilities, and private investors could help finance large-scale transmission projects. In some cases, tax incentives or low-interest financing options could encourage private sector investment in critical infrastructure.
Federal Funding and Grants – the federal government has recently made funding available for grid modernization projects through the Infrastructure Investment and Jobs Act. The new administration has called some of this funding into question. Texas could leverage these funds to supplement state and private investments.
Balancing Growth and Grid Reliability
Expanding transmission infrastructure is essential, but it must be done in a way that balances economic growth with grid reliability. Policymakers must ensure that the costs are distributed equitably and that the grid remains stable during periods of high demand. Additionally, investments in energy storage, smart grid technology, and demand response programs can complement transmission expansion by improving overall efficiency.
Texas has long been a leader in energy innovation, and addressing these transmission challenges will be critical to the state maintaining that position. By implementing forward-thinking policies and funding strategies, the state can support its growing digital economy while ensuring a reliable and affordable power supply for all consumers.

Employers Should Plan for the Impact of Evolving Social Policy on Their Workforce

Even before the 2024 presidential election and the recent wave of executive orders, employers were evaluating their positions on various social issues.
Whether taking a formal stand, abstaining from a position, or landing somewhere in between, employers often consider external stakeholders and the court of public opinion. But they frequently forget about a critical and impactful audience—their employees.
Below are a few key areas where evolving social policies intersect with employee considerations.

Environmental, Social, and Governance (ESG) Policies: Regulations around diversity, equity, and inclusion; sustainability; the environment; and financial investments can differ across federal, state, and local jurisdictions, and certain rules apply only to government contractors. Aside from legal concerns, employers may face public and private questions about their actions or policies from employees. As such, employers should make sure that their ESG policies are current, thoughtful, and well communicated, especially in light of changing public sentiment, regulations, and legislation.
Social Media and Freedom of Speech: Employer policies on social media, recording/filming in the workplace (and online), volunteerism, non-solicitation, and whistleblowing should be updated to ensure that they reflect the latest laws, regulations, and guidance by applicable agencies and regulatory bodies. Management should also be trained on these policies, including how to respond to situations when the company’s employees choose to speak out on issues.
Benefit Programs: Employees might question their employer’s benefit policies relating to health care coverage provisions, benefit subsidies, time off/leave and holidays, and even voluntary benefit choices. Do these programs appear to favor certain employees over others? Employers should regularly evaluate these programs not only for compliance but also through the lens of their employees’ needs and expectations, which may differ based on location.
Labor Negotiations: An employer’s social advocacy and related positions impact its employees and the labor unions that currently—or may in the future—represent them. Therefore, employers should make sure that they have a strategy that supports this relationship and is in compliance with applicable labor laws, as well as labor contracts that are in place.
Outsourced, Offshore/Nearshore Workforce: When a company’s contingent and contract labor works side by side with the company’s employees, it’s essential that policies and programs account for this important and sometimes significant part of the workforce. Vendor contracts and communication strategies should also be aligned with these efforts.
Immigration Policies: Most industries and their employees are affected by immigration policy. A legal immigrant workforce will likely be concerned about their own status and that of their families during this uncertain time. Employers must review their policies and programs for these valuable workers and consider what supports, policies, and communications they should provide.
Mandatory Training Programs: Employers should annually review mandatory training programs against changing regulations and expectations, as well as current strategies related to advocacy and ESG.

The bottom line: An employer’s stand on social issues, along with its related policies, investments, programs, and trainings, affects its workforce. A company’s employees are its face to customers and the public, so their engagement and alignment matter. Because laws and regulations affecting ESG are continually changing, employees will be more engaged and better ambassadors for their employer if it has a well-considered strategy and communication plan addressing these topics.
Michelle Wright also contributed to this post.

SO IT GOES: Lead Buyer Out of ATDS Claim But Hooked on DNC and Texas Registration Claim in TCPA Class Action

Pretty common factual scenario.
Lead generator makes outbound calls and talks to consumer.
Consumer either pretends to be interested or actually is interested and then is transferred to a lead buyer who can actually provide the good or service the consumer wants.
But then consumer sues lead buyer for making the illegal call–even though the lead buyer did not make the call at all and likely had no idea the call was illegal.
It happens literally every day in TCPAWorld and it remains the biggest problem/risk with buying third-party leads.
Well, in Ortega v. Ditommaso, 2025 WL 440278 (W.D. Tex. Feb. 6, 2025), a lead buyer walked away from a piece of a TCPA case, defeating the ATDS component.
In Ortega a call center run by Meridian Services, LLC allegedly contacted plaintiff to try to sell a business loan. Plaintiff stayed on the line and pretended to be interested to find out who was calling. As a result the call was transferred to Ditommaso, Inc. who tried to sell a loan.
Plaintiff sued Meridian and Ditommaso over the calls, alleging they were made using an ATDS and violated his DNC rights because they were made without consent.
Ditommaso moved to dismiss, and the court threw out part of the case.
As to the ATDS component, the court adopted the narrow ATDS definition accepted in the Second and Ninth Circuits and determined that because there were no allegations establishing the calls were placed at random, an ATDS was not used. (Careful with this, folks, because some courts apply a different standard.) Regardless, a nice win for the defense on this piece.
Next, Defendant asked the court to toss the case because only one call was placed to Plaintiff and the follow-up texts were sent only because he stated he was interested in the product. But the evidence of the flow of calls and texts was not on the face of the complaint, so the court would not consider it and denied the motion on that basis.
The Court also found the vicarious liability allegations were sufficient because Meridian was alleged to be an agent of Ditommaso for purposes of making the calls.
The Court also determined Ditommaso could be vicariously liable for the Texas Business and Commerce Code § 302.101 violation, even though it was not itself required to register as a marketer in the state.
So, some good, some bad. But better than nothing.
Takeaways here:

Buying third-party leads is dangerous;
Make sure you are working with only registered marketers in Texas;
Some courts will toss ATDS claims if calls are made from a list, but not all;
You cannot introduce evidence of consent at the pleadings stage (unless you are challenging standing, which Defendant did not do).

SEC Grants Further Relief From Including Personally Identifiable Information in CAT Reporting

On February 10, the Securities and Exchange Commission (SEC) granted relief exempting industry members from reporting a natural person’s name, address, and year of birth to the Consolidated Audit Trail (CAT). Industry members must still report transformed social security numbers (SSNs) or individual taxpayer identification numbers (ITINs) for natural persons and, to the extent applicable, Large Trader IDs (LTIDs) and Legal Entity Identifiers (LEIs). This exemptive relief builds on the SEC’s 2020 relief, which exempted industry members from reporting actual SSNs/ITINs and full birth dates to CAT (while still requiring year-of-birth reporting) and established the system for transforming SSNs/ITINs, which are then used to generate CAT Customer-IDs (CCIDs).
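For readers unfamiliar with how a “transformed” SSN can still be useful to regulators, the idea is a one-way, keyed transformation: the firm reports a derived token rather than the raw number, and the same input always produces the same token, so activity can be linked to a single customer identifier without the raw SSN ever being reported. The sketch below is purely illustrative; the actual CAT transformation algorithm is specified under the CAT NMS Plan and is not reproduced here, and the key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret key held by the transforming party; the real CAT
# process uses its own specified algorithm and key management.
SECRET_KEY = b"firm-held-secret"

def transform_ssn(ssn: str) -> str:
    """Return a deterministic, irreversible token standing in for a raw SSN/ITIN."""
    digest = hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()
    return digest[:16]  # truncated token; the raw SSN cannot be recovered from it

# The same SSN always maps to the same token, so records can be linked
# (e.g., to generate a CCID) without the raw number being transmitted.
token_a = transform_ssn("123-45-6789")
token_b = transform_ssn("123-45-6789")
assert token_a == token_b
assert token_a != transform_ssn("987-65-4321")
```

The design point is that the transformation is deterministic (enabling linkage across reports) but not reversible, which is why a breach of the token database is far less damaging than a breach of raw SSNs.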
The SEC’s relief acknowledges ongoing concerns from industry members and trade associations that the wholesale collection of customer information created cybersecurity risks, as such sensitive customer information was vulnerable to hacking by cybercriminals. If that information were paired with the full inventory of a customer’s historical securities transactions maintained in the CAT transaction database, cybercriminals could use it to impersonate customers or regulators, take over or otherwise compromise customer accounts, or otherwise engage in fraud or other bad acts affecting customers or the markets. The SEC’s action largely tracks a recommendation made last month by FINRA President and CEO Robert Cook (https://www.finra.org/media-center/blog/cat-should-be-modified-to-cease-collecting-personal-information-on-retail-investors), perhaps anticipating inevitable CAT reform by a Republican-led Commission.
Regulators will still be able to obtain customer-specific information regarding individual transactions, but they will have to do so by requesting such information from broker-dealers through Bluesheet and other regulatory requests. Both the SEC’s exemptive order and FINRA’s proposal highlighted reverting to such a “request-response” system.
The SEC’s exemptive order is available at https://www.sec.gov/files/rules/sro/nms/2025/34-102386.pdf.