Privacy and Advertising Year in Review 2024: Will Kids and Teens Remain a Focus in 2025?

A new year. A new administration in Washington. While protecting kids and teens is likely to remain an issue that drives legislation, litigation, and policy discussions in 2025, the wave of Executive Orders issued on day one of the Trump Administration may result in new or changed priorities and some delay in the effective date of the recently updated Children’s Online Privacy Protection Rule (COPPA Final Rule).
We start with a recap of significant actions affecting kids and teens from the beginning of 2024 to the end of the Biden Administration in January 2025 and some early action by the Trump Administration.
Key Actions Affecting Kids and Teens:

FTC Regulation and Reports

The Federal Trade Commission (FTC or Commission) kicked off 2024 with proposed amendments to the rule implementing the Children’s Online Privacy Protection Act (COPPA) and issued a COPPA Final Rule in the closing days of the Biden Administration. FTC Commissioner and now Chair Andrew Ferguson identified several areas requiring clarification, and publication of the COPPA Final Rule will likely be delayed by President Trump’s Executive Order freezing agency action. 
The FTC released a highly critical staff report, A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services, in September 2024. The report, based on responses to orders the FTC issued in 2020 under Section 6(b) of the FTC Act to nine of the largest social media platforms and video streaming services, including TikTok, Amazon, Meta, Discord, and WhatsApp, examined the companies’ privacy and data security practices and their impacts on children and teens. 
Policy debates centered on Artificial Intelligence (AI), and one of the Commission’s final acts was a January 17, 2025, FTC Staff Report on AI Partnerships & Investments 6(b) Study. 
The Project 2025 report, Mandate for Leadership 2025: The Conservative Promise, recommends possible FTC reforms and highlights the need for added protections for kids and teens and action to safeguard the rights of parents. The report stresses in particular the inability of minors to enter into contracts. 

Litigation and Enforcement: the FTC

On July 9, 2024, chat app maker NGL Labs settled with the FTC and Los Angeles District Attorney after they brought a joint enforcement action against the company and its owners for violations of the COPPA Rule and other federal and state laws. 
On January 17, 2025, the FTC announced a $20 million settlement with the developer of the popular game Genshin Impact, resolving an enforcement action alleging violations of COPPA and deceptive and unfair marketing practices. In addition to alleging that the company collected personal information from children in violation of COPPA, the complaint alleged that the company deceived users about the costs of in-game transactions and the odds of obtaining prizes. As a result, the company is required to block children under 16 from making in-game purchases without parental consent. 

Federal and State Privacy Legislation 

Federal privacy legislation, including the Kids Online Safety Act (KOSA 2.0) and its successor, the Kids Online Safety and Privacy Act (KOSPA), failed to make it through Congress, although 32 state attorneys general (AGs) sent a letter to Congress urging passage of KOSA 2.0 on November 18, 2024. 
Last year, Kentucky, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Rhode Island enacted comprehensive privacy laws, each of which includes provisions affecting children and teens. Those states join California, Colorado, Connecticut, Delaware, Florida, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia. 

Litigation and Enforcement: the Courts

Throughout 2024, state attorneys general and private plaintiffs brought litigation targeting social media platforms and streaming services, alleging that they are responsible for mental and physical harm to kids and teens. 
Legal challenges to state social media laws, arguing that they violate First Amendment rights among other grounds, were filed or heard by courts in 2024, and legal action has continued into this year. On February 3, 2025, the tech trade group NetChoice filed a complaint in federal district court in Maryland challenging the Maryland Age-Appropriate Design Code Act (Maryland Kid’s Code or Code), which was enacted on May 9, 2024. The complaint, which is similar to NetChoice’s recent challenges to other state social media laws, alleges that the Code violates the First Amendment by requiring websites “to act as government speech police” and alter their protected editorial functions through a vague and subjective “best interests of children” standard that gives state officials “nearly boundless discretion to restrict speech.” In 2024, NetChoice and its partners successfully obtained injunctions or partial injunctions against social media laws on constitutional grounds in Utah on September 9, 2024, in Ohio on February 12, 2024, and, at the eleventh hour, on December 31, 2024, against California’s Protecting Our Kids from Social Media Addiction Act. NetChoice’s challenge to Mississippi HB 1126 was heard by the U.S. Court of Appeals for the Fifth Circuit on February 2, 2025, but a decision had not been published as of this writing. 
On August 16, 2024, a panel of the U.S. Court of Appeals for the Ninth Circuit partially affirmed the district court’s opinion that the data protection impact assessment (DPIA) provisions of the California Age-Appropriate Design Code Act (CAADCA) “clearly compel(s) speech by requiring covered businesses to opine on potential harms to children” and are therefore likely unconstitutional. However, the appeals court vacated the rest of the district court’s preliminary injunction “because it is unclear from the record whether the other challenged provisions of the CAADCA facially violate the First Amendment, and it is too early to determine whether the unconstitutional provisions of the CAADCA were likely severable” from the rest of the Act. The panel remanded the case to the district court for further proceedings.  
On July 1, 2024, the Supreme Court, deciding challenges to the content moderation provisions of Texas HB 20 and Florida SB 7072 jointly, vacated the lower court rulings and sent the cases back for further “fact-intensive” First Amendment analysis.  
On January 17, 2025, the U.S. Supreme Court affirmed the judgment of the U.S. Court of Appeals for the D.C. Circuit that upheld a Congressional ban on TikTok due to national security concerns regarding TikTok’s data collection practices and its relationship with a foreign adversary. The Court concluded that the challenged provisions do not violate the petitioners’ First Amendment rights. President Trump has vowed to find a solution so that U.S. users can access the platform. 
On January 15, 2025, the U.S. Supreme Court heard an appeal of a Texas law that requires age verification to access porn sites, and it seems likely the Court will uphold the law.

We Forecast:

Efforts to advance a general federal privacy law and added protections for kids and teens will redouble. Indeed, S. 278, the Kids Off Social Media Act (KOSMA), advanced out of the Senate Commerce Committee on February 5, 2025. However, tight margins and the thorny issues of preemption and a private right of action will complicate enactment of general privacy legislation. 
States will continue to be actively engaged on privacy and security legislation, and legal challenges on constitutional and other grounds are expected to continue. 
Legal challenges to data collection and advertising practices of platforms, streaming services, and social media companies will continue. 
The FTC had planned to hold a virtual workshop on February 25, 2025, on design features that “keep kids engaged on digital platforms.” The FTC’s September 26, 2024, announcement outlines topics for discussion, including the positive and negative physical and psychological impacts of design features on youth well-being, but the workshop has been postponed.

Our crystal ball tells us that privacy protection of kids and teens and related questions of responsibility, liability, safety, parental rights, and free speech will continue to drive conversation, legislation, and litigation in 2025 at both the federal and the state level. While the deadline for complying with the new COPPA Rule is likely to slide, businesses will need to implement operational changes to comply with new obligations under the Rule, while remaining aware of the evolving policy landscape and heightened litigation risks.

European Commission Withdraws ePrivacy Regulation and AI Liability Directive Proposals

On February 11, 2025, the European Commission made available its 2025 work program (the “Work Program”). The Work Program sets out the key strategies, action plans and legislative initiatives to be pursued by the European Commission.
As part of the Work Program, the European Commission announced that it plans to withdraw its proposals for a new ePrivacy Regulation (aimed at replacing the current ePrivacy Directive) and an AI Liability Directive (aimed at complementing the new Product Liability Directive) due to a lack of consensus on their adoption. The withdrawal means that the current ePrivacy Directive and its national transposition laws will remain in force, and it postpones EU-level regulation of non-contractual liability for damages arising from the use of AI. 
Read the Work Program.

State Regulators Eye AI Marketing Claims as Federal Priorities Shift

With the increase in AI-related litigation and regulatory action, it is critical for companies to monitor the AI technology landscape and think proactively about how to minimize risk. To help companies navigate these increasingly choppy waters, we’re pleased to present part two of our series, in which we turn our focus to regulators, where we’re seeing increased scrutiny at the state level amidst uncertainty at the federal level.
FTC Led the Charge but Unlikely to Continue AI “Enforcement Sweep”
As mentioned in part one of our series, last year regulators at the Federal Trade Commission (FTC) launched “Operation AI Comply,” which it described as a “new law enforcement sweep” related to using new AI technologies in misleading or deceptive ways.
In September 2024, the FTC announced five cases against AI technology providers for allegedly deceptive claims or unfair trade practices. While some of these cases involve traditional get-rich-quick schemes with an AI slant, others highlight the risks inherent in the rapid adoption of new AI technologies. Specifically, the complaints filed by the FTC involve:

An “AI lawyer” service that was supposedly able to draft legal documents in the U.S. and automatically analyze a customer’s website for potential violations.
Marketing of a “risk free” business powered by AI that refused to honor money-back guarantees when the business began to fail.
Claims of a get-rich-quick scheme that attracted investors by claiming they could easily invest in online businesses “powered by artificial intelligence.”
Positioning a business opportunity supposedly powered by AI as a “surefire” investment and threatening people who attempted to share honest reviews.
An “AI writing assistant” that enabled users to quickly generate thousands of fake online reviews of their businesses.

Since these announcements, dramatic changes have occurred at the FTC (and across the federal government) as a result of the new administration. Last month, the Trump administration appointed FTC Commissioner Andrew N. Ferguson as the new FTC chair, and Mark Meador’s nomination to fill the FTC Commissioner seat left vacant by former chair Lina M. Khan appears on track for confirmation. These leadership and composition changes will likely impact whether and how the FTC pursues cases against AI technology providers.
For example, Commissioner Ferguson strongly dissented from the FTC’s complaint and consent agreement with the company that created the “AI writing assistant,” arguing that the FTC’s pursuit of the company exceeded its authority.
And in a separate opinion supporting the FTC’s action against the “AI lawyer” mentioned above, Commissioner Ferguson emphasized that the FTC does not have authority to regulate AI on a standalone basis, but only where AI technologies interact with its authority to prohibit unfair methods of competition and unfair or deceptive acts and practices.
While it is impossible to predict precisely how the FTC under the Trump administration will approach AI, Commissioner Ferguson’s prior writings provide insight into the FTC’s likely regulatory focus for AI, as does Chapter 30 of Project 2025 (drafted by Adam Candeub, who served in the first Trump administration), which emphasizes protecting children online.
The impact of the new administration’s different approach to AI regulation is not limited to the FTC and likely will affect all federal regulatory and enforcement activity. This is due in part to one of President Trump’s first executive orders, “Removing Barriers to American Leadership in Artificial Intelligence,” which “revokes certain existing AI policies and directives that act as barriers to American AI innovation.”
That order repealed the Biden administration’s 2023 executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which established guidelines for the development and use of AI. An example of this broader impact is the SEC’s proposed rule on the use of AI by broker-dealers and registered investment advisors, which is likely to be withdrawn based on the recent executive order, especially given the acting chair’s public hostility toward the rule and the emphasis on reducing securities regulation outlined in Chapter 27 of Project 2025. 
The new administration has also been outspoken in international settings regarding its view that regulating AI will give advantages to authoritarian nations in the race to develop the powerful technology.
State Attorneys General Likely to Take on Role of AI Regulation and Enforcement
Given the dramatic shifts in direction and focus at the federal level, it is likely that short-term regulatory action will increasingly shift to the states.
In fact, state attorneys general of both parties have taken recent action to regulate AI and issue guidance. As discussed in a previous client alert, Massachusetts Attorney General Andrea Campbell has emphasized that AI development and use must conform with the Massachusetts Consumer Protection Act (Chapter 93A), which prohibits practices similar to those targeted by the FTC.
In particular, she has highlighted practices such as falsely advertising the quality or usability of AI systems or misrepresenting the safety or conditions of an AI system, including representations that the AI system is free from bias.
Attorney General Campbell also recently joined a coalition of 38 other attorneys general and the Department of Justice in arguing that Google engaged in unfair methods of competition by making its AI functionality mandatory for Android devices, and by requiring publishers to share data with Google for the purposes of training its AI.
Most recently, California Attorney General Rob Bonta issued two legal advisories emphasizing that developers and users of AI technologies must comply with existing California law, including new laws that went into effect on January 1, 2025. The scope of his focus on AI seems to extend beyond competition and consumer protection laws to include laws related to civil rights, publicity, data protection, and election misinformation.
Bonta’s second advisory emphasizes that the use of AI in health care poses increased risks of harm that necessitate enhanced testing, validation, and audit requirements, potentially signaling to the health care industry that its use of AI will be an area of focus for future regulatory action.
Finally, in a notable settlement that was the first of its kind, Texas Attorney General Ken Paxton resolved allegations that an AI health care technology company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of its AI products, including error and hallucination rates.
As AI technology continues to impact consumers, we expect other attorneys general to follow suit in bringing enforcement actions based on existing consumer protection laws and future AI legislation.
Moving Forward with Caution
Recent success by plaintiffs, combined with an active focus on AI by state regulators, should encourage businesses to be thoughtfully cautious when investing in new technology. Fortunately, as we covered in our chatbot alert, there are a wide range of measures businesses can take to reduce risk, both during the due diligence process and upon implementing new technologies, including AI technologies, notwithstanding the change in federal priorities. Other countries – particularly in Europe – may also continue their push to regulate AI. 
At a minimum, businesses should review their consumer-facing disclosures — usually posted on the company website — to ensure that any discussion of technology is clear, transparent, and aligned with how the business uses these technologies. Companies should expect the same transparency from their technology providers. Businesses should also be wary of so-called “AI washing,” which is the overstatement of AI capabilities and understatement of AI risks, and scrutinize representations to business partners, consumers, and investors.
Future alerts in this series will cover:

Risk mitigation steps companies can take when vetting and adopting new AI-based technologies, including chatbots, virtual assistants, speech analytics, and predictive analytics.
Strategies for companies that find themselves in court or in front of a regulator with respect to their use of AI-based technologies.

Trump Nominates McKernan to Be CFPB Director

On Feb. 11, 2025, President Trump nominated Jonathan McKernan to be the next director of the Consumer Financial Protection Bureau (CFPB). If confirmed by the Senate, McKernan would replace Russell Vought, who President Trump designated as acting director of the CFPB on Feb. 7, 2025, a day after Vought was confirmed as the director of the White House Office of Management and Budget. 
In his most recent role, McKernan was a member of the Board of Directors of the Federal Deposit Insurance Corporation (FDIC). He held that post from Jan. 5, 2023, until he resigned on Feb. 10, 2025.  
While at the FDIC, Mr. McKernan served as co-chairman of the Special Committee of the Board that oversaw an independent third-party review of sexual harassment and professional misconduct allegations at the FDIC, as well as issues relating to the FDIC’s workplace culture. As one of two Republican members of the Board, McKernan—along with now-Acting FDIC Chairman Travis Hill—frequently opposed the efforts of the Democratic majority. 
Prior to joining the FDIC, McKernan was a counsel to Ranking Member Pat Toomey (R-PA) on the staff of the Senate Committee on Banking, Housing, and Urban Affairs. He also served as a senior counsel at the Federal Housing Finance Agency, a senior policy advisor at the Department of the Treasury, and a senior financial policy advisor to Sen. Bob Corker (R-TN). He holds a Bachelor of Arts and Master of Arts in economics from the University of Tennessee, and a law degree from the Duke University School of Law. 
If confirmed by the Senate, McKernan would return to the FDIC board, as the CFPB’s Director holds an ex officio seat. 
Industry groups have welcomed McKernan’s nomination. For instance, the American Financial Services Association released a statement congratulating McKernan on his nomination and complimenting his “extensive experience in the financial-services policy arena.” The Consumer Bankers Association followed suit, issuing a statement that noted that “Mr. McKernan’s experience as a former FDIC Board Member, counsel to lawmakers, and attorney focused on consumer finance and banking highlight his qualifications to lead the Bureau at a critical time.” 
It is unclear what McKernan will try to accomplish if confirmed as CFPB director. The Trump administration has suggested that the CFPB should be shut down. President Trump has confirmed that his goal is to dismantle the CFPB, which he said “was set up to destroy people.” But it is unlikely that the Trump administration would be able to abolish or shut down the CFPB, as any such effort would likely face significant challenges in Congress and the courts.  
Given that fact, McKernan—who is likely to be confirmed—may take steps to rein in the CFPB’s perceived abuses and chart out a business-friendly regulatory and enforcement agenda, with much of the agenda tied to the goals of reducing regulatory compliance costs and promoting innovation. But those broad outlines leave open questions that will only be answered in the coming weeks and months.

Global Data Protection Authorities Issue Joint Statement on Artificial Intelligence

On February 11, 2025, the data protection authorities (“DPAs”) of the UK, Ireland, France, South Korea and Australia issued a joint statement on building trustworthy data governance frameworks to encourage development of innovative and privacy-protective artificial intelligence (“AI”) (the “Joint Statement”). In the Joint Statement, the DPAs recognize “the importance of supporting players in the AI ecosystem in their efforts to comply with data protection and privacy rules and help them reconcile innovation with respect for individuals’ rights.”
The Joint Statement refers to the “leading role” DPAs have in “shaping data governance” to address the evolving challenges of AI. Specifically, the Joint Statement indicates that the authorities will commit to:

Foster a shared understanding of lawful grounds for processing personal data in the context of AI training.
Exchange information and establish a shared understanding of proportionate safety measures, to be updated in line with evolving AI data processing activities.
Monitor technical and societal impacts of AI.
Reduce legal uncertainties and create opportunities for innovation where data processing is considered essential for AI.
Strengthen interactions with other authorities to enhance consistency between different regulatory frameworks for AI systems, tools and applications, including those responsible for competition, consumer protection and intellectual property.

Read the full Joint Statement.

Extension vs. Capability – Google Learns the Difference the Hard Way

Earlier this week, frequent CIPAWorld participant Google lost a motion to dismiss claims based on the use of its Google Cloud Contact Center AI (“GCCCAI”) product. The case, Ambriz v. Google, LLC, Case No. 23-cv-05437-RFL (N.D. Cal. Feb. 10, 2025), raises some fascinating questions regarding the use of AI in contact centers and more generally.
The GCCCAI product (a prior motion to dismiss in this case was discussed on TCPAWorld) “offers a virtual agent for callers to interact with, and it can also support a human agent, including by: (i) sending the human agent the transcript of the initial interaction with the GCCCAI virtual agent, (ii) acting as a ‘session manager’ who provides the human agent with a real-time transcript, makes article suggestions, and provides step-by-step guidance and ‘smart replies’.” It does all of this without informing consumers that the call is being transcribed and analyzed.
Plaintiffs sued Google under Section 631(a) of the California Penal Code. This provision has three main prohibitions: (i) “intentional wiretapping”, (ii) “willfully attempting to learn the contents or meaning of a communication in transit”, and (iii) “attempting to use or communicate information obtained as a result of engaging in either of the two previous activities”. Plaintiffs claim Google violated prongs (i) and (ii).
Google’s best argument is that it is not a third party to the communications, because only “unauthorized third-party listeners” can violate Section 631(a). Google contends that it is not a third party at all but merely a software provider, like a tape recorder.
The Court disagreed. Recognizing that there are essentially two distinct lines of cases on how to treat software-as-a-service providers, the court examined whether the GCCCAI product is an “extension” of the parties or whether it has the “capability” to use the data for its own purposes.
If software has “merely captured the user data and hosted it on its own servers where [one of the parties] could then use data by analyzing”, the software is generally considered an extension of the parties. In that case, it is not a third party and would not violate CIPA. This is similar to the “tape recorder” analogy preferred by Google.
Here, however, the court viewed GCCCAI as “a third-party based on its capability to use user data to its benefit, regardless of whether or not it actually did so.” Applying this capability test, the Court found that the Plaintiffs had “adequately alleged that Google ‘has the capability to use the wiretapped data it collects…to improve its AI/ML models.’” Because Google’s own terms of use stated that it may do so if its Customer allows it to, the Court inferred that Google had the capacity to do just that.
Google argued that it was contractually prohibited from doing so, but the Court found that those prohibitions do not change the fact that Google has the ability to do so. And that is the determining factor. Therefore, the motion to dismiss was denied.
A couple of interesting takeaways from this case:

In a world where every company is throwing AI in their products, it is vital to understand not only WHAT they are doing with your data, but also what they COULD do with it. The capability to improve their models may be enough under this line of cases to require additional consumer disclosures.

We are all so used to “AI notetakers” on calls, whether Zoom, Teams, or, heaven forbid, Google Meet. What are those notetakers doing with your data? Should you be getting affirmative consent? Potentially. I think it’s a matter of time before someone tests the waters on those notetakers under CIPA. 

Spoiler alert: I have reviewed the Terms of Service of some major players in that space. Their Terms say they are going to use your data to train their models. Proceed with caution.

Minnesota AG Publishes Report on the Negative Effects of AI and Social Media Use on Minors

On February 4, 2025, the Minnesota Attorney General published the second volume of a report outlining the negative effects that AI and social media use is having on minors in Minnesota (the “Report”). The Report examines the harms experienced by minors caused by certain design features of these emerging technologies and advocates for legislation that would impose design specifications for such technologies.
Key findings from the Report include:

Minors are experiencing online harassment, bullying and unwanted contact as a result of their use of AI and social media.
Social media and AI platforms are enabling misuse of user information and images.
Lack of default privacy settings in these technologies is resulting in user manipulation and fraud.
Social media and AI platforms are designed to optimize user attention in ways that negatively impact minor users’ wellbeing.
Opt-out options generally have not been effective in addressing these harms.

In the final section of the Report, the Minnesota AG sets forth a number of recommendations to address the identified harms, including:

Develop policies that regulate technology design functions, rather than content published on such technologies.
Prohibit the use of dark patterns that compel certain user behavior (e.g., infinite scroll, auto-play, constant notifications).
Provide users with tools to limit deceptive design features.
Mandate a privacy by default approach for such technologies.
Limit engagement-based optimization algorithms designed to increase time spent on platforms.
Advocate for limited technology use in educational settings.

Thomson Reuters Wins Copyright Case Against Former AI Competitor

Thomson Reuters scored a major victory in one of the first cases dealing with the legality of using copyrighted data to train artificial intelligence (AI) models. In 2020, Thomson Reuters sued the now-defunct AI start-up Ross Intelligence for alleged improper use of Thomson Reuters materials, including case headnotes in its Westlaw search engine, to train its new AI model.
A key issue before the court was whether Ross Intelligence’s use of the headnotes constituted fair use, which permits a person to use portions of another’s work in limited circumstances without infringing the copyright. Courts weigh four factors to determine whether the fair use defense applies: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion copied relative to the work as a whole; and (4) the effect of the use on the work’s market or value.
In this case, federal judge Stephanos Bibas determined that each side had two factors in its favor. But the fourth factor, which supported Thomson Reuters, weighed most heavily in his finding that the fair use defense was inapplicable because Ross Intelligence sought to develop a competing product. Lawsuits against other companies, like OpenAI and Microsoft, are currently pending in courts throughout the country, and decisions in those cases may involve similar questions about the fair use defense. However, Judge Bibas noted that his decision was based only on Ross’s non-generative AI model. The distinction between training data and resulting outputs in generative versus non-generative AI will likely be key to deciding future cases.

Privacy Tip #431 – DOGE Has Access to Our Personal Information: What You Need to Know

According to a highly critical article recently published by TechCrunch, the Department of Government Efficiency (DOGE), President Trump’s advisory board headed by Elon Musk, has “taken control of top federal departments and datasets” and has access to “sensitive data of millions of Americans and the nation’s closest allies.” The author calls this “the biggest breach of US government data.” He continues, “[w]hether a feat or a coup (which depends entirely on your point of view), a small group of mostly young, private-sector employees from Musk’s businesses and associates — many with no prior government experience — can now view and, in some cases, control the federal government’s most sensitive data on millions of Americans and our closest allies.”
According to USA Today, “The amount of sensitive data that Musk and his team could access is so vast it has historically been off limits to all but a handful of career civil servants.” The article points out that:
If you received a tax refund, Elon Musk could get your Social Security number and even your bank account and routing numbers. Paying off a student loan or a government-backed mortgage? Musk and his aides could dig through that data, too.
If you get a monthly Social Security check, receive Medicaid or other government benefits like SNAP (formerly known as food stamps), or work for the federal government, all of your personal information would be at the Musk team’s fingertips. The same holds true if you’ve been awarded a federal contract or grant.
Private medical history could potentially fall under the scrutiny of Musk and his assistants if your doctor or dentist provides that level of detail to the government when requesting Medicaid reimbursement for the cost of your care.
A federal judge in New York recently issued a preliminary injunction stopping Musk and his software engineers from accessing the data, despite Musk calling the judge “corrupt” on X. USA Today reports that the White House says Musk and his engineers only have “read-only” access to the data, but that is not very comforting from a security standpoint. The Treasury Department has reportedly admitted that one DOGE staffer, a 25-year-old software engineer, had mistakenly been granted “read/write” permission on February 5, 2025. That is just frightening to me as one who works hard to protect my personal information.
TechCrunch reported that data security is not a priority for DOGE.
“For example, a DOGE staffer reportedly used a personal Gmail account to access a government call, and a newly filed lawsuit by federal whistleblowers claims DOGE ordered an unauthorized email server to be connected to the government network, which violates federal privacy law. DOGE staffers are also said to be feeding sensitive data from at least one government department into AI software.”
We all know that Musk loves AI. We are also well aware of the risks of using AI with highly sensitive data, including unauthorized disclosure and the possibility that the data will surface in model outputs.
All of this has prompted questions about whether this advisory board has proper security clearance to access our data.
Should you be concerned? Absolutely. I understand the goal of cutting costs. But why do these employees have access to our most private information, including our full Social Security numbers and health information? Do they really need that specific data to determine fraud or overspending?
I argue no. A tenet of data security is proper access control: granting access only to the data needed for a legitimate business purpose. DOGE’s unfettered access to our highly sensitive information is not limited to data needed for a specific purpose. The security procedures for accessing the data are in question, and proper security protocols must be followed. According to Senator Ron Wyden of Oregon and Senator Jon Ossoff of Georgia, who is a member of the U.S. Senate Intelligence Committee, this is “a national security risk.” As a privacy and cybersecurity lawyer, I am very concerned. A hearing on an early lawsuit filed to prohibit this unrestricted access is scheduled for tomorrow. We will keep you apprised of developments as they progress.

Three States Ban DeepSeek Use on State Devices and Networks

New York, Texas, and Virginia are the first states to ban DeepSeek, the Chinese-owned generative artificial intelligence (AI) application, on state-owned devices and networks.
Texas was first to tackle the problem when it banned state employees from using both DeepSeek and RedNote on January 31, 2025. The Texas ban includes other apps affiliated with the People’s Republic of China, including “Webull, Tiger Brokers, Moomoo[,] and Lemon8.”
According to the Texas Governor’s press release:
“Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. To achieve that mission, I ordered Texas state agencies to ban Chinese government-based AI and social media apps from all state-issued devices. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.” 

New York soon followed on February 10, 2025, banning DeepSeek from being downloaded on any devices managed by the New York State Office of Information Technology. According to the New York Governor’s release, “DeepSeek is an AI start-up founded and owned by High-Flyer, a stock trading firm based in the People’s Republic of China. Serious concerns have been raised concerning DeepSeek AI’s connection to foreign government surveillance and censorship, including how DeepSeek can be used to harvest user data and steal technology secrets.” The release further states: “The decision by Governor Hochul to prevent downloads of DeepSeek is consistent with the State’s Acceptable Use of Artificial Intelligence Technologies policy that was established at her direction over a year ago to responsibly evaluate AI systems, better serve New Yorkers, and ensure agencies remain vigilant about protecting against unwanted outcomes.”
The Virginia Governor signed Executive Order 26 on February 11, 2025, “banning the use of China’s DeepSeek AI on state devices and state-run networks.” According to the Governor’s press release, “China’s DeepSeek AI poses a threat to the security and safety of the citizens of the Commonwealth of Virginia…We must continue to take steps to safeguard our operations and information from the Chinese Communist Party. This executive order is an important part of that undertaking.”
The ban “directs that no employee of any agency of the Commonwealth of Virginia shall download or use the DeepSeek AI application on any government-issued devices, including state-issued cell phones, laptops, or other devices capable of connecting to the internet. The Order further prohibits downloading or accessing the DeepSeek AI app on Commonwealth networks.”
These three states determined that the Chinese-owned applications DeepSeek and RedNote pose threats by granting a foreign adversary access to critical infrastructure data. The proactive bans will no doubt be followed by other states, much like the state-level TikTok bans that preceded the bipartisan federal ban enacted nationwide. President Trump has paused that ban, despite the well-documented national security threats posed by the social media platform. Hopefully, more states will follow suit in banning DeepSeek and RedNote. Consumers and employers can take matters into their own hands by not downloading either app and banning them from the workplace. Get ahead of the curve, learn from the TikTok experience, and avoid DeepSeek and RedNote now.

Reminder: Data Protection Impact Assessments May Be Required Under New State Privacy Laws

As we settle into 2025, and five additional state privacy laws have gone or are about to go into effect, we wanted to put on your radar the obligation to conduct data protection impact assessments (DPIAs). In general, a DPIA should contain:

a systematic description of potential processing operations and the purpose of the processing, including where applicable, the legitimate interest pursued by the controller;
an assessment of the necessity and proportionality of the processing operations in relation to the purpose;
an assessment of the risks to the rights and freedoms of consumers; and
potential measures to address the risks, including safeguards, security measures, and mechanisms to ensure the protection of personal data.

As a reminder, most of the new state privacy laws require businesses to complete DPIAs if they engage in any of the following:

Cookies and pixels (i.e., browser-based targeted advertising)
Custom and lookalike audience (i.e., CRM-based targeted advertising)
CAPI (i.e., server-based targeted advertising)
App advertising (i.e., SDK-based targeted advertising)
Find-a-store (i.e., precise geolocation collection)
Other sensitive information collection (e.g., race, ethnicity, health, etc.)
Selling of personal data
Adaptive pricing (i.e., profiling that may cause financial injury)
Collecting credit card numbers (New Jersey privacy statute only)

Chapter 93A Notice Requirements in Multi-State Class Actions

In Sowa v. Mercedes-Benz Grp. AG and Mercedes-Benz USA, LLC, purchasers of certain Mercedes-Benz vehicles from 17 states brought a putative class action against the automaker alleging that certain vehicles contained a dangerous defect. Separate actions were consolidated in the Northern District of Georgia. In addition to nationwide class claims against defendants under Georgia law, plaintiffs also sought to maintain sub-classes under various state consumer protection statutes, including Massachusetts’ Chapter 93A. 
A Massachusetts resident asserted Chapter 93A claims against Mercedes on behalf of one sub-class. Defendants moved to dismiss this claim for failure to provide a pre-suit demand letter 30 days before initiating the lawsuit. Plaintiffs did not dispute that they failed to provide such notice; they instead sent a demand letter on the Massachusetts resident’s behalf on April 28, 2023, after filing the lawsuit on February 10, 2023. The Northern District of Georgia determined that this ultimately did not bar plaintiffs’ Chapter 93A claims. Specifically, the court concluded that since the notice letter was delivered before defendants responded to the lawsuit and defendants were already on notice of nearly identical claims due to letters delivered to them pursuant to the Georgia Fair Business Practices Act, the sub-class had pled adequate notice.
Massachusetts state and federal courts likely would not have made the same decision because, except in limited circumstances, a pre-suit demand letter is a statutory prerequisite to filing a Chapter 93A claim. It is not a procedural nicety and failing to send a demand letter warrants dismissal of a Chapter 93A claim.1 That is because the pre-suit demand letter is part of a dispute resolution system included within Chapter 93A, Section 9 itself, which expressly requires a recipient of a demand letter to review the facts and the law to determine whether a settlement should be offered to the claimant. Failure to do so may expose the recipient of a demand letter to additional liability under Chapter 93A. The purpose, of course, is to encourage the parties to engage in meaningful settlement discussions to avoid the need to file a lawsuit seeking relief. Receiving a demand letter after the lawsuit has been filed defeats the purpose of this settlement consideration process.

1 See Rodi v. S. New Eng. Sch. of Law, 389 F.3d 5, 19 (1st Cir. 2004); City of Boston v. Aetna Life Ins., 399 Mass. 569, 574 (1987).