Minnesota AG Publishes Report on the Negative Effects of AI and Social Media Use on Minors

On February 4, 2025, the Minnesota Attorney General published the second volume of a report outlining the negative effects that AI and social media use is having on minors in Minnesota (the “Report”). The Report examines the harms experienced by minors caused by certain design features of these emerging technologies and advocates for legislation that would impose design specifications for such technologies.
Key findings from the Report include:

Minors are experiencing online harassment, bullying and unwanted contact as a result of their use of AI and social media.
Social media and AI platforms are enabling misuse of user information and images.
Lack of default privacy settings in these technologies is resulting in user manipulation and fraud.
Social media and AI platforms are designed to optimize user attention in ways that negatively impact minor users’ wellbeing.
Opt-out options generally have not been effective in addressing these harms.

In the final section of the Report, the Minnesota AG sets forth a number of recommendations to address the identified harms, including:

Develop policies that regulate technology design functions, rather than content published on such technologies.
Prohibit the use of dark patterns that compel certain user behavior (e.g., infinite scroll, auto-play, constant notifications).
Provide users with tools to limit deceptive design features.
Mandate a privacy by default approach for such technologies.
Limit engagement-based optimization algorithms designed to increase time spent on platforms.
Advocate for limited technology use in educational settings.

Trump Administration Says Title IX Does Not Apply to NIL Pay, Rescinds Recent Guidance

On February 12, 2025, the U.S. Department of Education under the Trump administration rescinded recent guidance that name, image, and likeness (NIL) payments to college athletes implicate the gender equal opportunity requirements of Title IX of the Education Amendments of 1972.

Quick Hits

The Department of Education has rescinded recent guidance that had warned NCAA schools that NIL payments could trigger the equal opportunity obligations of Title IX. 
This announcement indicated that the department interprets Title IX as not applying to how revenue-generating athletics programs allocate compensation among their athletes.

On February 12, 2025, the U.S. Department of Education’s Office for Civil Rights (OCR) announced that it had rescinded the nine-page Title IX guidance on NIL payments previously issued on January 16, 2025, in the final days of the Biden administration.
“The NIL guidance, rammed through by the Biden Administration in its final days, is overly burdensome, profoundly unfair, and goes well beyond what agency guidance is intended to achieve,” Acting Assistant Secretary for Civil Rights Craig Trainor said in a statement. “Without a credible legal justification, the Biden Administration claimed that NIL agreements between schools and student athletes are akin to financial aid and must, therefore, be proportionately distributed between male and female athletes under Title IX.”
“Enacted over 50 years ago, Title IX says nothing about how revenue-generating athletics programs should allocate compensation among student athletes,” Assistant Secretary Trainor’s statement continued. “The claim that Title IX forces schools and colleges to distribute student-athlete revenues proportionately based on gender equity considerations is sweeping and would require clear legal authority to support it. That does not exist. Accordingly, the Biden NIL guidance is rescinded.”
The move comes as the National Collegiate Athletic Association (NCAA) and major college sports conferences have agreed to pay nearly $2.8 billion in back pay to former athletes as part of a proposed settlement to end NIL litigation and to establish a revenue-sharing framework providing more than $20 million annually to athletes.
The rescinded Biden-era guidance had warned NCAA schools that NIL compensation provided by a school, even if provided by private third parties, would be considered by the department as “athletic financial assistance,” which must be distributed in a nondiscriminatory manner under Title IX. The guidance had assumed that “the receipt of financial assistance does not transform students, including student-athletes, into employees,” but it opened the possibility to reevaluate that position.
The Education Department announcement also follows the NCAA’s announcement that it is banning transgender athletes from competing in women’s sports to align with President Trump’s recent executive order (EO), EO 14201, titled “Keeping Men Out of Women’s Sports.” That order directed the Secretary of Education to “take all appropriate action to affirmatively protect all-female athletic opportunities and all-female locker rooms and thereby provide the equal opportunity guaranteed by Title IX.”
Next Steps
The Department of Education’s announcement will have significant implications for NCAA schools, which have been adjusting to the quick evolution of college athletics in recent years. Changes have included the removal of restrictions on athletes earning NIL pay, loosening restrictions on athlete transfers, and the potential for revenue-sharing between schools and their athletes. Such changes have raised concerns under Title IX, particularly with potential disparities in NIL pay between athletes in men’s and women’s sports.
While the prior guidance had interpreted NIL pay as subject to Title IX, the Department of Education under the Trump administration appears to interpret NIL payments, and even potentially revenue-sharing, as outside of the typical athletic financial assistance governed by Title IX. This could open the door for more payments to athletes in the sports that tend to generate the most revenue, typically college football and men’s basketball.
The announcement further signals potential changes in the Trump administration’s enforcement of Title IX.
However, the rescission of the prior Title IX guidance may not be the end of the road. While some are praising the decision, others continue to argue that inequitable distribution of the settlement funds between men’s and women’s sports will violate Title IX. This could result in legal challenges as schools evaluate how best to distribute the payments. 

Operating Social Casino-Style Applications Continues to be Costly in Washington State

In the latest of a string of gambling cases involving social casino-style apps out of Washington state, a federal jury has awarded a class of players nearly $25 million for injuries arising from the use of two of High 5 Games’ mobile applications: High 5 Casino and High 5 Vegas.
The award comes after a U.S. District Court judge ruled last June that the two apps amount to illegal gambling under Washington law. Continuing a line of cases that started with the Ninth Circuit’s landmark decision in Kater v. Churchill Downs Inc., and the resulting $155 million settlement, the ruling stems from Washington’s specific statutory definition of “gambling,” which broadly defines “things of value” to include an extension of play. Following Kater, courts applying Washington law have relied on this broad definition to reject defendants’ arguments that these activities fall outside the state’s gambling statutes because the virtual currencies used in social casino-style apps, which are usable only within the particular platform or gameplay and cannot be exchanged for real currency, do not constitute “things of value.”
Putting it into Practice: Businesses operating social casino-style applications or platforms should strongly consider excluding players from Washington state, as Kater and its progeny suggest that the application of the state’s gambling statutes to such apps is well-settled. Additionally, businesses should be aware that at least a handful of other states have similar “extension of play” language in their gambling statutes, and while the case law in those states may be less developed, there is a risk that their courts will take a position similar to Washington’s.

Race/Gender/Ethnicity Based Share Restrictions

Yesterday’s post took note of a proposed initial public offering by Bally’s Chicago, Inc. that would impose a stockholder qualification based on race, gender and ethnic status. This qualification requirement is intended to satisfy the requirements of a Host Community Agreement entered into with the City of Chicago.
I noted that Section 204(a)(3) of the California Corporations Code expressly allows a California corporation to include in its articles provisions that impose “special qualifications of persons who may be shareholders”. Section 102 of the Delaware General Corporation Law includes no similar authorization.
Stanley Keller kindly pointed me in the direction of Delaware Code Section 202 which authorizes restrictions on the transfer or registration of a Delaware corporation’s securities to be imposed by the certificate of incorporation, by the bylaws, or by agreement. Section 202(c)(5) permits a restriction on transfer or registration if it is not “manifestly unreasonable”. Section 202(d)(2) further provides that such a restriction shall be conclusively presumed to be for a reasonable purpose if it is for “complying with any statutory or regulatory requirements under applicable local, state, federal or foreign law”.
In the case of Bally’s, it might be argued that the restriction is for the purpose of complying with an agreement (the Host Community Agreement) rather than a statutory or regulatory requirement. However, the Host Community Agreement requirement was intended to meet the requirements of the Illinois Gambling Act, 230 ILCS 10/1, et seq. which requires that any applicant for a casino owners’ license demonstrate it “used its best efforts to reach a goal of 25% ownership representation by minority persons and 5% ownership representation by women.” 230 ILCS 10/6(a-5)(9).
If it is ultimately determined that the Host Community Agreement and/or the Illinois Gambling Act are unconstitutional, an interesting question will arise whether the conclusive presumption in Section 202(d)(2) should be applied to an unconstitutional requirement.

May Corporations Allocate Shares Based On Race, Gender, Or Ethnicity?

Last December, Bally’s Chicago, Inc., a Delaware corporation and indirect subsidiary of Bally’s Corporation filed a registration statement with the Securities and Exchange Commission to raise funds in connection with the development and operation of a casino in the City of Chicago (Amendment No. 4 filed on January 29, 2025 is available here). Bally’s Chicago had previously entered into a Host Community Agreement with the City that, among other things, imposes minority and women ownership requirements. To meet these requirements, the registration statement contemplates a rather unusual plan of distribution in which Bally’s Chicago will determine whether investors have attested to qualification criteria (see the “Plan of Distribution” section of the prospectus).
Given that these qualification criteria are based on race, gender and ethnicity, it may be no surprise that they are being challenged as violating the Fourteenth Amendment to the U.S. Constitution and federal civil rights statutes. Last week, U.S. District Court Judge Franklin U. Valderrama declined to issue a temporary restraining order, ruling that the plaintiff had shown neither a likelihood of success nor irreparable injury. Glennon v. Johnson, U.S. Dist. Ct. Case No. 1:25-cv-01057 (N.D. Ill. Jan. 6, 2025).
Perhaps an initial question is whether stockholder qualifications of any sort are permissible under applicable state corporate laws. The California General Corporation Law expressly permits the articles of incorporation of a California corporation to include “special qualifications of persons who may be shareholders”. Cal. Corp. Code § 204(a)(3). However, “[i]t would be a rare case in which any such special qualifications were desired, but it may happen occasionally in the case of a close corporation where it is desired to restrict the ownership of the corporation only to persons with certain specified characteristics or possibly in a special type of publicly held or semipublicly held corporation”. Harold Marsh, Jr., R. Roy Finkle & Keith Paul Bishop, Marsh’s California Corporation Law § 5.14 (Fifth Edition, 2025-1 Supp. 2020-2021). The only similar authorization that I could find in the Delaware General Corporation Law is Section 342(b) which pertains to close corporations (“The certificate of incorporation of a close corporation may set forth the qualifications of stockholders, either by specifying classes of persons who shall be entitled to be holders of record of stock of any class, or by specifying classes of persons who shall not be entitled to be holders of stock of any class or both.”) I am interested in hearing from any reader who is aware of similar authority with respect to corporations that are not close corporations.
Another question might be whether such a limitation is permissible under state civil rights laws such as California’s Unruh Civil Rights Act, Cal. Civ. Code § 51(b) (“All persons within the jurisdiction of this state are free and equal, and no matter what their sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, sexual orientation, citizenship, primary language, or immigration status are entitled to the full and equal accommodations, advantages, facilities, privileges, or services in all business establishments of every kind whatsoever.”) (emphasis added).
Finally, there is the question of whether the Securities and Exchange Commission will declare Bally’s Chicago’s registration statement effective. Late last week, one public interest firm reportedly urged the SEC to withhold approval of the offering.

Hangzhou Internet Court: Generative AI Output Infringes Copyright

On February 10, 2025, the Hangzhou Internet Court announced its ruling that an unnamed defendant’s generative artificial intelligence (AI) platform’s generation of images constituted contributory infringement of the information network dissemination right, and ordered the defendant to immediately stop the infringement and pay 30,000 RMB in compensation for economic losses and reasonable expenses.

LoRA model training with Ultraman

Infringing image generated with the model.

The defendant operates an AI platform that provides Low-Rank Adaptation (LoRA) models, and supports many functions such as image generation and model online training. On the homepage of the platform and under “Recommendations” and “IP Works”, there are AI-generated pictures and LoRA models related to Ultraman, which can be applied, downloaded, published or shared. The Ultraman LoRA model was generated by users uploading Ultraman pictures, selecting the platform basic model, and adjusting parameters for training. Afterwards, other users could then input prompts, select the base model, and overlay the Ultraman LoRA model to generate images that closely resembled the Ultraman character.
The unnamed plaintiff (presumably Tsuburaya Productions) alleged that the defendant infringed its information network dissemination right by placing infringing images and models on the information network after training with input images, and that the defendant’s use of generative AI technology to train the Ultraman LoRA model and generate infringing images constituted unfair competition. The plaintiff demanded that the defendant cease the infringement and pay economic damages of 300,000 RMB.
The defendant countered that its AI platform calls third-party open-source model code and integrates and deploys those models according to platform needs, offering a generative AI service to users. The platform does not provide training data; it only allows users to upload images to train models, which the defendant argued falls within the “safe harbor” rule for platforms and does not constitute infringement.
The Court reasoned:
On the one hand, a generative AI platform that directly performs acts covered by copyright may be directly liable. In this case, however, there was no evidence that the defendant and its users jointly provided the infringing works, and the defendant did not itself perform acts covered by the information network dissemination right.
On the other hand, where users input infringing images and other training materials and decide whether to generate and publish outputs, the defendant does not necessarily have an obligation to review in advance the training images users upload or the dissemination of the generated products. It constitutes aiding and abetting infringement only if it is at fault with respect to the specific infringing conduct.
Specifically, the following aspects are considered comprehensively:
First, the nature and profit model of generative AI services. The open-source ecosystem is an important part of the AI industry, and open-source models provide general-purpose base algorithms. As a service provider facing end users at the application layer, the defendant made targeted modifications and improvements to the open-source model for specific application scenarios and provided solutions and outputs that directly meet users’ needs. Compared with the provider of the open-source model, it participates directly in commercial practice and benefits from the content generated. Given its service type, business logic, and the cost of prevention, it should maintain a sufficient understanding of the content in its specific application scenarios and bear a corresponding duty of care. In addition, the defendant earns income through users’ membership fees and offers incentives encouraging users to publish trained models; it can therefore be considered to obtain economic benefits directly from the creative services provided on the platform.
Secondly, the popularity of the copyrighted work and the obviousness of the alleged infringement. The Ultraman works are well-known. Multiple infringing pictures appeared when browsing the platform homepage and specific categories, and the LoRA model covers and sample pictures directly displayed infringing images, making the infringement relatively obvious.
Thirdly, the infringement consequences that generative AI may cause. Generally, a platform cannot readily identify or intervene in the results of users’ generative AI activity, and the generated images are random. Here, however, because the Ultraman LoRA model stably reproduces the character’s distinctive features, the platform had an enhanced ability to identify and intervene in the results of user behavior. Moreover, because the technology makes reuse convenient, the images and LoRA models generated and published by users could be used repeatedly by other users. The risk that the infringing consequences would spread was already quite obvious, and the defendant should have foreseen the possibility of infringement.
Finally, whether reasonable measures were taken to prevent infringement. The defendant stated in its platform user service agreement that it would not review content uploaded and published by users. After receiving notice of the lawsuit, it took measures such as blocking the relevant content and conducting intellectual property review in the background, demonstrating that it had the ability, consistent with the technical level at the time, to take necessary measures to prevent infringement but had failed to do so.
In summary, the defendant should have known that network users were using its services to infringe the information network dissemination right but did not take necessary preventive measures. It failed to fulfill its duty of reasonable care and was subjectively at fault, constituting aiding and abetting infringement.
Because copyright infringement was established, the Court did not need to consider the claims under the Anti-Unfair Competition Law.
The full text of the announcement is available here (Chinese only).

Monday Morning (Advertising) Quarterback – Unprecedented Hims & Hers Super Bowl Ad Has Legislators Concerned

The Super Bowl is not just the biggest game of the year for football fans, but it is also one of advertising’s biggest nights. 
One commercial started gaining buzz even before kick-off. On Friday, Senators Richard Durbin (D-IL) and Roger Marshall (R-KS) asked the FDA to throw a penalty flag to stop Hims & Hers from marching down the field with their “life-changing” weight-loss solutions ad. According to the letter sent to FDA acting Commissioner Sara Brenner, the senators expressed concern that the Hims & Hers ad “risks misleading patients by omitting any safety or side effect information,” about the compounded weight loss drugs that it promotes.
Like other telehealth companies, Hims & Hers utilizes its internet platform to connect consumers to telehealth providers who issue prescriptions in appropriate cases for GLP-1 drugs, which are filled by compounding pharmacies at markedly lower prices than FDA-approved brand name products. GLP-1s are a class of medications that mimic gut hormones that regulate blood sugar and suppress appetite and are known for their weight loss benefits. However, due to insurance restrictions and the high retail price of these medications, they are not easily accessible for individuals seeking to utilize these products for weight loss.
Compounders have taken advantage of the popularity of these medications, which have been on FDA’s drug shortage list for several years. When a drug is on the shortage list, pharmacies are permitted to make compounded copies of the drug if they meet specific regulatory requirements outlined in sections 503A and 503B of the Federal Food, Drug, and Cosmetic Act (FD&C Act). By complying with these conditions, compounded drugs are exempted from several requirements, such as FDA approval and certain labeling conditions. This enables online pharmacies and telehealth companies to produce and sell cheaper versions of the same active ingredient without having to go through the costly FDA approval process.
As compounded drugs are not “approved” by FDA, these drugs technically have no approved label, indications, or uses. Therefore, pharmacies are exempted from section 502(f)(1) of the FD&C Act requiring adequate directions for use. Similarly, compounded drugs are exempt from the prescription drug advertisement rules outlined in 21 C.F.R. 202.1, which require, among other things, that all advertisements contain a fair balance between information relating to side effects and contraindications and information relating to the effectiveness of the drug. See 21 C.F.R. 202.1(2)(iii).
However, this does not give pharmacies and telehealth companies a “free pass” to run up the score in any way they want, as they are still restricted from making claims related to the therapeutic safety and effectiveness of the drug to remain within this exemption. Online pharmacies are also governed by the same general misbranding provisions under the FD&C Act and Federal Trade Commission rules that prohibit false and deceptive advertising. Therefore, considering the popularity of these platforms and FDA’s outspoken concerns regarding compounded GLP-1 drugs,1 companies should still consult with regulatory experts before disseminating expensive marketing campaigns in this space.
With the recent change in administration and Robert F. Kennedy, Jr.’s flip-flopping positions on the use of GLP-1s for weight loss, it is unclear whether FDA will seek to rein in telehealth companies promoting GLP-1 drugs. However, just like the “Brotherly Shove” that guided the Eagles to Super Bowl victory, you can either love it or hate it, but until the regulators modify the rules, the Polsinelli lawyers, upon further review, are calling no flag on the play.

[1] See FDA, FDA’s Concerns with Unapproved GLP-1 Drugs Used for Weight Loss (Dec. 18, 2024).

How NCAA Changes to Transgender Policy Following President Trump’s Executive Order Impact Schools

Takeaways

President Trump signed executive order “Keeping Men Out of Women’s Sports,” barring transgender women from competing in women’s sports and citing fairness, safety, and privacy concerns. Schools that do not comply with the new federal policy risk losing federal funding under Title IX enforcement.
In response, the NCAA immediately revised its transgender participation policy, restricting competition in women’s sports to athletes assigned female at birth.
Legal challenges are expected, as some states and advocacy groups argue the policy is discriminatory and violates previous Title IX interpretations.

Background
On Feb. 5, 2025, President Donald Trump signed executive order “Keeping Men Out of Women’s Sports,” which prohibits transgender women from participating in female athletic categories at federally funded educational institutions. The order also directed the State Department to demand changes within the International Olympic Committee. The Committee has left eligibility rules up to the global federations that govern different sports.
The Trump Administration has made a push to redefine sex-based legal protections under Title IX of the Education Amendments of 1972, emphasizing biological sex as the deciding factor for athletic eligibility. Previously, on Jan. 20, 2025, the Administration issued an executive order declaring the federal government would recognize only two sexes, male and female, for all legal and regulatory purposes.
The NCAA has over 530,000 student-athletes, fewer than 10 of whom are transgender, according to a statement NCAA President Charlie Baker provided to a Senate panel in December. In January, Baker called for greater legal clarity on the issue from regulators.
Finding that clarity in the new executive order, the NCAA Board of Governors voted to amend its transgender participation policy the day after the order was issued.
The new policy states that eligibility for NCAA women’s sports is now strictly limited to athletes assigned female at birth. Transgender men (those assigned female at birth but who have begun a medical transition) may still participate in men’s sports without restriction. However, an athlete taking testosterone for gender transition may only practice with a women’s team and is prohibited from competing in official NCAA-sanctioned events. If a team allows an ineligible athlete to compete, the entire team will be disqualified from NCAA championships.
Legal and Institutional Challenges
The executive order immediately ignited controversy as several states and legal groups vowed to challenge the order.
Pushback is expected, particularly in states like California, Connecticut, Massachusetts, and New York, where laws expressly protect transgender rights. Schools in these states now face a dilemma: whether to comply with federal regulations or uphold state laws that recognize gender identity protections for student-athletes. Schools in these states may risk severe financial consequences if they refuse to comply with the new federal mandate, potentially losing millions in federal education funding.
More than two dozen states already bar transgender athletes from participating in school sports, whether in K-12 schools or at the collegiate level. In January, the House passed a bill barring transgender women and girls from sports programs for female students nationwide (the bill is not likely to pass in the Senate).
What Comes Next?
Some key questions remain:

Will federal courts uphold or strike down the new Title IX interpretation?
How will schools in certain states navigate the conflict between the executive order and new NCAA policy and state laws?

NCAA Bars Transgender Athletes from Women’s Sports Aligning With President Trump’s Executive Order

On February 6, 2025, the National Collegiate Athletic Association (NCAA) announced its new policy, prohibiting athletes assigned male at birth from participating in women’s sports competitions, aligning the NCAA eligibility rules with President Donald Trump’s recent executive order (EO) barring transgender athletes from women’s sports. The new rules reverse a prior policy of allowing athletes to participate in accordance with their gender identity.

Quick Hits

The NCAA Board of Governors voted to adopt an updated transgender athlete participation policy that prohibits athletes assigned male at birth from competing in NCAA women’s competitions.
The new policy aligns with President Donald Trump’s executive order barring transgender athletes from competing in women’s sports.
The executive order takes the position that allowing transgender participation in women’s sports undermines the fairness and opportunities for women and girls and threatens federal funding for educational programs that do not comply.
The NCAA aims to establish clear national eligibility standards in response to the differing state laws and court decisions surrounding this issue.

Under the updated participation policy for transgender athletes, “[r]egardless of sex assigned at birth or gender identity, a student-athlete may participate (practice and competition) in NCAA men’s sports, assuming they meet all other NCAA eligibility requirements.”
For women’s sports, “[a] student-athlete assigned male at birth may not compete for an NCAA women’s team.” However, such student-athletes “may continue practicing with a women’s team and receive all other benefits applicable to student-athletes.” “A student-athlete assigned female at birth who has begun hormone therapy (e.g., testosterone) may not compete on a women’s team.” However, they, too, may continue practicing with a women’s team and receive all other applicable benefits. 
“We strongly believe that clear, consistent, and uniform eligibility standards would best serve today’s student-athletes instead of a patchwork of conflicting state laws and court decisions. To that end, President Trump’s order provides a clear, national standard,” NCAA President Charlie Baker said in a statement.
The change comes a day after President Trump signed an EO titled “Keeping Men Out of Women’s Sports.” The EO states that allowing transgender athletes to compete in women’s sports “is demeaning, unfair, and dangerous to women and girls, and denies women and girls the equal opportunity to participate and excel in competitive sports.”
While not directly addressing the NCAA, the EO declared that it is “the policy of the United States to rescind all funds from educational programs that deprive women and girls of fair athletic opportunities, which results in the endangerment, humiliation, and silencing of women and girls and deprives them of privacy.”
The EO had significant implications for the NCAA schools, which rely on federal funding. After the EO, NCAA President Baker said that the Board of Governors would “take necessary steps to align NCAA policy” with the EO.
Under the NCAA’s prior policy, adopted by the Board of Governors in January 2022, transgender women athletes were allowed to compete in NCAA women’s sports after submitting documentation of “gender affirming treatment” by a medical professional and evidence that their testosterone levels are “within the allowable levels for the sport” in which they plan to compete.
In addition, NCAA schools are faced with shifting interpretations of Title IX of the Education Amendments of 1972, which requires that schools provide equal opportunity to students, regardless of sex, including in terms of sports participation and “athletic financial assistance.”
On January 9, 2025, a federal court in Kentucky vacated a Biden-era U.S. Department of Education rule on Title IX adopted in 2024, which expanded the definition of sex-based harassment to include sexual orientation and gender identity. The Department of Education has since confirmed that it will enforce Title IX under a 2020 rule issued during President Trump’s first term.
The recent actions align with President Trump’s inauguration day EO 14168, titled “Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government,” which directed federal agencies to “enforce laws governing sex-based rights, protections, opportunities, and accommodations to protect men and women as biologically distinct sexes.”
Next Steps
The new NCAA participation policy will have significant implications for transgender athletes currently competing or seeking to compete in NCAA sports, likely meaning that they will no longer be able to compete in NCAA women’s competitions. Baker’s statements further indicate the NCAA’s desire for national standards in an era of major change to college sports and eligibility rules, driven by antitrust litigation and an inconsistent patchwork of state laws and regulations. Those changes include loosened restrictions on athlete transfers, the allowance of compensation for name, image, and likeness, and the potential adoption of revenue sharing.

Calling the Right DEI Play for the NFL

Today’s Wall Street Journal story about Roger Goodell’s decision to maintain the NFL’s DEI programs reported that Mr. Goodell stood by the football league’s diversity initiatives, saying they would not change in response to the political climate. The article noted that Mr. Goodell characterized the NFL’s programs as being both positive for the league and a “reflection of our fan base and our communities and our players.” While the NFL’s decision to push back against the current anti-DEI trend is notable, it is clear that the NFL made this decision after conducting a thoughtful and introspective process, one grounded in an understanding of a key DEI goal: leveling both the literal and figurative playing field.
Mr. Goodell noted that the purpose of the NFL’s DEI programs in reference to the talent pipeline was “about opening that funnel and bringing the best talent into the NFL.”  DEI detractors often base their attacks on the premise that DEI causes bias, rather than diminishing it. But in fact, and as I noted last year in Merit Unmasked, DEI’s true goal is to unmask overlooked talent. I posited then (and believe now) that DEI should be framed as talent-searching, and never as talent-diminishing. If we reframe the approach to DEI in this fashion – like the NFL and many other businesses that have engaged in meaningful introspection about their DEI programs – then it becomes much easier to understand, accept, and advocate for the reasons supporting DEI, and how to tailor it for each business.
The NFL has long offered Americans the joy (and misery) of competition, the celebration of (and frustration with) sport, and the community of (and discord among) fans. But it is interesting times indeed to see the NFL as a model for American business on how to best fully serve corporate communities, employees, and stakeholders.   

The BR Privacy & Security Download: February 2025

STATE & LOCAL LAWS & REGULATIONS
New York Legislature Passes Comprehensive Health Privacy Law: The New York state legislature passed SB-929 (the “Bill”), providing for the protection of health information. The Bill broadly defines “regulated health information” as “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual.” Regulated health information includes location and payment information, as well as inferences derived from an individual’s physical or mental health. The term “individual” is not defined. Accordingly, the Bill contains no terms restricting its application to consumers acting in an individual or household context. The Bill would apply to regulated entities, which are entities that (1) are located in New York and control the processing of regulated health information, or (2) control the processing of regulated health information of New York residents or individuals physically present in New York. Among other things, the Bill would restrict regulated entities to processing regulated health information only with a valid authorization, or when strictly necessary for certain specified activities. The Bill also provides for individual rights and requires the implementation of reasonable administrative, physical, and technical safeguards to protect regulated health information. The Bill would take effect one year after being signed into law and currently awaits New York Governor Kathy Hochul’s signature.
New York Data Breach Notification Law Updated: Two bills, SO2659 and SO2376, that amended the state’s data breach notification law were signed into law by New York Governor Kathy Hochul. The bills change the timing requirement in which notice must be provided to New York residents, add data elements to the definition of “private information,” and add the New York Department of Financial Services to the list of regulators that must be notified. Previously, New York’s data breach notification statute did not have a hard deadline within which notice must be provided. The amendments now require affected individuals to be notified no later than 30 days after discovery of the breach, except for delays arising from the legitimate needs of law enforcement. Additionally, as of March 25, 2025, “private information” subject to the law’s notification requirements will include medical information and health insurance information.
California AG Issues Legal Advisory on Application of California Law to AI: California’s Attorney General has issued legal advisories to clarify that existing state laws apply to AI development and use, emphasizing that California is not an AI “wild west.” These advisories cover consumer protection, civil rights, competition, data privacy, and election misinformation. AI systems, while beneficial, present risks such as bias, discrimination, and the spread of disinformation. Therefore, entities that develop or use AI must comply with all state, federal, and local laws. The advisories highlight key laws, including the Unfair Competition Law and the California Consumer Privacy Act. The advisories also highlight new laws effective on January 1, 2025, which include disclosure requirements for businesses, restrictions on the unauthorized use of likeness, and regulations for AI use in elections and healthcare. These advisories stress the importance of transparency and compliance to prevent harm from AI.
New Jersey AG Publishes Guidance on Algorithmic Discrimination: On January 9, 2025, New Jersey’s Attorney General and Division on Civil Rights announced a new civil rights and technology initiative to address the risks of discrimination and bias-based harassment in AI and other advanced technologies. The initiative includes the publication of a Guidance Document, which addresses the applicability of New Jersey’s Law Against Discrimination (“LAD”) to automated decision-making tools and technologies. It focuses on the threats posed by automated decision-making technologies in the housing, employment, healthcare, and financial services contexts, emphasizing that the LAD applies to discrimination regardless of the technology at issue. Also included in the announcement is the launch of a new Civil Rights Innovation lab, which “will aim to leverage technology responsibly to advance [the Division’s] mission to prevent, address, and remedy discrimination.” The Lab will partner with experts and relevant industry stakeholders to identify and develop technology to enhance the Division’s enforcement, outreach, and public education work, and will develop protocols to facilitate the responsible deployment of AI and related decision-making technology. This initiative, along with the recently effective New Jersey Data Protection Act, shows a significantly increased focus from the New Jersey Attorney General on issues relating to data privacy and automated decision-making technologies.
New Jersey Publishes Comprehensive Privacy Law FAQs: The New Jersey Division of Consumer Affairs Cyber Fraud Unit (“Division”) published FAQs that provide a general summary of the New Jersey Data Privacy Law (“NJDPL”), including its scope, key definitions, consumer rights, and enforcement. The NJDPL took effect on January 15, 2025, and the FAQs state that controllers subject to the NJDPL are expected to comply by such date. However, the FAQs also emphasize that until July 1, 2026, the Division will provide notice and a 30-day cure period for potential violations. The FAQs also suggest that the Division may adopt a stricter approach to minors’ privacy. While the text of the NJDPL requires consent for processing the personal data of consumers between the ages of 13 and 16 for purposes of targeted advertising, sale, and profiling, the FAQs state that when a controller knows or willfully disregards that a consumer is between the ages of 13 and 16, consent is required to process their personal data more generally.
CPPA Extends Formal Comment Period for Automated Decision-Making Technology Regulations: The California Privacy Protection Agency (“CPPA”) extended the public comment period for its proposed regulations on cybersecurity audits, risk assessments, automated decision-making technology (“ADMT”), and insurance companies under the California Privacy Rights Act. The public comment period opened on November 22, 2024, and was set to close on January 14, 2025. However, due to the wildfires in Southern California, the public comment period was extended to February 19, 2025. The CPPA will also be holding a public hearing on that date for interested parties to present oral and written statements or arguments regarding the proposed regulations.
Oregon DOJ Publishes Toolkit for Consumer Privacy Rights: The Oregon Department of Justice announced the release of a new toolkit designed to help Oregonians protect their online information. The toolkit is designed to help families understand their rights under the Oregon Consumer Privacy Act. The Oregon DOJ reminded consumers how to submit complaints when businesses are not responsive to privacy rights requests. The Oregon DOJ also stated it has received 118 complaints since the Oregon Consumer Privacy Act took effect last July and had sent notices of violation to businesses that have been identified as non-compliant.
California, Colorado, and Connecticut AGs Remind Consumers of Opt-Out Rights: California Attorney General Rob Bonta published a press release reminding residents of their right to opt out of the sale and sharing of their personal information. The California Attorney General also cited the robust privacy protections of Colorado and Connecticut laws that provide for similar opt-out protections. The press release urged consumers to familiarize themselves with the Global Privacy Control (“GPC”), a browser setting or extension that automatically signals to businesses that they should not sell or share a consumer’s personal information, including for targeted advertising. The Attorney General also provided instructions for the use of the GPC and for exercising opt-outs by visiting the websites of individual businesses.

FEDERAL LAWS & REGULATIONS
FTC Finalizes Updates to COPPA Rule: The FTC announced the finalization of updates to the Children’s Online Privacy Protection Rule (the “Rule”). The updated Rule makes a number of changes, including requiring opt-in consent to engage in targeted advertising to children and to disclose children’s personal information to third parties. The Rule also adds biometric identifiers to the definition of personal information and prohibits operators from retaining children’s personal information for longer than necessary for the specific documented business purposes for which it was collected. Operators must maintain a written data retention policy that documents the business purpose for data retention and the retention period for data. The Commission voted 5-0 to adopt the Rule, but new FTC Chair Andrew Ferguson filed a separate statement describing “serious problems” with the rule. Ferguson specifically stated that it was unclear whether an entirely new consent would be required if an operator added a new third party with whom personal information would be shared, potentially creating a significant burden for businesses. The Rule will be effective 60 days after its publication in the Federal Register.
Trump Rescinds Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: President Donald Trump took action to rescind former President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“AI EO”). According to a Biden administration statement released in October, many action items from the AI EO have already been completed. Recommendations, reports, and opportunities for research that were completed prior to revocation of the AI EO may continue in place unless replaced by additional federal agency action. It remains unclear whether the Trump Administration will issue its own executive orders relating to AI.
U.S. Justice Department Issues Final Rule on Transfer of Sensitive Personal Data to Foreign Adversaries: The U.S. Justice Department issued final regulations to implement a presidential Executive Order regarding access to bulk sensitive personal data of U.S. citizens by foreign adversaries. The regulations restrict transfers involving designated countries of concern – China, Cuba, Iran, North Korea, Russia, and Venezuela. At a high level, transfers are restricted if they could result in bulk sensitive personal data access by a country of concern or a “covered person,” which is an entity that is majority-owned by a country of concern, organized under the laws of a country of concern, or has its principal place of business in a country of concern, or an individual whose primary residence is in a country of concern. Data covered by the regulation includes precise geolocation data, biometric identifiers, genetic data, health data, financial data, government-issued identification numbers, and certain other identifiers, including device or hardware-based identifiers, advertising identifiers, and demographic or contact data.
First Complaint Filed Under Protecting Americans’ Data from Foreign Adversaries Act: The Electronic Privacy Information Center (“EPIC”) and the Irish Council for Civil Liberties (“ICCL”) Enforce Unit filed the first-ever complaint under the Protecting Americans’ Data from Foreign Adversaries Act (“PADFAA”). PADFAA makes it unlawful for a data broker to sell, license, rent, trade, transfer, release, disclose, or otherwise make available specified personally identifiable sensitive data of individuals residing in the United States to North Korea, China, Russia, Iran, or an entity controlled by one of those countries. The complaint alleges that Google’s real-time bidding system data includes personally identifiable sensitive data, that Google executives were aware that data from its real-time bidding system may have been resold, and that Google’s public list of certified companies that receive real-time bidding bid request data includes multiple companies based in foreign adversary countries.
FDA Issues Draft Guidance for AI-Enabled Device Software Functions: The U.S. Food and Drug Administration (“FDA”) published its January 2025 Draft Guidance for Industry and FDA Staff regarding AI-enabled device software functionality. The Draft Guidance provides recommendations regarding the contents of marketing submissions for AI-enabled medical devices, including documentation and information that will support the FDA’s evaluation of their safety and effectiveness. The Draft Guidance is designed to reflect a “comprehensive approach” to the management of devices through their total product life cycle and includes recommendations for the design, development, and implementation of AI-enabled devices. The FDA is accepting comments on the Draft Guidance, which may be submitted online until April 7, 2025.
Industry Coalition Pushes for Unified National Data Privacy Law: A coalition of over thirty industry groups, including the U.S. Chamber of Commerce, sent a letter to Congress urging it to enact a comprehensive national data privacy law. The letter highlights the urgent need for a cohesive federal standard to replace the fragmented state laws that complicate compliance and stifle competition. The letter advocates for legislation based on principles to empower startups and small businesses by reducing costs and improving consumer access to services. The letter supports granting consumers the right to understand, correct, and delete their data, and to opt out of targeted advertising, while emphasizing transparency by requiring companies to disclose data practices and secure consent for processing sensitive information. It also focuses on the principles of limiting data collection to essential purposes and implementing robust security measures. While the principles aim to override strong state laws like that in California, the proposal notably excludes data broker regulation, a previous point of contention. The coalition cautions against legislation that could lead to frivolous litigation, advocating for balanced enforcement and collaborative compliance. By adhering to these principles, the industry groups seek to ensure legal certainty and promote responsible data use, benefiting both businesses and consumers.
Cyber Trust Mark Unveiled: The White House launched a labeling scheme for internet-of-things devices designed to inform consumers when devices meet certain government-determined cybersecurity standards. The program has been in development for several months and involves collaboration between the White House, the National Institute of Standards and Technology, and the Federal Communications Commission. UL Solutions, a global safety and testing company headquartered in Illinois, has been selected as the lead administrator of the program along with 10 other firms as deputy administrators. With the main goal of helping consumers make more cyber-secure choices when purchasing products, the White House hopes to have products with the new cyber trust mark hit shelves before the end of 2025.

U.S. LITIGATION
Texas Attorney General Sues Insurance Company for Unlawful Collection and Sharing of Driving Data: Texas Attorney General Ken Paxton filed a lawsuit against Allstate and its data analytics subsidiary, Arity. The lawsuit alleges that Arity paid app developers to incorporate its software development kit that tracked location data from over 45 million consumers in the U.S. According to the lawsuit, Arity then shared that data with Allstate and other insurers, who would use the data to justify increasing car insurance premiums. The sale of precise geolocation data of Texans violated the Texas Data Privacy and Security Act (“TDPSA”) according to the Texas Attorney General. The TDPSA requires the companies to provide notice and obtain informed consent to use the sensitive data of Texas residents, which includes precise geolocation data. The Texas Attorney General sued General Motors in August of 2024, alleging similar practices relating to the collection and sale of driver data. 
Eleventh Circuit Overturns FCC’s One-to-One Consent Rule, Upholds Broader Telemarketing Practices: In Insurance Marketing Coalition, Ltd. v. Federal Communications Commission, No. 24-10277, 2025 WL 289152 (11th Cir. Jan. 24, 2025), the Eleventh Circuit vacated the FCC’s one-to-one consent rule under the Telephone Consumer Protection Act (“TCPA”). The court found that the rule exceeded the FCC’s authority and conflicted with the statutory meaning of “prior express consent.” By requiring separate consent for each seller and topic-related call, the rule was deemed unnecessary. This decision allows businesses to continue using broader consent practices, maintaining shared consent agreements. The ruling emphasizes that consent should align with common-law principles rather than be restricted to a single entity. While the FCC’s next steps remain uncertain, the decision reduces compliance burdens and may challenge other TCPA regulations.
California Judge Blocks Enforcement of Social Media Addiction Law: The California Protecting Our Kids from Social Media Addiction Act (the “Act”) has been temporarily blocked. The Act was set to take effect on January 1, 2025. The law aims to prevent social media platforms from using algorithms to provide addictive content to children. Judge Edward J. Davila initially declined to block key parts of the law but agreed to pause enforcement until February 1, 2025, to allow the Ninth Circuit to review the case. NetChoice, a tech trade group, is challenging the law on First Amendment grounds. NetChoice argues that restricting minors’ access to personalized feeds violates the First Amendment. The group has appealed to the Ninth Circuit and is seeking an injunction to prevent the law from taking effect. Judge Davila’s decision recognized the “novel, difficult, and important” constitutional issues presented by the case. The law includes provisions to restrict minors’ access to personalized feeds, limit their ability to view likes and other feedback, and restrict third-party interaction.

U.S. ENFORCEMENT
FTC Settles Enforcement Action Against General Motors for Sharing Geolocation and Driving Behavior Data Without Consent: The Federal Trade Commission (“FTC”) announced a proposed order to settle FTC allegations against General Motors that it collected, used, and sold drivers’ precise geolocation data and driving behavior information from millions of vehicles without adequately notifying consumers and obtaining their affirmative consent. The FTC specifically alleged General Motors used a misleading enrollment process to get consumers to sign up for its OnStar-connected vehicle service and Smart Driver feature without proper notice or consent during that process. The information was then sold to third parties, including consumer reporting agencies, according to the FTC. As part of the settlement, General Motors will be prohibited from disclosing driver data to consumer reporting agencies, required to allow consumers to obtain and delete their data, required to obtain consent prior to collection, and required to allow consumers to limit data collected from their vehicles.
FTC Releases Proposed Order Against GoDaddy for Alleged Data Security Failures: The Federal Trade Commission (“FTC”) has announced it had reached a proposed settlement in its action against GoDaddy Inc. (“GoDaddy”) for failing to implement reasonable and appropriate security measures, which resulted in several major data breaches between 2019 and 2022. According to the FTC’s complaint, GoDaddy misled customers about its data security practices through claims on its websites and in email and social media ads, and by representing that it was in compliance with the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks. However, the FTC found that GoDaddy failed to inventory and manage assets and software updates, assess risks to its shared hosting services, adequately log and monitor security-related events, and segment its shared hosting from less secure environments. The FTC’s proposed order prohibits GoDaddy from misleading its customers about its security practices and requires GoDaddy to implement a comprehensive information security program. GoDaddy must also hire a third-party assessor to conduct biennial reviews of its information security program.
CPPA Reaches Settlements with Additional Data Brokers: Following its announcement of a public investigative sweep of data broker registration compliance, the CPPA has settled with additional data brokers PayDae, Inc. d/b/a Infillion (“Infillion”), The Data Group, LLC (“The Data Group”), and Key Marketing Advantage, LLC (“KMA”) for failing to register as a data broker and pay an annual fee as required by California’s Delete Act. Infillion will pay $54,200 for failing to register between February 1, 2024, and November 4, 2024. The Data Group will pay $46,600 for failing to register between February 1, 2024, and September 20, 2024. KMA will pay $55,800 for failing to register between February 1, 2024, and November 5, 2024. In addition to the fines, the companies have agreed to injunctive terms. The Delete Act imposes fines of $200 per day for failing to register by the deadline.
Mortgage Company Fined by State Financial Regulators for Cybersecurity Breach: Bayview Asset Management LLC and three affiliates (collectively, “Bayview”) agreed to pay a $20 million fine and improve their cybersecurity programs to settle allegations from 53 state financial regulators. The Conference of State Bank Supervisors (“CSBS”) alleged that the mortgage companies had deficient cybersecurity practices and did not fully cooperate with regulators after a 2021 data breach. The data breach compromised data for 5.8 million customers. The coordinated enforcement action was led by financial regulators in California, Maryland, North Carolina, and Washington State. The regulators said the companies’ information technology and cybersecurity practices did not meet federal or state requirements. The firms also delayed the supervisory process by withholding requested information and providing redacted documents in the initial stages of a post-breach exam. The companies also agreed to undergo independent assessments and provide three years of additional reporting to the state regulators.
SEC Reaches Settlement over Misleading Cybersecurity Disclosures: The SEC announced it has settled charges with Ashford Inc., an asset management firm, over misleading disclosures related to a cybersecurity incident. This enforcement action stemmed from a ransomware attack in September 2023, compromising over 12 terabytes of sensitive hotel customer data, including driver’s licenses and credit card numbers. Despite the breach, Ashford falsely reported in its November 2023 filings that no customer information was exposed. The SEC alleged negligence in Ashford’s disclosures, citing violations of the Securities Act of 1933 and the Exchange Act of 1934. Without admitting or denying the allegations, Ashford agreed to a $115,231 penalty and an injunction. This case highlights the critical importance of accurate cybersecurity disclosures and demonstrates the SEC’s commitment to ensuring transparency and accountability in corporate reporting.
FTC Finalizes Data Breach-Related Settlement with Marriott: The FTC has finalized its order against Marriott International, Inc. (“Marriott”) and its subsidiary Starwood Hotels & Resorts Worldwide LLC (“Starwood”). As previously reported, the FTC entered into a settlement with Marriott and Starwood for three data breaches the companies experienced between 2014 and 2020, which collectively impacted more than 344 million guest records. Under the finalized order, Marriott and Starwood are required to establish a comprehensive information security program, implement a policy to retain personal information only for as long as reasonably necessary, and establish a link on their website for U.S. customers to request deletion of their personal information associated with their email address or loyalty rewards account number. The order also requires Marriott to review loyalty rewards accounts upon customer request and restore stolen loyalty points. The companies are further prohibited from misrepresenting their information collection practices and data security measures.
New York Attorney General Settles with Auto Insurance Company over Data Breach: The New York Attorney General settled with automobile insurance company, Noblr, for a data breach the company experienced in January 2021. Noblr’s online insurance quoting tool exposed full, plaintext driver’s license numbers, including on the backend of its website and in PDFs generated when a purchase was made. The data breach impacted the personal information of more than 80,000 New Yorkers. The data breach was part of an industry-wide campaign to steal personal information (e.g., driver’s license numbers and dates of birth) from online automobile insurance quoting applications to be used to file fraudulent unemployment claims during the COVID-19 pandemic. As part of its settlement, Noblr must pay the New York Attorney General $500,000 in penalties and strengthen its data security measures such as by enhancing its web application defenses and maintaining a comprehensive information security program, data inventory, access controls (e.g., authentication procedures), and logging and monitoring systems.
FTC Alleges Video Game Maker Violated COPPA and Engaged in Deceptive Marketing Practices: The Federal Trade Commission (“FTC”) has taken action against Cognosphere Pte. Ltd and its subsidiary Cognosphere LLC, also known as HoYoverse, the developer of the game Genshin Impact (“HoYoverse”). The FTC alleges that HoYoverse violated the Children’s Online Privacy Protection Act (“COPPA”) and engaged in deceptive marketing practices. Specifically, the company is accused of unfairly marketing loot boxes to children and misleading players about the odds of winning prizes and the true cost of in-game transactions. To settle these charges, HoYoverse will pay a $20 million fine and is prohibited from allowing children under 16 to make in-game purchases without parental consent. Additionally, the company must provide an option to purchase loot boxes directly with real money and disclose loot box odds and exchange rates. HoYoverse is also required to delete personal information collected from children under 13 without parental consent. The FTC’s actions aim to protect consumers, especially children and teens, from deceptive practices related to in-game purchases.
OCR Finalizes Several Settlements for HIPAA Violations: Prior to the inauguration of President Trump, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) brought enforcement actions against four entities, USR Holdings, LLC (“USR”), Elgon Information Systems (“Elgon”), Solara Medical Supplies, LLC (“Solara”) and Northeast Surgical Group, P.C. (“NESG”), for potential violations of the Health Insurance Portability and Accountability Act’s (“HIPAA”) Security Rule due to the data breaches the entities experienced. USR reported that between August 23, 2018, and December 8, 2018, a database containing the electronic protected health information (“ePHI”) of 2,903 individuals was accessed by an unauthorized third party who was able to delete the ePHI in the database. Elgon and NESG each discovered a ransomware attack in March 2023, which affected the protected health information (“PHI”) of approximately 31,248 individuals and 15,298 individuals, respectively. Solara experienced a phishing attack that allowed an unauthorized third party to gain access to eight of Solara’s employees’ email accounts between April and June 2019, resulting in the compromise of 114,007 individuals’ ePHI. As part of their settlements, each of the entities is required to pay a fine to OCR: USR $337,750, Elgon $80,000, Solara $3,000,000, and NESG $10,000. Additionally, each of the entities is required to implement certain data security measures such as conducting a risk analysis, implementing a risk management plan, maintaining written policies and procedures to comply with HIPAA, and distributing such policies or providing training on such policies to its workforce.  
Virginia Attorney General Sues TikTok for Addictive Features and Allowing Chinese Government to Access Data: Virginia Attorney General Jason Miyares announced his office had filed a lawsuit against TikTok and ByteDance Ltd., the Chinese-based parent company of TikTok. The lawsuit alleges that TikTok was intentionally designed to be addictive for adolescent users and that the company deceived parents about TikTok content, including by claiming the app is appropriate for children over the age of 12, in violation of the Virginia Consumer Protection Act.

INTERNATIONAL LAWS & REGULATIONS
UK ICO Publishes Guidance on Pay or Consent Model: On January 23, the UK’s Information Commissioner’s Office (“ICO”) published its Guidance for Organizations Implementing or Considering Implementing Consent or Pay Models. The guidance is designed to clarify how organizations can deploy “consent or pay” models in a manner that gives users meaningful control over the privacy of their information while still supporting the organizations’ economic viability. The guidance addresses the requirements of applicable UK laws, including PECR and the UK GDPR, and explains in detail how appropriate fees may be calculated and how imbalances of power should be addressed. It also sets out factors that organizations can use to assess their consent models and outlines plans to further engage with online consent management platforms, which businesses typically use to manage essential and non-essential online trackers. Businesses with operations in the UK should carefully review their current online tracker consent management tools in light of this new guidance.
EU Commission to Pay Damages for Sending IP Address to Meta: The European General Court has ordered the European Commission to pay a German citizen, Thomas Bindl, €400 in damages for unlawfully transferring his personal data to the U.S. This decision sets a new precedent regarding EU data protection litigation. The court found that the Commission breached data protection regulations by operating a website with a “sign in with Facebook” option. This resulted in Bindl’s IP address, along with other data, being transferred to Meta without ensuring adequate safeguards were in place. The transfer happened during the transition period between the EU-U.S. Privacy Shield and the EU-U.S. Data Protection Framework. The court determined that this left Bindl in a position of uncertainty about how his data was being processed. The ruling is significant because it recognizes “intrinsic harm” and may pave the way for large-scale collective redress actions.
European Data Protection Board Releases AI Bias Assessment and Data Subject Rights Tools: The European Data Protection Board (“EDPB”) released two AI tools as part of the “AI: Complex Algorithms and Effective Data Protection Supervision” project. The EDPB launched the project under the Support Pool of Experts program at the request of the German Federal Data Protection Authority. The Support Pool of Experts program aims to help data protection authorities increase their enforcement capacity by developing common tools and giving them access to a wide pool of experts. The new documents address best practices for bias evaluation and the effective implementation of data subject rights, specifically the rights to rectification and erasure, when AI systems have been developed with personal data.
European Data Protection Board Adopts New Guidelines on Pseudonymization: The EDPB released new guidelines on pseudonymization for public consultation (the “Guidelines”). Although pseudonymized data still constitutes personal data under the GDPR, pseudonymization can reduce the risks to the data subjects by preventing the attribution of personal data to natural persons in the course of the processing of the data, and in the event of unauthorized access or use. In certain circumstances, the risk reduction resulting from pseudonymization may enable controllers to rely on legitimate interests as the legal basis for processing personal data under the GDPR, provided they meet the other requirements, or help guarantee an essentially equivalent level of protection for data they intend to export. The Guidelines provide real-world examples illustrating the use of pseudonymization in various scenarios, such as internal analysis, external analysis, and research.
CJEU Issues Ruling on Excessive Data Subject Requests: On January 9, the Court of Justice of the European Union (“CJEU”) issued its ruling in Österreichische Datenschutzbehörde (C‑416/23). The primary question before the Court was when a European data protection authority may deny data subject requests due to their excessive nature. Rather than specifying a numerical threshold of requests received, the CJEU found that authorities must consider the relevant facts to determine whether the individual submitting the request has “an abusive intention.” While the number of requests submitted may be a factor in determining this intention, it is not the only factor. Additionally, the CJEU emphasized that data protection authorities should strongly consider charging a “reasonable fee” for handling requests they suspect may be excessive before simply denying them.
Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganz, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin contributed to this article.