New Jersey Attorney General: NJ’s Law Against Discrimination (LAD) Applies to Automated Decision-Making Tools

This month, the New Jersey Attorney General’s office (NJAG) added to nationwide efforts to regulate artificial intelligence technologies, or at least to clarify how existing law applies to them, in this case the NJ Law Against Discrimination, N.J.S.A. § 10:5-1 et seq. (LAD). In short, the NJAG’s guidance states:
the LAD applies to algorithmic discrimination in the same way it has long applied to other discriminatory conduct.
If you are not familiar with it, the LAD generally applies to employers, housing providers, places of public accommodation, and certain other entities. The law prohibits discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. According to the NJAG’s guidance, the LAD protections extend to algorithmic discrimination (discrimination that results from the use of automated decision-making tools) in employment, housing, places of public accommodation, credit, and contracting.
Citing a recent Rutgers survey, the NJAG pointed to high levels of adoption of AI tools by NJ employers. According to the survey, 63% of NJ employers use one or more AI tools to recruit job applicants and/or make hiring decisions. These AI tools are broadly defined in the guidance to include:
any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process…such as generative AI, machine-learning models, traditional statistical tools, and decision trees.
The NJAG guidance examines some ways that AI tools may contribute to discriminatory outcomes.

Design. Here, the choices a developer makes in designing an AI tool could, purposefully or inadvertently, result in unlawful discrimination. The results can be influenced by the output the tool provides, the model or algorithms the tool uses, and the inputs the tool assesses, any of which can introduce bias into the automated decision-making tool.
Training. AI tools must be trained to learn the intended correlations or rules relating to their objectives, and the datasets used for that training may contain biases or reflect institutional and systemic inequities that affect outcomes. Thus, the datasets used in training can drive unlawful discrimination.
Deployment. The NJAG also observed that AI tools could be used to purposely discriminate, or to make decisions for which the tool was not designed. These and other deployment issues could lead to bias and unlawful discrimination.

The NJAG notes that its guidance does not impose any new or additional requirements beyond those in the LAD, nor does it establish any rights or obligations for any person beyond what exists under the LAD. However, the guidance makes clear that covered entities can violate the LAD even if they have no intent to discriminate (or do not understand the inner workings of the tool) and, as the EEOC noted in guidance it issued under Title VII, even if a third party was responsible for developing the AI tool. Importantly, under NJ law, this includes disparate treatment and disparate impact that may result from the design or use of AI tools.
As we have noted, it is critical for organizations to assess, test, and regularly evaluate the AI tools they seek to deploy, for many reasons, including to avoid unlawful discrimination. These measures should include working closely with developers to vet the design and testing of automated decision-making tools before they are deployed. In fact, the NJAG specifically cited many of these steps as ways organizations may decrease the risk of liability under the LAD. Maintaining a well-thought-out governance strategy for managing this technology can go a long way toward minimizing legal risk, particularly as the law develops in this area.
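One concrete way to put that testing advice into practice is to screen a tool’s outcomes for disparate impact before deployment. Below is a minimal sketch (in Python) of the four-fifths (80%) rule, a conventional screening heuristic in US employment discrimination analysis; the group labels, counts, and flagging threshold are hypothetical illustrations, not part of the NJAG guidance.

```python
# Minimal sketch: four-fifths (80%) rule screen for adverse impact in an
# automated hiring tool's outcomes. Group names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants the tool advanced."""
    return selected / applicants

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    A ratio below 0.8 is a conventional red flag warranting closer review;
    it is not, by itself, a legal conclusion under the LAD or Title VII.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical audit data: group -> (selected, total applicants)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A screen like this is only one input into the broader assessment, testing, and governance program described above.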

SEC Priorities for 2025: What Investment Advisers Should Know

The US Securities and Exchange Commission (SEC) recently released its priorities for 2025. As in recent years, the SEC is focusing on fiduciary duties and the development of compliance programs as well as emerging risk areas such as cybersecurity and artificial intelligence (AI). This alert details the key areas of focus for investment advisers.

1. Fiduciary Duties and Standards of Conduct
The Investment Advisers Act of 1940 (Advisers Act) established that all investment advisers owe their clients the duties of care and loyalty. In 2025, the SEC will focus on whether investment advice to clients satisfies an investment adviser’s fiduciary obligations, particularly in relation to (1) high-cost products, (2) unconventional investments, (3) illiquid assets, (4) assets that are difficult to value, (5) assets that are sensitive to heightened interest rates and market conditions, and (6) conflicts of interest.
For investment advisers who are dual registrants or affiliated with broker-dealers, the SEC will focus on reviewing (1) whether investment advice is suitable for a client’s advisory accounts, (2) disclosures regarding recommendations, (3) account selection practices, and (4) disclosures regarding conflicts of interest.
2. Effectiveness of Advisers Compliance Programs
The Compliance Rule, Rule 206(4)-7, under the Advisers Act requires investment advisers to (1) implement written policies and procedures reasonably designed to prevent violations of the Advisers Act, (2) designate a Chief Compliance Officer, and (3) annually review such policies and procedures for adequacy and effectiveness.
In 2025, the SEC will focus on a variety of topics related to the Compliance Rule, including marketing, valuation, trading, investment management, disclosure, filings, and custody, as well as the effectiveness of annual reviews.
Among its top priorities is evaluating whether compliance policies and procedures are reasonably designed to prevent conflicts of interest. Such examination may include a focus on (1) fiduciary obligations related to outsourcing investment selection and management, (2) alternative sources of revenue or benefits received by advisers, and (3) fee calculations and disclosure.
Review under the Compliance Rule is fact-specific, meaning it will vary depending on each adviser’s practices and products. For example, advisers who utilize AI for management, trading, marketing, and compliance will be evaluated to determine the effectiveness of compliance programs related to the use of AI. The SEC may also focus more on advisers with clients that invest in difficult-to-value assets.
3. Examinations of Private Fund Advisers
The SEC will continue to focus on advisers to private funds, which constitute a significant portion of SEC-registered advisers. Specifically, the SEC will prioritize reviewing:

Disclosures to determine whether they are consistent with actual practices.
Fiduciary duties during volatile markets.
Exposure to interest rate fluctuations.
Calculations and allocations of fees and expenses.
Disclosures related to conflicts of interest and investment risks.
Compliance with recently adopted or amended SEC rules, such as Form PF (previously discussed here).

4. Never Examined Advisers, Recently Registered Advisers, and Advisers Not Recently Examined
Finally, the SEC will continue to prioritize recently registered advisers, advisers not examined recently, and advisers who have never been examined.
Key Takeaways
Investment advisers can expect SEC examinations in 2025 to focus heavily on fiduciary duties, compliance programs, and conflicts of interest. As such, advisers should review their policies and procedures related to fiduciary duties and conflicts of interest, and evaluate the effectiveness of their compliance programs.

China’s National Intellectual Property Administration Issues Guidelines for Patent Applications for AI-Related Inventions

On December 31, 2024, China’s National Intellectual Property Administration (CNIPA) issued the Guidelines for Patent Applications for AI-Related Inventions (Trial Implementation) (人工智能相关发明专利申请指引(试行)). The Guidelines follow CNIPA’s draft issued for comment on December 6, 2024, for which only one week was allowed for comments. The short comment period suggests CNIPA did not actually want comments, and it contravenes the not-yet-effective Regulations on the Procedures for Formulating Regulations of the CNIPA (国家知识产权局规章制定程序规定(局令第83号)), which require a minimum 30-day comment period. Highlights follow, including several examples regarding subject matter eligibility.
There are four types of AI-related patent applications:
Patent applications related to AI algorithms or models themselves
Artificial intelligence algorithms or models, that is, advanced statistical and mathematical models, include machine learning, deep learning, neural networks, fuzzy logic, genetic algorithms, and the like. These algorithms or models constitute the core content of artificial intelligence. They can simulate intelligent decision-making and learning capabilities, enabling computing devices to handle complex problems and perform tasks that usually require human intelligence.
Accordingly, this type of patent application usually involves the artificial intelligence algorithm or model itself and its improvement or optimization, for example, model structure, model compression, model training, etc.
Patent applications related to functions or field applications based on artificial intelligence algorithms or models
Patent applications related to the functional or field application of artificial intelligence algorithms or models refer to the integration of artificial intelligence algorithms or models into inventions as an intrinsic part of the proposed solution for products, methods or their improvements. For example: a new type of electron microscope based on artificial intelligence image sharpening technology. This type of patent application usually involves the use of artificial intelligence algorithms or models to achieve specific functions or apply them to specific fields.
Functions based on artificial intelligence algorithms or models refer to functions implemented using one or more artificial intelligence algorithms or models. They usually include: natural language processing, which enables computers to understand and generate human language; computer vision, which enables computers to “see” and understand images or videos; speech processing, including speech recognition, speech synthesis, etc.; knowledge representation and reasoning, which represents information and enables computers to solve problems, including knowledge graphs, graph computing, etc.; and data mining, which computes and analyzes massive amounts of data to identify regularities such as potential patterns, trends, or relationships. Artificial intelligence algorithms or models can be applied to specific fields based on their functions.
Field applications based on artificial intelligence algorithms or models refer to the application of artificial intelligence to various scenarios, such as transportation, telecommunications, life and medical sciences, security, commerce, education, entertainment, finance, etc., to promote technological innovation and improve the level of intelligence in all walks of life.
Patent applications involving inventions made with the assistance of artificial intelligence
Inventions assisted by artificial intelligence are inventions made using artificial intelligence technology as an auxiliary tool in the invention process. In this case, artificial intelligence plays a role similar to that of an information processor or drawing tool. For example, artificial intelligence is used to identify specific protein binding sites, ultimately yielding a new drug compound.
Patent applications involving AI-generated inventions
AI-generated inventions refer to inventions and creations generated autonomously by AI without substantial human contribution, for example, a food container autonomously designed by AI technology.

AI cannot be an inventor:
1. The inventor must be a natural person
Section 4.1.2 of Chapter 1 of Part 1 of the Guidelines clearly states that “the inventor must be an individual, and the application form shall not contain an entity or collective, nor the name of artificial intelligence.”
The inventor named in the patent document must be a natural person. Artificial intelligence systems and other non-natural persons cannot be inventors. When there are multiple inventors, each must be a natural person. The inventor’s property right to obtain income and personal right to be named are civil rights, and only civil subjects that meet the provisions of the civil law can hold them. Because artificial intelligence systems cannot currently enjoy civil rights as civil subjects, they cannot be inventors.
2. The inventor should make a creative contribution to the essential features of the invention
For patent applications involving artificial intelligence algorithms or models, functions or field applications based on artificial intelligence algorithms or models, the inventor refers to the person who has made creative contributions to the essential features of the invention.
For inventions assisted by AI, a natural person who has made a creative contribution to the substantive features of the invention can be named as the inventor of the patent application. For inventions generated by AI, it is not possible to grant AI inventor status under China’s current legal framework.

Examples of subject matter eligibility:
The solution of the claim should reflect the use of technical means that follow the laws of nature to solve technical problems and achieve technical effects
The “technical solution” stipulated in Article 2, Paragraph 2 of the Patent Law refers to a collection of technical means that utilize natural laws to solve the technical problem to be solved. When a claim records that technical means utilizing natural laws are used to solve the technical problem to be solved, and a technical effect conforming to natural laws is thereby obtained, the solution defined in the claim is a technical solution. Conversely, a solution that does not use technical means utilizing natural laws to solve a technical problem and obtain a technical effect conforming to natural laws is not a technical solution.
By way of example and not limitation, the following describes several common situations in which the relevant solutions constitute technical solutions.
Scenario 1: AI algorithms or models process data with specific technical meaning in the technical field
If the drafting of a claim reflects that the object processed by the artificial intelligence algorithm or model is data with a definite technical meaning in the technical field, such that those skilled in the art can understand that executing the algorithm or model directly embodies the process of solving a technical problem by using natural laws and obtains a technical effect, then the solution defined in the claim is a technical solution. For example, consider a method for identifying and classifying images using a neural network model. Image data is data with a definite technical meaning in the technical field. If those skilled in the art can see that the steps of processing image features in the solution are closely related to the technical problem to be solved, identifying and classifying objects, and obtain corresponding technical effects, then the solution is a technical solution.
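To make Scenario 1 concrete, here is a minimal sketch (assuming PyTorch) of a neural network that classifies image data, the kind of input the Guidelines treat as having a definite technical meaning. The architecture, tensor shapes, and class count are illustrative assumptions, not taken from the Guidelines.

```python
# Minimal sketch of an image classifier: the processed object is image data,
# which the Guidelines treat as data with definite technical meaning.
import torch
from torch import nn

class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(                 # extract image features
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                           # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyImageClassifier()
logits = model(torch.randn(1, 3, 32, 32))              # one 32x32 RGB image
print(logits.argmax(dim=1))                            # predicted class index
```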
Scenario 2: There is a specific technical connection between the AI algorithm or model and the internal structure of the computer system
If the drafting of a claim can reflect the specific technical connection between the artificial intelligence algorithm or model and the internal structure of the computer system, thereby solving the technical problem of how to improve the hardware computing efficiency or execution effect, including reducing the amount of data storage, reducing the amount of data transmission, increasing the hardware processing speed, etc., and can obtain the technical effect of improving the internal performance of the computer system in accordance with the laws of nature, then the solution defined in the claim belongs to the technical solution.
This specific technical association reflects the mutual adaptation and coordination between algorithmic features and features related to the internal structure of a computer system at the technical implementation level, such as adjusting the architecture or related parameters of a computer system to support the operation of a specific algorithm or model, making adaptive improvements to the algorithm or model based on a specific internal structure or parameters of a computer system, or a combination of the two.
For example, a neural network model compression method for a memristor accelerator includes: step 1, adjusting the pruning granularity according to the actual array size of the memristor during network pruning through an array-aware regularized incremental pruning algorithm to obtain a regularized sparse model adapted to the memristor array; step 2, reducing the ADC accuracy requirements and the number of low-resistance devices in the memristor array through a power-of-two quantization algorithm to reduce overall system power consumption.
In this example, in order to solve the problem of excessive hardware resource consumption and high power consumption of ADC units and computing arrays when the original model is mapped to the memristor accelerator, the solution uses pruning algorithms and quantization algorithms to adjust the pruning granularity according to the actual array size of the memristor, reducing the number of low-resistance devices in the memristor array. The above means are algorithm improvements made to improve the performance of the memristor accelerator. They are constrained by hardware condition parameters, reflecting the specific technical relationship between the algorithm characteristics and the internal structure of the computer system. They use technical means that conform to the laws of nature to solve the technical problems of excessive hardware consumption and high power consumption of the memristor accelerator, and obtain the technical effect of improving the internal performance of the computer system that conforms to the laws of nature. Therefore, this solution belongs to the technical solution.
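A rough sketch of the two algorithmic means this example names, block pruning aligned to the crossbar array and power-of-two weight quantization, might look like the following. The array width, keep ratio, and block-scoring rule are hypothetical stand-ins, not the patent’s actual algorithms.

```python
# Sketch of array-aware pruning and power-of-two quantization (NumPy only;
# the memristor array width and keep ratio below are assumed values).
import numpy as np

def prune_by_array(w: np.ndarray, array_cols: int, keep_ratio: float = 0.5) -> np.ndarray:
    """Zero whole column blocks of width `array_cols` with the lowest energy,
    so the surviving weights stay aligned to the crossbar array layout."""
    blocks = w.reshape(w.shape[0], -1, array_cols)   # group columns into blocks
    energy = np.abs(blocks).sum(axis=(0, 2))         # score each block
    k = max(1, int(keep_ratio * energy.size))
    mask = np.zeros(energy.size, dtype=bool)
    mask[np.argsort(energy)[-k:]] = True             # keep the k strongest blocks
    return (blocks * mask[None, :, None]).reshape(w.shape)

def quantize_pow2(w: np.ndarray) -> np.ndarray:
    """Snap nonzero weights to a power of two (sign preserved), the step
    that relaxes ADC precision requirements in the example."""
    out = np.zeros_like(w)
    nz = w != 0
    out[nz] = np.sign(w[nz]) * 2.0 ** np.round(np.log2(np.abs(w[nz])))
    return out

w = quantize_pow2(prune_by_array(np.random.randn(8, 16), array_cols=4))
print(np.count_nonzero(w), "weights kept, all powers of two")
```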
Specific technical associations do not mean that changes must be made to the hardware structure of the computer system. For solutions to improve artificial intelligence algorithms, even if the hardware structure of the computer system itself has not changed, the solution can achieve the technical effect of improving the internal performance of the computer system as a whole by optimizing the system resource configuration. In such cases, it can be considered that there is a specific technical association between the characteristics of the artificial intelligence algorithm and the internal structure of the computer system, which can improve the execution effect of the hardware.
For example, a training method for a deep neural network model includes: when the size of training data changes, for the changed training data, respectively calculating the training time of the changed training data in preset candidate training schemes; selecting a training scheme with the shortest training time from the preset candidate training schemes as the optimal training scheme for the changed training data, the candidate training schemes including a single-processor training scheme and a multi-processor training scheme based on data parallelism; and performing model training on the changed training data in the optimal training scheme.
In order to solve the problem of slow training speed of deep neural network models, this solution selects a single-processor training solution or a multi-processor training solution with different processing efficiency for training data of different sizes. This model training method has a specific technical connection with the internal structure of the computer system, which improves the execution effect of the hardware during the training process, thereby obtaining the technical effect of improving the internal performance of the computer system in accordance with the laws of nature, thus constituting a technical solution.
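The selection logic in this example reduces to estimating each candidate scheme’s training time for the current data size and picking the fastest. A minimal sketch follows; the cost models and constants are hypothetical placeholders, not the claimed method.

```python
# Sketch: pick the training scheme with the shortest predicted time for the
# current data size. The per-sample costs and sync overhead are made up.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scheme:
    name: str
    estimate_seconds: Callable[[int], float]   # data size -> predicted time

def pick_scheme(n_samples: int, candidates: list[Scheme]) -> Scheme:
    """Return the candidate with the shortest predicted training time."""
    return min(candidates, key=lambda s: s.estimate_seconds(n_samples))

candidates = [
    # Single processor: no communication cost, but no parallel speedup.
    Scheme("single", lambda n: 0.002 * n),
    # Data-parallel on 4 processors: faster compute, fixed sync overhead.
    Scheme("multi_x4", lambda n: 0.002 * n / 4 + 30.0),
]

for n in (5_000, 100_000):                     # training data changed size
    print(n, "->", pick_scheme(n, candidates).name)
```

Small datasets favor the single-processor scheme; once the data grows enough, the parallel scheme’s fixed overhead is amortized and it wins.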
However, if a claim merely utilizes a computer system as a carrier for implementing the operation of an artificial intelligence algorithm or model, and does not reflect the specific technical relationship between the algorithm features and the internal structure of the computer system, it does not fall within the scope of Scenario 2.
For example, a computer system for training a neural network includes a memory and a processor, wherein the memory stores instructions and the processor reads the instructions to train the neural network by optimizing a loss function.
In this solution, the memory and processor in the computer system are merely conventional carriers for algorithm storage and execution. There is no specific technical association between the algorithm features involved in training the neural network using the optimized loss function and the memory and processor contained in the computer system. This solution solves the problem of optimizing neural network training, which is not a technical problem. The effect obtained is only to improve the efficiency of model training, which is not a technical effect of improving the internal performance of the computer system. Therefore, it does not constitute a technical solution.
Scenario 3: Using artificial intelligence algorithms to mine the inherent correlations in big data in specific application fields that conform to the laws of nature
When artificial intelligence algorithms or models are applied in various fields, data analysis, evaluation, prediction or recommendation can be performed. For such applications, if the claims reflect that the big data in a specific application field is processed, and artificial intelligence algorithms such as neural networks are used to mine the inherent correlation between data that conforms to the laws of nature, and the technical problem of how to improve the reliability or accuracy of big data analysis in a specific application field is solved, and the corresponding technical effects are obtained, then the solution of the claim constitutes a technical solution.
Using artificial intelligence algorithms or models to mine data and train models that produce output from input does not, by itself, constitute technical means. Only when the inherent correlation among the data mined by the algorithm or model conforms to the laws of nature can the relevant means, as a whole, constitute technical means that utilize the laws of nature. Therefore, the solution recorded in the claims must clarify which indicators, parameters, and the like reflect the characteristics of the analyzed object and yield the analysis results, and whether the inherent correlation mined by the algorithm or model between those indicators and parameters (model input) and the result data (model output) conforms to the laws of nature.
For example, a food safety risk prediction method obtains and analyzes historical food safety risk events to obtain head entity data and tail entity data representing food raw materials, edible items, and food sampling poisonous substances, together with their corresponding timestamp data; based on each head entity and its corresponding tail entity, and the corresponding entity relationship carrying timestamp data representing the content level, risk, or intervention of each type of hazard, corresponding four-tuple data is constructed to obtain a knowledge graph; the knowledge graph is used to train a preset neural network to obtain a food safety knowledge graph model; and the food safety risk at the prediction time is predicted based on that model.
The background section of the application records that the prior art uses static knowledge graphs to predict food safety risks, which cannot reflect the fact that food data changes over time and ignores the influence between data. Those skilled in the art know that food raw materials, edible items, and sampled poisonous substances gradually change over time. For example, the longer food is stored, the more microorganisms it contains, and the content of sampled poisonous substances increases accordingly; when food contains multiple raw materials that can react chemically, the reaction may also create food safety risks at some future time. This solution predicts food safety risks based on the inherent characteristic of food changing over time: timestamps are added when constructing the knowledge graph, and a preset neural network is trained on entity data related to food safety risks at each moment to predict risk at the time to be predicted. It uses technical means that follow the laws of nature to solve the technical problem of inaccurate prediction of food safety risks at future time points and obtains corresponding technical effects, thus constituting a technical solution.
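The data structure at the heart of this example is the timestamped four-tuple (head entity, relation, tail entity, timestamp). A minimal sketch, with hypothetical entities, relations, and hazard levels, shows how the time axis preserves information a static knowledge graph would lose.

```python
# Sketch: timestamped (head, relation, tail, time) four-tuples of the kind a
# temporal knowledge-graph model trains on. All values are hypothetical.
from collections import namedtuple

Quad = namedtuple("Quad", ["head", "relation", "tail", "time"])

# Head entities (raw materials / edible items) linked to sampled hazards,
# with timestamps capturing how contamination changes over time.
quads = [
    Quad("raw_milk", "hazard_level:aflatoxin", "low", "2024-01"),
    Quad("raw_milk", "hazard_level:aflatoxin", "medium", "2024-06"),
    Quad("ground_beef", "hazard_level:e_coli", "high", "2024-06"),
]

def facts_at(quads: list[Quad], time: str) -> list[Quad]:
    """Slice the graph at one timestamp; a static graph collapses this axis."""
    return [q for q in quads if q.time == time]

print(facts_at(quads, "2024-06"))
```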
If the intrinsic correlation between the indicator parameters mined by artificial intelligence algorithms or models and the prediction results is governed only by economic or social laws, the solution does not follow the laws of nature. For example, a method of estimating a regional economic prosperity index using a neural network mines the intrinsic correlation between economic data and electricity consumption data and the prosperity index, and predicts the index based on that correlation. Since this correlation is governed by economic laws rather than natural laws, the solution does not use technical means and does not constitute a technical solution.

The full text is available here (Chinese only).

FTC Blog Outlines Factors for Companies to Consider About AI — AI: The Washington Report

The FTC staff recently published a blog post outlining four factors for companies to consider when developing or deploying AI products to avoid running afoul of the nation’s consumer protection laws.
The blog post does not represent formal guidance but it likely articulates the FTC’s thinking and enforcement approach, particularly regarding deceptive claims about AI tools and due diligence when using AI-powered systems.
Although the blog post comes just days before current Republican Commissioner Andrew Ferguson becomes FTC Chair on January 20, the FTC is likely to maintain the same focus on AI consumer protection issues that it had under Chair Khan. Ferguson has voted in support of nearly all of the FTC’s AI consumer protection actions, but his one dissent underscores how he might dial back some of the current FTC’s aggressive AI consumer protection agenda.

The FTC staff in the Office of Technology and the Division of Advertising Practices in the FTC Bureau of Consumer Protection released a blog outlining four factors that companies should consider when developing or deploying an AI-based product. These factors are not binding, but they underscore the FTC’s continued focus on enforcing the nation’s consumer protection laws as they relate to AI.
The blog comes just under two weeks before current Republican Commissioner Andrew Ferguson becomes FTC Chair. However, under Ferguson, as we discuss below, the FTC will likely continue its same focus on AI consumer protection issues, though it may take a more modest approach.
The Four Factors for Companies to Consider about AI
The blog post outlines four factors for companies to consider when developing or deploying AI:

Doing due diligence to prevent harm before and while developing or deploying an AI service or product   

In 2024, the FTC filed a complaint against a leading retail pharmacy alleging that it “failed to take reasonable measures to prevent harm to consumers in its use of facial recognition technology (FRT) that falsely tagged consumers in its stores, particularly women and people of color, as shoplifters.” The FTC has “highlighted that companies offering AI models need to assess and mitigate potential downstream harm before and during deployment of their tools, which includes addressing the use and impact of the technologies that are used to make decisions about consumers.”  

Taking preventative steps to detect and remove AI-generated deepfakes and fake images, including child sexual abuse material and non-consensual intimate imagery   

In April 2024, the FTC finalized its impersonation rule, and the FTC also launched a Voice Cloning Challenge to create ways to protect consumers from voice cloning software. The FTC has previously discussed deepfakes and their harms to Congress in its Combatting Online Harms Report.  

Avoiding deceptive claims about AI systems or services that cause people to lose money or otherwise harm users

The FTC’s Operation AI Comply, which we covered, as well as other enforcement actions have taken aim at companies that have made false or deceptive claims about the capabilities of their AI products or services. Many of the FTC’s enforcement actions have targeted companies that have falsely claimed that their AI products or services would help people make money or start a business.  

Protecting privacy and safety   

AI models, especially generative AI ones, run on large amounts of data, some of which may be highly sensitive. “The Commission has a long record of providing guidance to businesses about ensuring data security and protecting privacy,” as well as taking action against companies that have failed to do so.  

While the four factors highlight consumer protection issues that the FTC has focused on, FTC staff cautions that the four factors are “not a comprehensive overview of what companies should be considering when they design, build, test, and deploy their own products.”
New FTC Chair: New or Same Focus on AI Consumer Protection Issues?
The blog post comes just under two weeks before President-elect Trump’s pick to lead the FTC, current FTC Commissioner Andrew Ferguson, becomes FTC Chair. Under Chair Ferguson, the FTC’s focus on the consumer protection side of AI is unlikely to undergo significant changes; Ferguson has voted in support of nearly all of the FTC’s consumer protection AI enforcement actions.
However, Ferguson’s one dissent in a consumer protection case brought against an AI company illuminates how the FTC under his leadership could take a more modest approach to consumer protection issues related to AI. In his dissent, Commissioner Ferguson wrote: 
The Commission’s theory is that Section 5 prohibits products and services that could be used to facilitate deception or unfairness because such products and services are the means and instrumentalities of deception and unfairness. Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents … and risks strangling a potentially revolutionary technology in its cradle.
Commissioner Ferguson’s point seems well taken. Less clear is where he would draw the line. Moreover, as a practical matter, his ability to move the needle would likely need to wait until President Trump’s other nominee, Mark Meador, is confirmed, as expected, later this year.
Matthew Tikhonovsky also contributed to this article.

Black Box Issues [Podcast]

In part three of our series on potential pitfalls in the use of artificial intelligence (or AI) when it comes to employment decisions, partner Guy Brenner and senior counsel Jonathan Slowik dive into the concept of “black box” systems—AI tools whose internal decision-making processes are not transparent. The internal workings of such systems may not be well understood, even by the developers who create them. We explore the challenges this poses for employers seeking to ensure that their use of AI in employment decisions does not inadvertently introduce bias into the process. Be sure to tune in for a closer look at the complexities of this conundrum and what it means for employers.

McDermott+ Check-Up: January 10, 2025

THIS WEEK’S DOSE

119th Congress Begins. The new Congress began with key membership announcements for relevant healthcare committees.
Cures 2.1 White Paper Published. The document outlines the 21st Century Cures 2.1 legislative proposal, focusing on advancing healthcare technologies and fostering innovation.
Senate Budget Committee Members Release Report on Private Equity. The report, released by the committee’s chair and ranking member from the 118th Congress, includes findings from an investigation into private equity’s role in healthcare.
HHS OCR Proposes Significant Updates to HIPAA Security Rule. The US Department of Health & Human Services (HHS) Office for Civil Rights (OCR) seeks to address current cybersecurity concerns.
HHS Releases AI Strategic Plan. The plan outlines how HHS will prioritize resources and coordinate efforts related to artificial intelligence (AI).
CFPB Removes Medical Debt from Consumer Credit Reports. The Consumer Financial Protection Bureau (CFPB) finalized its 2024 proposal largely as proposed.
President Biden Signs Several Public Health Bills into Law. The legislation includes the reauthorization and creation of public health programs related to cardiomyopathy, autism, and emergency medical services for children.

CONGRESS

119th Congress Begins. The 119th Congress began on January 3, 2025. Lawmakers reelected Speaker Johnson in the first round of votes and adopted the House rules package. The first full week in session was slow-moving due to a winter storm in Washington, DC; funeral proceedings for President Jimmy Carter; and the certification of electoral college votes. Committees are still getting organized, and additions to key health committees include:

House Energy & Commerce: Reps. Bentz (R-OR), Houchin (R-IN), Fry (R-SC), Lee (R-FL), Langworthy (R-NY), Kean (R-NJ), Rulli (R-OH), Evans (R-CO), Goldman (R-TX), Fedorchak (R-ND), Ocasio-Cortez (D-NY), Mullin (D-CA), Carter (D-LA), McClellan (D-VA), Landsman (D-OH), Auchincloss (D-MA), and Menendez (D-NJ).
House Ways & Means: Reps. Moran (R-TX), Yakym (R-IN), Miller (R-OH), Bean (R-FL), Boyle (D-PA), Plaskett (D-VI), and Suozzi (D-NY).
Senate Finance: Sens. Marshall (R-KS), Sanders (I-VT), Smith (D-MN), Luján (D-NM), Warnock (D-GA), and Welch (D-VT).
Senate Health, Education, Labor & Pensions: Sens. Scott (R-SC), Hawley (R-MO), Banks (R-IN), Crapo (R-ID), Blackburn (R-TN), Kim (D-NJ), Blunt Rochester (D-DE), and Alsobrooks (D-MD).

Congress has a busy year ahead. The continuing resolution (CR) enacted in December 2024 included several short-term extensions of health provisions (and excluded many others that had been included in an earlier proposed bipartisan health package), and these extensions will expire on March 14, 2025. Congress will need to complete action on fiscal year (FY) 2025 appropriations by this date, whether by passing another CR through the end of the FY, or by passing a full FY 2025 appropriations package. The short-term health extenders included in the December CR could be further extended in the next appropriations bill, and Congress also has the opportunity to revisit the bipartisan, bicameral healthcare package that was unveiled in December but ultimately left out of the CR because of pushback from Republicans about the overall bill’s size.
The 119th Congress will also be focused in the coming weeks on advancing key priorities – including immigration reform, energy policy, extending the 2017 tax cuts, and raising the debt limit – through the budget reconciliation process. This procedural maneuver allows the Senate to advance legislation with a simple majority, rather than the 60 votes needed to overcome the threat of a filibuster. Discussions are underway about the scope of this package and the logistics (will there be one reconciliation bill or two?), and we expect to learn more in the days and weeks ahead. It is possible that healthcare provisions could become a part of such a reconciliation package.
Cures 2.1 White Paper Published. Rep. Diana DeGette (D-CO) and former Rep. Larry Bucshon (R-IN) released a white paper on December 24, 2024, outlining potential provisions of the 21st Century Cures 2.1 legislative proposal expected to be introduced later this year. This white paper and the anticipated legislation are informed by responses to a 2024 request for information. The white paper is broad, discussing potential Medicare reforms relating to gene therapy access, coverage determinations, and fostering innovation. With Rep. Bucshon’s retirement, all eyes are focused on who will be the Republican lead on this effort.
Senate Budget Committee Members Release Report on Private Equity. The report contains findings from an investigation into private equity’s role in healthcare led by the leaders of the committee in the 118th Congress, then-Chair Whitehouse (D-RI) and then-Ranking Member Grassley (R-IA). The report includes two case studies and states that private equity firms have become increasingly involved in US hospitals. They write that this trend impacts quality of care, patient safety, and financial stability at hospitals across the United States, and the report calls for greater oversight, transparency, and reforms of private equity’s role in healthcare. A press release that includes more documents related to the case studies can be found here.
ADMINISTRATION

HHS OCR Proposes Significant Updates to HIPAA Security Rule. HHS OCR released a proposed rule, HIPAA Security Rule to Strengthen the Cybersecurity of Electronic Protected Health Information (ePHI). HHS OCR proposes minimum cybersecurity standards that would apply to health plans, healthcare clearinghouses, most healthcare providers (including hospitals), and their business associates. Key proposals include:

Removing the distinction between “required” and “addressable” implementation specifications and making all implementation specifications required with specific, limited exceptions.
Requiring written documentation of all Security Rule policies, procedures, plans, and analyses.
Updating definitions and revising implementation specifications to reflect changes in technology and terminology.
Adding specific compliance time periods for many existing requirements.
Requiring the development and revision of a technology asset inventory and a network map that illustrates the movement of ePHI throughout the regulated entity’s electronic information system(s) on an ongoing basis, but at least once every 12 months and in response to a change in the regulated entity’s environment or operations that may affect ePHI.
Requiring notification of certain regulated entities within 24 hours when a workforce member’s access to ePHI or certain electronic information systems is changed or terminated.
Strengthening requirements for planning for contingencies and responding to security incidents.
Requiring regulated entities to conduct an audit at least once every 12 months to ensure their compliance with the Security Rule requirements.

The HHS OCR fact sheet is available here. Comments are due on March 7, 2025. Because this is a proposed rule, the incoming Administration will determine the content and next steps for the final rule.
HHS Releases AI Strategic Plan. In response to President Biden’s Executive Order on AI, HHS unveiled its AI strategic plan. The plan is organized into five primary domains:

Medical research and discovery
Medical product development, safety and effectiveness
Healthcare delivery
Human services delivery
Public health

Within each of these chapters, HHS discusses in depth the context of AI, stakeholders engaged in the domain’s AI value chain, opportunities for the application of AI in the domain, trends in AI for the domain, potential use cases and risks, and an action plan.
The report also highlights efforts related to cybersecurity and internal operations. Lastly, the plan outlines responsibility for AI efforts within HHS’s Office of the Chief Artificial Intelligence Officer.
CFPB Removes Medical Debt from Consumer Credit Reports. The final rule removes $49 billion in unpaid medical bills from the credit reports of 15 million Americans, building on the Biden-Harris Administration’s work with states and localities. The White House fact sheet can be found here. Whether the incoming Administration will intervene in this rulemaking remains an open question.
President Biden Signs Several Public Health Bills into Law. These bills from the 118th Congress include:

H.R. 6829, the HEARTS Act of 2024, which mandates that the HHS Secretary work with the Centers for Disease Control and Prevention, patient advocacy groups, and health professional organizations to develop and distribute educational materials on cardiomyopathy.
H.R. 6960, the Emergency Medical Services for Children Reauthorization Act of 2024, which reauthorizes through FY 2029 the Emergency Medical Services for Children State Partnership Program.
H.R. 7213, the Autism CARES Act of 2024, which reauthorizes, through FY 2029, the Developmental Disabilities Surveillance and Research Program and the Interagency Autism Coordinating Committee in HHS, among other HHS programs to support autism education, early detection, and intervention.

QUICK HITS

ACIMM Hosts Public Meeting. The HHS Advisory Committee on Infant and Maternal Mortality (ACIMM) January meeting included discussion and voting on draft recommendations related to preconception/interconception health, systems issues in rural health, and social drivers of health. The agenda can be found here.
CBO Releases Report on Gene Therapy Treatment for Sickle Cell Disease. The Congressional Budget Office (CBO) report did not estimate the federal budgetary effects of any policy, but instead discussed how CBO would assess related policies in the future.
CMS Reports Marketplace 2025 Open Enrollment Data. As of January 4, 2025, 23.6 million consumers had selected a plan for coverage in 2025, including more than three million new consumers. Read the fact sheet here.
CMS Updates Hospital Price Transparency Guidance. The agency posted updated frequently asked questions (FAQs) on hospital price transparency compliance requirements. Some of the FAQs are related to new requirements that took effect January 1, 2025, as finalized in the Calendar Year 2024 Outpatient Prospective Payment System/Ambulatory Services Center Final Rule, and others are modifications to existing requirements as detailed in previous FAQs.
GAO Releases Reports on Older Americans Act-Funded Services, ARPA-H Workforce. The US Government Accountability Office (GAO) report recommended that the Administration for Community Living develop a written plan for its work with the Interagency Coordinating Committee on Healthy Aging and Age-Friendly Communities to improve services funded under the Older Americans Act. In another report, the GAO recommended that the Advanced Research Projects Agency for Health (ARPA-H) develop a workforce planning process and assess scientific personnel data.
VA Expands Cancers Covered by PACT Act. The US Department of Veterans Affairs (VA) will add several new cancers to the list of those presumed to be related to burn pit exposure, lowering the burden of proof for veterans to receive disability benefits. Read the press release here.
HHS Announces $10M in Awards for Maternal Health. The $10 million in grants from the Substance Abuse and Mental Health Services Administration (SAMHSA) will go to a new community-based maternal behavioral health services grant program. Read the press release here.
Surgeon General Issues Advisory on Link Between Alcohol and Cancer Risk. The advisory includes a series of recommendations to increase awareness of the connection between alcohol consumption and cancer risk and update the existing Surgeon General’s health warning label on alcohol-containing beverages. Read the press release here.
SAMHSA Awards CCBHC Medicaid Demonstration Planning Grants. The grants will go to 14 states and Washington, DC, to plan a Certified Community Behavioral Health Clinic (CCBHC). Read the press release here.
HHS Announces Membership of Parkinson’s Advisory Council. The Advisory Council on Parkinson’s Research, Care, and Services will be co-chaired by Walter J. Koroshetz, MD, Director of the National Institutes of Health’s National Institute of Neurological Disorders and Stroke, and David Goldstein, MS, Associate Deputy Director for the Office of Science and Medicine for HHS’s Office of the Assistant Secretary for Health. Read the press release here.

NEXT WEEK’S DIAGNOSIS

The House and Senate are in session next week and will continue to organize for the 119th Congress. Confirmation hearings are expected to begin in the Senate for President-elect Trump’s nominees, although none in the healthcare space have been announced yet. On the regulatory front, CMS will publish the Medicare Advantage rate notice.

5 Trends to Watch: 2025 EU Data Privacy & Cybersecurity

Full Steam Ahead: The European Union’s (EU) Artificial Intelligence (AI) Act in Action — As the EU’s landmark AI Act officially takes effect, 2025 will be a year of implementation challenges and enforcement. Companies deploying AI across the EU will likely navigate strict rules on data usage, transparency, and risk management, especially for high-risk AI systems. Privacy regulators are expected to play a key role in monitoring how personal data is used in AI model training, with potential penalties for noncompliance. The interplay between the AI Act and the General Data Protection Regulation (GDPR) may add complexity, particularly for multinational organizations.
Network and Information Security Directive (NIS2) Matures: A New Era of Cybersecurity Regulation — The EU’s NIS2 Directive will enter its enforcement phase, expanding cybersecurity obligations for critical infrastructure and key sectors. Companies must adapt to stricter breach notification rules, risk management requirements, and supply-chain security mandates. Regulators are expected to focus on cross-border coordination in response to major incidents, with early cases likely setting important precedents. Organizations will likely face increasing scrutiny of their cybersecurity disclosures and incident response protocols.
The Evolution of Data Transfers: Toward a Unified Framework — After years of turbulence, 2025 may mark a turning point for transatlantic and global data flows. The EU-U.S. Data Privacy Framework will face ongoing reviews by the European Data Protection Board (EDPB) and potential legal challenges, but it offers a clearer path forward. Meanwhile, the EU may continue striking adequacy agreements with key trading partners, setting the stage for a harmonized approach to cross-border data transfers. Companies will need robust mechanisms, such as Standard Contractual Clauses and emerging Transfer Impact Assessments (TIAs), to maintain compliance.
Consumer Rights Expand Under the GDPR’s Influence — The GDPR continues to set the global benchmark for privacy laws, and 2025 will see the ripple effect of its influence as EU member states refine their own data protection frameworks. Enhanced consumer rights, such as the right to explanation in algorithmic decision-making and stricter opt-in requirements for data use, are anticipated. Regulators are also likely to target dark patterns and deceptive consent mechanisms, driving companies toward greater transparency in their user interfaces and data practices.
Digital Markets Act Meets GDPR: Privacy in the Platform Economy — The Digital Markets Act (DMA), fully enforceable in 2025, will bring sweeping changes to large online platforms, or “gatekeepers.” Interoperability mandates, restrictions on data combination across services, and limits on targeted advertising will intersect with GDPR compliance. The overlap between DMA and GDPR enforcement will challenge platforms to adapt their practices while balancing privacy obligations. This regulatory synergy may reshape data monetization strategies and set a precedent for digital market governance worldwide.

AI Versus MFA

Ask any chief information security officer (CISO), cyber underwriter or risk manager, or cybersecurity attorney what controls are critical for protecting an organization’s information systems, and you’ll likely find multifactor authentication (MFA) at or near the top of every list. Government agencies responsible for helping to protect the U.S. and its information systems and assets (e.g., CISA, FBI, Secret Service) send the same message. But that message may be evolving a bit as criminal threat actors have started to exploit weaknesses in MFA.
According to a recent report in Forbes, for example, threat actors are harnessing AI to break through multifactor authentication strategies designed to prevent new account fraud. “Know Your Customer” procedures are critical for validating the identity of customers in certain industries, such as financial services and telecommunications. Employers increasingly face similar issues when recruiting: they find, after making the hiring decision, that the person doing the work may not be the person who interviewed for the position.
Threat actors have leveraged a new AI deepfake tool, available on the dark web, to bypass the biometric systems that have been used to stop new account fraud. According to the Forbes article, the process goes something like this:
“1. Bad actors use one of the many generative AI websites to create and download a fake image of a person.
2. Next, they use the tool to synthesize a fake passport or a government-issued ID by inserting the fake photograph…
3. Malicious actors then generate a deepfake video (using the same photo) where the synthetic identity pans their head from left to right. This movement is specifically designed to match the requirements of facial recognition systems. If you pay close attention, you can certainly spot some defects. However, these are likely ignored by facial recognition because videos are prone to have distortions due to internet latency issues, buffering or just poor video conditions.
4. Threat actors then initiate a new account fraud attack where they connect a cryptocurrency exchange and proceed to upload the forged document. The account verification system then asks to perform facial recognition where the tool enables attackers to connect the video to the camera’s input.
5. Following these steps, the verification process is completed, and the attackers are notified that their account has been verified.”
Sophisticated AI tools are not the only MFA vulnerability. In December 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued best practices for mobile communications. Among its recommendations, CISA advised mobile phone users, in particular highly targeted individuals:
Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals.
In its 2023 Internet Crime Report, the FBI reported more than 1,000 “SIM swapping” investigations. A SIM swap is another technique by which threat actors use “unsophisticated social engineering techniques against mobile service providers to transfer a victim’s phone service to a mobile device in the criminal’s possession.”
In December, Infosecurity Magazine reported on another vulnerability in MFA. In fact, there are many reports about various vulnerabilities with MFA.
Are we recommending against the use of MFA? Certainly not. Our point is simply to offer a reminder that there are no silver bullets for securing information systems, and that AI is not only used by the good guys. An information security program, preferably a written one (a WISP), requires continuous vigilance, and not just from the IT department, as new technologies are leveraged to bypass older ones.

ESG and Supply Chains in 2024: Key Trends, Challenges, and Future Outlook

In 2024, supply chains remained a critical focal point for companies committed to environmental, social, and governance (ESG) principles. Given their significant contribution to a company’s environmental footprint and social impact, supply chains have become an essential area for implementing sustainable and ethical practices.
Advancements in technology, evolving regulatory frameworks, and innovative corporate strategies defined the landscape of ESG in supply chains this year. However, challenges such as data reliability, cost pressures, and geopolitical risks persisted in 2024. Here are seven observations highlighting progress, challenges, and potential future directions in ESG and supply chains.
1. Regulatory and Market Drivers
Governments and international organizations introduced stringent regulations in 2024, compelling companies to prioritize ESG considerations in their supply chains. These policies aimed to address environmental degradation, human rights abuses, and climate-related risks while fostering greater transparency and accountability.

EU’s Corporate Sustainability Due Diligence Directive (CSDDD): The European Union’s CSDDD came into force, mandating that companies operating in the EU identify, prevent, and mitigate adverse human rights and environmental impacts throughout their supply chains. This regulation required businesses to map their suppliers, assess risks, and implement corrective actions, driving improvements in traceability and supplier accountability.
U.S. Uyghur Forced Labor Prevention Act (UFLPA): In the United States, the Department of Homeland Security’s enforcement of the UFLPA intensified. This act targeted goods produced with forced labor, particularly in China’s Xinjiang region, and placed the burden of proof on companies to demonstrate compliance. Businesses were required to adopt rigorous traceability systems to ensure their products were free from forced labor.
Carbon Border Adjustment Mechanisms (CBAMs): Carbon tariffs, implemented by the EU and other regions, incentivized companies to measure and reduce the carbon intensity of imported goods. These mechanisms encouraged businesses to collaborate with suppliers to lower emissions and adopt cleaner technologies.

2. Advances in Supply Chain Traceability and Transparency
Technological innovations were central to advancing supply chain traceability and transparency, enabling companies to identify risks, ensure compliance, and improve sustainability performance.

Blockchain Technology: Blockchain emerged as a cornerstone of supply chain transparency. By creating immutable records of transactions and product origins, blockchain technology provided stakeholders with verifiable proof of ethical sourcing and environmental compliance. Companies used blockchain to authenticate claims about sustainability, such as the origin of raw materials and the environmental credentials of finished goods.
Artificial Intelligence (AI): AI played a transformative role in supply chain management, helping companies analyze supplier risks, predict disruptions, and optimize logistics for lower emissions. AI-powered tools also enabled real-time monitoring of supply chain activities, such as emissions tracking, labor compliance, and waste reduction.
Internet of Things (IoT): IoT sensors provided granular, real-time data on supply chain metrics, such as energy consumption, shipping efficiency, and waste generation. This technology enabled companies to address inefficiencies and enhance the sustainability of their operations.

3. Responsible Sourcing Practices
Responsible sourcing became a cornerstone of supply chain ESG efforts, with companies adopting ethical and sustainable procurement practices to address environmental and social risks.

Raw Material Sourcing: Businesses focused on sourcing raw materials like cobalt, palm oil, and timber from certified suppliers to ensure compliance with environmental and labor standards. Industry-specific certifications, such as the Forest Stewardship Council and the Roundtable on Sustainable Palm Oil, gained prominence.
Fair Trade and Ethical Labor: Companies partnered with organizations promoting fair wages, equitable treatment, and safe working conditions. Certifications like Fair Trade and Sedex Responsible Business Practices helped businesses verify their commitment to ethical labor practices throughout their supply chains.
Local Sourcing: To reduce carbon footprints and enhance supply chain resilience, some companies prioritized local sourcing of raw materials and components. This shift minimized emissions from transportation and provided economic support to local communities.

4. Decarbonizing Supply Chains
As companies pursued net-zero commitments, decarbonizing supply chains became a top priority in 2024. Key strategies included:

Supplier Engagement: Companies collaborated with suppliers to reduce emissions through energy efficiency measures, renewable energy adoption, and low-carbon manufacturing techniques.
Sustainable Logistics: Businesses invested in cleaner transportation methods, such as electric vehicles, hydrogen-powered trucks, and optimized shipping routes. The rise of “green corridors” for shipping exemplified collaborative efforts to decarbonize freight transport.
Circular Economy Integration: Companies embraced circular economy principles, focusing on reusing materials, designing for recyclability, and minimizing waste. Circular supply chains not only reduced environmental impact, but also created cost-saving opportunities and new revenue streams.

5. Challenges in ESG Supply Chain Management
Despite progress, companies faced significant challenges in implementing ESG principles across their supply chains.

Data Gaps and Inconsistencies: Collecting reliable ESG data from multitiered supply chains remains a critical hurdle. Smaller suppliers often lack the tools or expertise to comply with reporting requirements, leading to incomplete transparency and inconsistent metrics.
Cost Pressures: Implementing sustainable practices, such as adopting renewable energy or traceability technologies, requires significant upfront investment. These costs are particularly burdensome for small and medium-sized enterprises (SMEs) and create financial tension for larger companies trying to balance sustainability investments against competitive pricing.
Geopolitical Risks: Trade restrictions, regional conflicts, and sanctions disrupt global supply chains, complicating compliance with ESG regulations like forced labor bans or carbon tariffs. Navigating these challenges requires constant adaptation to volatile geopolitical landscapes.
Greenwashing Risks: Increasing regulatory and public scrutiny amplifies the consequences of unverified sustainability claims. Missteps in ESG disclosures expose companies to legal risks, reputational damage, and loss of stakeholder trust.
Supply Chain Complexity: Global supply chains are vast and intricate, often spanning multiple tiers and regions. Mapping these networks to monitor ESG compliance and identify risks such as labor violations or environmental harm is a resource-intensive challenge.
Technological Gaps Among Suppliers: While advanced technologies like blockchain improve traceability, many smaller suppliers lack access to these tools, creating disparities in ESG data collection and compliance across the supply chain.
Resistance to Change: Suppliers in regions with weaker regulatory frameworks often resist adopting ESG principles due to limited awareness, operational costs, or lack of incentives, requiring significant corporate investment in education and capacity-building.
Market Demand for Low-Cost Goods: Consumer demand for affordable products often conflicts with the higher costs of implementing sustainable practices, especially in competitive industries such as fast fashion and consumer electronics.
Resource Scarcity and Climate Impacts: Extreme weather events, rising energy costs, and material shortages – exacerbated by climate change – disrupt supply chains and increase the difficulty of maintaining ESG commitments.
Measurement and Reporting Challenges: A lack of universally accepted metrics for critical ESG indicators, such as Scope 3 emissions or biodiversity impact, complicates efforts to measure progress and report transparently across supply chains.
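Where metrics do exist, most carbon accounting tools reduce to the same activity-based formula: emissions equal activity data multiplied by an emission factor. The sketch below applies that formula to hypothetical Scope 3 activity records; the factors shown are illustrative placeholders, since real programs pull them from published factor databases or supplier-specific data.

```python
# Activity-based estimate: emissions = activity quantity * emission factor.
# All factors below are illustrative placeholders, not published values.
EMISSION_FACTORS_KG_CO2E = {
    ("road_freight", "tonne_km"): 0.11,
    ("purchased_steel", "kg"): 1.9,
    ("purchased_electricity", "kWh"): 0.4,
}

def estimate_scope3_kg(activities: list[tuple[str, str, float]]) -> float:
    """Sum kg CO2e across (category, unit, quantity) activity records."""
    return sum(
        quantity * EMISSION_FACTORS_KG_CO2E[(category, unit)]
        for category, unit, quantity in activities
    )

activities = [
    ("road_freight", "tonne_km", 12_000),
    ("purchased_steel", "kg", 50_000),
    ("purchased_electricity", "kWh", 80_000),
]
print(f"{estimate_scope3_kg(activities) / 1000:.1f} tonnes CO2e")  # 128.3
```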

6. Leading Examples of ESG-Driven Supply Chains
In 2024, several organizations across various industries demonstrated innovative approaches to integrating ESG principles into their supply chains. These efforts highlighted best practices in sustainability, transparency, and ethical procurement, including a number of the recent advances noted above.

Outdoor Apparel Brand: A leading outdoor apparel company prioritized fair labor practices and the reduction of environmental impacts in its supply chain. The brand collaborates with suppliers and other brands to develop and use tools that measure and communicate environmental impacts, allowing for industry-wide benchmarking and large-scale improvement.
Global Food and Beverage Producer: A major food and beverage producer expanded its regenerative agriculture program by collaborating with farmers to enhance soil health, reduce greenhouse gas emissions, and promote biodiversity. Additionally, the company leveraged blockchain technology to ensure traceability in its supply chains for commodities such as coffee and cocoa, strengthening its commitment to sustainability.
Global Furniture Retailer: A prominent furniture retailer invested heavily in renewable energy and circular design principles to decarbonize its supply chain by reducing, replacing, and rethinking. A formal due diligence system employs dozens of wood supply and forestry specialists to ensure that wood is sourced from responsibly managed forests.
Multinational Technology Company: A technology giant implemented energy-efficient practices across its supply chain, including transitioning to renewable energy sources for manufacturing facilities and using AI-powered tools to optimize logistics, with a goal of becoming carbon neutral across its entire supply chain by 2030.
Consumer Goods Manufacturer: A global consumer goods manufacturer introduced water-saving technologies into its supply chain, particularly in regions facing water scarcity. The company also prioritized reducing plastic waste by incorporating recycled materials into its packaging and partnering with local recycling initiatives.
Global Shipping Firm: A logistics and shipping company adopted low-carbon transportation technologies, such as green fuel for its vessels, decarbonized container terminals, electric vehicles for landside transport, and optimized routes that minimize emissions. The firm also collaborated with industry partners to develop “green corridors” that support cleaner and more sustainable freight transport.

7. Future Directions in ESG and Supply Chains
Integrating ESG principles into supply chain management is expected to continue evolving, with the following trends among those shaping the future:

AI-Powered Supply Chains: Artificial intelligence will transform supply chain management by predicting risks, optimizing logistics, and enhancing sustainability. Advanced analytics will enable businesses to identify inefficiencies and implement targeted improvements, reducing emissions and ensuring ethical practices. There will, however, be challenges accounting for the growing number of laws and regulations worldwide governing AI’s use and development.
Circular Economy Models: Supply chains will embrace circular economy principles, focusing on waste reduction, material reuse, and extended product life cycles. Closed-loop systems and upcycling initiatives will mitigate environmental impacts while creating new revenue streams.
Blockchain-Enabled Certification Programs: Blockchain technology will enhance transparency and accountability by providing real-time verification of ESG metrics, such as emissions reductions and ethical sourcing. This will foster trust among consumers, investors and regulators.
Supply Chain Readiness Level (SCRL) Analysis: ESG benefits will continue to flow from the steps the Biden Administration has taken over the past four years to strengthen America’s supply chains. In addition, the Department of Energy’s Office of Manufacturing and Energy Supply Chains recently rolled out its SCRL tool to evaluate global energy supply chain needs and gaps, quantify and eliminate risks and vulnerabilities, and strengthen U.S. energy supply chains; the tool is expected to facilitate the decarbonization of supply chains.
Decentralized Energy Solutions: Decentralized energy systems, including on-site renewable energy installations and energy-sharing networks, will reduce dependence on traditional power grids. These solutions will decarbonize supply chains while promoting sustainability.
Nature-Based Solutions: Supply chains will integrate nature-based approaches, such as agroforestry partnerships and wetland restoration, to enhance biodiversity and provide environmental services like carbon sequestration and water filtration.
Advanced Water Stewardship: Companies will adopt innovative water management practices, including water recycling technologies and watershed restoration projects, to address water scarcity and ensure sustainable supplies for all stakeholders.
Scope 3 Emissions Reduction: Businesses will prioritize reducing emissions across their value chains by collaborating with suppliers, setting science-based targets, and implementing robust carbon accounting tools.
Industry-Wide Collaboration Platforms: Collaborative platforms will enable companies to share sustainability data and best practices and develop sector-specific solutions. This approach will help address systemic challenges, such as decarbonizing aviation or achieving sustainable fashion production.

Developments in ESG and supply chains in 2024 reflect a growing recognition of their critical role in achieving sustainability goals. From enhanced regulatory frameworks and technological innovations to responsible sourcing and decarbonization efforts, companies are making strides toward more sustainable and ethical supply chains.
However, challenges such as data gaps, cost pressures, and geopolitical risks highlight the complexities of this transformation. By addressing these issues and embracing future opportunities, businesses can create resilient, transparent, and sustainable supply chains that drive both business success and environmental and social progress.

Legislative Update: 119th Congress Outlook on AI Policy

House Looks To Rep. Obernolte to Take Lead on AI
Representative Jay Obernolte (R-Calif.) has emerged as a pivotal figure in shaping the United States’ legislative response to artificial intelligence (AI). With a rare combination of technical expertise and political acumen, Obernolte’s leadership is poised to influence how Congress navigates both the opportunities and risks associated with AI technologies.
AI Expertise and Early Influence
Obernolte’s extensive background in AI distinguishes him among his congressional peers. With a graduate degree in AI and decades of experience as a technology entrepreneur, he brings firsthand knowledge to the legislative arena.
As the chair of a bipartisan House AI task force, Obernolte spearheaded the creation of a comprehensive roadmap addressing AI’s societal, economic, and national security implications. The collaborative environment he fostered, eschewing traditional seniority-based hierarchies, encouraged open dialogue and thoughtful debate among members. Co-chair Rep. Ted Lieu (D-Calif.) and other task force participants praised Obernolte’s inclusive approach to consensus building.
Legislative Priorities and Policy Recommendations
Obernolte’s leadership produced a robust policy framework encompassing:

Expanding AI Resource Accessibility: Advocating for broader access to AI tools for academic researchers and entrepreneurs to prevent monopolization of research by private companies.
Combatting Deepfake Harms: Supporting legislative efforts to address non-consensual explicit deepfakes, a growing issue affecting young people nationwide. Notably, he backed H.R. 5077 and H.R. 7569, which are expected to resurface in the 119th Congress.
Balancing Regulation and Innovation: Striving to create a regulatory environment that protects the public while fostering AI innovation.
National Data Privacy Standards: Advocating for comprehensive data privacy legislation to safeguard consumer information.
Advancing Quantum Computing: Supporting initiatives to enhance quantum technology development.

Maintaining Bipartisanship
Obernolte emphasizes the importance of bipartisan collaboration, a principle he upholds through relationship-building initiatives, including informal gatherings with task force members. His bipartisan approach is vital in developing durable AI regulations that endure beyond shifting political majorities. Speaker Mike Johnson (R-La.) recognized Obernolte’s ability to bridge divides, entrusting him with the leadership role.
Obernolte acknowledges the difficulty of balancing immediate GOP priorities, such as confirming Cabinet appointments and advancing tax reform, with the urgent need for AI governance. His strategy involves convincing leadership that AI policy proposals are well-reasoned and broadly supported.
Senate Republicans 119th Roadmap on AI
As the 119th Congress convenes under Republican leadership, Senate Republicans are expected to approach artificial intelligence (AI) legislation with a focus on fostering innovation while exercising caution regarding regulatory measures. This perspective aligns with the broader GOP emphasis on minimal government intervention in technology sectors.
Legislative Landscape and Priorities
During the 118th Congress, the Senate Bipartisan AI Working Group, which included Republican Senators Mike Rounds (R-S.D.) and Todd Young (R-Ind.), released a policy roadmap titled “Driving U.S. Innovation in Artificial Intelligence.” This document outlined strategies to promote AI development, address national security concerns, and consider ethical implications.
In the 119th Congress, Senate Republicans are anticipated to prioritize:

Promoting Innovation: Advocating for policies that support AI research and development to maintain the United States’ competitive edge in technology.
National Security: Focusing on the implications of AI in defense and security, ensuring that advancements do not compromise national safety.
Economic Growth: Encouraging the integration of AI in various industries to stimulate economic development and job creation.

Regulatory Approach
Senate Republicans generally favor a cautious approach to AI regulation, aiming to avoid stifling innovation. There is a preference for industry self-regulation and the development of ethical guidelines without extensive government mandates. This stance reflects concerns that premature or overly restrictive regulations could hinder technological progress and economic benefits associated with AI.
Bipartisan Considerations
While Republicans hold the majority, bipartisan collaboration remains essential for passing comprehensive AI legislation. Areas such as national security and economic competitiveness may serve as common ground for bipartisan efforts. However, topics like AI’s role in misinformation and election integrity could present challenges due to differing party perspectives on regulation and free speech.
Conclusion
In both the House and Senate, Republicans are approaching AI legislation with a focus on fostering innovation, enhancing national security, and promoting economic growth. Their preference leans toward industry self-regulation and minimal government intervention to avoid stifling innovation. Areas like national security offer potential bipartisan common ground, though debates around misinformation and election integrity may highlight partisan divides.
With House and Senate Republicans already working on a likely massive reconciliation package focused on top Republican priorities including tax, border security, and energy, AI advocates will be hard-pressed to ensure their legislative goals find space in the final text.

Change Management: How to Finesse Law Firm Adoption of Generative AI

Law firms today face a turning point. Clients demand more efficient, cost-effective services; younger associates are eager to leverage the latest technologies for legal tasks; and partners try to reconcile tradition with agility in a highly competitive marketplace. Generative artificial intelligence (AI), known for its capacity to produce novel content and insights, has emerged as a solution that promises better efficiency, improved work quality, and a real opportunity to differentiate the firm in the marketplace. Still, the question remains:
How can a law firm help its attorneys and staff to embrace AI while safeguarding the trust, ethical integrity, and traditional practices that lie at the heart of legal work?
Andrew Ng’s AI Transformation Playbook offers a valuable framework for introducing AI in ways that minimize risk and maximize organizational acceptance. Adopting these principles in a law-firm setting involves balancing the profession’s deep-seated practices with the potential of AI. From addressing cultural resistance to crafting a solid technical foundation, a thoughtful change-management plan is necessary for a sustainable and successful transition.

Overcoming Skepticism Through Pilot Projects

Law firms, governed by partnership models and a respect for precedent, tend to approach innovation cautiously. Partners who built their careers through meticulous research may worry that machine-generated insights compromise rigor and reliability. Associates might fear an AI-driven erosion of the apprenticeship model, wondering if their role will shrink as technology automates certain tasks. Concerns also loom regarding the firm’s reputation if clients suspect crucial responsibilities are being delegated to a mysterious black box.
The most direct method of quelling these doubts is to show proof of concept. Andrew Ng’s approach suggests starting with small, well-defined projects before scaling firm-wide. This tactic acknowledges that, with each successful pilot, more people become comfortable with technology that once felt like a threat. By methodically testing AI in narrower use cases, the firm ensures data security and strict confidentiality protocols remain intact. Early wins become the foundation for broader adoption.
Pilot projects help transform abstract AI potential into tangible benefits. One example is using AI to produce first drafts of nondisclosure agreements; attorneys then refine these drafts, focusing on subtle nuances rather than repetitive details. Another natural entry point is e-discovery, where AI can sift through thousands of documents to categorize and surface relevant information more efficiently than human-only reviews. Each of these use cases is a manageable experiment. If AI truly delivers faster turnaround times and maintains accuracy, it provides evidence that can persuade skeptical stakeholders. Pilots also offer an opportunity to identify challenges, such as user training gaps or hiccups in data management, on a small scale before the technology is rolled out more broadly.

Creating a Dedicated AI Team

One of the first steps is assembling a cross-functional leadership group that aligns AI initiatives with overarching business objectives. This team typically includes partners who can advocate for AI at leadership levels, associates immersed in daily work processes, IT professionals responsible for infrastructure and cybersecurity, and compliance officers ensuring adherence to ethical mandates.
In large firms, a Chief AI Officer or Director of Legal Innovation may coordinate these efforts. In smaller firms, a few technology-minded attorneys might share multiple roles. The key is that this group does more than evaluate software. It crafts data governance policies, designs training programs, secures necessary budgets, and proactively tackles any ethical, reputational, or practical concerns that arise when introducing a technology as potentially disruptive as AI.

Training as the Core of Transformation

AI has limited value if the firm’s workforce does not know how to wield it effectively. Training must go beyond simple “tech demos,” offering interactive sessions in which legal professionals can apply AI tools to realistic tasks. For example, attorneys may practice using the system to draft a client memo or summarize case law. These hands-on experiences remove the mystique surrounding AI, giving participants a concrete understanding of its capabilities and boundaries.
Lawyers also need guidelines for verifying the AI’s output. Legally binding documents or briefs cannot be signed off without sufficient human oversight. For that reason, law firms often designate a “review attorney” role in the AI workflow, ensuring that each AI-generated product passes through a person who confirms it meets the firm’s rigorous standards. Partners benefit from shorter, strategically focused sessions that highlight how AI can influence client satisfaction, create new revenue streams, or boost efficiency in critical operations.
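The review-gate idea can be made concrete in a few lines. The sketch below is a hypothetical illustration (the names and workflow states are ours, not any particular firm’s system) of a document pipeline that refuses to finalize an AI-generated draft until the assigned review attorney signs off.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    AI_DRAFT = auto()
    IN_REVIEW = auto()
    APPROVED = auto()

@dataclass
class Document:
    title: str
    body: str
    status: Status = Status.AI_DRAFT
    reviewer: str | None = None

def submit_for_review(doc: Document, review_attorney: str) -> None:
    """Route an AI-generated draft to the designated review attorney."""
    doc.reviewer = review_attorney
    doc.status = Status.IN_REVIEW

def approve(doc: Document, attorney: str) -> None:
    """The gate: only the assigned review attorney can finalize the draft."""
    if doc.status is not Status.IN_REVIEW or attorney != doc.reviewer:
        raise PermissionError("AI output requires sign-off by the assigned review attorney")
    doc.status = Status.APPROVED

draft = Document("NDA - first draft", "AI-generated text ...")
submit_for_review(draft, review_attorney="j.smith")
approve(draft, attorney="j.smith")
assert draft.status is Status.APPROVED
```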

Developing a Coherent AI Strategy

Once the firm achieves early successes with pilot programs and begins to see a measurable return on smaller AI projects, it is time to formulate a broader vision. This strategic blueprint should identify the highest-value areas for further application of AI, whether it involves automating client intake, deploying predictive analytics for litigation, or streamlining contract drafting at scale. The key is to match AI initiatives with the firm’s core goals—boosting client satisfaction, refining operational efficiency, and ultimately reinforcing its reputation for accurate, ethical service.
But the firm’s AI strategy should never become a static directive. It must grow with the firm’s internal expertise, adjusting to real-world results, regulatory changes, and emerging AI capabilities. By regularly re-evaluating milestones and expected outcomes, the firm ensures its AI investments remain both relevant and impactful in serving clients’ evolving needs.

Communicating to Foster Trust and Transparency 

Change management thrives on dialogue. Andrew Ng’s playbook underscores the importance of transparent communication, especially in fields sensitive to reputational risk. Law firms can apply this principle by hosting informal gatherings where early adopters share their experiences—both positive and negative. These stories have a dual effect: they highlight successes that validate the technology, and they candidly address difficulties to keep expectations realistic.
Newsletters, lunch-and-learns, and internal portals all help disseminate updates and insights across different practice areas. Firms that operate multiple offices often hold virtual town halls, ensuring that attorneys and support staff everywhere can stay informed. Externally, clarity matters too. Clients who understand that a firm is leveraging AI to improve speed and accuracy (while retaining key ethical safeguards) are more likely to view the decision as innovative rather than risky.
Closing Thoughts
AI holds remarkable promise for law firms, but its full value emerges only through conscientious change management, which hinges on balancing the needs and concerns of diverse stakeholders. Nothing succeeds like success. By implementing small pilot projects, assembling an AI leadership team, focusing on thorough training, crafting a compelling business strategy, and clearly communicating its vision, a law firm can mitigate risks and harness AI’s transformative power.
The best outcomes result not from viewing AI as a magical shortcut, but from recognizing it as a partner that handles repetitive tasks and surfaces insights more swiftly than humans alone. This frees lawyers to direct their intellect and creativity toward high-level endeavors that deepen client relationships, identify new opportunities, and advance compelling arguments. When fused with a commitment to the highest professional and ethical standards, AI can become a catalyst for a dynamic and fruitful future—one where law firms deliver better service, operate more efficiently, and remain steadfastly true to their professional roots.

The BR Privacy & Security Download: January 2025

Must Read! The U.S. Department of Health and Human Services Office for Civil Rights recently proposed an amendment to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information. Read the full alert to learn more about the first significant update to HIPAA’s Security Rule in over a decade. Read More > >

STATE & LOCAL LAWS & REGULATIONS
Five New State Comprehensive Privacy Laws Effective in January with Three More to Follow in 2025: With the start of the new year, five new state comprehensive privacy laws take effect in January. The comprehensive privacy laws of Delaware, Iowa, Nebraska, and New Hampshire became effective on January 1, 2025, and New Jersey’s law will come into effect on January 15, 2025. Tennessee, Minnesota, and Maryland will follow suit and take effect on July 1, 2025, July 31, 2025, and October 1, 2025, respectively. Companies should review their privacy compliance programs to identify potential compliance gaps amid the increasing patchwork of state laws.
Colorado Adopts Amendments to CPA Rules: The Colorado Attorney General announced the adoption of amendments to the Colorado Privacy Act (“CPA”) rules. The rules will become effective on January 30, 2025. The rules provide enhanced protections for the processing of biometric data as well as the processing of the online activities of minors. Specifically, companies must develop and implement a written biometric data policy, implement appropriate security measures regarding biometric data, provide notice of the collection and processing of biometric data, obtain employee consent for the processing of biometric data, and provide a right of access to such data. In the context of minors, the amendments require that entities obtain consent prior to using any system design feature designed to significantly increase the use of an online service by a known minor, and that they update their data protection assessments to address processing that presents heightened risks to minors. Entities already subject to the CPA should carefully review whether they may have heightened obligations for the processing of employee biometric data, a category of data previously exempt from the scope of the CPA.
CPPA Announces Increased Fines and Penalties Under CCPA: The California Privacy Protection Agency (“CPPA”), the enforcement authority of the California Consumer Privacy Act (“CCPA”), has adjusted the fines and monetary thresholds of the CCPA. Under the CCPA, in January of every odd-numbered year, the CPPA must make this adjustment to account for changes in the Consumer Price Index. The CPPA increased the CCPA’s annual gross revenue threshold for covered businesses from $25,000,000 to $26,625,000. It also increased the range of statutory damages from $100 to $750 per consumer per incident (or actual damages, whichever is greater) to $107 to $799. Civil penalties and administrative fines likewise increased from $2,500 for each violation and $7,500 for each intentional violation or violation involving the personal information of consumers under 16 to $2,663 and $7,988, respectively. The new amounts went into effect on January 1, 2025.
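The published figures are consistent with an adjustment factor of roughly 6.5%. The short sketch below reproduces the updated amounts under that assumption, with half-up rounding to whole dollars; the factor and rounding convention are our inference from the numbers, not a stated CPPA methodology.

```python
from decimal import Decimal, ROUND_HALF_UP

FACTOR = Decimal("1.065")  # assumed CPI adjustment inferred from the published amounts

def adjust(amount: int) -> int:
    """Apply the assumed CPI factor and round half-up to whole dollars."""
    return int((Decimal(amount) * FACTOR).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

for old in (25_000_000, 100, 750, 2_500, 7_500):
    print(f"${old:,} -> ${adjust(old):,}")
# $25,000,000 -> $26,625,000; $100 -> $107; $750 -> $799;
# $2,500 -> $2,663; $7,500 -> $7,988 (matching the updated amounts)
```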
Connecticut State Senator Previews Proposed Legislation to Update State’s Comprehensive Privacy Law: Connecticut State Senator James Maroney (D) has announced that he is drafting a proposed update to the Connecticut Privacy Act that would expand its scope, provide enhanced data subject rights, include artificial intelligence (“AI”) provisions, and potentially eliminate certain exemptions currently available under the Act. Senator Maroney expects that the proposed bill could receive a hearing by late January or early February. Although Maroney has not published a draft, he indicated that the draft would likely (1) reduce the compliance threshold from the processing of the personal data of 100,000 consumers to 35,000 consumers; (2) include AI anti-discrimination measures, potentially in line with recent anti-discrimination requirements in California and Colorado; (3) expand the definition of sensitive data to include religious beliefs and ethnic origin, in line with other state laws; (4) expand the right to access personal data under the law to include a right to access a list of third parties to whom personal data was disclosed, mirroring similar rights in Delaware, Maryland, and Oregon; and (5) potentially eliminate or curtail categorical exemptions under the law, such as that for financial institutions subject to the Gramm-Leach-Bliley Act. 
CPPA Endorses Browser Opt-Out Law: The CPPA’s board voted to sponsor a legislative proposal that would make it easier for California residents to exercise their right to opt out of the sale of personal information and sharing of personal information for cross-context behavioral advertising purposes. Last year, Governor Newsom vetoed legislation with the same requirements. Like last year’s vetoed legislation, the proposal sponsored by the CPPA would require browser vendors to include a feature that allows users to exercise their opt-out right through opt-out preference signals. Under the CCPA, businesses are required to honor opt-out preference signals as valid opt-out requests. Opt-out preference signals allow a consumer to exercise their opt-out right with all businesses they interact with online without having to make individualized requests with each business. If the proposal is adopted, California would be the first state to require browser vendors to offer consumers the option to enable these signals. Six other states (Colorado, Connecticut, Delaware, Montana, Oregon, and Texas) require businesses to honor browser privacy signals as an opt-out request.
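The most widely deployed opt-out preference signal is the Global Privacy Control (“GPC”), which participating browsers transmit with each request as the HTTP header Sec-GPC: 1. As a minimal, illustrative sketch (the function name and surrounding plumbing are hypothetical), a business’s web tier could detect the signal like this and suppress sale or sharing for that consumer:

```python
def opt_out_signal_present(headers: dict[str, str]) -> bool:
    """Detect a Global Privacy Control signal (the `Sec-GPC: 1` header).

    HTTP header names are case-insensitive, so normalize before checking.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"

# Example request from a browser with the signal enabled.
request_headers = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
if opt_out_signal_present(request_headers):
    # Treat as a valid opt-out: stop selling/sharing this consumer's
    # personal information for cross-context behavioral advertising.
    print("Opt-out preference signal detected")
```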

FEDERAL LAWS & REGULATIONS
HHS Proposes Updates to HIPAA Security Rule: The U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) issued a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information (“ePHI”). The NPRM proposes the first significant updates to HIPAA’s Security Rule in over a decade. The NPRM makes a number of updates to the administrative, physical, and technical safeguards specified by the Security Rule, removes the distinction between “required” and “addressable” implementation specifications, and makes all implementation specifications “required” with specific, limited exceptions. 
Trump Selects Andrew Ferguson as New FTC Chair: President-elect Donald Trump has selected current Federal Trade Commission (“FTC”) Commissioner Andrew Ferguson to replace Lina Khan as the new FTC Chair. Ferguson is one of two Republicans among the five FTC Commissioners and has served as a Commissioner since April 2024. Prior to becoming an FTC Commissioner, Ferguson served as Virginia’s solicitor general. During his time as an FTC Commissioner, Ferguson dissented from several of Khan’s rulemaking efforts, including a ban on non-compete clauses in employment contracts. Separately, Trump also selected Mark Meador to be an FTC Commissioner. Once Meador is confirmed, giving the FTC a Republican majority, a Republican-led FTC under Ferguson may deprioritize rulemaking and enforcement efforts relating to privacy and AI. In a leaked memo first reported by Punchbowl News, Ferguson wrote to Trump that, under his leadership, the FTC would “stop abusing FTC enforcement authorities as a substitute for comprehensive privacy legislation” and “end the FTC’s attempt to become an AI regulator.”
FERC Updates and Consolidates Cybersecurity Requirements for Gas Pipelines: The U.S. Federal Energy Regulatory Commission (“FERC”) has issued a final rule to update and consolidate cybersecurity requirements for interstate natural gas pipelines. Effective February 7, 2025, the rule adopts Version 4.0 of the Standards for Business Practices of Interstate Natural Gas Pipelines, as approved by the North American Energy Standards Board (“NAESB”). This update aims to enhance the efficiency, reliability, and cybersecurity of the natural gas industry. The new standards consolidate existing cybersecurity protocols into a single manual, streamlining processes and strengthening protections against cyber threats. This consolidation is expected to make it easier and faster to revise cybersecurity standards in response to evolving threats. The rule also aligns with broader U.S. government efforts to prioritize cybersecurity across critical infrastructure sectors. Compliance filings are required by February 3, 2025, and the standards must be fully adhered to by August 1, 2025.
House Taskforce on AI Delivers Report to Address AI Advancements: The House Bipartisan Task Force on Artificial Intelligence (the “Task Force”) submitted its comprehensive report to Speaker Mike Johnson and Democratic Leader Hakeem Jeffries. The Task Force was created to ensure America’s continued global leadership in AI innovation with appropriate safeguards. The report advocates for a sectoral regulatory structure and an incremental approach to AI policy, ensuring that humans remain central to decision-making processes. The report provides a blueprint for future Congressional action to address advancements in AI and articulates guiding principles for AI adoption, innovation, and governance in the United States. Key areas covered in the report include government use of AI, federal preemption of state AI law, data privacy, national security, research and development, civil rights and liberties, education and workforce development, intellectual property, and content authenticity. The report aims to serve as a roadmap for Congressional action, addressing the potential of AI while mitigating its risks.
CFPB Proposes Rule to Restrict Sale of Sensitive Data: The Consumer Financial Protection Bureau (“CFPB”) proposed a rule that would require data brokers to comply with the Fair Credit Reporting Act (“FCRA”) when selling income and certain other consumer financial data. CFPB Director Rohit Chopra stated the new proposed rule seeks to limit “widespread evasion” of the FCRA by data brokers when selling sensitive personal and financial information of consumers. Under the proposed rule, data brokers could sell financial data only for permissible purposes under the FCRA, including checking on loan applications and fraud prevention. The proposed rule would also limit the sale of personally identifying information known as credit header data, which can include basic demographic details, including names, ages, addresses, and phone contacts. 
FTC Issues Technology Blog on Mitigating Security Risks through Data Management, Software Development and Product Design: The Federal Trade Commission (“FTC”) published a blog post identifying measures that companies can take to limit the risks of data breaches. These measures relate to security in data management, security in software development, and security in product design for humans. The FTC emphasizes comprehensive governance measures for data management, including (1) enforcing mandated data retention schedules; (2) mandating data deletion in accordance with these schedules; (3) controlling third-party data sharing; and (4) encrypting sensitive data both in transit and at rest. In the context of security in software development, the FTC identified (1) building products using memory-safe programming languages; (2) rigorous testing, including penetration and vulnerability testing; and (3) securing external product access to prevent unauthorized remote intrusions as key security measures. Finally, in the context of security in product design for humans, the FTC identified (1) enforcing least privilege access controls; (2) requiring phishing-resistant multifactor authentication; and (3) designing products and services without the use of dark patterns to reduce the over-collection of data. The blog post contains specific links to recent FTC enforcement actions specifically addressing each of these issues, providing users with insight into how the FTC has addressed these issues in the past. Companies reviewing their security and privacy governance programs should ensure that they consider these key issues.
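As one concrete illustration of the data-management measures (a mandated retention schedule with enforced deletion), the sketch below purges records once they outlive their scheduled retention period. The table names, periods, and schema are hypothetical, and this is a simplified sketch rather than a complete retention program.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: table name -> maximum age in days.
RETENTION_DAYS = {"marketing_contacts": 365, "support_tickets": 730}

def purge_expired(conn: sqlite3.Connection) -> None:
    """Delete rows whose created_at timestamp exceeds the retention period."""
    for table, days in RETENTION_DAYS.items():
        cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
        conn.execute(f"DELETE FROM {table} WHERE created_at < ?", (cutoff,))
    conn.commit()

# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
for table in RETENTION_DAYS:
    conn.execute(f"CREATE TABLE {table} (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.execute("INSERT INTO marketing_contacts (created_at) VALUES ('2020-01-01T00:00:00+00:00')")
purge_expired(conn)
assert conn.execute("SELECT COUNT(*) FROM marketing_contacts").fetchone()[0] == 0
```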

U.S. LITIGATION
Texas District Court Prevents HHS from Enforcing Reproductive Health Privacy Rule Against Doctor: The U.S. District Court for the Northern District of Texas ruled that a Texas doctor is likely to prevail on her claim that HHS exceeded its statutory authority when it adopted an amendment to the Health Insurance Portability and Accountability Act (“HIPAA”) Privacy Rule that protects reproductive health care information, and enjoined HHS from enforcing the rule against her. The 2024 amendment to the HIPAA Privacy Rule prohibits covered entities from disclosing information that could lead to an investigation or criminal, civil, or administrative liability for seeking, obtaining, providing, or facilitating reproductive health care. The Court stated that the rule likely unlawfully interfered with the plaintiff’s state-law duty to report suspected child abuse, in violation of Congress’s delegation to the agency to enact rules interpreting HIPAA without limiting any law providing for such reporting. The plaintiff argued that, under Texas law, she is obligated to report instances of child abuse within 48 hours, and that relevant requests from Texas regulatory authorities demand the full, unredacted patient chart, which for female patients includes information about menstrual periods, number of pregnancies, and other reproductive health information.
Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement: A proposed settlement in the Clearview AI Illinois Biometric Information Privacy Act (“BIPA”) litigation is facing opposition from 22 states and the District of Columbia. The Attorneys General of each state argue that the settlement, which received preliminary approval in June 2024, lacks meaningful injunctive relief and offers an unusual financial stake in Clearview AI to plaintiffs. The settlement would grant the class of consumers a 23 percent stake in Clearview AI, potentially worth $52 million, based on a September 2023 valuation. Alternatively, the class could opt for 17 percent of the company’s revenue through September 2027. The AGs contend the settlement doesn’t adequately address consumer privacy concerns and the proposed 39 percent attorney fee award is excessive. Clearview AI has filed a motion to dismiss the states’ opposition, arguing it was submitted after the deadline for objections. A judge will consider granting final approval for the settlement at a hearing scheduled on January 30, 2025.
Federal Court Upholds New Jersey’s Daniel’s Law, Dismissing Free Speech Challenges: A federal judge affirmed the constitutionality of New Jersey’s Daniel’s Law, dismissing First Amendment objections raised by data brokers. Enacted following the murder of Daniel Anderl, son of U.S. District Judge Esther Salas, the law permits covered individuals—including active, retired, and former judges, prosecutors, law enforcement officers, and their families—to request the removal of personal details, such as home addresses and unpublished phone numbers, from online platforms. Data brokerage firms that receive such requests are mandated by the statute to comply within ten (10) business days, with penalties for non-compliance including actual damages or a $1,000 fine for each violation, as well as potential punitive damages for instances of willful disregard. Notably, in 2023, Daniel’s Law was amended to allow claim assignments to third parties, resulting in over 140 lawsuits filed by a single consumer data protection company: Atlas Data Privacy Corporation. Atlas Data, a New Jersey firm specializing in data deletion, has emerged as a significant force in this litigation, utilizing Daniel’s Law to challenge data brokers on behalf of around 19,000 individuals. The court, in upholding Daniel’s Law, emphasized its critical role in safeguarding public officials while ensuring that public oversight remains strong. Although data brokers contended that the law infringed on free speech and unfairly targeted their operations, the court dismissed these claims as lacking merit, placing significant emphasis on the statute’s relatively focused scope and the substantial state interest at play. Although the ruling is unquestionably a significant victory for advocates of privacy rights, the judge permitted the data brokers to take an immediate appeal.
GoodRx Settles Class Action Suit Over Alleged Data Sharing Violations: GoodRx has agreed to a $25 million settlement in a class-action lawsuit alleging the company violated privacy laws by sharing users’ sensitive health data with advertisers like Meta Platforms, Google, and Criteo Corp. The settlement, if approved, would resolve a lawsuit filed in February 2023. The lawsuit followed an FTC action alleging that GoodRx shared information about users’ prescriptions and health conditions with advertising companies. GoodRx settled the FTC matter for $1.5 million. The proposed class in the class-action lawsuit is estimated to be in the tens of millions and would give each class member an average recovery ranging from $3.31 to $11.03. The settlement also allows the plaintiffs to use information from GoodRx to pursue their claims against the other defendants, including Meta, Google, and Criteo.
23andMe Data Breach Suit Settlement Approved: A federal judge approved a settlement to resolve claims that 23andMe Inc. failed to secure sensitive personal data, resulting in a 2023 data breach. According to 23andMe, a threat actor was able to access roughly 14,000 user accounts through credential stuffing, which further enabled access to the personal information that approximately 6.9 million users made available through 23andMe’s DNA Relative and Family Tree profile features. Under the terms of the $30 million settlement, class members will receive cash compensation and three years of data monitoring services, including genetic services.
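Credential stuffing, replaying username/password pairs leaked from other services, has a telltale signature: a single source generating failed logins across many distinct accounts. The sketch below shows one simple detection heuristic; the threshold and names are illustrative, and production defenses layer this with rate limiting, breached-password checks, and multifactor authentication.

```python
from collections import defaultdict

DISTINCT_ACCOUNT_LIMIT = 25  # illustrative threshold, tuned in practice
failed_accounts_by_source: dict[str, set[str]] = defaultdict(set)

def record_failed_login(source_ip: str, username: str) -> bool:
    """Return True when one source has failed logins across enough distinct
    accounts to look like credential stuffing rather than a typo-prone user."""
    failed_accounts_by_source[source_ip].add(username)
    return len(failed_accounts_by_source[source_ip]) >= DISTINCT_ACCOUNT_LIMIT

suspicious = False
for i in range(30):  # simulated attack traffic from one address
    suspicious = record_failed_login("203.0.113.7", f"user{i}@example.com")
print("block source and require MFA" if suspicious else "ok")
```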

U.S. ENFORCEMENT
FTC Takes Action Against Company for Deceptive Claims Regarding Facial Recognition Software: The Federal Trade Commission (“FTC”) announced that it has entered into a settlement with IntelliVision Technologies Corp. (“IntelliVision”), which provides facial recognition software used in home security systems and smart home touch panels. The FTC alleged that IntelliVision’s claims that it had one of the highest accuracy rates on the market, that its software was free of gender or racial bias, and that it was trained on millions of faces were false or misleading. The FTC further alleged that IntelliVision did not have adequate evidence to support its claim that its anti-spoofing technology ensures the system cannot be tricked by a photo or video image. The proposed order specifically prohibits IntelliVision from misrepresenting the effectiveness, accuracy, or lack of bias of its facial recognition technology and its technology to detect spoofing, and the comparative performance of the technology with respect to individuals of different genders, ethnicities, and skin tones.
FTC Settles Enforcement Actions with Data Brokers for Selling Sensitive Location Data: The FTC announced settlements with data brokers Gravy Analytics Inc. (“Gravy Analytics”) and Mobilewalla, Inc. (“Mobilewalla”) related to the tracking and sale of sensitive location data of consumers. According to the FTC, Gravy Analytics violated the FTC Act by unfairly selling sensitive consumer location data, by collecting and using consumers’ location data without obtaining verifiable user consent for commercial and government uses, and by selling data regarding sensitive characteristics such as health or medical decisions, political activities, and religious views derived from location data. Under the proposed settlement, Gravy Analytics will be prohibited from selling, disclosing, or using sensitive location data in any product or service, must delete all historic location data and data products using such data, and must establish a sensitive location data compliance program. Separately, the FTC settled allegations that Mobilewalla collected location data from real-time bidding exchanges and third-party aggregators, including data related to health clinic visits and visits to places of worship, without the knowledge of consumers, and subsequently sold such data. According to the FTC, when Mobilewalla bid to place an ad for its clients on a real-time advertising bidding exchange, it unfairly collected and retained the information in the bid request even when it did not have a winning bid. Under the proposed settlement, Mobilewalla will be prohibited from selling sensitive location data and from collecting consumer data from online advertising auctions for purposes other than participating in those auctions.
Texas Attorney General Issues New Warnings Under State’s Comprehensive Privacy Law: The Texas Attorney General issued warnings to satellite radio broadcaster Sirius XM and three mobile app providers that they appear to be sharing sensitive data of consumers, including location data, without proper notification or obtaining consent. The warning letters did not come with a press release or other public statement and were reported by Recorded Future News, which obtained the notices through a public records request. The letter to Sirius XM stated that the Attorney General’s office found a number of violations of the Texas Data Privacy and Security Act in the Sirius XM privacy notice, including failing to provide reasonably clear notice of the categories of sensitive data being processed and processing sensitive data without appropriate consent. Similar letters were sent to mobile app providers stating that the providers failed to obtain consumer consent for data sharing or to include information on how consumers could exercise their rights under Texas law.
Texas Attorney General Launches Investigations Into 15 Companies for Children’s Privacy Practices: The Texas Attorney General’s office announced it had launched investigations into Character.AI and 14 other companies including Reddit, Instagram, and Discord. The Attorney General’s press release stated that the investigations related to the companies’ privacy and safety practices for minors pursuant to the Securing Children Online through Parental Empowerment (“SCOPE”) Act and the Texas Data Privacy and Security Act (“TDPSA”). Details of the Attorney General’s allegations were not provided in the announcement. The TDPSA requires companies to provide notice and obtain consent to collect and use minors’ personal data. The SCOPE Act prohibits digital service providers from sharing, disclosing, or selling a minor’s personal identifying information without permission from the child’s parent or legal guardian and provides parents with tools to manage privacy settings on their child’s account.
HHS Imposes Penalty Against Medical Provider for Impermissible Access to PHI and Security Rule Violations: The U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) announced that it imposed a $1.19 million civil penalty against Gulf Coast Pain Consultants, LLC d/b/a Clearway Pain Solutions Institute (“GCPC”) for violations of the HIPAA Security Rule arising from a data breach. GCPC’s former contractor had impermissibly accessed GCPC’s electronic medical record system to retrieve protected health information (“PHI”) for use in potential fraudulent Medicare claims. OCR’s investigation determined that the impermissible access occurred on three occasions, affecting approximately 34,310 individuals. The compromised PHI included patient names, addresses, phone numbers, email addresses, dates of birth, Social Security numbers, chart numbers, insurance information, and primary care information. OCR’s investigation revealed multiple potential violations of the HIPAA Security Rule, including failures to conduct a compliant risk analysis, implement procedures to regularly review records of activity in information systems, and terminate former workforce members’ access to electronic PHI.
HHS Settles with Health Care Clearinghouse for HIPAA Security Rule Violations: OCR announced a settlement with Inmediata Health Group, LLC (“Inmediata”), a healthcare clearinghouse, for potential violations of the HIPAA Security Rule, following OCR’s receipt of a 2018 complaint that PHI was accessible on the Internet to search engines like Google. OCR’s investigation determined that from May 2016 through January 2019, the PHI of 1,565,338 individuals was made publicly available online. The PHI disclosed included patient names, dates of birth, home addresses, Social Security numbers, claims information, diagnoses/conditions, and other treatment information. OCR’s investigation also identified multiple potential HIPAA Security Rule violations, including failures to conduct a compliant risk analysis and to monitor and review Inmediata’s health information systems’ activity. Under the settlement, Inmediata paid OCR $250,000. OCR determined that a corrective action plan was not necessary in this resolution because Inmediata had previously agreed to a settlement with 33 states that included corrective actions addressing OCR’s findings.
New York State Healthcare Provider Settles with Attorney General Regarding Allegations of Cybersecurity Failures: HealthAlliance, a division of Westchester Medical Center Health Network (“WMCHealth”), has agreed to pay a $1.4 million fine, with $850,000 suspended, due to a 2023 data breach affecting over 240,000 patients and employees in New York State. The breach at issue, which occurred between September and October 2023, was reportedly caused by a security flaw in Citrix NetScaler—a tool used by many organizations to optimize web application performance and availability by reducing server load—that went unpatched. Although HealthAlliance was made aware of the vulnerability, they were unsuccessful in patching it due to technical difficulties, ultimately resulting in the exposure of 196 gigabytes of data, including particularly sensitive information like Social Security numbers and medication records. As part of its agreement with New York State, HealthAlliance must enhance its cybersecurity practices by implementing a comprehensive information security program, developing a data inventory, and enforcing a patch management policy to address critical vulnerabilities within 72 hours. For more details, view the press release from the New York Attorney General’s office.
HHS Settles with Children’s Hospital for HIPAA Privacy and Security Violations: OCR announced a $548,265 civil monetary penalty against Children’s Hospital Colorado (“CHC”) for violations of the HIPAA Privacy and Security Rules arising from data breaches in 2017 and 2020. The 2017 data breach involved a phishing attack that compromised an email account containing 3,370 individuals’ PHI and the 2020 data breach compromised three email accounts containing 10,840 individuals’ PHI. OCR’s investigation determined that the 2017 data breach occurred because multi-factor authentication was disabled on the affected email account. The 2020 data breach occurred, in part, when workforce members gave permission to unknown third parties to access their email accounts. OCR found violations of the HIPAA Privacy Rule for failure to train workforce members on the HIPAA Privacy Rule, and the HIPAA Security Rule requirement to conduct a compliant risk analysis to determine the potential risks and vulnerabilities to ePHI in its systems.

INTERNATIONAL LAWS & REGULATIONS
Italy Imposes Landmark GDPR Fine on AI Provider for Data Violations: In the first reported EU penalty under the GDPR relating to generative AI, Italy’s data protection authority, the Garante, fined OpenAI 15 million euros for breaching the European Union’s General Data Protection Regulation (“GDPR”). The penalty was linked to three specific incidents involving OpenAI: (1) unauthorized use of personal data for ChatGPT training without user consent, (2) inadequate age verification risking exposure of minors to inappropriate content, and (3) failure to report a March 2023 data breach that exposed users’ contact and payment information. The investigation into OpenAI, which began after the Garante was made aware of the March 2023 breach, initially resulted in Italy temporarily blocking access to ChatGPT; access was reinstated after OpenAI made concrete improvements to its age verification and privacy policies. Alongside the monetary penalty, OpenAI is additionally mandated to conduct a six-month public awareness campaign in Italy to educate the Italian public on data collection and individual user rights under the GDPR. OpenAI has said that it plans to appeal the Garante’s decision, arguing that the fine exceeds its revenue in Italy.
Australian Parliament Approves Privacy Act Reforms and Bans Social Media Use by Minors: The Australian Parliament passed a number of privacy bills in December. The bills include reforms to the Australian Privacy Act, a law requiring age verification by social media platforms, and a law banning social media use by minors under the age of 16. Privacy Act reforms include new enforcement powers for the Office of the Australian Information Commissioner (“OAIC”) that clarify when “serious” breaches of the Privacy Act occur and allow the OAIC to bring civil penalty proceedings for lesser breaches. Other reforms require entities that use personal data for automated decision-making to include in their privacy notices information about what data is used for automated decision-making and what types of decisions are made using automated decision-making technology.
EDPB Releases Opinion on Personal Data Use in AI Models: In response to a formal request from Ireland’s Data Protection Commission asking for clarity about how the EU General Data Protection Regulation (“GDPR”) applies to the training of large language models with personal data, the European Data Protection Board (“EDPB”) released its opinion regarding the lawful use of personal data for the development and deployment of artificial intelligence models (the “Opinion”). The Irish Data Protection Commission specifically asked the EDPB to opine on: (1) when and how an AI model can be considered anonymous, (2) how legitimate interests can be used as the legal basis in the development and deployment phases of an AI model, and (3) the consequences of unlawful processing in the development phase of an AI model on its subsequent operation. With respect to anonymity, the EDPB stated this should be analyzed on a case-by-case basis, taking into account the likelihood of obtaining personal data of individuals whose data was used to build the model and the likelihood of extracting personal data from queries. The Opinion describes certain methods that controllers can use to demonstrate anonymity. With respect to the use of legitimate interest as a legal basis for processing, the EDPB restated the three-part test to assess legitimate interest from its earlier guidance. Finally, the EDPB reviewed several scenarios in which personal data may be unlawfully processed to develop an AI model.
Second Draft of General-Purpose AI Code of Practice Published: The European Commission announced that independent experts published the Second Draft of the General Purpose AI Code of Practice. The AI Code of Practice is designed to be a guiding document for providers of general-purpose AI models, allowing them to demonstrate compliance with the AI Act. Under the EU AI Act, providers are persons or entities that develop an AI system and place that system on the market. This second draft is based on the responses and comments received on the first draft and is designed to provide a “future-proof” code. The first part of the Code details transparency and copyright obligations for all providers of general-purpose AI models. The second part of the Code applies to providers of advanced general-purpose AI models that could pose systemic risks. This section outlines measures for systemic risk assessment and mitigation, including model evaluations, incident reporting, and cybersecurity obligations. The Second Draft will be open for comments until January 15, 2025.
NOYB Approved to Bring Collective Redress Claims: The Austrian-based non-profit organization None of Your Business (“NOYB”) has been approved as a Qualified Entity in Austria and Ireland, enabling it to pursue collective redress actions across the European Union (“EU”). Famous for challenging the EU-US data transfer framework through its Schrems I and II actions, NOYB intends to use the EU’s collective action redress system to challenge what it describes as unlawful processing without consent, use of deceptive dark patterns, data sales, international data transfers, and use of “absurd” language in privacy policies. Unlike US class actions, these EU actions are strictly non-profit. However, they do provide for both injunctive and monetary redress measures. NOYB intends to bring its first actions in 2025. Click here to learn more and read NOYB’s announcement.
EDPB Issues Guidelines on Third Country Authority Data Requests: The EDPB published draft guidelines on Article 48 of the GDPR relating to the transfer or disclosure of personal data to a governmental authority in a third country (the “Guidelines”). The Guidelines state that, as a general rule, requests from governmental authorities are recognizable and enforceable under applicable international agreements. The Guidelines further state that any such transfer must also comply with Article 6 with respect to legal basis for processing and Article 46 regarding legal mechanism for international data transfer. The Guidelines will be available for public consultation until January 27, 2025.
Irish DPC Fines Meta €251 Million for Violations of the GDPR: The Irish Data Protection Commission (“DPC”) fined Meta €251 million following a 2018 data breach that affected 29 million Facebook accounts globally, including 3 million in the European Union. The breach exposed personal data such as names, contact information, locations, birthdates, religious and political beliefs, and children’s data. The DPC found that Meta Ireland violated GDPR Articles 33(3) and 33(5) by failing to provide complete information in its breach notification and to properly document the breach. Furthermore, Meta Ireland infringed GDPR Articles 25(1) and 25(2) by neglecting to incorporate data protection principles into the design of its processing systems and by processing more data than necessary by default.
Additional Authors: Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin