AI Drug Development: FDA Releases Draft Guidance
On January 6, 2025, the U.S. Food and Drug Administration (FDA) released draft guidance titled Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products (“guidance”) explaining the types of information that the agency may seek during drug evaluation. In particular, the guidance outlines a risk framework based on a “context of use” of Artificial Intelligence (AI) technology and details the information that might be requested (or required) relating to AI technologies, the data used to train the technologies, and governance around the technologies, in order to approve their use. At a high level, the guidance underscores the FDA’s goals for establishing AI model credibility within the context of use.
This article provides an overview of the guidance, including example contexts of use and the risk framework, and explains how these relate to establishing AI model credibility through the suggested data- and model-related disclosures. It further details legal strategy considerations, along with opportunities for innovation, that arise from the guidance. These considerations will be valuable to sponsors (i.e., of clinical investigations, such as Investigational New Drug Exemption applications), along with AI model developers and other firms in the drug development landscape.
Defining the Question of Interest
The first step in the guidance’s framework is defining the “question of interest”: the specific question, decision, or concern being addressed by the AI model. For example, questions of interest could involve the use of AI technology in human clinical trials, such as inclusion and exclusion criteria for the selection of participants, risk classification of participants, or determining procedures relating to clinical outcome measures of interest. Questions of interest could also relate to the use of AI technology in drug manufacturing processes, such as for quality control.
Contexts of Use
The guidance next establishes contexts of use – the specific scope and role of an AI model for addressing the question of interest – as a starting point for understanding any risks associated with the AI model, and in turn how credibility might be established.
The guidance emphasizes that it is limited to AI models (including for drug discovery) that impact patient safety, drug quality, or reliability of results from nonclinical or clinical studies. As such, firms that use AI models for discovering drugs but rely on more traditional processes to address factors that the FDA considers for approving a drug, such as safety, quality, and stability, should be aware of the underlying principles of the guidance but might not need to modify their current AI governance. An important factor in defining the contexts of use is how much of a role the AI model plays relative to other automated or human-supervised processes; for example, processes in which a person is provided AI outputs for verification will be different from those that are designed to be fully automated.
Several types of contexts of use are introduced in the guidance, including:
Clinical trial design and management
Evaluating patients
Adjudicating endpoints
Analyzing clinical trial data
Digital health technologies for drug development
Pharmacovigilance
Pharmaceutical manufacturing
Generating real-world evidence (RWE)
Life cycle maintenance
Risk Framework for Determining Information Disclosure Degree
The guidance proposes that the risk level posed by the AI model dictates the extent and depth of information that must be disclosed about the AI model. The risk is determined based on two factors: 1) how much the AI model will influence decision-making (model influence risk), and 2) the consequences of the decision, such as patient safety risks (decision consequence risk).
For high-risk AI models—where outputs could impact patient safety or drug quality—comprehensive details regarding the AI model’s architecture, data sources, training methodologies, validation processes, and performance metrics may have to be submitted for FDA evaluation. Conversely, the required disclosure may be less detailed for AI models posing low risk. This tiered approach promotes credibility and avoids unnecessary disclosure burdens for lower-risk scenarios.
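The interplay of these two factors can be pictured as a simple grid. As a toy illustration only (the guidance describes the two risk factors but does not prescribe levels or a scoring formula, so the level names and mapping below are assumptions), a sponsor's internal triage of AI models might look like this:

```python
# Toy illustration only: the draft guidance frames risk as a function of model
# influence and decision consequence but does not prescribe levels or a formula.
# The level names and the mapping below are assumptions for illustration.

def ai_model_risk(model_influence: str, decision_consequence: str) -> str:
    """Combine the two factors described in the guidance into a rough risk tier."""
    levels = {"low": 0, "medium": 1, "high": 2}
    score = levels[model_influence] + levels[decision_consequence]
    if score >= 3:
        return "high"    # e.g., AI output alone drives a patient-safety decision
    if score == 2:
        return "medium"
    return "low"

# A model whose output is independently verified by a human (low influence) but
# that informs dosing decisions (high consequence) lands in the middle tier here.
print(ai_model_risk("low", "high"))    # -> medium
print(ai_model_risk("high", "high"))   # -> high
```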
However, most AI models within the scope of this guidance will likely be considered high risk because they are being used for clinical trial management or drug manufacturing, so stakeholders should be prepared to disclose extensive information about an AI model used to support decision-making. Sponsors that use traditional (non-AI) methods to develop their drug products are required to submit complete nonclinical, clinical, and chemistry, manufacturing, and controls (CMC) information to support FDA review and ultimate approval of a New Drug Application. Sponsors using AI models must submit the same information and, in addition, provide information on the AI model as outlined below.
High-Level Overview of Guidelines for Compliance Depending on Context of Use
The guidance further provides a detailed outline of steps to pursue in order to establish credibility of an AI model, given its context of use. The steps include describing: (1) the model, (2) the data used to develop the model, (3) model training, and (4) model evaluation, including test data, performance metrics, and reliability concerns such as bias, quality assurance, and code error management. Sponsors may be expected to provide more detailed disclosures as the risks associated with these steps increase, particularly where the impact on trial participants and/or patients increases.
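To make the model-evaluation step concrete, the sketch below computes two common performance metrics on an independent test set; the metrics, data, and function are illustrative assumptions, as the guidance does not prescribe specific metrics or code:

```python
# Illustrative only: the guidance asks sponsors to describe model evaluation,
# including test data and performance metrics, but does not prescribe specific
# metrics or code. This sketch computes sensitivity and specificity on a
# held-out (independent) test set.
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    """Return (sensitivity, specificity) for binary labels coded as 0/1."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1])   # labels in the independent test set
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 1])   # model predictions on that set
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # both 0.75 here
```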
In addition, the FDA specifically emphasizes special consideration for life cycle maintenance of the credibility of AI model outputs. For example, as the inputs to or deployment of a given AI model changes, there may be a need to reevaluate the model’s performance (and thus provide corresponding disclosures to support continued credibility).
Intellectual Property Considerations
Patent vs. Trade Secret
Stakeholders should carefully consider patenting the innovations underlying AI models used for decision-making. The FDA’s extensive requirements for transparency and submitting information about AI model architectures, training data, evaluation processes, and life cycle maintenance plans would pose a significant challenge for maintaining these innovations as trade secrets.
That said, trade secret protection of at least some aspects of AI models is an option when the AI model does not have to be disclosed. If the AI model is used for drug discovery or operations that do not impact patient safety or drug quality, it may be possible to keep the AI model or its training data secret. However, AI models used for decision-making will be subject to the FDA’s need for transparency and information disclosure that will likely jeopardize trade secret protection. By securing patent protection on the AI models, stakeholders can safeguard their intellectual property while satisfying FDA’s transparency requirements.
Opportunities for Innovation
The guidance requires rigorous risk assessments, data fitness standards, and model validation processes, which will set the stage for the creation of tools and systems to meet these demands. As noted above, innovative approaches for managing and validating AI models used for decision-making are not good candidates for trade secret protection, and stakeholders should ensure early identification and patenting of these inventions.
We have identified specific opportunities for AI innovation that are likely to be driven by FDA demands reflected in the guidance:
Requirements for transparency
Designing AI models with explainable AI capabilities that demonstrate how decisions or predictions are made
Bias and fitness of data
Systems for detecting bias in training data
Systems for correcting bias in training data
Systems for monitoring life cycle maintenance
Systems to detect data drift or changes in the AI model during the drug's life cycle (a minimal sketch follows this list)
Systems to retrain or revalidate the AI model as needed because of data drift
Automated systems for tracking model performance
Testing methods
Developing models that can be tested against independent data sets and conditions to demonstrate generalizability
Integration of AI models in a practical workflow
Good Manufacturing Practices
Clinical decision support systems
Documentation systems
Automatic systems to generate reports of model development, evaluation, updates, and credibility assessments that can be submitted to FDA to meet regulatory requirements
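As a minimal sketch of the life cycle monitoring opportunity referenced above, the following example flags when a feature's distribution in newly collected deployment data has shifted away from its training distribution; the two-sample Kolmogorov-Smirnov test and the 0.05 threshold are illustrative choices, not requirements drawn from the guidance:

```python
# A minimal sketch of the data-drift monitoring idea referenced in the list above:
# flag a numeric feature whose distribution in newly collected (deployment) data
# has shifted away from the distribution seen during model training. The
# two-sample Kolmogorov-Smirnov test and the 0.05 threshold are illustrative
# choices, not requirements from the guidance.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_values: np.ndarray, live_values: np.ndarray,
                        alpha: float = 0.05) -> bool:
    """Return True if the live distribution differs significantly from training."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature values used in training
live = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted values after deployment
print(feature_has_drifted(train, live))  # True -> trigger revalidation / disclosure update
```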
The guidance provides numerous opportunities for innovations to enhance AI credibility, transparency, and regulatory compliance across the drug product life cycle. As demonstrated above, the challenges that the FDA seeks to address in order to validate AI use in drug development clearly map to potential innovations. Such innovations are likely valuable since they are needed to comply with FDA guidelines and offer significant opportunities for developing a competitive patent portfolio.
Conclusion
With this guidance, the FDA has proposed guidelines for establishing credibility in AI models that have risks for and impacts on clinical trial participants and patients. This guidance, while in draft, non-binding form, follows a step-by-step framework from defining the question of interest and establishing the context of use of the AI model to evaluating risks and in turn establishing the scope of disclosure that may be relevant. The guidance sets out the FDA’s most current thinking about the use of AI in drug development. Given such a framework and the corresponding level of disclosure that can be expected, sponsors may consider a shift in strategy towards using more patent protection for their innovations. Similarly, there may be more opportunities for identifying and protecting innovations associated with building governance around these models.
In addition to using IP protection as a backstop to greater disclosure, firms can also consider introducing more operational controls to mitigate the risks associated with AI model use and thus reduce their disclosure burden. For example, firms may consider supporting AI model credibility with other evidence sources, as well as integrating greater human engagement and oversight into their processes.
In the meantime, sponsors that are uncertain about how their AI model usage might interact with future FDA requirements should consider the engagement options that the FDA has outlined for their specific context of use.
Comments on the draft guidance can be submitted online or mailed before April 7, 2025, and our team is available to assist interested stakeholders with drafting.
New Artificial Intelligence (AI) Regulations and Potential Fiduciary Implications
Fiduciaries should be aware of recent developments involving AI, including emerging and recent state law changes, increased state and federal government interest in regulating AI, and the role of AI in ERISA litigation. While much focus has been on AI’s impact on retirement plans, which we previously discussed here, plan fiduciaries of all types, including health and welfare benefit plans, must also stay informed about recent AI developments.
Recent State Law Changes
Numerous states recently codified new laws focusing on AI, some of which regulate employers’ human resource decision-making processes. Key examples include:
California – In 2024, California enacted over 10 AI-related laws, addressing topics such as:
The use of AI with datasets containing names, addresses, or biometric data;
How one communicates health care information to patients using AI; and
AI-driven decision-making in medical treatments and prior authorizations.
For additional information on California’s new AI laws, see Foley’s Client Alert, Decoding California’s Recent Flurry of AI Laws.
Illinois – Illinois passed legislation prohibiting employers from using AI in employment activities in ways that lead to discriminatory effects, regardless of intent. Under the law, employers are required to provide notice to employees and applicants if they are going to use AI for any workplace-related purpose.
For additional information on Illinois’ new AI law, see Foley’s Client Alert, Illinois Enacts Legislation to Protect against Discriminatory Implications of AI in Employment Activities.
Colorado – The Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, mandates “reasonable care” when employers use AI for certain applications.
For additional information on Colorado’s new AI law, see Foley’s Client Alert, Regulating Artificial Intelligence in Employment Decision-Making: What’s on the Horizon for 2025.
While these laws do not specifically target employee benefit plans, they reflect a broader trend toward state regulation of human resource decision-making processes and are part of an evolving regulatory environment. Hundreds of additional state bills were proposed in 2024, along with AI-related executive orders, signaling more forthcoming regulation in 2025. Questions remain about how these laws intersect with employee benefit plans and whether federal ERISA preemption could apply to state attempts at regulation.
Recent Federal Government Actions
The federal government recently issued guidance aimed at preventing discrimination in the delivery of certain healthcare services and completed a request for information (RFI) for potential AI regulations involving the financial services industry.
U.S. Department of Health and Human Services (HHS) Civil Rights AI Nondiscrimination Guidance – HHS, through its Office for Civil Rights (OCR), recently issued a “Dear Colleague” letter titled Ensuring Nondiscrimination Through the Use of Artificial Intelligence and Other Emerging Technologies. This guidance emphasizes the importance of ensuring that the use of AI and other decision-support tools in healthcare complies with federal nondiscrimination laws, particularly under Section 1557 of the Affordable Care Act (Section 1557).
Section 1557 prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in health programs and activities receiving federal financial assistance. OCR’s guidance underscores that healthcare providers, health plans, and other covered entities cannot use AI tools in a way that results in discriminatory impacts on patients. This includes decisions related to diagnosis, treatment, and resource allocation. Employers and plan sponsors should note that this guidance applies only to the subset of health plans that fall under Section 1557, not to all employer-sponsored health plans.
Treasury Issues RFI for AI Regulation – In 2024, the U.S. Department of Treasury published an RFI on the Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector. The RFI included several key considerations, including addressing AI bias and discrimination, consumer protection and data privacy, and risks to third-party users of AI. While the RFI has not yet led to concrete regulations, it underscores federal attention to AI’s impact on financial and employee benefit services. The ERISA Industry Committee, a nonprofit association representing large U.S. employers in their capacity as employee benefit plan sponsors, commented that AI is already being used for retirement readiness applications, chatbots, portfolio management, trade executions, and wellness programs. Future regulations may target these and related areas.
AI-Powered ERISA Litigation
Potential ERISA claims against plan sponsors and fiduciaries are being identified using AI. In just one example, an AI platform, Darrow AI, claims to be:
“designed to simplify the analysis of large volumes of data from plan documents, regulatory filings, and court cases. Our technology pinpoints discrepancies, breaches of fiduciary duty, and other ERISA violations with accuracy. Utilizing our advanced analytics allows you to quickly identify potential claims, assess their financial impact, and build robust cases… you can effectively advocate for employees seeking justice regarding their retirement and health benefits.”
Further, this AI platform claims it can find violations affecting many types of employers, whether a small business or a large corporation, by analyzing diverse data sources, including news, SEC filings, social networks, academic papers, and other third-party sources.
Notably, health and welfare benefit plans are also emerging as areas of focus for AI-powered ERISA litigation. AI tools are used to analyze claims data, provider networks, and administrative decisions, potentially identifying discriminatory practices or inconsistencies in benefit determinations. For example, AI could highlight patterns of bias in prior authorizations or discrepancies in how mental health parity laws are applied.
The increasing sophistication of these tools raises the stakes for fiduciaries, as they must now consider the possibility that potential claimants will use AI to scrutinize their decisions and plan operations with unprecedented precision.
Next Steps for Fiduciaries
To navigate this evolving landscape, fiduciaries should take proactive steps to manage AI-related risks while leveraging the benefits of these technologies:
Evaluate AI Tools: Undertake a formal evaluation of artificial intelligence tools utilized for plan administration, participant engagement, and compliance. This assessment should examine the algorithms, data sources, and decision-making processes involved, and confirm that the tools have been evaluated for compliance with nondiscrimination standards and do not inadvertently produce biased outcomes.
Audit Service Providers: Conduct comprehensive audits of plan service providers to evaluate their use of AI. Request detailed disclosures regarding the AI systems in operation, focusing on how they mitigate bias, ensure data security, and comply with applicable regulations.
Review and Update Policies: Formulate or revise internal policies and governance frameworks to monitor the utilization of AI in operational planning and compliance with nondiscrimination laws. These policies should outline guidelines pertaining to the adoption, monitoring, and compliance of AI technologies, thereby ensuring alignment with fiduciary responsibilities.
Enhance Risk Mitigation:
Fiduciary Liability Insurance: Consider obtaining or enhancing fiduciary liability insurance to address potential claims arising from the use of AI.
Data Privacy and Security: Enhance data privacy and security measures to safeguard sensitive participant information processed by AI tools.
Bias Mitigation: Establish procedures to regularly test and validate AI tools for bias, ensuring compliance with anti-discrimination laws.
Integrate AI Considerations into Requests for Proposals (RFPs): When selecting vendors, include specific AI-related criteria in RFPs. This may require vendors to demonstrate or certify compliance with state and federal regulations and adhere to industry best practices for AI usage.
Monitor Legal and Regulatory Developments: Stay informed about new state and federal AI regulations, along with the developing case law related to AI and ERISA litigation. Establish a process for routine legal reviews to assess how these developments impact plan operations.
Provide Training: Educate fiduciaries, administrators, and relevant staff on the potential risks and benefits of AI in plan administration, on emerging technologies, and on the importance of compliance with applicable laws. The training should provide an overview of legal obligations, best practices for implementing AI, and strategies for mitigating risks.
Document Due Diligence: Maintain comprehensive documentation of all steps to assess and track AI tools. This includes records of audits, vendor communications, and updates to internal policies. Clear documentation can act as a crucial defense in the event of litigation.
Assess Applicability of Section 1557 to Your Plan: Health and welfare plan fiduciaries should determine whether your organization’s health plan is subject to Section 1557 and whether OCR’s guidance directly applies to your operations, and if not, confirm and document why not.
Fiduciaries must remain vigilant regarding AI’s increasing role in employee benefit plans, particularly amid regulatory uncertainty. Taking proactive measures and adopting robust risk management strategies can help mitigate risks and ensure compliance with current and anticipated legal standards. By dedicating themselves to diligence and transparency, fiduciaries can leverage the benefits of AI while safeguarding the interests of plan participants. At Foley & Lardner LLP, we have experts in AI, retirement planning, cybersecurity, labor and employment, finance, fintech, regulatory matters, healthcare, and ERISA. They regularly advise fiduciaries on potential risks and liabilities related to these and other AI-related issues.
Drilling Down into Venture Capital Financing in Artificial Intelligence
It should come as no surprise that venture capital (VC) investors are drilling down into startups building businesses with Artificial Intelligence (AI) at the core. New data from PitchBook shows that AI startups make up 22% of first-time VC financing: $7 billion of first-time funding raised by startups in 2024 went to AI and machine learning (ML) startups, according to PitchBook data through Q3 of 2024.
Crunchbase data also showed that in Q3 of 2024, AI-related startups raised $19 billion in funding, accounting for 28% of all venture dollars for that quarter. Notably, this figure excludes the $6.6 billion round raised by OpenAI, which was announced after Q3 closed. With this unprecedented level of investment in the AI vertical, there is increasing concern that i) some startups might be using AI as more of a buzzword to raise capital rather than truly focusing on this area, and/or ii) there are bubbles in certain sub-verticals.
PitchBook analysts also note that with limited funding available for startups, integrating AI into their offerings is crucial for founders to secure investment. However, this also makes it harder to distinguish which startups are genuinely engaging in meaningful AI work. For investors, the challenge lies in sifting through the AI “noise” to identify those startups that are truly transformative and focusing on key areas within the sector, which will be vital as we move into 2025.
A recent article in Forbes examined the themes that early-stage investors were targeting for the new year. When looking at investment in AI startups, these included the use of AI to help pharmaceutical companies optimize clinical trials, AI in fintech and personal finance, AI applications in healthcare to improve the patient to caregiver experience, and AI-driven vertical software that will disrupt incumbents.
According to the Financial Times (FT), this boom in AI investment comes at a time when the industry still has an “immense overhang of investments from venture’s Zirp era” (Zirp referring to the zero interest rate policy environment that existed between 2009 and 2022). This has led to approximately $2.5 trillion trapped in private unicorns, and we have not really seen what exit events or IPOs will materialize and what exit valuations will return to investors. Will investors get their capital back and see the returns they hope for? Only time will tell, but investors do not seem ready to slow down their investment in AI startups any time soon. As the FT says, this could be a pivotal year for the fate of VC investment in AI. We will all be watching closely.
Bridging the Gap: How AI is Revolutionizing Canadian Legal Tech
While Canadian law firms have traditionally lagged behind their American counterparts in adopting legal tech, the AI explosion is closing the gap. This slower adoption rate isn’t due to a lack of innovation—Canada boasts a thriving legal tech sector. Instead, factors like a smaller legal market and stricter privacy regulations have historically hindered technology uptake. This often resulted in a noticeable delay between a product’s US launch and its availability in Canada.
Although direct comparisons are challenging due to the continuous evolution of legal tech, the recent announcements and release timelines for major AI-powered tools point to a notable shift in how the Canadian market is being prioritized. For instance, Westlaw Edge was announced in the US in July 2018, but the Canadian launch wasn’t announced until September 2021—a gap of over three years. Similarly, Lexis+ was announced in the US in September 2020, with the Canadian announcement following in August 2022. However, the latest AI products show a different trend. Thomson Reuters’ CoCounsel Core was announced in the US in November 2023 and shortly followed in Canada in February 2024. The announcement for Lexis+ AI came in October 2023 in the US and July 2024 in Canada. This rapid succession of announcements suggests that the Canadian legal tech market is no longer an afterthought.
The Canadian federal government has demonstrated a strong commitment to fostering AI innovation. It has dedicated CAD$568 million to its national AI strategy, with the goals of fostering AI research and development, building a skilled workforce in the field, and creating robust industry standards for AI systems. This investment should help Canadian legal tech companies, such as Clio, Kira Systems, Spellbook, and Blue J Legal, all headquartered in Canada. With the Canadian government’s focus on establishing Canada as a hub for AI and innovation, these companies stand to benefit significantly from increased funding and talent attraction.
While the Canadian government is actively investing in AI innovation, it’s also taking steps to ensure responsible development through proposed legislation, which could impact the availability of AI legal tech products in Canada. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), which aims to regulate high-impact AI systems. While AI tools used by law firms for tasks like legal research and document review likely fall outside this initial scope, AIDA’s evolving framework could still impact the sector. For example, the Act’s emphasis on mitigating bias and discrimination may lead to greater scrutiny of AI algorithms used in legal research, requiring developers to demonstrate fairness and transparency.
While AIDA may present hurdles for US companies entering the Canadian market with AI products, it could conversely provide a competitive advantage for Canadian companies seeking to expand into Europe. This is because AIDA, despite having some material differences, aligns more closely with the comprehensive approach in the European Union’s Artificial Intelligence Act (EU AI Act).
While US companies are working to comply with the EU AI Act, Canadian companies may have an advantage. Although AIDA isn’t yet in force and has some differences from the EU AI Act, it provides a comprehensive regulatory framework that Canadian legal tech leaders are already engaging with. This engagement with AIDA could prove invaluable to Canadian legal tech companies as AI regulation continues to evolve globally.
Canadian companies looking to leverage their experiences with AIDA for European expansion will nonetheless encounter some material differences. For instance, the EU AI Act casts a wider net, regulating a broader range of AI systems than AIDA. The EU AI Act’s multi-tiered risk-based system is designed to address a wider spectrum of concerns, capturing even “limited-risk” AI systems with specific transparency obligations. Furthermore, tools used for legal interpretation could be classified as “high-risk” systems under the EU AI Act, triggering more stringent requirements.
In conclusion, the rise of generative AI is not only revolutionizing Canadian legal tech and closing the gap with the US, but it could also be positioning Canada as a key player in the global legal tech market. While AIDA’s impact remains to be seen, its emphasis on responsible AI could shape the development and deployment of AI-powered legal tools in Canada.
Litigation Minute: A Look Back and Ahead
What You Need to Know in a Minute or Less
Throughout 2024, we published three series highlighting emerging and evolving trends in litigation. From generative AI to ESG litigation, our lawyers continue to provide concise, timely updates on the issues most critical to our clients and their businesses.
In a minute or less, find our Litigation Minute highlights from the past year—as well as a look ahead to 2025.
Beauty and Wellness
Our first series of the year covered trends in the beauty and wellness industry, beginning with products categorized as “beauty from within,” including oral supplements focused on wellness. We outlined the risks of FDA enforcement and class action litigation arising from certain marketing claims associated with these products.
We next reviewed the use of “clean” and “natural” marketing terminology. We assessed these labeling claims across a range of potentially impacted products and brands, as well as regulatory and litigation risks associated with such claims.
Alongside these marketing-focused issues, companies also face increased regulatory scrutiny, including new extended producer responsibility laws and the FTC Green Guides. We concluded our series by assessing product packaging and end-of-life considerations for beauty and wellness brands.
Generative AI
One of the most-discussed developments of 2024, generative AI was the focus of our second series of the year, which examined key legal, regulatory, and operational considerations associated with generative AI. We outlined education, training, and risk management frameworks in light of litigation trends targeting these systems.
2024 also saw several new state statutes regulating generative AI. From mandatory disclosures in Utah to Tennessee’s ELVIS Act, we examined how new state approaches would remain at the forefront of attention for companies currently utilizing or considering generative AI.
With the need for compliance and training in mind, we next discussed the potential for generative AI in discovery. Given generative AI’s ability to rapidly sort through data and provide timely, requested outputs, we provided an overview of how it has created valuable tools for lawyers as well as their clients.
ESG Litigation
2024 highlighted the impacts of extreme weather, as well as the importance of preparation for such natural disasters. With extreme weather events expected to increase in both frequency and intensity around the world, we provided insurance coverage considerations for policyholders seeking to restore business operations following these events and weather the consequential financial storms.
Further ESG headlines this year focused on the questions surrounding microplastics—including general definition, scientific risk factors, potential for litigation, and the hurdles complicating this litigation.
Greenwashing claims, on the other hand, have experienced fewer setbacks, with expanded litigation targeting manufacturers, distributors, and retailers of consumer products. These claims allege that companies falsely represent themselves or their products as “environmentally friendly”; we reviewed how the risk of such claims can be mitigated through proper substantiation and documentation of company claims and certifications.
The Texas Responsible AI Governance Act and Its Potential Impact on Employers
On 23 December 2024, Texas State Representative Giovanni Capriglione (R-Tarrant County) filed the Texas Responsible AI Governance Act (the Act),1 adding Texas to the list of states seeking to regulate artificial intelligence (AI) in the absence of federal law. The Act establishes obligations for developers, deployers, and distributors of certain AI systems in Texas. While the Act covers a variety of areas, this alert focuses on the Act’s potential impact on employers.2
The Act’s Regulation of Employers as Deployers of High-Risk Artificial Intelligence Systems
The Act seeks to regulate employers’ and other deployers’ use of “high-risk artificial intelligence systems” in Texas. High-risk artificial intelligence systems include AI tools that make or are a contributing factor in “consequential decisions.”3 In the employment space, this could include hiring, performance, compensation, discipline, and termination decisions.4 The Act does not cover several common systems, such as technology intended to detect decision-making patterns, anti-malware and antivirus programs, and calculators.
Under the Act, covered employers would have a general duty to use reasonable care to prevent algorithmic discrimination—including a duty to withdraw, disable, or recall noncompliant high-risk AI systems. To satisfy this duty, the Act requires covered employers and other covered deployers to do the following:
Human Oversight
Ensure human oversight of high-risk AI systems by persons with adequate competence, training, authority, and organizational support to oversee consequential decisions made by the system.5
Prompt Reporting of Discrimination Risks
Report discrimination risks promptly by notifying the Artificial Intelligence Council (which would be established under the Act) no later than 10 days after the date the deployer learns of such issues.6
Regular AI Tool Assessments
Assess high-risk AI systems regularly, including conducting a review on an annual basis, to ensure that the system is not causing algorithmic discrimination.7
Prompt Suspension
If a deployer considers or has reason to consider that a system does not comply with the Act’s requirements, suspend use of the system and notify the system’s developer of such concerns.8
Frequent Impact Assessments
Complete an impact assessment on a semi-annual basis and within 90 days after any intentional or substantial modification of the system.9
Clear Disclosure of AI Use
Before or at the time of interaction, disclose to any Texas-based individual:
That they are interacting with an AI system.
The purpose of the system.
That the system may or will make a consequential decision affecting them.
The nature of any consequential decision in which the system is or may be a contributing factor.
The factors used in making any consequential decisions.
Contact information of the deployer.
A description of the system.10
Takeaways for Employers
The Act is likely to be a main topic of discussion in Texas’s upcoming legislative session, which is scheduled to begin on 14 January 2025. If enacted, the Act would establish a consumer protection-focused framework for AI regulation. Employers should track the Act’s progress and any amendments to the proposed bill while also taking steps to prepare for the Act’s passage. For example, employers using or seeking to use high-risk AI systems in Texas can:
Develop policies and procedures that govern the use of AI systems to make or impact employment decisions:
Include in these policies and procedures clear explanations of (i) the systems’ uses and purposes, (ii) the system’s decision-making processes, (iii) the permitted uses of such systems, (iv) the approved users of such systems, (v) training requirements for approved users, and (vi) the governing body overseeing the responsible use of such systems.
Develop and implement an AI governance and risk-management framework with internal policies, procedures, and systems for review, flagging risks, and reporting.
Ensure human oversight over AI systems.
Train users and those tasked with overseeing the AI systems.
Ensure there are sufficient resources committed to, and an adequate budget assigned to, overseeing and deploying AI systems and complying with the Act.
Conduct due diligence on any AI vendors and developers before engagement and on any AI systems before use, including relating to how AI vendors and developers and AI systems test for, avoid, and remedy algorithmic bias, and to ensure AI vendors and developers are compliant with the Act’s requirements relating to developers of high-risk AI systems.
Footnotes
1 A copy of HB 1709 is available at: https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB01709I.pdf (last accessed: 9 January 2025).
2 Section 551.001(8).
3 Section 551.001(13). The Act defines a “consequential decision” as “a decision that has a material, legal, or similarly significant, effect on a consumer’s access to, cost of, or terms of: (A) a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision; (B) education enrollment or an education opportunity; (C) employment or an employment opportunity; (D) a financial service; (E) an essential government service; (F) residential utility services; (G) a health-care service or treatment; (H) housing; (I) insurance; (J) a legal service; (K) a transportation service; (L) constitutionally protected services or products; or (M) elections or voting process.”
4 Id.
5 Section 551.005.
6 Section 551.011.
7 Section 551.006(d).
8 Section 551.005.
9 Section 551.006(a).
10 Section 551.007(a).
New Jersey Attorney General: NJ’s Law Against Discrimination (LAD) Applies to Automated Decision-Making Tools
This month, the New Jersey Attorney General’s office (NJAG) added to nationwide efforts to regulate artificial intelligence technologies, or at least to clarify how existing law applies to them, in this case the NJ Law Against Discrimination, N.J.S.A. § 10:5-1 et seq. (LAD). In short, the NJAG’s guidance states:
the LAD applies to algorithmic discrimination in the same way it has long applied to other discriminatory conduct.
If you are not familiar with it, the LAD generally applies to employers, housing providers, places of public accommodation, and certain other entities. The law prohibits discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. According to the NJAG’s guidance, the LAD protections extend to algorithmic discrimination (discrimination that results from the use of automated decision-making tools) in employment, housing, places of public accommodation, credit, and contracting.
Citing a recent Rutgers survey, the NJAG pointed to high levels of adoption of AI tools by NJ employers. According to the survey, 63% of NJ employers use one or more tools to recruit job applicants and/or make hiring decisions. These AI tools are broadly defined in the guidance to include:
any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process…such as generative AI, machine-learning models, traditional statistical tools, and decision trees.
The NJAG guidance examines some ways that AI tools may contribute to discriminatory outcomes.
Design. Here, the choices a developer makes in designing an AI tool could, purposefully or inadvertently, result in unlawful discrimination. The results can be influenced by the output the tool provides, the model or algorithms the tool uses, and the inputs the tool assesses, any of which can introduce bias into the automated decision-making tool.
Training. As AI tools need to be trained to learn the intended correlations or rules relating to their objectives, the datasets used for such training may contain biases or institutional and systemic inequities that can affect the outcome. Thus, the datasets used in training can drive unlawful discrimination.
Deployment. The NJAG also observed that AI tools could be used to purposely discriminate, or to make decisions for which the tool was not designed. These and other deployment issues could lead to bias and unlawful discrimination.
The NJAG notes that its guidance does not impose any new or additional requirements that are not included in the LAD, nor does it establish any rights or obligations for any person beyond what exists under the LAD. However, the guidance makes clear that covered entities can violate the LAD even if they have no intent to discriminate (or do not understand the inner workings of the tool) and, just as noted by the EEOC in guidance the federal agency issued under Title VII, even if a third party was responsible for developing the AI tool. Importantly, under NJ law, this includes disparate treatment/impact, which may result from the design or usage of AI tools.
As we have noted, it is critical for organizations to assess, test, and regularly evaluate the AI tools they seek to deploy in their organizations for many reasons, including to avoid unlawful discrimination. The measures should include working closely with the developers to vet the design and testing of their automated decision-making tools before they are deployed. In fact, the NJAG specifically noted many of these steps as ways organizations may decrease the risk of liability under the LAD. Maintaining a well thought out governance strategy for managing this technology can go a long way to minimizing legal risk, particularly as the law develops in this area.
SEC Priorities for 2025: What Investment Advisers Should Know
The US Securities and Exchange Commission (SEC) recently released its priorities for 2025. As in recent years, the SEC is focusing on fiduciary duties and the development of compliance programs as well as emerging risk areas such as cybersecurity and artificial intelligence (AI). This alert details the key areas of focus for investment advisers.
1. Fiduciary Duties Standards of Conduct
The Investment Advisers Act of 1940 (Advisers Act) established that all investment advisers owe their clients the duties of care and loyalty. In 2025, the SEC will focus on whether investment advice to clients satisfies an investment adviser’s fiduciary obligations, particularly in relation to (1) high-cost products, (2) unconventional investments, (3) illiquid assets, (4) assets that are difficult to value, (5) assets that are sensitive to heightened interest rates and market conditions, and (6) conflicts of interest.
For investment advisers who are dual registrants or affiliated with broker-dealers, the SEC will focus on reviewing (1) whether investment advice is suitable for a client’s advisory accounts, (2) disclosures regarding recommendations, (3) account selection practices, and (4) disclosures regarding conflicts of interest.
2. Effectiveness of Advisers Compliance Programs
The Compliance Rule, Rule 206(4)-7, under the Advisers Act requires investment advisers to (1) implement written policies reasonably designed to prevent violations of the Advisers Act, (2) designate a Chief Compliance Officer, and (3) annually review such policies for adequacy and effectiveness.
In 2025, the SEC will focus on a variety of topics related to the Compliance Rule, including marketing, valuation, trading, investment management, disclosure, filings, and custody, as well as the effectiveness of annual reviews.
Among its top priorities is evaluating whether compliance policies and procedures are reasonably designed to prevent conflicts of interest. Such examination may include a focus on (1) fiduciary obligations related to outsourcing investment selection and management, (2) alternative sources of revenue or benefits received by advisers, and (3) fee calculations and disclosure.
Review under the Compliance Rule is fact-specific, meaning it will vary depending on each adviser’s practices and products. For example, advisers who utilize AI for management, trading, marketing, and compliance will be evaluated to determine the effectiveness of compliance programs related to the use of AI. The SEC may also focus more on advisers with clients that invest in difficult-to-value assets.
3. Examinations of Private Fund Advisers
The SEC will continue to focus on advisers to private funds, which constitute a significant portion of SEC-registered advisers. Specifically, the SEC will prioritize reviewing:
Disclosures to determine whether they are consistent with actual practices.
Fiduciary duties during volatile markets.
Exposure to interest rate fluctuations.
Calculations and allocations of fees and expenses.
Disclosures related to conflicts of interest and investment risks.
Compliance with recently adopted or amended SEC rules, such as Form PF (previously discussed here).
4. Never Examined Advisers, Recently Registered Advisers, and Advisers Not Recently Examined
Finally, the SEC will continue to prioritize recently registered advisers, advisers not examined recently, and advisers who have never been examined.
Key Takeaways
Investment advisers can expect SEC examinations in 2025 to focus heavily on fiduciary duties, compliance programs, and conflicts of interest. As such, advisers should review their policies and procedures related to fiduciary duties and conflicts of interest, as well as evaluate the effectiveness of their compliance programs.
China’s National Intellectual Property Administration Issues Guidelines for Patent Applications for AI-Related Inventions
On December 31, 2024, China’s National Intellectual Property Administration (CNIPA) issued the Guidelines for Patent Applications for AI-Related Inventions (Trial Implementation) (人工智能相关发明专利申请指引(试行)). The Guidelines follow up on CNIPA’s draft for comments issued December 6, 2024, for which only a week was provided for comments. The short comment period suggests CNIPA did not actually want comments and contravenes the not-yet-effective Regulations on the Procedures for Formulating Regulations of the CNIPA (国家知识产权局规章制定程序规定(局令第83号)), which require a minimum 30-day comment period. Highlights follow, including several examples regarding subject matter eligibility.
There are four types of AI-related patent applications:
Patent applications related to AI algorithms or models themselves
Artificial intelligence algorithms or models, that is, advanced statistical and mathematical model forms, include machine learning, deep learning, neural networks, fuzzy logic, genetic algorithms, etc. These algorithms or models constitute the core content of artificial intelligence. They can simulate intelligent decision-making and learning capabilities, enabling computing devices to handle complex problems and perform tasks that usually require human intelligence.
Accordingly, this type of patent application usually involves the artificial intelligence algorithm or model itself and its improvement or optimization, for example, model structure, model compression, model training, etc.
Patent applications related to functions or field applications based on artificial intelligence algorithms or models
Patent applications related to the functional or field application of artificial intelligence algorithms or models refer to the integration of artificial intelligence algorithms or models into inventions as an intrinsic part of the proposed solution for products, methods or their improvements. For example: a new type of electron microscope based on artificial intelligence image sharpening technology. This type of patent application usually involves the use of artificial intelligence algorithms or models to achieve specific functions or apply them to specific fields.
Functions based on artificial intelligence algorithms or models refer to functions implemented using one or more artificial intelligence algorithms or models. They usually include: natural language processing, which enables computers to understand and generate human language; computer vision, which enables computers to “see” and understand images or videos; speech processing, including speech recognition, speech synthesis, etc.; knowledge representation and reasoning, which represents information and enables computers to solve problems, including knowledge graphs, graph computing, etc.; data mining, which calculates and analyzes massive amounts of data to identify information or laws such as potential patterns, trends or relationships. Artificial intelligence algorithms or models can be applied to specific fields based on their functions.
Field applications based on artificial intelligence algorithms or models refer to the application of artificial intelligence to various scenarios, such as transportation, telecommunications, life and medical sciences, security, commerce, education, entertainment, finance, etc., to promote technological innovation and improve the level of intelligence in all walks of life.
Patent applications involving inventions made with the assistance of artificial intelligence
Inventions assisted by artificial intelligence are inventions that are made using artificial intelligence technology as an auxiliary tool in the invention process. In this case, artificial intelligence plays a role similar to that of an information processor or a drawing tool. For example, artificial intelligence is used to identify specific protein binding sites, and finally obtains a new drug compound.
Patent applications involving AI-generated inventions
AI-generated inventions refer to inventions and creations generated autonomously by AI without substantial human contribution, for example, a food container autonomously designed by AI technology.
AI cannot be an inventor:
1. The inventor must be a natural person
Section 4.1.2 of Chapter 1 of Part 1 of the Guidelines clearly states that “the inventor must be an individual, and the application form shall not contain an entity or collective, nor the name of artificial intelligence.”
The inventor named in the patent document must be a natural person. Artificial intelligence systems and other non-natural persons cannot be inventors. When there are multiple inventors, each inventor must be a natural person. The property rights to obtain income and the personal rights to sign enjoyed by the inventor are civil rights. Only civil subjects that meet the provisions of the civil law can be the rights holders of the inventor’s related civil rights. Artificial intelligence systems cannot currently enjoy civil rights as civil subjects, and therefore cannot be inventors.
2. The inventor should make a creative contribution to the essential features of the invention
For patent applications involving artificial intelligence algorithms or models, functions or field applications based on artificial intelligence algorithms or models, the inventor refers to the person who has made creative contributions to the essential features of the invention.
For inventions assisted by AI, a natural person who has made a creative contribution to the substantive features of the invention can be named as the inventor of the patent application. For inventions generated by AI, it is not possible to grant AI inventor status under the current legal context in China.
Examples of subject matter eligibility:
The solution of the claim should reflect the use of technical means that follow the laws of nature to solve technical problems and achieve technical effects
The “technical solution” stipulated in Article 2, Paragraph 2 of the Patent Law refers to a collection of technical means that utilize natural laws to solve the technical problems to be solved. When a claim records that a technical means that utilizes natural laws is used to solve the technical problems to be solved, and a technical effect that conforms to natural laws is obtained thereby, the solution defined in the claim belongs to the technical solution. On the contrary, a solution that does not use technical means that utilize natural laws to solve technical problems to obtain technical effects that conform to natural laws does not belong to the technical solution.
As an example and not a limitation, the following content describes several common situations where related solutions belong to technical solutions.
Scenario 1: AI algorithms or models process data with specific technical meaning in the technical field
If the drafting of a claim can reflect that the object processed by the artificial intelligence algorithm or model is data with a definite technical meaning in the technical field, so that based on the understanding of those skilled in the art, they can know that the execution of the algorithm or model directly reflects the process of solving a certain technical problem by using natural laws, and obtains a technical effect, then the solution defined in the claim belongs to the technical solution. For example, a method for identifying and classifying images using a neural network model. Image data belongs to data with a definite technical meaning in the technical field. If those skilled in the art can know that the various steps of processing image features in the solution are closely related to the technical problem of identifying and classifying objects to be solved, and obtain corresponding technical effects, then the solution belongs to the technical solution.
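For readers less familiar with such models, the snippet below is a minimal, hypothetical sketch of the kind of solution Scenario 1 describes: a small neural network that takes image data (data with a definite technical meaning) and outputs class scores; the architecture and class count are illustrative assumptions only:

```python
# A minimal, hypothetical sketch of the kind of solution in Scenario 1: a neural
# network that takes image data (data with a definite technical meaning) and
# outputs class scores. The architecture and the 10-class output are assumptions
# made only for illustration.
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # extract image features
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool to one value per channel
        )
        self.classifier = nn.Linear(16, num_classes)     # map features to class scores

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.features(images).flatten(1)
        return self.classifier(x)

model = TinyImageClassifier()
batch = torch.randn(4, 3, 32, 32)   # four 32x32 RGB images
print(model(batch).shape)           # torch.Size([4, 10]) -- per-class scores
```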
Scenario 2: There is a specific technical connection between the AI algorithm or model and the internal structure of the computer system
If the drafting of a claim can reflect the specific technical connection between the artificial intelligence algorithm or model and the internal structure of the computer system, thereby solving the technical problem of how to improve the hardware computing efficiency or execution effect, including reducing the amount of data storage, reducing the amount of data transmission, increasing the hardware processing speed, etc., and can obtain the technical effect of improving the internal performance of the computer system in accordance with the laws of nature, then the solution defined in the claim belongs to the technical solution.
This specific technical association reflects the mutual adaptation and coordination between algorithmic features and features related to the internal structure of a computer system at the technical implementation level, such as adjusting the architecture or related parameters of a computer system to support the operation of a specific algorithm or model, making adaptive improvements to the algorithm or model based on a specific internal structure or parameters of a computer system, or a combination of the two.
For example, a neural network model compression method for a memristor accelerator includes: step 1, adjusting the pruning granularity according to the actual array size of the memristor during network pruning through an array-aware regularized incremental pruning algorithm to obtain a regularized sparse model adapted to the memristor array; step 2, reducing the ADC accuracy requirements and the number of low-resistance devices in the memristor array through a power-of-two quantization algorithm to reduce overall system power consumption.
In this example, in order to solve the problem of excessive hardware resource consumption and high power consumption of ADC units and computing arrays when the original model is mapped to the memristor accelerator, the solution uses pruning algorithms and quantization algorithms to adjust the pruning granularity according to the actual array size of the memristor, reducing the number of low-resistance devices in the memristor array. The above means are algorithm improvements made to improve the performance of the memristor accelerator. They are constrained by hardware condition parameters, reflecting the specific technical relationship between the algorithm characteristics and the internal structure of the computer system. They use technical means that conform to the laws of nature to solve the technical problems of excessive hardware consumption and high power consumption of the memristor accelerator, and obtain the technical effect of improving the internal performance of the computer system that conforms to the laws of nature. Therefore, this solution belongs to the technical solution.
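To illustrate one of the algorithmic means named in this example, the sketch below shows a simple power-of-two quantization step that snaps each weight to a nearby signed power of two, reducing the number of distinct levels the hardware must represent; the exponent range and pruning threshold are illustrative assumptions, not details from the Guidelines:

```python
# A minimal sketch of one technique named in this example: power-of-two
# quantization, which snaps each weight to a nearby signed power of two so the
# hardware needs fewer distinct (low-resistance) levels. The exponent range and
# the pruning threshold are illustrative assumptions, not details of the claim.
import numpy as np

def quantize_power_of_two(weights: np.ndarray, min_exp: int = -6, max_exp: int = 0) -> np.ndarray:
    """Round each weight to a power of two in [2**min_exp, 2**max_exp]; prune tiny weights to zero."""
    signs = np.sign(weights)
    magnitudes = np.abs(weights)
    exps = np.clip(np.round(np.log2(np.maximum(magnitudes, 2.0 ** min_exp))), min_exp, max_exp)
    quantized = signs * 2.0 ** exps
    return np.where(magnitudes < 2.0 ** (min_exp - 1), 0.0, quantized)

w = np.array([0.72, -0.31, 0.004, -0.12])
print(quantize_power_of_two(w))  # [ 1.    -0.25   0.    -0.125]
```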
Specific technical associations do not mean that changes must be made to the hardware structure of the computer system. For solutions to improve artificial intelligence algorithms, even if the hardware structure of the computer system itself has not changed, the solution can achieve the technical effect of improving the internal performance of the computer system as a whole by optimizing the system resource configuration. In such cases, it can be considered that there is a specific technical association between the characteristics of the artificial intelligence algorithm and the internal structure of the computer system, which can improve the execution effect of the hardware.
For example, a training method for a deep neural network model includes: when the size of the training data changes, calculating, for the changed training data, the training time under each preset candidate training scheme; selecting the scheme with the shortest training time from the preset candidate schemes as the optimal training scheme for the changed training data, the candidate schemes including a single-processor training scheme and a multi-processor training scheme based on data parallelism; and training the model on the changed training data under the optimal training scheme.
To solve the problem of slow training of deep neural network models, this solution selects between a single-processor training scheme and a multi-processor training scheme, which have different processing efficiencies, according to the size of the training data. This training method has a specific technical connection with the internal structure of the computer system and improves the execution effect of the hardware during training, thereby obtaining the technical effect of improving the internal performance of the computer system in accordance with the laws of nature. It therefore constitutes a technical solution.
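The selection logic in this example can be illustrated with a short Python sketch. The cost models below (linear single-processor time; data-parallel time across four workers plus a fixed synchronization overhead) are hypothetical stand-ins used only to show how the scheme with the shortest estimated training time is chosen as the data size changes; they are not taken from the claim.

```python
# Hypothetical sketch of scheme selection by estimated training time.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class TrainingScheme:
    name: str
    # Returns an estimated wall-clock training time (seconds) for a dataset of n samples.
    estimate_time: Callable[[int], float]

def choose_scheme(n_samples: int, schemes: Dict[str, TrainingScheme]) -> TrainingScheme:
    """Pick the candidate scheme with the shortest estimated training time
    for the current (changed) dataset size."""
    return min(schemes.values(), key=lambda s: s.estimate_time(n_samples))

# Made-up cost models: single-processor time grows linearly with data size;
# the data-parallel scheme divides the work across 4 workers but pays a
# fixed synchronization/communication overhead.
schemes = {
    "single": TrainingScheme("single-processor", lambda n: 0.002 * n),
    "multi":  TrainingScheme("multi-processor",  lambda n: 0.002 * n / 4 + 30.0),
}

for n in (5_000, 50_000):
    best = choose_scheme(n, schemes)
    print(f"{n} samples -> {best.name}")
```

With these invented cost models, small datasets select the single-processor scheme and large datasets select the data-parallel scheme, mirroring the adaptive selection described above.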
However, if a claim merely uses a computer system as a carrier for running an artificial intelligence algorithm or model, and does not reflect a specific technical connection between the algorithmic features and the internal structure of the computer system, it does not fall within Scenario 2.
For example, a computer system for training a neural network includes a memory and a processor, wherein the memory stores instructions and the processor reads the instructions to train the neural network by optimizing a loss function.
In this solution, the memory and processor in the computer system are merely conventional carriers for storing and executing the algorithm. There is no specific technical connection between the algorithmic features involved in training the neural network by optimizing a loss function and the memory and processor of the computer system. The problem solved, optimizing neural network training, is not a technical problem, and the effect obtained, improved model training efficiency, is not a technical effect of improving the internal performance of the computer system. The solution therefore does not constitute a technical solution.
Scenario 3: Using artificial intelligence algorithms to mine the inherent correlations in big data in specific application fields that conform to the laws of nature
When artificial intelligence algorithms or models are applied in various fields, they can perform data analysis, evaluation, prediction, or recommendation. For such applications, if the claims reflect that big data in a specific application field is processed, that artificial intelligence algorithms such as neural networks are used to mine inherent correlations in the data that conform to the laws of nature, and that the technical problem of improving the reliability or accuracy of big data analysis in that field is thereby solved with corresponding technical effects, then the claimed solution constitutes a technical solution.
Using artificial intelligence algorithms or models to mine data and to train a model that produces outputs from inputs does not, by itself, constitute a technical means. Only when the correlations mined by the algorithm or model are inherent in the data and conform to the laws of nature can the relevant means, taken as a whole, constitute technical means that utilize the laws of nature. The claims must therefore make clear which indicators, parameters, and the like are used to characterize the analyzed object in order to obtain the analysis results, and whether the correlation mined by the algorithm or model between those indicators and parameters (the model input) and the result data (the model output) conforms to the laws of nature.
For example, a food safety risk prediction method obtains and analyzes historical food safety risk events to derive head-entity and tail-entity data representing food raw materials, edible items, and poisonous substances found in food sampling, together with their corresponding timestamp data; constructs, from each head entity, its corresponding tail entity, and the entity relationship carrying timestamp data that represents the content level, risk, or intervention for each type of hazard, corresponding quadruple (four-tuple) data to obtain a knowledge graph; trains a preset neural network on the knowledge graph to obtain a food safety knowledge graph model; and predicts the food safety risk at the prediction time based on that model.
The background section of the application records that the prior art used static knowledge graphs to predict food safety risks, which could not reflect that food data in real situations changes over time and which ignored the influences among data. Those skilled in the art know that food raw materials, edible items, and poisonous substances found in food sampling change gradually over time: the longer food is stored, the more microorganisms it contains, and the content of sampled toxins rises accordingly; and when food contains multiple raw materials that can react chemically, the reaction may create food safety risks at some future time. This solution predicts food safety risk based on the inherent, time-dependent characteristics of food: timestamps are added when the knowledge graph is constructed, and a preset neural network is trained on entity data related to food safety risk at each moment in order to predict the risk at the time to be predicted. The solution uses technical means that follow the laws of nature to solve the technical problem of inaccurate prediction of food safety risks at future points in time, and it can obtain corresponding technical effects. It therefore constitutes a technical solution.
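As an illustration of the data structure this example relies on, the following Python sketch builds timestamped quadruples (head entity, relation, tail entity, timestamp) and groups them by time, which is the kind of input a temporal knowledge graph model would be trained on. The entity names, relation labels, and values are invented examples, not drawn from the application.

```python
# Hypothetical sketch of temporal knowledge-graph quadruples for food safety facts.
from collections import namedtuple
from datetime import date

Quad = namedtuple("Quad", ["head", "relation", "tail", "timestamp"])

quads = [
    Quad("raw_milk",     "hazard_level", "aflatoxin_M1:0.4ug/kg", date(2024, 3, 1)),
    Quad("raw_milk",     "hazard_level", "aflatoxin_M1:0.7ug/kg", date(2024, 6, 1)),
    Quad("leafy_greens", "intervention", "recall_issued",         date(2024, 6, 15)),
]

# Group facts by timestamp so a model can be trained on per-time snapshots and
# then queried for a future prediction time.
by_time = {}
for q in quads:
    by_time.setdefault(q.timestamp, []).append(q)

for t in sorted(by_time):
    print(t, [(q.head, q.relation, q.tail) for q in by_time[t]])
```

The timestamp field is what distinguishes this structure from the static (head, relation, tail) triples used in the prior art described above.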
If the correlation mined by the artificial intelligence algorithm or model between the indicator parameters and the prediction results is governed only by economic or social laws, the solution does not follow the laws of nature. For example, a method of estimating a regional economic prosperity index uses a neural network to mine the correlation between economic data and electricity consumption data, on the one hand, and the economic prosperity index, on the other, and predicts the regional index from that correlation. Because this correlation is governed by economic laws rather than by the laws of nature, the solution does not use technical means and does not constitute a technical solution.
The full text is available here (Chinese only).
FTC Blog Outlines Factors for Companies to Consider About AI — AI: The Washington Report
The FTC staff recently published a blog post outlining four factors for companies to consider when developing or deploying AI products to avoid running afoul of the nation’s consumer protection laws.
The blog post does not represent formal guidance, but it likely articulates the FTC's thinking and enforcement approach, particularly regarding deceptive claims about AI tools and due diligence when using AI-powered systems.
Although the blog post comes just days before current Republican Commissioner Andrew Ferguson becomes FTC Chair on January 20, the FTC is likely to continue the focus on AI-related consumer protection issues that it has maintained under Chair Khan. Ferguson has voted in support of nearly all of the FTC's AI consumer protection actions, but his one dissent suggests how he might dial back some of the current FTC's more aggressive AI consumer protection agenda.
The FTC staff in the Office of Technology and the Division of Advertising Practices in the FTC Bureau of Consumer Protection released a blog outlining four factors that companies should consider when developing or deploying an AI-based product. These factors are not binding, but they underscore the FTC’s continued focus on enforcing the nation’s consumer protection laws as they relate to AI.
The blog comes just under two weeks before current Republican Commissioner Andrew Ferguson will become the FTC Chair. However, under Ferguson, as we discuss below, the FTC will likely continue its same focus on AI consumer protection issues, though it may take a more modest approach.
The Four Factors for Companies to Consider about AI
The blog post outlines four factors for companies to consider when developing or deploying AI:
Doing due diligence to prevent harm before and while developing or deploying an AI service or product
In 2024, the FTC filed a complaint against a leading retail pharmacy alleging that it “failed to take reasonable measures to prevent harm to consumers in its use of facial recognition technology (FRT) that falsely tagged consumers in its stores, particularly women and people of color, as shoplifters.” The FTC has “highlighted that companies offering AI models need to assess and mitigate potential downstream harm before and during deployment of their tools, which includes addressing the use and impact of the technologies that are used to make decisions about consumers.”
Taking preventative steps to detect and remove AI-generated deepfakes and fake images, including child sexual abuse material and non-consensual intimate imagery
In April 2024, the FTC finalized its impersonation rule, and the FTC also launched a Voice Cloning Challenge to create ways to protect consumers from voice cloning software. The FTC has previously discussed deepfakes and their harms to Congress in its Combatting Online Harms Report.
Avoiding deceptive claims about AI systems or services that cause people to lose money or otherwise harm users
The FTC's Operation AI Comply, which we covered, and other enforcement actions have taken aim at companies that made false or deceptive claims about the capabilities of their AI products or services. Many of these enforcement actions have targeted companies that falsely claimed their AI products or services would help people make money or start a business.
Protecting privacy and safety
AI models, especially generative AI ones, run on large amounts of data, some of which may be highly sensitive. “The Commission has a long record of providing guidance to businesses about ensuring data security and protecting privacy,” as well as taking action against companies that have failed to do so.
While the four factors highlight consumer protection issues that the FTC has focused on, FTC staff cautions that the four factors are “not a comprehensive overview of what companies should be considering when they design, build, test, and deploy their own products.”
New FTC Chair: New or Same Focus on AI Consumer Protection Issues?
The blog post comes just under two weeks before President-elect Trump's pick to lead the FTC, current FTC Commissioner Andrew Ferguson, becomes the FTC Chair. Under Chair Ferguson, the FTC's focus on the consumer protection side of AI is unlikely to change significantly; Ferguson has voted in support of nearly all of the FTC's consumer protection AI enforcement actions.
However, Ferguson’s one dissent in a consumer protection case brought against an AI company illuminates how the FTC under his leadership could take a more modest approach to consumer protection issues related to AI. In his dissent, Commissioner Ferguson wrote:
The Commission’s theory is that Section 5 prohibits products and services that could be used to facilitate deception or unfairness because such products and services are the means and instrumentalities of deception and unfairness. Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents … and risks strangling a potentially revolutionary technology in its cradle.
Commissioner Ferguson's point seems well taken. Less clear is where he would draw the line. Moreover, as a practical matter, his ability to move the needle would likely need to wait until President Trump's other nominee, Mark Meador, is confirmed, as expected, later this year.
Matthew Tikhonovsky also contributed to this article.
Black Box Issues [Podcast]
In part three of our series on potential pitfalls in the use of artificial intelligence (or AI) when it comes to employment decisions, partner Guy Brenner and senior counsel Jonathan Slowik dive into the concept of “black box” systems—AI tools whose internal decision-making processes are not transparent. The internal workings of such systems may not be well understood, even by the developers who create them. We explore the challenges this poses for employers seeking to ensure that their use of AI in employment decisions does not inadvertently introduce bias into the process. Be sure to tune in for a closer look at the complexities of this conundrum and what it means for employers.
McDermott+ Check-Up: January 10, 2025
THIS WEEK’S DOSE
119th Congress Begins. The new Congress began with key membership announcements for relevant healthcare committees.
Cures 2.1 White Paper Published. The document outlines the 21st Century Cures 2.1 legislative proposal, focusing on advancing healthcare technologies and fostering innovation.
Senate Budget Committee Members Release Report on Private Equity. The report, released by the committee’s chair and ranking member from the 118th Congress, includes findings from an investigation into private equity’s role in healthcare.
HHS OCR Proposes Significant Updates to HIPAA Security Rule. The US Department of Health & Human Services (HHS) Office for Civil Rights (OCR) seeks to address current cybersecurity concerns.
HHS Releases AI Strategic Plan. The plan outlines how HHS will prioritize resources and coordinate efforts related to artificial intelligence (AI).
CFPB Removes Medical Debt from Consumer Credit Reports. The Consumer Financial Protection Bureau (CFPB) finalized its 2024 proposal largely as proposed.
President Biden Signs Several Public Health Bills into Law. The legislation includes the reauthorization and creation of public health programs related to cardiomyopathy, autism, and emergency medical services for children.
CONGRESS
119th Congress Begins. The 119th Congress began on January 3, 2025. Lawmakers reelected Speaker Johnson in the first round of votes and adopted the House rules package. The first full week in session was slow-moving due to a winter storm in Washington, DC; funeral proceedings for President Jimmy Carter; and the certification of electoral college votes. Committees are still getting organized, and additions to key health committees include:
House Energy & Commerce: Reps. Bentz (R-OR), Houchin (R-IN), Fry (R-SC), Lee (R-FL), Langworthy (R-NY), Kean (R-NJ), Rulli (R-OH), Evans (R-CO), Goldman (R-TX), Fedorchak (R-ND), Ocasio-Cortez (D-NY), Mullin (D-CA), Carter (D-LA), McClellan (D-VA), Landsman (D-OH), Auchincloss (D-MA), and Menendez (D-NJ).
House Ways & Means: Reps. Moran (R-TX), Yakym (R-IN), Miller (R-OH), Bean (R-FL), Boyle (D-PA), Plaskett (D-VI), and Suozzi (D-NY).
Senate Finance: Sens. Marshall (R-KS), Sanders (I-VT), Smith (D-MN), Ray Luján (D-NM), Warnock (D-GA), and Welch (D-VT).
Senate Health, Education, Labor & Pensions: Sens. Scott (R-SC), Hawley (R-MO), Banks (R-IN), Crapo (R-ID), Blackburn (R-TN), Kim (D-NJ), Blunt Rochester (D-DE), and Alsobrooks (D-MD).
Congress has a busy year ahead. The continuing resolution (CR) enacted in December 2024 included several short-term extensions of health provisions (and excluded many others that had been included in an earlier proposed bipartisan health package), and these extensions will expire on March 14, 2025. Congress will need to complete action on fiscal year (FY) 2025 appropriations by this date, whether by passing another CR through the end of the FY, or by passing a full FY 2025 appropriations package. The short-term health extenders included in the December CR could be further extended in the next appropriations bill, and Congress also has the opportunity to revisit the bipartisan, bicameral healthcare package that was unveiled in December but ultimately left out of the CR because of pushback from Republicans about the overall bill’s size.
The 119th Congress will also be focused in the coming weeks on advancing key priorities – including immigration reform, energy policy, extending the 2017 tax cuts, and raising the debt limit – through the budget reconciliation process. This procedural maneuver allows the Senate to advance legislation with a simple majority, rather than the 60 votes needed to overcome the threat of a filibuster. Discussions are underway about the scope of this package and the logistics (will there be one reconciliation bill or two?), and we expect to learn more in the days and weeks ahead. It is possible that healthcare provisions could become a part of such a reconciliation package.
Cures 2.1 White Paper Published. Rep. Diana DeGette (D-CO) and former Rep. Larry Bucshon (R-IN) released a white paper on December 24, 2024, outlining potential provisions of the 21st Century Cures 2.1 legislative proposal expected to be introduced later this year. This white paper and the anticipated legislation are informed by responses to a 2024 request for information. The white paper is broad, discussing potential Medicare reforms relating to gene therapy access, coverage determinations, and fostering innovation. With Rep. Bucshon’s retirement, all eyes are focused on who will be the Republican lead on this effort.
Senate Budget Committee Members Release Report on Private Equity. The report contains findings from an investigation into private equity’s role in healthcare led by the leaders of the committee in the 118th Congress, then-Chair Whitehouse (D-RI) and then-Ranking Member Grassley (R-IA). The report includes two case studies and states that private equity firms have become increasingly involved in US hospitals. They write that this trend impacts quality of care, patient safety, and financial stability at hospitals across the United States, and the report calls for greater oversight, transparency, and reforms of private equity’s role in healthcare. A press release that includes more documents related to the case studies can be found here.
ADMINISTRATION
HHS OCR Proposes Significant Updates to HIPAA Security Rule. HHS OCR released a proposed rule, HIPAA Security Rule to Strengthen the Cybersecurity of Electronic Protected Health Information (ePHI). HHS OCR proposes minimum cybersecurity standards that would apply to health plans, healthcare clearinghouses, most healthcare providers (including hospitals), and their business associates. Key proposals include:
Removing the distinction between “required” and “addressable” implementation specifications and making all implementation specifications required with specific, limited exceptions.
Requiring written documentation of all Security Rule policies, procedures, plans, and analyses.
Updating definitions and revising implementation specifications to reflect changes in technology and terminology.
Adding specific compliance time periods for many existing requirements.
Requiring the development and revision of a technology asset inventory and a network map that illustrates the movement of ePHI throughout the regulated entity’s electronic information system(s) on an ongoing basis, but at least once every 12 months and in response to a change in the regulated entity’s environment or operations that may affect ePHI.
Requiring notification of certain regulated entities within 24 hours when a workforce member’s access to ePHI or certain electronic information systems is changed or terminated.
Strengthening requirements for planning for contingencies and responding to security incidents.
Requiring regulated entities to conduct an audit at least once every 12 months to ensure their compliance with the Security Rule requirements.
The HHS OCR fact sheet is available here. Comments are due on March 7, 2025. Because this is a proposed rule, the incoming Administration will determine the content and next steps for the final rule.
HHS Releases AI Strategic Plan. In response to President Biden’s Executive Order on AI, HHS unveiled its AI strategic plan. The plan is organized into five primary domains:
Medical research and discovery
Medical product development, safety and effectiveness
Healthcare delivery
Human services delivery
Public health
Within each of these chapters, HHS discusses in-depth the context of AI, stakeholders engaged in the domain’s AI value chain, opportunities for the application of AI in the domain, trends in AI for the domain, potential use-cases and risks, and an action plan.
The report also highlights efforts related to cybersecurity and internal operations. Lastly, the plan outlines responsibility for AI efforts within HHS’s Office of the Chief Artificial Intelligence Officer.
CFPB Removes Medical Debt from Consumer Credit Reports. The final rule removes $49 billion in unpaid medical bills from the credit reports of 15 million Americans, building on the Biden-Harris Administration’s work with states and localities. The White House fact sheet can be found here. Whether the incoming Administration will intervene in this rulemaking remains an open question.
President Biden Signs Several Public Health Bills into Law. These bills from the 118th Congress include:
H.R. 6829, the HEARTS Act of 2024, which mandates that the HHS Secretary work with the Centers for Disease Control and Prevention, patient advocacy groups, and health professional organizations to develop and distribute educational materials on cardiomyopathy.
H.R. 6960, the Emergency Medical Services for Children Reauthorization Act of 2024, which reauthorizes through FY 2029 the Emergency Medical Services for Children State Partnership Program.
H.R. 7213, the Autism CARES Act of 2024, which reauthorizes, through FY 2029, the Developmental Disabilities Surveillance and Research Program and the Interagency Autism Coordinating Committee in HHS, among other HHS programs to support autism education, early detection, and intervention.
QUICK HITS
ACIMM Hosts Public Meeting. The HHS Advisory Committee on Infant and Maternal Mortality (ACIMM) January meeting included discussion and voting on draft recommendations related to preconception/interconception health, systems issues in rural health, and social drivers of health. The agenda can be found here.
CBO Releases Report on Gene Therapy Treatment for Sickle Cell Disease. The Congressional Budget Office (CBO) report did not estimate the federal budgetary effects of any policy, but instead discussed how CBO would assess related policies in the future.
CMS Reports Marketplace 2025 Open Enrollment Data. As of January 4, 2025, 23.6 million consumers had selected a plan for coverage in 2025, including more than three million new consumers. Read the fact sheet here.
CMS Updates Hospital Price Transparency Guidance. The agency posted updated frequently asked questions (FAQs) on hospital price transparency compliance requirements. Some of the FAQs are related to new requirements that took effect January 1, 2025, as finalized in the Calendar Year 2024 Outpatient Prospective Payment System/Ambulatory Services Center Final Rule, and others are modifications to existing requirements as detailed in previous FAQs.
GAO Releases Reports on Older Americans Act-Funded Services, ARPA-H Workforce. The US Government Accountability Office (GAO) report recommended that the Administration for Community Living develop a written plan for its work with the Interagency Coordinating Committee on Healthy Aging and Age-Friendly Communities to improve services funded under the Older Americans Act. In another report, the GAO recommended that the Advanced Research Projects Agency for Health (ARPA-H) develop a workforce planning process and assess scientific personnel data.
VA Expands Cancers Covered by PACT Act. The US Department of Veterans Affairs (VA) will add several new cancers to the list of those presumed to be related to burn pit exposure, lowering the burden of proof for veterans to receive disability benefits. Read the press release here.
HHS Announces $10M in Awards for Maternal Health. The $10 million in grants from the Substance Abuse and Mental Health Services Administration (SAMHSA) will go to a new community-based maternal behavioral health services grant program. Read the press release here.
Surgeon General Issues Advisory on Link Between Alcohol and Cancer Risk. The advisory includes a series of recommendations to increase awareness of the connection between alcohol consumption and cancer risk and update the existing Surgeon General’s health warning label on alcohol-containing beverages. Read the press release here.
SAMHSA Awards CCBHC Medicaid Demonstration Planning Grants. The grants will go to 14 states and Washington, DC, to plan a Certified Community Behavioral Health Clinic (CCBHC). Read the press release here.
HHS Announces Membership of Parkinson’s Advisory Council. The Advisory Council on Parkinson’s Research, Care, and Services will be co-chaired by Walter J. Koroshetz, MD, Director of the National Institutes of Health’s National Institute of Neurological Disorders and Stroke, and David Goldstein, MS, Associate Deputy Director for the Office of Science and Medicine for HHS’s Office of the Assistant Secretary for Health. Read the press release here.
NEXT WEEK’S DIAGNOSIS
The House and Senate are in session next week and will continue to organize for the 119th Congress. Confirmation hearings are expected to begin in the Senate for President-elect Trump’s nominees, although none in the healthcare space have been announced yet. On the regulatory front, CMS will publish the Medicare Advantage rate notice.