California Attorney General Issues Two Advisories Summarizing Law Applicable to AI
If you are looking for a high-level summary of California laws regulating artificial intelligence (AI), check out the two legal advisories issued by California Attorney General Rob Bonta. The first advisory is directed at consumers and entities, summarizing their rights and obligations under the state's consumer protection, civil rights, competition, and data privacy laws. The second advisory focuses on healthcare entities.
“AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI.” Attorney General Bonta
The advisories summarize existing California laws that may apply to entities that develop, sell, or use AI. They also address several new California AI laws that went into effect on January 1, 2025.
The first advisory points to several existing laws, such as California’s Unfair Competition Law and Civil Rights Laws, designed to protect consumers from unfair and fraudulent business practices, anticompetitive harm, discrimination and bias, and abuse of their data.
California's Unfair Competition Law, for example, protects the state's residents against unlawful, unfair, or fraudulent business acts or practices. The advisory notes that "AI provides new tools for businesses and consumers alike, and also creates new opportunity to deceive Californians." Under a similar federal law, the Federal Trade Commission (FTC) recently ordered an online marketer to pay $1 million to resolve allegations that it deceptively claimed its AI product could make websites compliant with accessibility guidelines. Given the explosive growth of AI products and services, organizations should revisit their procurement and vendor assessment practices to be sure they are appropriately vetting vendors of AI systems.
Additionally, the California Fair Employment and Housing Act (FEHA) protects Californians from harassment or discrimination in employment or housing based on a number of protected characteristics, including sex, race, disability, age, criminal history, and veteran or military status. These FEHA protections extend to uses of AI systems when developed for and used in the workplace. Expect new regulations soon as the California Civil Rights Council continues to mull proposed AI regulations under the FEHA.
Recognizing that “data is the bedrock underlying the massive growth in AI,” the advisory points to the state’s constitutional right to privacy, applicable to both government and private entities, as well as to the California Consumer Privacy Act (CCPA). Of course, California has several other privacy laws that may need to be considered when developing and deploying AI systems – the California Invasion of Privacy Act (CIPA), the Student Online Personal Information Protection Act (SOPIPA), and the Confidentiality of Medical Information Act (CMIA).
Beyond these existing laws, the advisory also summarizes new laws in California directed at AI, including:
Disclosure Requirements for Businesses
Unauthorized Use of Likeness
Use of AI in Election and Campaign Materials
Prohibition and Reporting of Exploitative Uses of AI
The second advisory recounts many of the same risks and concerns about AI as relevant to the healthcare sector. Consumer protection, anti-discrimination, patient privacy, and other concerns are all challenges that entities in the healthcare sector face when developing or deploying AI. The advisory provides examples of applications of AI systems in healthcare that may be unlawful; here are a couple:
Denying health insurance claims using AI or other automated decisionmaking systems in a manner that overrides doctors’ views about necessary treatment.
Using generative AI or other automated decisionmaking tools to draft patient notes, communications, or medical orders that include erroneous or misleading information, including information based on stereotypes relating to race or other protected classifications.
The advisory also addresses data privacy, reminding readers that the state's CMIA may be more protective in some respects than the popular federal healthcare privacy law, HIPAA. It also discusses recent changes to the CMIA that require providers, electronic health records (EHR) companies, and digital health companies to enable patients to keep their reproductive and sexual health information confidential and separate from the rest of their medical records. These and other requirements need to be taken into account when incorporating AI into EHRs and related applications.
In both advisories, the Attorney General makes clear that in addition to the laws referenced above, other California laws—including tort, public nuisance, environmental and business regulation, and criminal law—apply to AI. In short:
Conduct that is illegal if engaged in without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.
Both advisories provide a helpful summary of laws potentially applicable to AI systems, and can be useful resources when building policies and procedures around the development and/or deployment of AI systems.
New Jersey AG Says Anti-Discrimination Law Covers Algorithmic Discrimination
Last week, New Jersey Attorney General Matthew Platkin announced new guidance stating that the New Jersey Law Against Discrimination (LAD) applies to algorithmic discrimination, i.e., when automated systems treat people differently or negatively based on protected characteristics. This can happen when algorithms are trained on biased data or when systems are designed in ways that embed bias. LAD prohibits discrimination based on protected characteristics such as race, religion, national origin, sex, pregnancy, and gender identity, among others. According to the guidance, employers, housing providers, and places of public accommodation that make discriminatory decisions using automated decision-making tools, like artificial intelligence (AI), would violate LAD. LAD is not an intent-based statute; therefore, a party can violate LAD even if it uses an automated decision-making tool with no intent to discriminate or uses a discriminatory algorithm developed by a third party. The guidance does not create any new rights or obligations. However, in noting that the law covers automated decision-making, the guidance encourages companies to carefully design, test, and evaluate any AI system they seek to employ to help avoid producing discriminatory impacts.
The CIO-CMO Collaboration: Powering Ethical AI and Customer Engagement
The rapid advancement of artificial intelligence (AI) technologies is reshaping the corporate landscape, offering unparalleled opportunities to enhance customer experiences and streamline operations. At the intersection of this digital transformation lie two key executives—the Chief Information Officer (CIO) and the Chief Marketing Officer (CMO). This dynamic duo, when aligned, can drive ethical AI adoption, ensure compliance, and foster personalized customer engagement powered by innovation and responsibility.
This blog explores how the collaboration between CIOs and CMOs is essential in balancing ethical AI implementations with compelling customer experiences. From data governance to technology infrastructure and cybersecurity, below is a breakdown of the critical aspects of this partnership and why organizations must align these roles to remain competitive in the AI-driven world.
Understanding Ethical AI: Balancing Innovation with Responsibility
Ethical AI isn't just a buzzword; it's a guiding principle that ensures AI solutions respect user privacy, avoid bias, and operate transparently. To create meaningful customer experiences while addressing the societal concerns surrounding AI, CIOs and CMOs must collaborate to design AI applications that are both innovative and responsible.
CMOs focus on delivering dynamic, real-time, and personalized interactions to meet rising customer expectations. However, achieving this requires vast amounts of personal data, potentially risking violations of privacy regulations like the General Data Protection Regulation and the California Consumer Privacy Act. Enter the CIO, who ensures the technical infrastructure adheres to these laws while safeguarding the organization's reputation. Together, the CIO and CMO can strike a delicate balance between leveraging AI for customer engagement and adhering to responsible AI practices.
The Role of Data Governance in AI-Driven Strategies
Data governance is the backbone of ethical AI and compelling customer engagement. CMOs rely on customer data to craft hyper-personalized campaigns, while CIOs are charged with maintaining that data's security, accuracy, and ethical usage. Without proper governance, organizations risk breaches, regulatory fines, and, perhaps most damagingly, a loss of trust among consumers.
Collaboration between CIOs and CMOs is necessary to establish clear data management protocols; this includes ensuring that all collected data is anonymized as needed, securely stored, and utilized in compliance with emerging AI content labeling regulations. The result is a transparent system that reassures customers and consistently delivers high-quality experiences.
Robust Technology Infrastructure for AI-Powered Customer Engagement
For AI to deliver on its promise of customer engagement, organizations require scalable, secure, and agile technology infrastructure. A close alignment between CIOs and CMOs ensures that marketing campaigns are supported by IT systems capable of handling diverse AI workloads.
Platforms driven by machine learning and big data analytics allow marketing teams to create real-time, omnichannel campaigns. Meanwhile, CIOs ensure these platforms integrate seamlessly into the organization’s technology stack without sacrificing security or performance. This partnership allows marketers to focus on innovative strategies while IT supports them with reliable and forward-thinking infrastructure.
Cybersecurity Challenges and the Integrated Approach of CIOs and CMOs
Customer engagement strategies powered by AI rely heavily on consumer trust, but cybersecurity threats lurk around every corner. According to Palo Alto Networks’ predictions, customer data is central to modern marketing initiatives. However, without an early alignment between CIOs and CMOs, the organization is exposed to risks like data breaches, compliance violations, and AI-related controversies.
A proactive collaboration between CIOs and CMOs ensures that potential vulnerabilities are identified and mitigated before they evolve into full-blown crises. Measures such as end-to-end data encryption, regular cybersecurity audits, and robust AI content labeling policies can protect the organization’s digital assets and reputation. This integrated approach enables businesses to foster lasting customer trust in a world of increasingly sophisticated cyber threats.
Case Studies: Successful CIO-CMO Collaborations
Case Study 1: A Retail Giant's Transformation
One of the world's largest retail chains successfully transformed its customer experience through CIO-CMO collaboration. The CIO rolled out a scalable AI-driven recommendation engine, while the CMO used this tool to craft personalized shopping experiences. The result? A 35% increase in customer retention within a year and significant growth in lifetime customer value.
Case Study 2: Financial Services Leader
A financial services firm adopted an AI-powered chatbot to enhance its customer service. The CIO ensured compliance with strict financial regulations, while the CMO leveraged customer insights to refine the chatbot's conversational design. Together, they created a seamless, trustworthy digital service channel that improved customer satisfaction scores by 28%.
These examples reinforce the advantages of partnership. By uniting their expertise, CIOs and CMOs deliver next-generation strategies that drive measurable business outcomes.
Future Trends in AI, Compliance, and Executive Collaboration
The evolving landscape of AI, compliance, and customer engagement is reshaping the roles of CIOs and CMOs. Here are a few trends to watch for in the coming years:
AI Transparency: Regulations will increasingly require companies to disclose how AI models were trained and how customer data is used. Alignment between CIOs and CMOs will be vital in meeting these demands without derailing marketing campaigns.
Hyper-Personalization: Advances in machine learning will allow marketers to offer even more granular personalization, but this will require sophisticated data-centric systems designed by CIOs.
AI Content Labeling: From machine-generated text to synthetic media, organizations must adopt clear labeling practices to distinguish AI-generated content from human-generated content.
By staying ahead of these trends, organizations can cement themselves as leaders in ethical AI and customer engagement.
Forging a Path to Sustainable AI Innovation
The digital transformation of business will continue to deepen the interconnected roles of the CIO and CMO. These two leaders occupy the dual pillars required for success in the AI era—technology prowess and customer-centric creativity. By aligning their goals and strategies early on, they can power ethical AI innovation, ensure compliance, and elevate customer experiences to new heights.
California AG Issues AI-Related Legal Guidelines for Developers and Healthcare Entities
The California Attorney General published two legal advisories this week:
Legal Advisory on the Application of Existing California Laws to Artificial Intelligence
Legal Advisory on the Application of Existing California Law to Artificial Intelligence in Healthcare
These advisories seek to remind businesses of consumer rights under the California Consumer Privacy Act, as amended by the California Privacy Rights Act (collectively, CCPA), and to advise developers who create, sell, or use artificial intelligence (AI) about their obligations under the CCPA.
Attorney General Rob Bonta said, “California is an economic powerhouse built in large part on technological innovation. And right alongside that economic might is a strong commitment to economic justice, workers’ rights, and competitive markets. We’re not successful in spite of that commitment — we’re successful because of it [. . .] AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI. Companies, including healthcare entities, are responsible for complying with new and existing California laws and must take full accountability for their actions, decisions, and products.”
Advisory No. 1: Application of Existing California Laws to Artificial Intelligence
This advisory:
Provides an overview of existing California laws (i.e., consumer protection, civil rights, competition, data protection laws, and election misinformation laws) that may apply to companies that develop, sell, or use AI;
Summarizes the new California AI laws that went into effect on January 1, 2025, such as:
Disclosure Requirements for Businesses
Unauthorized Use of Likeness
Use of AI in Election and Campaign Materials
Prohibition and Reporting of Exploitative Uses of AI
Advisory No. 2: Application of Existing California Law to Artificial Intelligence in Healthcare
AI tools are used for tasks such as appointment scheduling, medical risk assessment, and medical diagnosis and treatment decisions. This advisory:
Provides guidance under California law (i.e., consumer protection, civil rights, data privacy, and professional licensing laws) for healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, and use AI and other automated decision systems;
Reminds such entities that AI carries harmful risks and that all AI systems must be tested, validated, and audited for safe, ethical, and lawful use;
Informs such entities that they must be transparent about using patient data to train AI systems and must alert patients to how they are using AI to make decisions affecting their health and/or care.
This is yet another example of how issues related to the safe and ethical use of AI will likely be at the forefront for many regulators across many industries.
Biden Administration Releases Executive Order Advancing Artificial Intelligence
Highlights
The Biden administration's latest executive order represents a transformative step in the U.S. approach to AI, integrating innovation with sustainability and security
Businesses will have an opportunity to align with this strategic vision, contribute to an ecosystem that will sustain U.S. leadership, and encourage economic competitiveness
The principles outlined in the executive order will guide federal agencies to ensure AI infrastructure supports national priorities while fostering innovation, sustainability, and inclusivity
On Jan. 14, 2025, President Biden issued an executive order on advancing the United States’ position as a leader in the creation of artificial intelligence (AI) infrastructure.
AI is a transformative technology with critical implications for national security and economic competitiveness. Recent advancements highlight AI’s growing role in industries and areas including logistics, military capabilities, intelligence analysis, and cybersecurity. Developing AI domestically could be essential in preventing adversaries from exploiting powerful systems, maintaining national security, and avoiding reliance on foreign infrastructure.
The executive order posits that to secure U.S. leadership in AI development, significant private sector investments are needed to build advanced computing clusters, expand energy infrastructure, and establish secure supply chains for critical components. AI’s increasing computational and energy demands necessitate innovative solutions, including advancements in clean energy technologies such as geothermal, solar, wind, and nuclear power.
The executive order notes:
National Security and Leadership
AI infrastructure development should enhance U.S. national security and leadership in AI, including collaboration between the federal government and the private sector; ensuring safeguards for cybersecurity, supply chains, and physical security; and managing risks from future frontier AI capabilities.
The Secretary of State, in coordination with key federal officials and agencies, will create a plan to engage allies and partners in accelerating the global development of trusted AI infrastructure. The plan will focus on advancing collaboration on building trusted AI infrastructure worldwide.
Economic Competitiveness
AI infrastructure should also strengthen U.S. economic competitiveness by fostering a fair, open, and innovative technology ecosystem by supporting small developers, securing reliable supply chains, and ensuring that AI benefits all Americans.
Clean Energy Leadership
The U.S. aims to lead in operating AI data centers powered by clean energy to help ensure that new data center electricity demands do not take clean power away from other end users or increase grid emissions. This involves modernizing energy infrastructure, streamlining permitting processes, and advancing clean energy technologies, ensuring AI infrastructure development aligns with new clean electricity generation.
The Department of Energy, in coordination with other agencies, will expand research and development efforts to improve AI data center efficiency, focusing on building systems, energy use, cooling infrastructure, software, and wastewater heat reuse. A report will be submitted to the president with recommendations for advancing industry-wide efficiency, including innovations like server consolidation, hardware optimization, and power management.
The Secretary of Energy will provide technical assistance to state public utility commissions on rate structures, such as clean transition tariffs, to enable AI infrastructure to use clean energy without raising electricity or water costs unnecessarily.
Cost and Community Considerations
Because building AI in the U.S. requires enormous private-sector investments, the AI infrastructure must be developed without increasing energy costs for consumers and businesses. Companies participating in AI development, clean energy technology, and grid and semiconductor development can work with federal agencies to strategically further these initiatives that align with broader ethical and operational standards.
The Secretaries of Defense and Energy will each identify at least three federally managed sites suitable for leasing to non-federal entities for the construction and operation of frontier AI data centers and clean energy facilities. These sites should aim to be fully permitted for construction by the end of 2025 and operational by the end of 2027.
Priority will be given to locations that 1) have appropriate terrain, land gradients, and soil conditions for AI data centers; 2) minimize adverse impacts on local communities, natural or cultural resources, and protected species; and 3) are near communities seeking to host AI infrastructure, supporting local employment opportunities in design, construction, and operations.
Worker and Community Benefits
AI infrastructure projects should uphold high labor standards, involve close collaboration with affected communities, and prioritize safety and equity, ensuring the broader population benefits from technological innovation.
The Director of the Office of Management and Budget, in consultation with the Council on Environmental Quality, will evaluate best practices for public participation in siting and energy-related infrastructure decisions for AI data centers. Recommendations will be made to the Secretaries of Defense and Energy, who will incorporate these into their decision-making processes to ensure effective governmental engagement and meaningful community input on health, safety, and environmental impacts.
Relevant agencies will prioritize measures to keep electricity costs low for households, consumers, and businesses when implementing AI infrastructure on Federal sites.
Takeaways
The U.S. is committed to enabling the development and operation of AI infrastructure, including data centers, guided by five key principles: 1) national security and leadership; 2) economic competitiveness; 3) leadership in clean energy; 4) cost and community consideration; and 5) workforce and community benefits.
The Biden administration’s latest initiative aims to foster a competitive technology ecosystem, enable small and large companies to thrive, keep electricity costs low for consumers, and ensure that AI infrastructure development benefits workers and their local communities.
FTC to Hold Hearing on Impersonation Rule Amendment
The Federal Trade Commission (FTC) will hold an informal hearing at 1:00pm EST on January 17, regarding the proposed amendment to its existing impersonation rule.
We first wrote about the proposed changes to the FTC rule in an article in February 2024. The current impersonation rule, which governs only government and business impersonation, first went into effect in April 2024 and is aimed at combatting impersonation fraud resulting in part from artificial intelligence (AI)-generated deepfakes. When announcing the rule, the FTC also stated that it was accepting public comments for a supplemental notice of proposed rulemaking aimed at prohibiting impersonation of individuals. In essence, the rule makes it an unfair or deceptive practice to impersonate a government entity, government official, or company.
The FTC announced the January hearing date in December 2024. The purpose of the hearing is to address amending the existing rule to include an individual impersonation ban and to allow interested parties an opportunity to provide oral statements. Nine parties are participating in the hearing: the Abundance Institute, Andreessen Horowitz, the Consumer Technology Association, the Software & Information Industry Association, TechFreedom, TechNet, the Electronic Privacy Information Center, the Internet & Television Association, and Truth in Advertising.
While the original announcement of the proposed amendment indicated that the FTC would accept public comments on the addition of both a prohibition on individual impersonation and a prohibition on providing scammers with the means and instrumentalities to execute these types of scams, the FTC has decided not to proceed with the proposed means-and-instrumentalities provision at this time. The sole purpose of the January 17 hearing is to "address issues relating to the proposed prohibition on impersonating individuals." The public is invited to join the hearing live via webcast.
AI Drug Development: FDA Releases Draft Guidance
On January 6, 2025, the U.S. Food and Drug Administration (FDA) released draft guidance titled Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products (“guidance”) explaining the types of information that the agency may seek during drug evaluation. In particular, the guidance outlines a risk framework based on a “context of use” of Artificial Intelligence (AI) technology and details the information that might be requested (or required) relating to AI technologies, the data used to train the technologies, and governance around the technologies, in order to approve their use. At a high level, the guidance underscores the FDA’s goals for establishing AI model credibility within the context of use.
This article provides an overview of the guidance, including example contexts of use and detailing the risk framework, while explaining how these relate to establishing AI model credibility through the suggested data and model-related disclosures. It further details legal strategy considerations, along with opportunities for innovation, that arise from the guidance. These considerations will be valuable to sponsors (i.e., of clinical investigations, such as Investigational New Drug Exemption applications), along with AI model developers and other firms in the drug development landscape.
Defining the Question of Interest
The first step in the guidance's framework is defining the "question of interest": the specific question, decision, or concern being addressed by the AI model. For example, questions of interest could involve the use of AI technology in human clinical trials, such as inclusion and exclusion criteria for the selection of participants, risk classification of participants, or determining procedures relating to clinical outcome measures of interest. Questions of interest could also relate to the use of AI technology in drug manufacturing processes, such as for quality control.
Contexts of Use
The guidance next establishes contexts of use – the specific scope and role of an AI model for addressing the question of interest – as a starting point for understanding any risks associated with the AI model, and in turn how credibility might be established.
The guidance emphasizes that it is limited to AI models (including those used in drug discovery) that impact patient safety, drug quality, or the reliability of results from nonclinical or clinical studies. As such, firms that use AI models for discovering drugs but rely on more traditional processes to address the factors the FDA considers when approving a drug, such as safety, quality, and stability, should be aware of the underlying principles of the guidance but might not need to modify their current AI governance. An important factor in defining the context of use is how much of a role the AI model plays relative to other automated or human-supervised processes; for example, processes in which a person is provided AI outputs for verification will differ from those designed to be fully automated.
Several types of contexts of use are introduced in the guidance, including:
Clinical trial design and management
Evaluating patients
Adjudicating endpoints
Analyzing clinical trial data
Digital health technologies for drug development
Pharmacovigilance
Pharmaceutical manufacturing
Generating real-world evidence (RWE)
Life cycle maintenance
Risk Framework for Determining Information Disclosure Degree
The guidance proposes that the risk level posed by the AI model dictates the extent and depth of information that must be disclosed about the AI model. The risk is determined based on two factors: 1) how much the AI model will influence decision-making (model influence risk), and 2) the consequences of the decision, such as patient safety risks (decision consequence risk).
For high-risk AI models—where outputs could impact patient safety or drug quality—comprehensive details regarding the AI model’s architecture, data sources, training methodologies, validation processes, and performance metrics may have to be submitted for FDA evaluation. Conversely, the required disclosure may be less detailed for AI models posing low risk. This tiered approach promotes credibility and avoids unnecessary disclosure burdens for lower-risk scenarios.
However, most AI models within the scope of this guidance will likely be considered high risk because they are being used for clinical trial management or drug manufacturing, so stakeholders should be prepared to disclose extensive information about an AI model used to support decision-making. Sponsors that use traditional (non-AI) methods to develop their drug products are required to submit complete nonclinical, clinical, and chemistry, manufacturing, and controls information to support FDA review and ultimate approval of a New Drug Application. Sponsors using AI models are required to submit the same information and, in addition, must provide information on the AI model as outlined below.
High-Level Overview of Guidelines for Compliance Depending on Context of Use
The guidance further provides a detailed outline of steps to pursue in order to establish credibility of an AI model, given its context of use. The steps include describing: (1) the model, (2) the data used to develop the model, (3) model training, and (4) model evaluation, including test data, performance metrics, and reliability concerns such as bias, quality assurance, and code error management. Sponsors may be expected to provide more detailed disclosures as the risks associated with these steps increase, particularly where the impact on trial participants and/or patients increases.
In addition, the FDA specifically emphasizes special consideration for life cycle maintenance of the credibility of AI model outputs. For example, as the inputs to or deployment of a given AI model changes, there may be a need to reevaluate the model’s performance (and thus provide corresponding disclosures to support continued credibility).
Intellectual Property Considerations
Patent vs. Trade Secret
Stakeholders should carefully consider patenting the innovations underlying AI models used for decision-making. The FDA’s extensive requirements for transparency and submitting information about AI model architectures, training data, evaluation processes, and life cycle maintenance plans would pose a significant challenge for maintaining these innovations as trade secrets.
That said, trade secret protection of at least some aspects of AI models is an option when the AI model does not have to be disclosed. If the AI model is used for drug discovery or operations that do not impact patient safety or drug quality, it may be possible to keep the AI model or its training data secret. However, AI models used for decision-making will be subject to the FDA’s need for transparency and information disclosure that will likely jeopardize trade secret protection. By securing patent protection on the AI models, stakeholders can safeguard their intellectual property while satisfying FDA’s transparency requirements.
Opportunities for Innovation
The guidance requires rigorous risk assessments, data fitness standards, and model validation processes, which will set the stage for the creation of tools and systems to meet these demands. As noted above, innovative approaches for managing and validating AI models used for decision-making are not good candidates for trade secret protection, and stakeholders should ensure early identification and patenting of these inventions.
We have identified specific opportunities for AI innovation that are likely to be driven by FDA demands reflected in the guidance:
Requirements for transparency
Designing AI models with explainable AI capabilities that demonstrate how decisions or predictions are made
Bias and fitness of data
Systems for detecting bias in training data
Systems for correcting bias in training data
Systems for monitoring life cycle maintenance
Systems to detect data drift or changes in the AI model during life cycle of the drug
Systems to retrain or revalidate the AI model as needed because of data drift
Automated systems for tracking model performance
Testing methods
Developing models that can be tested against independent data sets and conditions to demonstrate generalizability
Integration of AI models in a practical workflow
Good Manufacturing Practices
Clinical decision support systems
Documentation systems
Automatic systems to generate reports of model development, evaluation, updates, and credibility assessments that can be submitted to FDA to meet regulatory requirements
The guidance provides numerous opportunities for innovations to enhance AI credibility, transparency, and regulatory compliance across the drug product life cycle. As demonstrated above, the challenges that the FDA seeks to address in order to validate AI use in drug development clearly map to potential innovations. Such innovations are likely valuable since they are needed to comply with FDA guidelines and offer significant opportunities for developing a competitive patent portfolio.
Conclusion
With this guidance, the FDA has proposed guidelines for establishing credibility in AI models that have risks for and impacts on clinical trial participants and patients. This guidance, while in draft, non-binding form, follows a step-by-step framework from defining the question of interest and establishing the context of use of the AI model to evaluating risks and in turn establishing the scope of disclosure that may be relevant. The guidance sets out the FDA’s most current thinking about the use of AI in drug development. Given such a framework and the corresponding level of disclosure that can be expected, sponsors may consider a shift in strategy towards using more patent protection for their innovations. Similarly, there may be more opportunities for identifying and protecting innovations associated with building governance around these models.
In addition to using IP protection as a backstop to greater disclosure, firms can also consider introducing more operational controls to mitigate the risks associated with AI model use and thus reduce their disclosure burden. For example, firms may consider supporting AI model credibility with other evidence sources, as well as integrating greater human engagement and oversight into their processes.
In the meantime, sponsors that are uncertain about how their AI model usage might interact with future FDA requirements should consider the engagement options that the FDA has outlined for their specific context of use.
Comments on the draft guidance can be submitted online or mailed before April 7, 2025, and our team is available to assist interested stakeholders with drafting.
New Artificial Intelligence (AI) Regulations and Potential Fiduciary Implications
Fiduciaries should be aware of recent developments involving AI, including emerging and recent state law changes, increased state and federal government interest in regulating AI, and the role of AI in ERISA litigation. While much focus has been on AI’s impact on retirement plans, which we previously discussed here, plan fiduciaries of all types, including health and welfare benefit plans, must also stay informed about recent AI developments.
Recent State Law Changes
Numerous states recently codified new laws focusing on AI, some of which regulate employers’ human resource decision-making processes. Key examples include:
California – In 2024, California enacted over 10 AI-related laws, addressing topics such as:
The use of AI with datasets containing names, addresses, or biometric data;
How one communicates health care information to patients using AI; and
AI-driven decision-making in medical treatments and prior authorizations.
For additional information on California’s new AI laws, see Foley’s Client Alert, Decoding California’s Recent Flurry of AI Laws.
Illinois – Illinois passed legislation prohibiting employers from using AI in employment activities in ways that lead to discriminatory effects, regardless of intent. Under the law, employers are required to provide notice to employees and applicants if they are going to use AI for any workplace-related purpose.
For additional information on Illinois’ new AI law, see Foley’s Client Alert, Illinois Enacts Legislation to Protect against Discriminatory Implications of AI in Employment Activities.
Colorado – The Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, mandates “reasonable care” when employers use AI for certain applications.
For additional information on Colorado’s new AI law, see Foley’s Client Alert, Regulating Artificial Intelligence in Employment Decision-Making: What’s on the Horizon for 2025.
While these laws do not specifically target employee benefit plans, they reflect a broader trend toward state regulation of human resource decision-making processes and are part of an evolving regulatory environment. Hundreds of additional state bills were proposed in 2024, along with AI-related executive orders, signaling more forthcoming regulation in 2025. Questions remain about how these laws intersect with employee benefit plans and whether federal ERISA preemption could apply to state attempts at regulation.
Recent Federal Government Actions
The federal government recently issued guidance aimed at preventing discrimination in the delivery of certain healthcare services and completed a request for information (RFI) for potential AI regulations involving the financial services industry.
U.S. Department of Health and Human Services (HHS) Civil Rights AI Nondiscrimination Guidance – HHS, through its Office for Civil Rights (OCR), recently issued a “Dear Colleague” letter titled Ensuring Nondiscrimination Through the Use of Artificial Intelligence and Other Emerging Technologies. This guidance emphasizes the importance of ensuring that the use of AI and other decision-support tools in healthcare complies with federal nondiscrimination laws, particularly under Section 1557 of the Affordable Care Act (Section 1557).
Section 1557 prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in health programs and activities receiving federal financial assistance. OCR’s guidance underscores that healthcare providers, health plans, and other covered entities cannot use AI tools in a way that results in discriminatory impacts on patients. This includes decisions related to diagnosis, treatment, and resource allocation. Employers and plan sponsors should note that this guidance applies to a subset of health plans, including those that fall under Section 1557, but not to all employer-sponsored health plans.
Treasury Issues RFI for AI Regulation – In 2024, the U.S. Department of Treasury published an RFI on the Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector. The RFI included several key considerations, including addressing AI bias and discrimination, consumer protection and data privacy, and risks to third-party users of AI. While the RFI has not yet led to concrete regulations, it underscores federal attention to AI’s impact on financial and employee benefit services. The ERISA Industry Committee, a nonprofit association representing large U.S. employers in their capacity as employee benefit plan sponsors, commented that AI is already being used for retirement readiness applications, chatbots, portfolio management, trade executions, and wellness programs. Future regulations may target these and related areas.
AI-Powered ERISA Litigation
Potential ERISA claims against plan sponsors and fiduciaries are being identified using AI. In just one example, an AI platform, Darrow AI, claims to be:
“designed to simplify the analysis of large volumes of data from plan documents, regulatory filings, and court cases. Our technology pinpoints discrepancies, breaches of fiduciary duty, and other ERISA violations with accuracy. Utilizing our advanced analytics allows you to quickly identify potential claims, assess their financial impact, and build robust cases… you can effectively advocate for employees seeking justice regarding their retirement and health benefits.”
Further, this AI platform claims it can find violations affecting many types of employers, whether a small business or a large corporation, by analyzing diverse data sources, including news, SEC filings, social networks, academic papers, and other third-party sources.
Notably, health and welfare benefit plans are also emerging as areas of focus for AI-powered ERISA litigation. AI tools are used to analyze claims data, provider networks, and administrative decisions, potentially identifying discriminatory practices or inconsistencies in benefit determinations. For example, AI could highlight patterns of bias in prior authorizations or discrepancies in how mental health parity laws are applied.
The increasing sophistication of these tools raises the stakes for fiduciaries, as they must now consider the possibility that potential claimants will use AI to scrutinize their decisions and plan operations with unprecedented precision.
Next Steps for Fiduciaries
To navigate this evolving landscape, fiduciaries should take proactive steps to manage AI-related risks while leveraging the benefits of these technologies:
Evaluate AI Tools: Undertake a formal evaluation of artificial intelligence tools utilized for plan administration, participant engagement, and compliance. This evaluation should examine the algorithms, data sources, and decision-making processes involved and confirm that the tools have been assessed for compliance with nondiscrimination standards and do not inadvertently produce biased outcomes.
Audit Service Providers: Conduct comprehensive audits of plan service providers to evaluate their use of AI. Request detailed disclosures regarding the AI systems in operation, focusing on how they mitigate bias, ensure data security, and comply with applicable regulations.
Review and Update Policies: Formulate or revise internal policies and governance frameworks to monitor the utilization of AI in operational planning and compliance with nondiscrimination laws. These policies should outline guidelines pertaining to the adoption, monitoring, and compliance of AI technologies, thereby ensuring alignment with fiduciary responsibilities.
Enhance Risk Mitigation:
Fiduciary Liability Insurance: Consider obtaining or enhancing fiduciary liability insurance to address potential claims arising from the use of AI.
Data Privacy and Security: Enhance data privacy and security measures to safeguard sensitive participant information processed by AI tools.
Bias Mitigation: Establish procedures to regularly test and validate AI tools for bias, ensuring compliance with anti-discrimination laws (an illustrative bias check is sketched after this list).
Integrate AI Considerations into Requests for Proposals (RFPs): When selecting vendors, include specific AI-related criteria in RFPs. This may require vendors to demonstrate or certify compliance with state and federal regulations and adhere to industry best practices for AI usage.
Monitor Legal and Regulatory Developments: Stay informed about new state and federal AI regulations, along with the developing case law related to AI and ERISA litigation. Establish a process for routine legal reviews to assess how these developments impact plan operations.
Provide Training: Educate fiduciaries, administrators, and relevant staff on the potential risks and benefits of AI in plan administration, emerging technologies and the importance of compliance with applicable laws. The training should provide an overview of legal obligations, best practices for implementing AI, and strategies for mitigating risks.
Document Due Diligence: Maintain comprehensive documentation of all steps to assess and track AI tools. This includes records of audits, vendor communications, and updates to internal policies. Clear documentation can act as a crucial defense in the event of litigation.
Assess Applicability of Section 1557 to Your Plan: Health and welfare plan fiduciaries should determine whether the organization's health plan is subject to Section 1557 and whether OCR's guidance directly applies to its operations; if not, confirm and document why not.
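As a concrete illustration of the bias testing referenced above, the sketch below computes the adverse impact ratio, sometimes called the "four-fifths rule," which compares selection rates across demographic groups. The data, group labels, and 0.8 threshold are assumptions made for the example; this is not a legal standard and does not reflect any regulator's required methodology.

```python
# Minimal illustration of one common bias check: the adverse impact (four-fifths) ratio.
# Group labels, sample data, and the 0.8 threshold are assumptions for illustration.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, e.g., ("group_a", True)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact_report(records, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group and flag gaps."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {group: {"rate": rate, "ratio": rate / best, "flag": (rate / best) < threshold}
            for group, rate in rates.items()}

# Hypothetical data: (demographic group, whether the AI tool recommended selection).
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 25 + [("group_b", False)] * 75
print(adverse_impact_report(sample))
```

In practice, a fiduciary's team or vendor would run checks like this on actual plan or claims data, document the results as part of the due diligence file, and escalate any flagged disparities to counsel and the vendor for remediation.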
Fiduciaries must remain vigilant regarding AI's increasing role in employee benefit plans, particularly amid regulatory uncertainty. Taking proactive measures and adopting robust risk management strategies can help mitigate risks and ensure compliance with current and anticipated legal standards. By dedicating themselves to diligence and transparency, fiduciaries can leverage the benefits of AI while safeguarding the interests of plan participants. At Foley & Lardner LLP, we have experts in AI, retirement planning, cybersecurity, labor and employment, finance, fintech, regulatory matters, healthcare, and ERISA who regularly advise fiduciaries on potential risks and liabilities related to these and other AI-related issues.
Drilling Down into Venture Capital Financing in Artificial Intelligence
It should come as no surprise that venture capital (VC) investors are drilling down into startups building businesses with Artificial Intelligence (AI) at the core. New data from PitchBook shows that AI startups make up 22% of first-time VC financing. PitchBook notes that, according to its data through Q3 2024, $7 billion of the first-time funding raised by startups in 2024 went to AI & machine learning (ML) startups.
Crunchbase data also showed that in Q3 2024, AI-related startups raised $19 billion in funding, accounting for 28% of all venture dollars for that quarter. Importantly, that figure excludes the $6.6 billion round raised by OpenAI, which was announced after Q3 closed. With this unprecedented level of investment in the AI vertical, there is increasing concern that (i) some startups might be using AI as more of a buzzword to raise capital rather than truly focusing on this area, and/or (ii) there are bubbles in certain sub-verticals.
PitchBook analysts also note that with limited funding available for startups, integrating AI into their offerings is crucial for founders to secure investment. However, this also makes it harder to distinguish which startups are genuinely engaging in meaningful AI work. For investors, the challenge lies in sifting through the AI “noise” to identify those startups that are truly transformative and focusing on key areas within the sector, which will be vital as we move into 2025.
A recent article in Forbes examined the themes that early-stage investors were targeting for the new year. For AI startups, these included the use of AI to help pharmaceutical companies optimize clinical trials, AI in fintech and personal finance, AI applications in healthcare to improve the patient-caregiver experience, and AI-driven vertical software that will disrupt incumbents.
According to the Financial Times (FT), this boom in AI investment comes at a time when the industry still has an “immense overhang of investments from venture’s Zirp era” (Zirp referring to the zero interest rate policy environment that existed between 2009 and 2022). This has led to approximately $2.5 trillion trapped in private unicorns, and we have not really seen what exit events or IPOs will materialize and what exit valuations will return to investors. Will investors get their capital back and see the returns they hope for? Only time will tell, but investors do not seem ready to slow down their investment in AI startups any time soon. As the FT says, this could be a pivotal year for the fate of VC investment in AI. We will all be watching closely.
Bridging the Gap: How AI is Revolutionizing Canadian Legal Tech
While Canadian law firms have traditionally lagged behind their American counterparts in adopting legal tech, the AI explosion is closing the gap. This slower adoption rate isn’t due to a lack of innovation—Canada boasts a thriving legal tech sector. Instead, factors like a smaller legal market and stricter privacy regulations have historically hindered technology uptake. This often resulted in a noticeable delay between a product’s US launch and its availability in Canada.
Although direct comparisons are challenging due to the continuous evolution of legal tech, the recent announcements and release timelines for major AI-powered tools point to a notable shift in how the Canadian market is being prioritized. For instance, Westlaw Edge was announced in the US in July 2018, but the Canadian launch wasn't announced until September 2021—a gap of over three years. Similarly, Lexis+ was announced in the US in September 2020, with the Canadian announcement following in August 2022. However, the latest AI products show a different trend. Thomson Reuters' CoCounsel Core was announced in the US in November 2023, with the Canadian launch following shortly after in February 2024. The announcement for Lexis+ AI came in October 2023 in the US and July 2024 in Canada. This rapid succession of announcements suggests that the Canadian legal tech market is no longer an afterthought.
The Canadian federal government has demonstrated a strong commitment to fostering AI innovation. It has dedicated CAD$568 million to its national AI strategy, with the goals of fostering AI research and development, building a skilled workforce in the field, and creating robust industry standards for AI systems. This investment should help Canadian legal tech companies such as Clio, Kira Systems, Spellbook, and Blue J Legal, all of which are headquartered in Canada. With the government's focus on establishing Canada as a hub for AI and innovation, these companies stand to benefit significantly from increased funding and talent attraction.
While the Canadian government is actively investing in AI innovation, it’s also taking steps to ensure responsible development through proposed legislation, which could impact the availability of AI legal tech products in Canada. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), which aims to regulate high-impact AI systems. While AI tools used by law firms for tasks like legal research and document review likely fall outside this initial scope, AIDA’s evolving framework could still impact the sector. For example, the Act’s emphasis on mitigating bias and discrimination may lead to greater scrutiny of AI algorithms used in legal research, requiring developers to demonstrate fairness and transparency.
While AIDA may present hurdles for US companies entering the Canadian market with AI products, it could conversely provide a competitive advantage for Canadian companies seeking to expand into Europe. This is because AIDA, despite having some material differences, aligns more closely with the comprehensive approach in the European Union’s Artificial Intelligence Act (EU AI Act).
While US companies are working to comply with the EU AI Act, Canadian companies may have an advantage. Although AIDA isn’t yet in force and has some differences from the EU AI Act, it provides a comprehensive regulatory framework that Canadian legal tech leaders are already engaging with. This engagement with AIDA could prove invaluable to Canadian legal tech companies as AI regulation continues to evolve globally.
Canadian companies looking to leverage their experiences with AIDA for European expansion will nonetheless encounter some material differences. For instance, the EU AI Act casts a wider net, regulating a broader range of AI systems than AIDA. The EU AI Act’s multi-tiered risk-based system is designed to address a wider spectrum of concerns, capturing even “limited-risk” AI systems with specific transparency obligations. Furthermore, tools used for legal interpretation could be classified as “high-risk” systems under the EU AI Act, triggering more stringent requirements.
In conclusion, the rise of generative AI is not only revolutionizing Canadian legal tech and closing the gap with the US, but it could also be positioning Canada as a key player in the global legal tech market. While AIDA’s impact remains to be seen, its emphasis on responsible AI could shape the development and deployment of AI-powered legal tools in Canada.
Litigation Minute: A Look Back and Ahead
What You Need to Know in a Minute or Less
Throughout 2024, we published three series highlighting emerging and evolving trends in litigation. From generative AI to ESG litigation, our lawyers continue to provide concise, timely updates on the issues most critical to our clients and their businesses.
In a minute or less, find our Litigation Minute highlights from the past year—as well as a look ahead to 2025.
Beauty and Wellness
Our first series of the year covered trends in the beauty and wellness industry, beginning with products categorized as "beauty from within," including oral supplements focused on wellness. We outlined the risks of FDA enforcement and class action litigation arising from certain marketing claims associated with these products.
We next reviewed the use of “clean” and “natural” marketing terminology. We assessed these labeling claims across a range of potentially impacted products and brands, as well as regulatory and litigation risks associated with such claims.
Alongside these marketing-focused issues, companies also face increased regulatory scrutiny, including new extended producer responsibility laws and the FTC Green Guides. We concluded our series by assessing product packaging and end-of-life considerations for beauty and wellness brands.
Generative AI
One of the most-discussed developments of 2024, generative AI was the focus of our second series of the year, which examined key legal, regulatory, and operational considerations associated with generative AI. We outlined education, training, and risk management frameworks in light of litigation trends targeting these systems.
2024 also saw several new state statutes regulating generative AI. From mandatory disclosures in Utah to Tennessee’s ELVIS Act, we examined how new state approaches would remain at the forefront of attention for companies currently utilizing or considering generative AI.
With the need for compliance and training in mind, we next discussed the potential for generative AI in discovery. We provided an overview of how generative AI, with its ability to rapidly sort through data and produce timely outputs, has created valuable tools for lawyers as well as their clients.
ESG Litigation
2024 highlighted the impacts of extreme weather, as well as the importance of preparation for such natural disasters. With extreme weather events expected to increase in both frequency and intensity around the world, we provided insurance coverage considerations for policyholders seeking to restore business operations following these events and weather the consequential financial storms.
Further ESG headlines this year focused on the questions surrounding microplastics—including general definition, scientific risk factors, potential for litigation, and the hurdles complicating this litigation.
Greenwashing claims, on the other hand, have experienced fewer setbacks, with expanded litigation targeting manufacturers, distributors, and retailers of consumer products. These claims allege that companies falsely represent themselves or their products as "environmentally friendly"; we reviewed how the risk of such claims can be mitigated through proper substantiation and documentation of company claims and certifications.
The Texas Responsible AI Governance Act and Its Potential Impact on Employers
On 23 December 2024, Texas State Representative Giovanni Capriglione (R-Tarrant County) filed the Texas Responsible AI Governance Act (the Act),1 adding Texas to the list of states seeking to regulate artificial intelligence (AI) in the absence of federal law. The Act establishes obligations for developers, deployers, and distributors of certain AI systems in Texas. While the Act covers a variety of areas, this alert focuses on the Act’s potential impact on employers.2
The Act's Regulation of Employers as Deployers of High-Risk Artificial Intelligence Systems
The Act seeks to regulate employers' and other deployers' use of "high-risk artificial intelligence systems" in Texas. High-risk AI systems include AI tools that make, or are a contributing factor in, "consequential decisions."3 In the employment space, this could include hiring, performance, compensation, discipline, and termination decisions.4 The Act does not cover several common intelligence systems, such as technology intended to detect decision-making patterns, anti-malware and antivirus programs, and calculators.
Under the Act, covered employers would have a general duty to use reasonable care to prevent algorithmic discrimination—including a duty to withdraw, disable, or recall noncompliant high-risk AI systems. To satisfy this duty, the Act requires covered employers and other covered deployers to do the following:
Human Oversight
Ensure human oversight of high-risk AI systems by persons with adequate competence, training, authority, and organizational support to oversee consequential decisions made by the system.5
Prompt Reporting of Discrimination Risks
Report discrimination risks promptly by notifying the Artificial Intelligence Council (which would be established under the Act) no later than 10 days after the date the deployer learns of such issues.6
Regular AI Tool Assessments
Assess high-risk AI systems regularly, including conducting a review on an annual basis, to ensure that the system is not causing algorithmic discrimination.7
Prompt Suspension
If a deployer considers or has reason to consider that a system does not comply with the Act’s requirements, suspend use of the system and notify the system’s developer of such concerns.8
Frequent Impact Assessments
Complete an impact assessment on a semi-annual basis and within 90 days after any intentional or substantial modification of the system.9
Clear Disclosure of AI Use
Before or at the time of interaction, disclose to any Texas-based individual:
That they are interacting with an AI system.
The purpose of the system.
That the system may or will make a consequential decision affecting them.
The nature of any consequential decision in which the system is or may be a contributing factor.
The factors used in making any consequential decisions.
Contact information of the deployer.
A description of the system.10
Takeaways for Employers
The Act is likely to be a main topic of discussion in Texas’s upcoming legislative session, which is scheduled to begin on 14 January 2025. If enacted, the Act would establish a consumer protection-focused framework for AI regulation. Employers should track the Act’s progress and any amendments to the proposed bill while also taking steps to prepare for the Act’s passage. For example, employers using or seeking to use high-risk AI systems in Texas can:
Develop policies and procedures that govern the use of AI systems to make or impact employment decisions:
Include in these policies and procedures clear explanations of (i) the systems’ uses and purposes, (ii) the system’s decision-making processes, (iii) the permitted uses of such systems, (iv) the approved users of such systems, (v) training requirements for approved users, and (vi) the governing body overseeing the responsible use of such systems.
Develop and implement an AI governance and risk-management framework with internal policies, procedures, and systems for review, flagging risks, and reporting.
Ensure human oversight over AI systems.
Train users and those tasked with overseeing the AI systems.
Ensure there are sufficient resources committed to, and an adequate budget assigned to, overseeing and deploying AI systems and complying with the Act.
Conduct due diligence on any AI vendors and developers before engagement and on any AI systems before use, including relating to how AI vendors and developers and AI systems test for, avoid, and remedy algorithmic bias, and to ensure AI vendors and developers are compliant with the Act’s requirements relating to developers of high-risk AI systems.
Footnotes
1 A copy of HB 1709 is available at: https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB01709I.pdf (last accessed: 9 January 2025).
2 Section 551.001(8).
3 Section 551.001(13). The Act defines a “consequential decision” as “a decision that has a material, legal, or similarly significant, effect on a consumer’s access to, cost of, or terms of: (A) a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision; (B) education enrollment or an education opportunity; (C) employment or an employment opportunity; (D) a financial service; (E) an essential government service; (F) residential utility services; (G) a health-care service or treatment; (H) housing; (I) insurance; (J) a legal service; (K) a transportation service; (L) constitutionally protected services or products; or (M) elections or voting process.”
4 Id.
5 Section 551.005
6 Section 551.011
7 Section 551.006(d)
8 Section 551.005
9 Section 551.006(a)
10 Section 551.007(a)