In April 2025, the U.S. Food and Drug Administration (FDA) released a landmark roadmap titled “Roadmap to Reducing Animal Testing in Preclinical Safety Studies,” outlining its commitment to advancing New Approach Methodologies (NAMs) — including in silico models, organoids, and other non-animal alternatives. The roadmap encourages sponsors of Investigational New Drug (IND) applications to adopt scientifically credible alternatives to animal studies, marking a shift toward more human-relevant, Artificial Intelligence (AI)-integrated platforms for regulatory submissions.
Earlier, in January 2025, the FDA released a companion piece of draft guidance titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” We discussed this FDA guidance in detail here: AI Drug Development: FDA Releases Draft Guidance.
Briefly, this guidance focuses specifically on the use of AI across the drug development lifecycle. It introduces a risk-based framework based on the model’s context of use and outlines the information that must be disclosed about model architecture, data governance, life cycle maintenance, and potential clinical impact — particularly where the model influences patient safety or drug quality.
A growing number of real-world examples confirm that AI-powered NAMs are becoming a reality and demonstrate the successful integration of AI and life science. Notably, the UVA/Padova Type 1 Diabetes Simulator, a physiologically detailed in silico platform, has already been accepted by the FDA and used to support regulatory clearance of continuous glucose monitoring (CGM) devices. This is a tangible signal that virtual physiological systems can now form part of the evidentiary foundation for FDA approval.
FDA’s internal embrace of AI reinforces its regulatory messaging and signals long-term institutional commitment to digital tools. In particular, the FDA publicized its new uses of AI tools for internal operations, including a generative AI tool, “Elsa.” The FDA indicated that it would incorporate these tools widely throughout the agency by the end of June 2025.
Together, these FDA efforts mark a significant evolution: AI and other in silico platforms are now not only permissible but increasingly central to preclinical and clinical decision-making, as well as to regulatory review. With that recognition comes a dual imperative for sponsors and developers: demonstrate credibility to regulators, and safeguard innovation with a robust IP strategy. NAMs significantly increase firms’ risk exposure with respect to both privacy and confidentiality concerns.
The sections that follow explore how companies and their legal teams can respond strategically — by leveraging emerging FDA frameworks, protecting innovations in AI modeling, and ensuring data governance practices are aligned with both regulatory and IP objectives.
What Are NAMs?
NAMs refer to non-animal strategies for evaluating drug safety and efficacy. These include in silico models, microphysiological systems, organoids, computational toxicology platforms, and AI-driven clinical simulations. NAMs offer benefits in speed, cost, and ethical soundness, while aligning more closely with human biology.
Despite rapid progress, FDA’s formal acceptance of AI-based NAMs remains limited. The UVA/Padova Type 1 Diabetes Simulator remains the only widely cited example of an in silico model that has been used successfully in FDA regulatory submissions — specifically in evaluating CGM devices. Its acceptance underscores the possibility of regulatory pathways for complex physiological simulators.
However, the ecosystem of NAMs is rapidly expanding. While not all have yet achieved full FDA qualification, several promising AI- and data-driven platforms are under consideration:
- Organ-on-a-Chip and Organoid Models: Patient-derived organoids and organ-on-a-chip systems are being validated to simulate tissue-specific responses. For example, intestinal organoids have been shown to predict off-target toxicity for T-cell therapies.
- AI-Based Computational Toxicology: AI models trained on large-scale toxicology databases are being developed to predict adverse outcomes and are supported by FDA through collaborative validation projects.
- In Silico Clinical Trials (ISCTs): Computational patient models are being piloted to simulate clinical trial outcomes, particularly for device testing, with defined workflows for model validation and uncertainty analysis.
- Physiologically Based Pharmacokinetic (PBPK) Models: These simulate drug distribution and metabolism and are under consideration as partial replacements for animal testing in pharmacokinetic profiling.
- Synthetic Control Arms: AI-derived virtual patient cohorts are gaining traction as replacements for placebo groups in trials, helping reduce the number of real patient participants.
- Wearable-Integrated AI: AI models analyzing real-time data from digital health technologies (e.g., wearables) are being reviewed for roles in patient monitoring, endpoint adjudication, and trial management.
NAMs and the FDA Risk Framework
Under the January 2025 guidance, the FDA uses a two-dimensional risk framework:
1. Model Influence Risk: How much the AI model’s output influences decisions.
2. Decision Consequence Risk: The potential impact of those decisions on patient safety or data integrity.
This determines the extent of documentation and disclosure required. High-risk AI models must include detailed submissions on training data, model performance, and governance — creating tension with trade secret protection and increasing the strategic need for patent coverage. The FDA’s evolving stance on AI-based NAMs signals that AI-enabled platforms will soon be standard components of regulatory filings. As such, developers must plan early for the IP and data governance issues discussed in the sections below.
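To make the two-dimensional framework concrete, the sketch below maps the two risk dimensions to an illustrative documentation tier. The guidance describes these dimensions qualitatively, so the enum levels, numeric scoring, and tier labels here are our own invented approximations, not the FDA’s.

```python
from enum import IntEnum

# Hypothetical tiers; the FDA guidance describes both risk dimensions
# qualitatively rather than as a numeric scoring system.
class ModelInfluence(IntEnum):
    LOW = 1     # model output is one of several independent evidence sources
    MEDIUM = 2  # model output is weighed alongside limited other evidence
    HIGH = 3    # decisions rely primarily or solely on the model output

class DecisionConsequence(IntEnum):
    LOW = 1     # errors unlikely to affect patient safety or data integrity
    MEDIUM = 2  # errors degrade data quality but are detectable downstream
    HIGH = 3    # errors could directly impact patient safety or drug quality

def documentation_tier(influence: ModelInfluence,
                       consequence: DecisionConsequence) -> str:
    """Map the two risk dimensions to an illustrative documentation burden."""
    score = influence * consequence
    if score >= 6:
        return "high: full disclosure of training data, performance, governance"
    if score >= 3:
        return "medium: summary of data provenance and validation evidence"
    return "low: high-level model description and context of use"

if __name__ == "__main__":
    # A model that solely determines patient dosing would sit in the top tier.
    print(documentation_tier(ModelInfluence.HIGH, DecisionConsequence.HIGH))
```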
Implications for IP Strategy
The FDA’s guidance indicates that when NAMs and AI models are employed in clinical or manufacturing decision-making, stakeholders may be required to provide disclosures about data sources, model training procedures, evaluation metrics, and maintenance protocols. As these transparency requirements expand, relying solely on trade secrets becomes less practical, and patent protection or hybrid IP strategies become increasingly important. Mapping the specific innovations associated with NAMs and AI models allows stakeholders to clearly identify patentable inventions. The following framework offers a practical approach for mapping key technological advancements implied by or associated with NAMs and AI models, beyond the model itself.
Does the model generate or enable clinically actionable information in new ways?
Discovery of new clinical information can imply new method steps (e.g., specific administration routes), formulations, dosages, and treatments of different indications or patient populations. As discussed in our prior analysis of GLP-1 receptor agonists, while newly discovered mechanisms may not themselves be patentable, they often enable claims around new methods of treatment, dosage regimens, or formulations. See GLP-1 Receptor Agonists and Patent Strategy: Securing Patent Protection for New Use of Old Drugs. Accordingly, the use of a NAM could imply patentable claims directed to methods of treatment based on new patient populations, dosages, or formulations. Alternatively, the use of NAMs could imply patentable claims directed to workflows for predicting patient outcomes or monitoring treatment responses.
Does the model shift how treatment regimens or trial protocols are designed?
The AI model may, for example, change the controls and design needed for the clinical trial, such as inclusion and exclusion criteria, or introduce novel ways of stratifying patients by molecular subtype or by genomic or epigenomic signature.
In addition, dynamic AI systems could change the intervals or timing of administering therapeutic agents.
Does the model impose new requirements on data input or integration?
The AI model may require novel multimodal or longitudinal integration by, for example, combining imaging, omics, and wearable data, as sketched below. The AI model may also incorporate epidemiological data to identify patient clusters, or combine complex biomarker signatures drawn from large-scale molecular, genomic, epigenomic, or multi-omics data obtained from patients. The use of such AI models may therefore imply patentable workflows directed to information flows that allow prediction of patient outcomes, stratification of patient populations, or detection or monitoring of disease development.
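As a loose illustration of what multimodal integration can look like in code, the sketch below concatenates per-patient features from three modalities into a single model input. All function names, dimensions, and data are hypothetical placeholders, assuming NumPy as the only dependency.

```python
import numpy as np

def fuse_patient_features(imaging_embedding: np.ndarray,
                          omics_profile: np.ndarray,
                          wearable_series: np.ndarray) -> np.ndarray:
    """Concatenate features from three modalities into one model input.

    The wearable time series is reduced to simple longitudinal statistics
    so that the fused vector has a fixed length per patient.
    """
    wearable_summary = np.array([
        wearable_series.mean(),   # average signal level
        wearable_series.std(),    # variability over the wear period
        np.ptp(wearable_series),  # range (peak-to-peak)
    ])
    return np.concatenate([imaging_embedding, omics_profile, wearable_summary])

# Example with invented dimensions: a 128-d imaging embedding, a 200-gene
# expression profile, and one week of hourly heart-rate readings.
rng = np.random.default_rng(0)
fused = fuse_patient_features(rng.normal(size=128),
                              rng.normal(size=200),
                              rng.normal(loc=70, scale=8, size=24 * 7))
print(fused.shape)  # (331,)
```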
Does the AI model or its data structure change how upstream samples are collected or processed?
The AI model may alter patient sample workflows, potentially leading to patentable methods. Changes might include new biospecimen preservation protocols, modified sample quantity or type requirements, or added pre-processing steps. These workflow adjustments can support claims for sample preparation methods, automated systems integrated with AI, or compositions involving specially processed samples.
In sum, as AI models take on roles once reserved for clinical trials or animal studies, their influence on medical decision-making, data collection, and regulatory outcomes demands a more nuanced, forward-thinking IP strategy. The framework above offers a structured way to identify innovations tied not only to the model itself but also to its impact on upstream workflows, treatment paradigms, and lifecycle management, giving innovation leaders and IP teams a systematic way to future-proof their AI models and secure strategic protection.
This strategic lens also highlights the need for equally rigorous data governance approaches, which we explore next.
Data Governance and Compartmentalization in the NAM Era
As in silico NAMs and AI-enabled platforms become central to FDA submissions, data governance emerges as a critical strategic pillar — both for regulatory compliance and IP protection. Sponsors must now plan not only for data quality and model performance, but also for how data disclosures intersect with competitive advantage.
FDA guidance emphasizes lifecycle transparency: data inputs, training datasets, test cohorts, model validation strategies, and even future updates may need to be disclosed and monitored. While these disclosures promote regulatory trust, they also pose risks to trade secrets and competitive differentiation — particularly where model performance depends heavily on proprietary datasets or data preparation pipelines. Many life science firms are familiar with robust data governance in the privacy context, but privacy considerations are distinct from managing confidentiality and disclosure when balancing regulatory compliance with IP protection. To address these tensions, developers should consider a tiered data governance strategy, emphasizing:
1. Modularization and Compartmentalization of Model Components
- By isolating elements of model design (e.g., preprocessing pipeline, model architecture, deployment environment), firms can disclose only the portions relevant to a given regulatory context (see the first sketch after this list). For example, designing “Virtual Labs” of AI models working together could help modularize different functions and data sets, facilitating a tiered data governance system and limiting the necessary data disclosure. See, for example, our previous discussion of the emerging use of “Virtual Labs” formed by a group of AI models with distinct functions: The Virtual Lab of AI Agents: A New Frontier in Innovation.
2. Decoupling Proprietary Data from Submission Sets
- Rationale: Training on large internal datasets while validating on publicly shared or FDA-approved sets may allow compliance without disclosing sensitive raw data.
- Example: A model is trained on proprietary multi-omics data but validated against FDA-endorsed challenge datasets for regulatory review (see the second sketch after this list).
3. Governance-by-Design for Versioning and Traceability
- Rationale: Lifecycle maintenance of AI models — including data drift, re-training, and re-deployment — must be documented and justified to FDA. A governance architecture that logs changes, justifies updates, and auto-generates audit trails is increasingly indispensable.
- Example: Automated logs showing when model weights were updated due to new population-level data trends, with reproducibility preserved (see the third sketch after this list).
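The first sketch below shows one way strategy 1 might be operationalized: a registry that tags each model component with the regulatory contexts in which it may be disclosed, so a submission package can be assembled per context. The component names, context labels, and dataclass design are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ModelComponent:
    name: str
    description: str
    disclosable_contexts: set = field(default_factory=set)

# Hypothetical decomposition of a NAM platform into separable modules,
# each tagged with the regulatory contexts in which it may be disclosed.
COMPONENTS = [
    ModelComponent("preprocessing", "data cleaning and normalization pipeline",
                   {"IND", "device_510k"}),
    ModelComponent("architecture", "core predictive model design",
                   {"IND"}),
    ModelComponent("deployment", "serving environment and monitoring hooks",
                   set()),  # kept internal; not tied to any submission
]

def submission_package(context: str) -> list[str]:
    """Return only the modules relevant to a given regulatory context."""
    return [c.name for c in COMPONENTS if context in c.disclosable_contexts]

print(submission_package("IND"))          # ['preprocessing', 'architecture']
print(submission_package("device_510k"))  # ['preprocessing']
```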
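The second sketch illustrates strategy 2 with invented data: a model is fit on stand-in “proprietary” features, and only its performance on a stand-in public benchmark is surfaced for reporting. We use scikit-learn purely for brevity; the dataset names, and the premise that a single benchmark metric would suffice for review, are illustrative assumptions rather than regulatory advice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical stand-ins: proprietary multi-omics training data stays
# internal, while validation uses a public challenge dataset whose
# results can be shared with regulators without exposing raw inputs.
rng = np.random.default_rng(42)
X_proprietary = rng.normal(size=(500, 20))
y_proprietary = (X_proprietary[:, 0] + rng.normal(size=500) > 0).astype(int)

X_public = rng.normal(size=(200, 20))  # e.g., an FDA-endorsed benchmark
y_public = (X_public[:, 0] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X_proprietary, y_proprietary)

# Only the benchmark performance figure enters the submission set.
auc = roc_auc_score(y_public, model.predict_proba(X_public)[:, 1])
print(f"benchmark AUC for regulatory reporting: {auc:.2f}")
```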
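The third sketch is a minimal take on governance-by-design for strategy 3: each model update appends a hash-chained, timestamped record, making the log tamper-evident and auditable after the fact. The field names and chaining scheme are our own illustrative choices.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def log_model_update(weights: bytes, reason: str, dataset_version: str) -> dict:
    """Append a tamper-evident record of a model update.

    Hashing the weights and chaining each entry to the previous one lets
    retraining events be reconstructed and audited after the fact.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "weights_sha256": hashlib.sha256(weights).hexdigest(),
        "reason": reason,
        "dataset_version": dataset_version,
        "prev_entry_hash": hashlib.sha256(
            json.dumps(AUDIT_LOG[-1], sort_keys=True).encode()
        ).hexdigest() if AUDIT_LOG else None,
    }
    AUDIT_LOG.append(entry)
    return entry

# Example: a retraining event triggered by observed data drift.
log_model_update(b"...serialized weights...",
                 reason="data drift in population-level glucose trends",
                 dataset_version="cohort-2025-06")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```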
These strategies not only align with the FDA’s evolving expectations, but also support future IP assertions — e.g., filing patents around data partitioning, model maintenance tools, and regulatory integration pipelines. Similarly, as the FDA’s own use of AI tools increases, firms should maintain an open dialogue on how these tools are being used and on what data, so that they can refine their data governance accordingly.
Conclusion: Aligning Innovation, Transparency, and Strategy
AI-based drug development and non-animal methods have been brought to the regulatory forefront by the FDA’s January 2025 AI guidance, its April 2025 NAM roadmap, and its milestones in internal AI use. In silico models have already been accepted in FDA regulatory submissions, as exemplified by the UVA/Padova simulator. With these advances come both opportunities and obligations. As outlined above, developers should carefully review the FDA’s guidance to determine strategies for building scalable, protectable, and clinically impactful AI platforms. By thinking early and systematically about IP and data governance, stakeholders can position themselves at the frontline of the AI-powered transformation in drug development.