
Regulations For Artificial Intelligence in Drug Development and Clinical Trials in US and EU

Chemists College of Pharmaceutical Sciences and Research (Affiliated with Kerala University of Health Sciences, Thrissur), Varikoli P. O., Puthencruz, Ernakulam 682308, Kerala, India.

Abstract

Objective: To review and compare the current regulatory frameworks governing artificial intelligence (AI) applications in drug development and clinical trials in the United States and Europe, highlighting core principles, challenges, and future directions.

Methods: Comprehensive review of official regulatory documents, draft guidances1, legislative acts, and guidance papers published between 2020 and 2025 from key authorities, including the US Food and Drug Administration (FDA), European Medicines Agency (EMA), and European Union's Artificial Intelligence Act.2 Comparative analysis focused on risk assessment frameworks, lifecycle management, ethical considerations, and stakeholder engagement.

Results: The FDA's 2025 draft guidance1 introduces a seven-step, risk-based credibility assessment framework emphasizing AI model validation, lifecycle oversight, transparency, and defined contexts of use (COU). The EU's AI Act2 categorizes AI applications by risk levels and mandates rigorous requirements for "high-risk" AI in drug development. EMA's 2023 Reflection Paper3 complements these with ethical and human oversight emphases. Both regions prioritize data integrity, bias mitigation, and proactive regulatory engagement. However, differences remain in procedural detail and scope, resulting in regulatory heterogeneity but ongoing convergence efforts.

Conclusions: US and European regulatory frameworks provide robust foundations to ensure safe and effective AI integration in pharma, balancing innovation with patient safety. Continued harmonization, refinement, and collaborative dialogue among stakeholders will be essential to address evolving AI technologies and promote global adoption.

Keywords

Artificial intelligence, drug development, clinical trials, regulatory framework, FDA, European Medicines Agency, AI Act, risk assessment, lifecycle management, ethics

Introduction

Artificial intelligence (AI) has emerged as a transformative technology across numerous industries, and its impact on pharmaceutical drug development is profound and expanding.4 Traditionally, drug development is a time-consuming and costly process that involves multiple complex phases—from discovery and preclinical research to clinical trials and post-market surveillance.5 The integration of AI technologies such as machine learning, deep learning, and natural language processing offers the pharmaceutical industry new tools to improve efficiency, reduce costs, and enhance the precision of drug discovery, clinical trial design, and patient outcome prediction.6 Despite these benefits, the adoption of AI in drug development has historically been cautious due to regulatory complexity, data privacy concerns, and the perceived opacity of complex AI models—often referred to as “black boxes.”7 However, recent advances in AI-driven methodologies, including predictive modeling and digital twin simulations, have demonstrated substantial potential to accelerate drug development timelines and optimize clinical trials.8 For instance, AI-based digital twins create personalized patient models to simulate disease progression and treatment response, thereby enabling more efficient and smaller clinical trials without compromising data integrity.9 As AI applications grow in sophistication and scope, regulatory frameworks have become critical to ensure the safety, efficacy, and quality of AI-enabled medical products. 
Regulatory agencies such as the US Food and Drug Administration (FDA) and European Medicines Agency (EMA) are actively developing guidelines and legislative instruments to address unique challenges posed by AI, including model validation, transparency, bias mitigation, and continuous lifecycle management.1,10 The FDA's 2025 draft guidance introduces a risk-based credibility framework focusing on AI model validation within clearly defined contexts of use, emphasizing transparency and stakeholder collaboration.1 Meanwhile, the European Union's AI Act, which entered into force in 2024 and takes full effect for high-risk systems in 2026, establishes a risk classification system imposing rigorous requirements on high-risk AI applications, including those in drug development.2,11 Although these regulatory advances establish a foundation for the responsible integration of AI, challenges remain. These include harmonizing diverse regional approaches, ensuring data quality and representativeness, addressing ethical considerations, and adapting regulatory structures to keep pace with rapidly evolving AI technologies such as generative AI and reinforcement learning. Continued collaboration among industry, regulators, academics, and ethical bodies will be essential to develop pragmatic, effective, and harmonized regulatory frameworks.11

This article reviews current regulations governing AI in drug development and clinical trials in the US and Europe, comparing their key principles, scopes, and challenges. It aims to provide a comprehensive understanding of the evolving regulatory landscape that supports AI-driven pharmaceutical innovation while safeguarding public health.

METHODOLOGY

Research Design

This study adopts a qualitative, comparative research design aimed at systematically analyzing and contrasting the regulatory frameworks governing artificial intelligence (AI) applications in drug development and clinical trials in the United States and Europe.12 The focus is on capturing nuanced regulatory principles, risk management strategies, ethical considerations, and the evolving legal landscape from authoritative sources and expert commentaries published between 2020 and 2025.13

Data Sources

A comprehensive collection of primary and secondary data was conducted, including:

  • Official regulatory guidance documents and draft policies from the U.S. Food and Drug Administration (FDA) and the European Union’s legislative bodies including the European Medicines Agency (EMA),14
  • The European Union’s Artificial Intelligence Act (AI Act) and associated regulatory materials,10
  • Relevant international harmonization guidelines developed by the International Council for Harmonisation (ICH),
  • Peer-reviewed scientific literature, industry white papers, and policy analyses focused on AI regulation in pharmaceutical development.12

Data Collection Process

Documents and publications were systematically retrieved using a combination of:

  • Access to regulatory agencies’ official websites and databases,
  • Searching academic databases such as PubMed, ScienceDirect, and regulatory intelligence platforms,
  • Identification through citation tracing and expert recommendations within the pharmaceutical regulatory field.

Inclusion criteria prioritized materials published or updated between 2020 and 2025, directly addressing AI regulatory considerations in drug development, clinical trials, or associated ethical and legal challenges. Exclusion criteria omitted broader AI applications unrelated to pharmaceutical regulatory contexts.14,15

Data Analysis

Content analysis techniques were employed to extract thematic elements relating to:

  • Risk-based assessment frameworks, particularly credibility and validation criteria for AI models in regulatory submissions,
  • Lifecycle management expectations, encompassing continuous monitoring and updates of AI tools,
  • Ethical and transparency requirements including data governance, interpretability, and bias mitigation,
  • Contextual definitions focused on varying AI application scopes within drug development pipelines.1,14,15

Comparative synthesis focused on identifying convergences and divergences between the US FDA and EU regulatory frameworks. The analysis compared regulatory scope, risk classifications, procedural clarity, stakeholder engagement mechanisms, and implementation challenges.1,10

Limitations

The methodology is limited by the availability of publicly accessible information; confidential regulatory deliberations and nuanced country-specific legislation within EU member states may therefore not be fully represented. Rapid advances in AI technology may outpace the regulatory documentation available within the data collection timeline. The study also focuses predominantly on the US and European contexts; it acknowledges the global landscape but does not attempt broader jurisdictional analysis.

RESULTS AND DISCUSSION

Regulatory Frameworks in the United States

In January 2025, the United States Food and Drug Administration (FDA) published its first comprehensive draft guidance titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” This guidance marks a significant step in the agency’s commitment to integrating advanced AI technologies into the regulatory evaluation of drugs and biologics while maintaining rigorous standards for safety, efficacy, and quality.1 Central to the guidance is the introduction of a risk-based credibility assessment framework, designed specifically for AI and machine learning (ML) models used in regulatory contexts. The framework acknowledges that AI models vary widely in complexity and application, and thus, regulatory scrutiny should be proportionate to the risk associated with the AI model's intended regulatory use, termed the Context of Use (COU). The COU precisely defines the question the model aims to address, its scope, and the decision impacted by its outputs.1

The FDA delineates a structured seven-step approach for establishing and evaluating model credibility:

  1. Define the Question of Interest: Specify the regulatory question the AI model seeks to inform.
  2. Define the Context of Use (COU): Clarify the exact role, scope, and limitations of the AI model, anchoring credibility assessments to COU.
  3. Conduct a Risk Assessment: Evaluate potential risks of inaccurate or misleading AI outputs and their impact on patient safety or product quality.
  4. Develop a Credibility Plan: Outline methodology for rigorous validation activities tailored to the AI model and COU.
  5. Execute the Credibility Plan: Implement planned validation, testing, and verification protocols.
  6. Document Findings: Thoroughly record all validation outcomes, including any deviations or limitations encountered.
  7. Determine Adequacy: Assess whether the collected evidence sufficiently supports the trustworthiness of the AI model for regulatory decision-making.1
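
The seven steps above can be represented as an ordered checklist in which each step may only be completed once all earlier steps are done. The sketch below is purely illustrative: the step names follow the draft guidance, but the data model and all identifiers are hypothetical, not an FDA artifact.

```python
from dataclasses import dataclass, field

# Step names follow the FDA draft guidance; the checklist model itself
# is an illustrative sketch, not a regulatory requirement.
STEPS = [
    "Define the question of interest",
    "Define the context of use (COU)",
    "Conduct a risk assessment",
    "Develop a credibility plan",
    "Execute the credibility plan",
    "Document findings",
    "Determine adequacy",
]

@dataclass
class CredibilityAssessment:
    model_name: str
    completed: set = field(default_factory=set)  # indices of finished steps

    def complete_step(self, index: int) -> None:
        # The steps are sequential: a step may only be completed after
        # every earlier step has been completed and documented.
        if any(i not in self.completed for i in range(index)):
            raise ValueError(f"Earlier steps pending before: {STEPS[index]}")
        self.completed.add(index)

    @property
    def adequate(self) -> bool:
        # Adequacy (step 7) is only reached once all steps are complete.
        return self.completed == set(range(len(STEPS)))

assessment = CredibilityAssessment("trial-enrichment-model")
for i in range(len(STEPS)):
    assessment.complete_step(i)
print(assessment.adequate)  # True once all seven steps are documented
```

The sequencing constraint mirrors the framework's logic: risk assessment is anchored to the COU, and the credibility plan cannot be executed before it is developed.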

The draft guidance broadly applies to AI applications influencing regulatory decisions across the entire product lifecycle, including clinical trial design innovation, pharmacovigilance activities, manufacturing quality control, real-world data analysis for real-world evidence generation, and model-informed drug development. However, it specifically excludes AI used solely for drug discovery purposes and non-regulatory operational efficiencies such as drafting submissions or internal decision support that do not impact product safety or quality.1

To promote transparency and continuous reliability, the FDA underscores the importance of lifecycle management of AI models. This involves continuous monitoring for model drift, updating models in response to new data or evolving knowledge, and documenting changes that might affect regulatory performance.1

Stakeholder engagement is a core focus of the guidance. The FDA encourages early and frequent interaction with sponsors, developers, and other interested parties to discuss AI model development, validation strategies, and regulatory expectations. Such engagements aim to clarify requirements, identify potential challenges early, and foster alignment, thereby facilitating efficient regulatory submissions and approvals.1

Complementary to this guidance, the FDA has established the Good Machine Learning Practice (GMLP) initiative, which aims to develop cross-sector standards for AI and ML model development, emphasizing data quality, methodology transparency, and risk management.1

In summary, the FDA's 2025 draft guidance reflects a pioneering and detailed regulatory approach that balances innovation incentives with rigorous safeguards, positioning the agency to effectively oversee the increasing integration of AI in drug development and clinical evaluation.1
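
The monitoring for model drift described above can be made concrete with a simple distribution-shift check. A minimal sketch using the Population Stability Index (PSI) follows; the 0.2 alert threshold is a common industry rule of thumb, not an FDA requirement, and all data here are synthetic.

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference (e.g. training-era)
    sample and a current sample of the same model input or score.
    Observed values outside the reference range are ignored in this
    simplified sketch."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(xs: list[float], a: float, b: float, last: bool) -> float:
        n = sum(1 for x in xs if a <= x < b or (last and x == b))
        return max(n / len(xs), 1e-6)  # floor avoids log(0)

    total = 0.0
    for i in range(bins):
        e = frac(expected, edges[i], edges[i + 1], i == bins - 1)
        o = frac(observed, edges[i], edges[i + 1], i == bins - 1)
        total += (o - e) * math.log(o / e)
    return total

ref = [i / 100 for i in range(100)]        # reference distribution
cur = [0.5 + i / 200 for i in range(100)]  # shifted current distribution
print(psi(ref, ref) == 0.0)  # identical samples: no drift detected
print(psi(ref, cur) > 0.2)   # shifted sample trips the alert threshold
```

In a lifecycle-management plan, a check like this would run on each model input and output score at a fixed cadence, with threshold breaches triggering the documented update-and-revalidation process.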

Regulatory Frameworks in Europe

The European Union’s Artificial Intelligence Act (AI Act), formally adopted in 2024, represents the world’s first comprehensive legal framework regulating AI applications across all sectors, including healthcare and drug development.16 The Act entered into force on August 1, 2024, with a phased enforcement schedule extending through August 2027.17 Full enforcement, particularly for “high-risk” AI systems, is scheduled to begin on August 2, 2026.17 The AI Act employs a risk-based classification system, categorizing AI systems into four tiers: unacceptable risk (prohibited), high risk, limited risk, and minimal or no risk.18 Most AI systems used within drug development and clinical trials are categorized as “high-risk,” which subjects them to rigorous requirements including:10

  • Transparency and Explainability: Providers must ensure AI outputs are interpretable and traceable, allowing end users and regulators to understand decision-making processes.10
  • Human Oversight: Effective human supervision must be integrated to mitigate risks from AI errors or unintended consequences.10
  • Data Governance: Strict protocols to guarantee the quality, integrity, and representativeness of input data used in AI model development and deployment.10
  • Ongoing Monitoring and Risk Management: Continuous performance evaluation and updates to address model drift, emerging biases, or new evidence.10
  • Compliance Documentation: Mandatory conformity assessments, technical documentation, and post-market surveillance reporting, often linked to existing medical device regulations such as the EU MDR (2017/745) and IVDR.10

The AI Act expands its jurisdiction to all providers, deployers, importers, and distributors of AI systems operating within the EU, creating accountability across the AI supply chain. It also applies to general-purpose AI models and mandates retroactive compliance for existing models, with deadlines for documentation and risk mitigation extending to 2027.14

Complementing the AI Act, the European Medicines Agency (EMA) published a Reflection Paper in 2023 advocating ethical AI deployment in drug development and clinical research. EMA's paper emphasizes principles including data traceability, preservation of human oversight, and strategies for preparing regulatory infrastructures to accommodate AI-driven innovations.3

Further global harmonization is driven by the International Council for Harmonisation's (ICH) M15 guideline, which integrates AI and machine-learning principles into the existing model-informed drug development (MIDD) framework, promoting consistent regulatory expectations worldwide.19

The EU also fosters innovation through regulatory tools such as "regulatory sandboxes," which allow controlled environments for testing novel AI applications under active regulatory supervision. This approach facilitates iterative learning and collaboration between innovators and regulators, balancing innovation acceleration with patient safety.14

Challenges remain in finalizing the harmonized technical standards essential for AI Act compliance, and delays have been reported in enforcing certain AI literacy and transparency provisions. Nevertheless, the EU's rigorous and multi-layered regulatory architecture underscores its ambition to lead globally in safe and ethical AI integration into healthcare and pharmaceuticals.14
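
The four risk tiers and the high-risk obligations listed above can be summarized as a simple lookup. The sketch below is an illustrative simplification of the Act's structure, not legal guidance; the obligation strings paraphrase the requirements discussed in the text.

```python
from enum import Enum

# Tier names follow the AI Act's four-level classification; the
# obligation lists are a simplified, illustrative paraphrase.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["placing on the EU market is prohibited"],
    RiskTier.HIGH: [
        "transparency and explainability",
        "human oversight",
        "data governance",
        "ongoing monitoring and risk management",
        "conformity assessment and technical documentation",
    ],
    RiskTier.LIMITED: ["transparency notices to users"],
    RiskTier.MINIMAL: ["no additional obligations"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    return OBLIGATIONS[tier]

# Most AI used in drug development and clinical trials falls in the
# high-risk tier, carrying the full obligation set:
print(len(obligations_for(RiskTier.HIGH)))  # 5
```

The point of the tiered design is exactly this proportionality: the obligation set grows with the classified risk, rather than applying uniformly to every AI system.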

Comparative Overview

| Aspect | United States (FDA) | Europe (EU & EMA) |
| --- | --- | --- |
| Regulation Instrument | FDA Draft Guidance (2025) | AI Act (Enforcement 2026), EMA Reflection Paper (2023) |
| Risk-based Framework | Yes, 7-step credibility assessment | Yes, risk classification (minimal to unacceptable) |
| Application Scope | Broad AI use in drug lifecycle except discovery | Broad, with "high-risk" AI under strict controls |
| Emphasis on Ethics & Transparency | Strong emphasis via GMLP and transparency goals | High emphasis on human oversight and ethical AI |
| Stakeholder Engagement | Encouraged through early dialogue and workshops | Ongoing consultations and regulatory cooperation |
| Harmonization Efforts | Internal FDA harmonization; international engagement | Global via ICH guidelines; cross-sector AI regulation |
| Coverage of AI Use Cases | Clinical trials, manufacturing, pharmacovigilance, RWE | Clinical trials, drug evaluation, medical device AI |

Challenges and Future Directions

Despite notable advances in regulatory frameworks for artificial intelligence (AI) in drug development and clinical trials, significant challenges remain that must be addressed to fully realize AI’s transformative potential.

Data Quality and Availability

One of the foremost challenges is ensuring the quality, representativeness, and accessibility of data used to train and validate AI models. High-quality data are fundamental to developing reliable AI algorithms; however, pharmaceutical data can be fragmented, heterogeneous, or incomplete due to variations in clinical trial designs, patient populations, or measurement standards. Regulatory frameworks increasingly emphasize adherence to FAIR data principles (Findable, Accessible, Interoperable, and Reusable) and ALCOA standards (Attributable, Legible, Contemporaneous, Original, and Accurate) to enhance data integrity and reproducibility. However, balancing data transparency with patient privacy protections and proprietary concerns remains complex, requiring ongoing refinement of legal and ethical safeguards.24
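
The ALCOA attributes described above lend themselves to automated record-level checks. The sketch below is hypothetical: the field names, delay window, and rules are illustrative choices for demonstration, not a regulatory specification.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative check of ALCOA attributes (Attributable, Legible,
# Contemporaneous, Original, Accurate) on a clinical data record.
# All field names and thresholds here are hypothetical.
@dataclass
class Record:
    value: str
    recorded_by: str       # attributable: a named, identifiable author
    recorded_at: datetime  # contemporaneous: captured near the event time
    event_at: datetime
    source: str            # original: reference to the first capture
    verified: bool         # accurate: checked against the source

def alcoa_issues(r: Record, max_delay_hours: float = 24) -> list[str]:
    issues = []
    if not r.recorded_by:
        issues.append("not attributable: missing author")
    if not r.value.strip() or not r.value.isprintable():
        issues.append("not legible: empty or unprintable value")
    delay = (r.recorded_at - r.event_at).total_seconds() / 3600
    if delay < 0 or delay > max_delay_hours:
        issues.append("not contemporaneous: recorded outside the window")
    if not r.source:
        issues.append("not original: no link to source capture")
    if not r.verified:
        issues.append("not accurate: unverified against source")
    return issues

ok = Record(
    value="BP 120/80",
    recorded_by="investigator-07",
    recorded_at=datetime(2025, 1, 2, 10, tzinfo=timezone.utc),
    event_at=datetime(2025, 1, 2, 9, tzinfo=timezone.utc),
    source="eCRF entry #1234",
    verified=True,
)
print(alcoa_issues(ok))  # []
```

Checks like these address only the mechanics of data integrity; the representativeness and privacy questions raised above still require human and legal judgment.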

Model Bias and Interpretability

Another critical concern is algorithmic bias and the “black box” nature of AI models. Many advanced AI techniques, especially deep learning and reinforcement learning, operate with limited interpretability, making it difficult to fully understand or explain their decision-making processes. This opacity challenges regulatory acceptance due to concerns over unintended biases that may adversely affect patient subgroups and clinical outcomes. Addressing these issues demands the integration of explainable AI (XAI) approaches and robust model validation strategies to guarantee fairness, transparency, and accountability within regulatory submissions.20,21,22
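
One of the simpler model-agnostic XAI techniques alluded to above is permutation importance: shuffle one feature at a time and measure how much a performance metric degrades. The sketch below uses a toy model and synthetic data, both hypothetical; in practice the technique would be applied to the actual submitted model.

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Shuffle one feature column at a time and measure how much the
    metric degrades; larger drops mean the feature mattered more."""
    rng = random.Random(seed)
    base = metric(y, [predict(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
            drops.append(base - metric(y, [predict(row) for row in Xp]))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy data: feature 0 carries the label, feature 1 is pure noise.
data_rng = random.Random(1)
y = [i % 2 for i in range(40)]
X = [[float(label), data_rng.random()] for label in y]

def model(row):
    return 1 if row[0] > 0.5 else 0

imps = permutation_importance(model, X, y, accuracy)
print(imps[0] > imps[1])  # True: the informative feature dominates
```

Importance scores like these do not open the black box, but they give regulators a reproducible, model-agnostic signal of which inputs drive a model's outputs, which is often the first step in a bias investigation.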

Regulatory Harmonization and Adaptability

The diversity in regional regulatory requirements presents an ongoing barrier to seamless global adoption of AI in drug development. While frameworks like the FDA's risk-based credibility assessment and Europe's AI Act provide thorough approaches, variations in procedural details, terminology, and enforcement timelines create regulatory fragmentation. Effective international harmonization, via mechanisms such as the International Council for Harmonisation's (ICH) draft M15 guideline, is essential to provide consistent standards and facilitate multinational pharmaceutical innovation.1,19,23 Additionally, the rapid evolution of AI technologies—including generative AI systems, federated learning models, and adaptive algorithms—necessitates flexible and adaptive regulatory frameworks capable of accommodating emerging AI modalities without frequent, disruptive revisions. Regulators face the challenge of balancing the need for certainty and structure with the agility to integrate novel technologies and evidence rapidly.1,19,23

Ethical, Legal, and Social Considerations

AI deployment raises complex ethical and societal questions, including fairness, equity, and patient autonomy. Regulators and developers must ensure AI models do not perpetuate existing healthcare disparities or introduce new biases that compromise care quality for vulnerable populations. Clear guidelines on human oversight, informed consent in AI-driven clinical trials, and mechanisms for addressing accountability in AI-augmented decision-making remain focal ethical concerns.25

Future Directions and Collaborative Approaches

To overcome these challenges, multidisciplinary collaboration between regulatory agencies, pharmaceutical industry stakeholders, academic researchers, technology developers, and ethicists is paramount. Proactive stakeholder engagement ensures that regulatory guidance evolves informed by practical implementation experiences and scientific advances.26 Continued development and refinement of standardization efforts, such as Good Machine Learning Practice (GMLP), validation protocols, and transparency frameworks, will strengthen AI model reliability and regulatory confidence.26 Importantly, advancing real-world validation studies and pilot regulatory sandboxes can provide valuable insights into AI model performance and safety in diverse clinical contexts, informing iterative regulatory improvements.26

CONCLUSION

The regulatory landscape governing artificial intelligence (AI) in drug development and clinical trials is rapidly evolving, with the United States Food and Drug Administration (FDA) and European Union (EU) emerging as global leaders in establishing comprehensive frameworks. Both jurisdictions emphasize risk-based credibility assessments, transparency in AI model functioning, and robust lifecycle management to ensure that AI applications in pharmaceutical development meet stringent standards for safety, efficacy, and quality.

While the US and EU regulatory approaches differ in scope, terminology, and procedural detail—with the FDA focusing on a structured seven-step risk assessment and the EU employing a broad risk-classification scheme under the AI Act—both frameworks converge on the critical objective of safeguarding patient welfare alongside fostering innovation. These systems recognize the complexity of AI technologies and the need to accommodate continuous learning and evolution within AI models, thereby promoting adaptive regulatory oversight.

Future efforts must prioritize regulatory harmonization to minimize fragmentation and uncertainty for global drug developers and AI innovators. Achieving this will require sustained, collaborative engagement among regulators, industry leaders, academic experts, and ethicists. Such engagement will help refine regulatory guidance, facilitate the development of harmonized technical standards, and ensure mechanisms are in place for ongoing validation and post-market surveillance of AI systems. Moreover, as AI technologies such as generative AI, reinforcement learning, and federated learning grow more sophisticated, regulatory frameworks will need to remain agile and flexible, allowing for rapid incorporation of emerging best practices without sacrificing rigour.
Together, these coordinated initiatives and continuous adaptations will unlock AI’s full potential to accelerate and optimize drug development and clinical trials globally, ultimately improving patient outcomes and advancing public health. The collaborative journey ahead promises a future where AI is seamlessly integrated into pharmaceutical innovation underpinned by robust, transparent, and patient-centred regulation.

REFERENCES

  1. U.S. Food and Drug Administration. Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. Draft Guidance for Industry and Other Interested Parties. 2025 Jan 6. Docket No. FDA-2024-D-4689. Federal Register. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological
  2. European Parliament; Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. L/1689, 12 July 2024.
  3. European Medicines Agency. Reflection Paper on the Use of Artificial Intelligence in the Medicinal Product Lifecycle. EMA/CHMP/CVMP/83833/2023. Published 9 September 2024. Available: https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf
  4. Mak KK, Pichika MR. Artificial intelligence in drug development: present status and future prospects. Drug Discov Today. 2019;24(3):773-80. https://doi.org/10.1016/j.drudis.2018.11.014
  5. DiMasi JA, Grabowski HG, Hansen RW. Innovation in the pharmaceutical industry: New estimates of R&D costs. J Health Econ. 2016;47:20-33. https://doi.org/10.1016/j.jhealeco.2016.01.012
  6. Paul D, Sanap G, Shenoy S, Kalyane D, Kalia K, Tekade RK. Artificial intelligence in drug discovery and development. Drug Discov Today. 2021;26(1):80-93. https://doi.org/10.1016/j.drudis.2020.10.010
  7. Bzdok D, Krzywinski M, Altman N. Points of Significance: Machine learning: a primer. Nat Methods. 2017;14(12):1119-20. https://doi.org/10.1038/nmeth.4526
  8. Bordukova M, Vidovszky AA, et al. Generative artificial intelligence empowers digital twins in drug discovery and clinical trials. Expert Opin Drug Discov. 2024;19(1):33-42. https://doi.org/10.1080/17460441.2023.2273839
  9. An G, Cockrell C. Drug Development Digital Twins for Drug Discovery, Testing and Repurposing: A Schema for Requirements and Development. Front Syst Biol. 2022;2:928387. https://doi.org/10.3389/fsysb.2022.928387
  10. Aboy M, et al. Navigating the EU AI Act: implications for regulated digital medical products. NPJ Digital Medicine. 2024;7:156. https://doi.org/10.1038/s41746-024-01232-3
  11. Van Kolfschooten H. The EU Artificial Intelligence Act (2024): Implications for the healthcare sector. Sci Total Environ. 2024;883:163784. https://doi.org/10.1016/j.healthpol.2024.105152
  12. Singh R, Mahmood Z, Chapman C, et al. Regulating the AI-enabled ecosystem for human therapeutics. Commun Med. 2025;5:12. https://doi.org/10.1038/s43856-025-00910-x
  13. Pantanowitz L, Gattenhorn D, Cadwalladr G, et al. Regulatory Aspects of Artificial Intelligence and Machine Learning in Healthcare: Global and Regional Frameworks. Artificial Intelligence in Medicine. 2024;140:102611. https://doi.org/10.1016/j.modpat.2024.100609
  14. Niazi SK, Magoola M, Langley C, et al. Regulatory Perspectives for AI/ML Implementation in Pharmaceutical GMP Settings: A Comprehensive Review. Pharmaceutics. 2025;17(3):528. https://doi.org/10.3390/ph18060901
  15. Das J, Lee DJ, et al. Public feedback to FDA on regulatory considerations for AI in drug manufacturing: data governance, lifecycle, validation and risk-based model development. AAPS Open. 2025;11(1):1-15. https://doi.org/10.1186/s41120-025-00110-w
  16. Gstrein OJ, Haleem N, Zwitter A. General-purpose AI regulation and the European Union AI Act. Internet Policy Review [Internet]. 2024 [18 September 2025];13(3). Available from: https://policyreview.info/articles/analysis/general-purpose-ai-regulation-and-ai-act
  17. RAND Corporation. Risk-Based AI Regulation: A Primer on the Artificial Intelligence Act. Santa Monica, CA: RAND Corporation; 2024. https://www.rand.org/pubs/research_reports/RRA3243-3.html
  18. Balcioglu YS. The European Union Artificial Intelligence Act: a new risk-based framework for regulating AI. Sci Total Environ. 2025; https://doi.org/10.1016/j.jrt.2025.100128
  19. International Council for Harmonisation (ICH). General Principles for Model-Informed Drug Development (M15) Step 2b Draft Guideline. ICH; 6 Nov 2024. Available from: https://database.ich.org/sites/default/files/ICH_M15_EWG_Step2_DraftGuideline_2024_1031.pdf
  20. Cross JL, Davis SE, Nguyen P-L, Rajkomar A. Bias in medical AI: Implications for clinical decision-making. NPJ Digital Medicine. 2024;7(1):217. https://doi.org/10.1371/journal.pdig.0000651
  21. Hasanzadeh F, Hajebrahimi F, Ghassemi M, et al. Bias recognition and mitigation strategies in artificial intelligence in healthcare: A review. npj Digital Medicine. 2025;8(1):15. https://doi.org/10.1038/s41746-025-01503-7
  22. Ding Q, Yu S, Zhang N, et al. Explainable Artificial Intelligence in the Field of Drug Discovery. Drug Design, Development and Therapy. 2025;19:525-171. https://doi.org/10.2147/DDDT.S525171
  23. International Council for Harmonisation. M15 General Principles for Model-Informed Drug Development (MIDD). Draft guideline Step 2b: December 2024. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/m15-general-principles-model-informed-drug-development
  24. Blanco-González A, Cabezon A, Seco-Gonzalez A, Conde-Torres D, Antelo-Riveiro P, Pineiro A, Garcia-Fandino R. The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies. ACS Omega. 2025;10(22):14512-14532. https://pubs.acs.org/doi/10.1021/acsomega.5c00549
  25. Ocaña A, Pandiella A, Privat C, Bravo I, Luengo-Oroz M, Amir E, et al. Integrating artificial intelligence in drug discovery and early drug development: a transformative approach. Biomarker Research. 2025;13(1):45. https://doi.org/10.1186/s40364-025-00758-2
  26. Gilbert S, Mathias R, Schönfelder A, Wekenborg M, Steinigen-Fuchs J, Dillenseger A, Ziemssen T. A roadmap for safe, regulation-compliant Living Labs for AI and digital health development. Science Advances. 2025;11(20):eadv7719. https://doi.org/10.1126/sciadv.adv7719.

Reference

  1. U.S. Food and Drug Administration. Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. Draft Guidance for Industry and Other Interested Parties. 2025 Jan 6. Docket No. FDA-2024-D-4689. Federal Register. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological
  2. European Parliament; Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. L/1689, 12 July 2024.
  3. European Medicines Agency. Reflection Paper on the Use of Artificial Intelligence in the Medicinal Product Lifecycle. EMA/CHMP/CVMP/83833/2023. Published 9 September 2024. Available: https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf
  4. Mak KK, Pichika MR. Artificial intelligence in drug development: present status and future prospects. Drug Discov Today. 2019;24(3):773-80. https://doi.org/10.1016/j.drudis.2018.11.014
  5. DiMasi JA, Grabowski HG, Hansen RW. Innovation in the pharmaceutical industry: New estimates of R&D costs. J Health Econ. 2016;47:20-33. https://doi.org/10.1016/j.jhealeco.2016.01.012
  6. Paul D, Sanap G, Shenoy S, Kalyane D, Kalia K, Tekade RK. Artificial intelligence in drug discovery and development. Drug Discov Today. 2021;26(1):80-93. https://doi.org/10.1016/j.drudis.2020.10.010
  7. Bzdok D, Krzywinski M, Altman N. Points of Significance: Machine learning: a primer. Nat Methods. 2017;14(12):1119-20. https://doi.org/10.1038/nmeth.4526
  8. Bordukova M, Vidovszky AA, et al. Generative artificial intelligence empowers digital twins in drug discovery and clinical trials. Expert Opin Drug Discov. 2024;19(1):33-42. https://doi.org/10.1080/17460441.2023.2273839
  9. An G, Cockrell C. Drug Development Digital Twins for Drug Discovery, Testing and Repurposing: A Schema for Requirements and Development. Front Syst Biol. 2022;2:928387. https://doi.org/10.3389/fsysb.2022.928387
  10. Aboy M, et al. Navigating the EU AI Act: implications for regulated digital medical products. NPJ Digital Medicine. 2024;7:156. https://doi.org/10.1038/s41746-024-01232-3
  11. Van Kolfschooten H. The EU Artificial Intelligence Act (2024): Implications for the healthcare sector. Sci Total Environ. 2024;883:163784. https://doi.org/10.1016/j.healthpol.2024.105152
  12. Singh R, Mahmood Z, Chapman C, et al. Regulating the AI-enabled ecosystem for human therapeutics. Commun Med. 2025;5:12. https://doi.org/10.1038/s43856-025-00910-x
  13. Pantanowitz L, Gattenhorn D, Cadwalladr G, et al. Regulatory Aspects of Artificial Intelligence and Machine Learning in Healthcare: Global and Regional Frameworks. Modern Pathology. 2024;37:100609. https://doi.org/10.1016/j.modpat.2024.100609
  14. Niazi SK, Magoola M, Langley C, et al. Regulatory Perspectives for AI/ML Implementation in Pharmaceutical GMP Settings: A Comprehensive Review. Pharmaceuticals. 2025;18(6):901. https://doi.org/10.3390/ph18060901
  15. Das J, Lee DJ, et al. Public feedback to FDA on regulatory considerations for AI in drug manufacturing: data governance, lifecycle, validation and risk-based model development. AAPS Open. 2025;11(1):1-15. https://doi.org/10.1186/s41120-025-00110-w
  16. Gstrein OJ, Haleem N, Zwitter A. General-purpose AI regulation and the European Union AI Act. Internet Policy Review [Internet]. 2024 [18 September 2025];13(3). Available from: https://policyreview.info/articles/analysis/general-purpose-ai-regulation-and-ai-act
  17. RAND Corporation. Risk-Based AI Regulation: A Primer on the Artificial Intelligence Act. Santa Monica, CA: RAND Corporation; 2024. https://www.rand.org/pubs/research_reports/RRA3243-3.html?utm_source=chatgpt.com
  18.  Balcioglu YS. The European Union Artificial Intelligence Act: a new risk-based framework for regulating AI. Sci Total Environ. 2025; https://doi.org/10.1016/j.jrt.2025.100128
  19. International Council for Harmonisation (ICH). General Principles for Model-Informed Drug Development (M15) Step 2b Draft Guideline. ICH; 6 Nov 2024. Available from: https://database.ich.org/sites/default/files/ICH_M15_EWG_Step2_DraftGuideline_2024_1031.pdf
  20. Cross JL, Davis SE, Nguyen P-L, Rajkomar A. Bias in medical AI: Implications for clinical decision-making. PLOS Digital Health. 2024;3:e0000651. https://doi.org/10.1371/journal.pdig.0000651
  21. Hasanzadeh F, Hajebrahimi F, Ghassemi M, et al. Bias recognition and mitigation strategies in artificial intelligence in healthcare: A review. npj Digital Medicine. 2025;8(1):15. https://doi.org/10.1038/s41746-025-01503-7
  22. Ding Q, Yu S, Zhang N, et al. Explainable Artificial Intelligence in the Field of Drug Discovery. Drug Design, Development and Therapy. 2025;19. https://doi.org/10.2147/DDDT.S525171
  23. International Council for Harmonisation. M15 General Principles for Model-Informed Drug Development (MIDD). Draft Guideline Step 2b: December 2024. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/m15-general-principles-model-informed-drug-development
  24. Blanco-González A, Cabezon A, Seco-Gonzalez A, Conde-Torres D, Antelo-Riveiro P, Pineiro A, Garcia-Fandino R. The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies. ACS Omega. 2025;10(22):14512-14532. https://doi.org/10.1021/acsomega.5c00549
  25. Ocaña A, Pandiella A, Privat C, Bravo I, Luengo-Oroz M, Amir E, et al. Integrating artificial intelligence in drug discovery and early drug development: a transformative approach. Biomarker Research. 2025;13(1):45. https://doi.org/10.1186/s40364-025-00758-2
  26. Gilbert S, Mathias R, Schönfelder A, Wekenborg M, Steinigen-Fuchs J, Dillenseger A, Ziemssen T. A roadmap for safe, regulation-compliant Living Labs for AI and digital health development. Science Advances. 2025;11(20):eadv7719. https://doi.org/10.1126/sciadv.adv7719.

Anjana T. K.
Corresponding author

Chemists College of Pharmaceutical Sciences and Research (Affiliated with Kerala University of Health Sciences, Thrissur), Varikoli P. O., Puthencruz, Ernakulam 682308, Kerala, India.

Dr. Kinjal Bipinkumar Gandhi
Co-author

Chemists College of Pharmaceutical Sciences and Research (Affiliated with Kerala University of Health Sciences, Thrissur), Varikoli P. O., Puthencruz, Ernakulam 682308, Kerala, India.

Anjana T. K.*, Dr. Kinjal Bipinkumar Gandhi. Regulations for Artificial Intelligence in Drug Development and Clinical Trials in US and EU. Int. J. of Pharm. Sci., 2025;3(9):2596-2605. https://doi.org/10.5281/zenodo.17182583
