Chemists College of Pharmaceutical Sciences and Research (Affiliated with Kerala University of Health Sciences, Thrissur), Varikoli P. O., Puthencruz, Ernakulam 682308, Kerala, India.
Objective: To review and compare the current regulatory frameworks governing artificial intelligence (AI) applications in drug development and clinical trials in the United States and Europe, highlighting core principles, challenges, and future directions. Methods: Comprehensive review of official regulatory documents, draft guidances,1 legislative acts, and guidance papers published between 2020 and 2025 from key authorities and legislative instruments, including the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), and the European Union’s Artificial Intelligence Act.2 Comparative analysis focused on risk assessment frameworks, lifecycle management, ethical considerations, and stakeholder engagement. Results: The FDA's 2025 draft guidance1 introduces a seven-step, risk-based credibility assessment framework emphasizing AI model validation, lifecycle oversight, transparency, and defined contexts of use (COU). The EU’s AI Act2 categorizes AI applications by risk level and mandates rigorous requirements for “high-risk” AI in drug development. The EMA’s 2023 Reflection Paper3 complements these with its emphasis on ethics and human oversight. Both regions prioritize data integrity, bias mitigation, and proactive regulatory engagement. However, differences remain in procedural detail and scope, resulting in regulatory heterogeneity alongside ongoing convergence efforts. Conclusions: US and European regulatory frameworks provide robust foundations for the safe and effective integration of AI into pharmaceutical development, balancing innovation with patient safety. Continued harmonization, refinement, and collaborative dialogue among stakeholders will be essential to address evolving AI technologies and promote global adoption.
Artificial intelligence (AI) has emerged as a transformative technology across numerous industries, and its impact on pharmaceutical drug development is profound and expanding.4 Traditionally, drug development is a time-consuming and costly process that involves multiple complex phases—from discovery and preclinical research to clinical trials and post-market surveillance.5 The integration of AI technologies such as machine learning, deep learning, and natural language processing offers the pharmaceutical industry new tools to improve efficiency, reduce costs, and enhance the precision of drug discovery, clinical trial design, and patient outcome prediction.6 Despite these benefits, the adoption of AI in drug development has historically been cautious due to regulatory complexity, data privacy concerns, and the perceived opacity of complex AI models—often referred to as “black boxes.”7 However, recent advances in AI-driven methodologies, including predictive modeling and digital twin simulations, have demonstrated substantial potential to accelerate drug development timelines and optimize clinical trials.8 For instance, AI-based digital twins create personalized patient models to simulate disease progression and treatment response, thereby enabling more efficient and smaller clinical trials without compromising data integrity.9 As AI applications grow in sophistication and scope, regulatory frameworks have become critical to ensure the safety, efficacy, and quality of AI-enabled medical products. 
Regulatory agencies such as the US Food and Drug Administration (FDA) and European Medicines Agency (EMA) are actively developing guidelines and legislative instruments to address unique challenges posed by AI, including model validation, transparency, bias mitigation, and continuous lifecycle management.1,10 The FDA’s 2025 draft guidance introduces a risk-based credibility framework focusing on AI model validation within clearly defined contexts of use, emphasizing transparency and stakeholder collaboration.1 Meanwhile, the European Union’s AI Act, which entered into force in 2024 with obligations phased in through 2026 and beyond, establishes a risk classification system imposing rigorous requirements on high-risk AI applications, including those in drug development.2,11 Although these regulatory advances establish a foundation for the responsible integration of AI, challenges remain. These include harmonizing diverse regional approaches, ensuring data quality and representativeness, addressing ethical considerations, and adapting regulatory structures to keep pace with rapidly evolving AI technologies such as generative AI and reinforcement learning. Continued collaboration among industry, regulators, academics, and ethical bodies will be essential for developing pragmatic, effective, and harmonized regulatory frameworks.11 This article reviews current regulations governing AI in drug development and clinical trials in the US and Europe, comparing their key principles, scopes, and challenges. It aims to provide a comprehensive understanding of the evolving regulatory landscape that supports AI-driven pharmaceutical innovation while safeguarding public health.
METHODOLOGY
Research Design
This study adopts a qualitative, comparative research design aimed at systematically analyzing and contrasting the regulatory frameworks governing artificial intelligence (AI) applications in drug development and clinical trials in the United States and Europe.12 The focus is on capturing nuanced regulatory principles, risk management strategies, ethical considerations, and the evolving legal landscape from authoritative sources and expert commentaries published between 2020 and 2025.13
Data Sources
Primary and secondary data were collected from the following sources:
- Official regulatory documents and draft guidances from the US Food and Drug Administration (FDA), including the January 2025 draft guidance on the use of AI in regulatory decision-making1
- Legislative texts, principally the European Union’s Artificial Intelligence Act2
- The European Medicines Agency’s (EMA) 2023 Reflection Paper on the use of AI in the medicinal product lifecycle3
- International harmonization documents, such as the International Council for Harmonisation’s (ICH) draft M15 guideline19
- Peer-reviewed literature and expert commentary published between 2020 and 2025
Data Collection Process
Documents and publications were systematically retrieved through a combination of searches of official regulatory agency websites, legislative databases, and scholarly literature databases.
Inclusion criteria prioritized materials published or updated between 2020 and 2025, directly addressing AI regulatory considerations in drug development, clinical trials, or associated ethical and legal challenges. Exclusion criteria omitted broader AI applications unrelated to pharmaceutical regulatory contexts.14,15
Data Analysis
Content analysis techniques were employed to extract thematic elements relating to:
- Risk assessment frameworks and risk classification
- Lifecycle management and model validation
- Ethical considerations, transparency, and human oversight
- Stakeholder engagement and harmonization mechanisms
Comparative synthesis focused on identifying convergences and divergences between the US FDA and EU regulatory frameworks. The analysis compared regulatory scope, risk classifications, procedural clarity, stakeholder engagement mechanisms, and implementation challenges.1,10
Limitations
The methodology is limited by the availability of publicly accessible information; confidential regulatory deliberations and nuanced country-specific legislation within EU member states may therefore not be fully represented. Rapid advances in AI technology may outpace the regulatory documentation available within the data collection window. The study also focuses predominantly on the US and European contexts, acknowledging the global landscape while limiting broader jurisdictional analysis.
RESULTS AND DISCUSSION
Regulatory Frameworks in the United States
In January 2025, the United States Food and Drug Administration (FDA) published its first comprehensive draft guidance titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” This guidance marks a significant step in the agency’s commitment to integrating advanced AI technologies into the regulatory evaluation of drugs and biologics while maintaining rigorous standards for safety, efficacy, and quality.1 Central to the guidance is the introduction of a risk-based credibility assessment framework, designed specifically for AI and machine learning (ML) models used in regulatory contexts. The framework acknowledges that AI models vary widely in complexity and application, and thus, regulatory scrutiny should be proportionate to the risk associated with the AI model's intended regulatory use, termed the Context of Use (COU). The COU precisely defines the question the model aims to address, its scope, and the decision impacted by its outputs.1
The FDA delineates a structured seven-step approach for establishing and evaluating model credibility:1
1. Define the question of interest that the AI model will address.
2. Define the context of use (COU) for the AI model.
3. Assess the AI model risk, considering both the model’s influence on the decision and the consequence of an incorrect decision.
4. Develop a plan to establish the credibility of AI model outputs within the COU.
5. Execute the plan.
6. Document the results of the credibility assessment plan and discuss any deviations.
7. Determine the adequacy of the AI model for the COU.
The draft guidance broadly applies to AI applications influencing regulatory decisions across the entire product lifecycle, including clinical trial design innovation, pharmacovigilance activities, manufacturing quality control, real-world data analysis for real-world evidence generation, and model-informed drug development. However, it specifically excludes AI used solely for drug discovery purposes and non-regulatory operational efficiencies such as drafting submissions or internal decision support that do not impact product safety or quality.1

To promote transparency and continuous reliability, the FDA underscores the importance of lifecycle management of AI models. This involves continuous monitoring for model drift, updating models in response to new data or evolving knowledge, and documenting changes that might affect regulatory performance.1

Stakeholder engagement is a core focus of the guidance. The FDA encourages early and frequent interaction with sponsors, developers, and other interested parties to discuss AI model development, validation strategies, and regulatory expectations. Such engagements aim to clarify requirements, identify potential challenges early, and foster alignment, thereby facilitating efficient regulatory submissions and approvals.1 Complementary to this guidance, the FDA has established the Good Machine Learning Practice (GMLP) initiative, which aims to develop cross-sector standards for AI and ML model development, emphasizing data quality, methodology transparency, and risk management.1

In summary, the FDA’s 2025 draft guidance reflects a pioneering and detailed regulatory approach that balances innovation incentives with rigorous safeguards, positioning the agency to effectively oversee the increasing integration of AI in drug development and clinical evaluation.1
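The guidance leaves the mechanics of lifecycle monitoring to sponsors. As a purely illustrative sketch (not an FDA-prescribed method), the Population Stability Index (PSI) below is one common, simple statistic for flagging when newly observed data has drifted away from a model's training distribution; the function name, bin count, and the 0.25 threshold are our own assumptions rather than regulatory requirements.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of one continuous variable; larger PSI = more drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range
    psi = 0.0
    for i in range(bins):
        left, right = lo + i * width, lo + (i + 1) * width

        def frac(sample):
            if i == bins - 1:  # last bin also captures the maximum value
                n = sum(1 for x in sample if x >= left)
            else:
                n = sum(1 for x in sample if left <= x < right)
            return max(n / len(sample), 1e-6)  # floor avoids log(0)

        e, a = frac(expected), frac(actual)
        psi += (a - e) * math.log(a / e)
    return psi

# Reference sample (e.g., model development data) vs. a shifted "new" sample.
training_scores = [0.1 * i for i in range(100)]
monitoring_scores = [0.1 * i + 3.0 for i in range(100)]
drift = population_stability_index(training_scores, monitoring_scores)
# Informal rule of thumb: PSI > 0.25 is often treated as significant drift.
print(f"PSI = {drift:.3f} -> {'investigate' if drift > 0.25 else 'stable'}")
```

In practice, a sponsor might compute statistics of this kind per input feature on a schedule, log them as part of the model's lifecycle documentation, and trigger re-validation when agreed thresholds are exceeded.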
Regulatory Frameworks in Europe
The European Union’s Artificial Intelligence Act (AI Act), formally adopted in 2024, represents the world’s first comprehensive legal framework regulating AI applications across all sectors, including healthcare and drug development.16 The Act entered into force on August 1, 2024, with a phased enforcement schedule extending through August 2027.17 Full enforcement, particularly for “high-risk” AI systems, is scheduled to begin on August 2, 2026.17 The AI Act employs a risk-based classification system, categorizing AI systems into four tiers: unacceptable risk (prohibited), high risk, limited risk, and minimal or no risk.18 Most AI systems used within drug development and clinical trials are categorized as “high-risk,” which subjects them to rigorous requirements including:10
- Implementation of a risk management system maintained across the AI system’s lifecycle
- Data governance ensuring that training, validation, and testing datasets are relevant, representative, and as free of errors as possible
- Technical documentation and automatic record-keeping (logging)
- Transparency and provision of information to deployers
- Effective human oversight
- Appropriate accuracy, robustness, and cybersecurity
- Conformity assessment before the system is placed on the market
The AI Act extends its jurisdiction to all providers, deployers, importers, and distributors of AI systems operating within the EU, creating accountability across the AI supply chain. It also applies to general-purpose AI models and mandates retroactive compliance for existing models, with deadlines for documentation and risk mitigation extending to 2027.14

Complementing the AI Act, the European Medicines Agency (EMA) published a Reflection Paper in 2023 advocating ethical AI deployment in drug development and clinical research. EMA’s paper emphasizes principles including data traceability, preservation of human oversight, and strategies for preparing regulatory infrastructures to accommodate AI-driven innovation.2 Further global harmonization is driven by the International Council for Harmonisation’s (ICH) M15 guideline, which integrates AI and machine-learning principles into the existing model-informed drug development (MIDD) framework, promoting consistent regulatory expectations worldwide.19

The EU also fosters innovation through regulatory tools such as “regulatory sandboxes,” which provide controlled environments for testing novel AI applications under active regulatory supervision. This approach facilitates iterative learning and collaboration between innovators and regulators, balancing innovation acceleration with patient safety.14 Challenges remain in finalizing the harmonized technical standards essential for AI Act compliance, and delays have been reported in enforcing certain AI literacy and transparency provisions. Nevertheless, the EU’s rigorous, multi-layered regulatory architecture underscores its ambition to lead globally in the safe and ethical integration of AI into healthcare and pharmaceuticals.14
Comparative Overview
| Aspect | United States (FDA) | Europe (EU & EMA) |
| --- | --- | --- |
| Regulation Instrument | FDA Draft Guidance (2025) | AI Act (Enforcement 2026), EMA Reflection Paper (2023) |
| Risk-based Framework | Yes, 7-step credibility assessment | Yes, risk classification (minimal to unacceptable) |
| Application Scope | Broad AI use in drug lifecycle except discovery | Broad, with “high-risk” AI under strict controls |
| Emphasis on Ethics & Transparency | Strong emphasis via GMLP and transparency goals | High emphasis on human oversight and ethical AI |
| Stakeholder Engagement | Encouraged through early dialogue and workshops | Ongoing consultations and regulatory cooperation |
| Harmonization Efforts | Internal FDA harmonization; international engagement | Global via ICH guidelines; cross-sector AI regulation |
| Coverage of AI Use Cases | Clinical trials, manufacturing, pharmacovigilance, RWE | Clinical trials, drug evaluation, medical device AI |
Challenges and Future Directions
Despite notable advances in regulatory frameworks for artificial intelligence (AI) in drug development and clinical trials, significant challenges remain that must be addressed to fully realize AI’s transformative potential.
Data Quality and Availability
One of the foremost challenges is ensuring the quality, representativeness, and accessibility of data used to train and validate AI models. High-quality data are fundamental to developing reliable AI algorithms; however, pharmaceutical data can be fragmented, heterogeneous, or incomplete due to variations in clinical trial designs, patient populations, or measurement standards. Regulatory frameworks increasingly emphasize adherence to FAIR data principles (Findable, Accessible, Interoperable, and Reusable) and ALCOA standards (Attributable, Legible, Contemporaneous, Original, and Accurate) to enhance data integrity and reproducibility. However, balancing data transparency with patient privacy protections and proprietary concerns remains complex, requiring ongoing refinement of legal and ethical safeguards.24
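ALCOA and FAIR are documentation principles rather than technical specifications, but some aspects of them are machine-checkable. The following hypothetical sketch (the field names, record structure, and 24-hour delay threshold are our own invention, not part of any regulatory standard) illustrates how attribution and contemporaneous-recording checks might be screened automatically before records enter an AI training set:

```python
from datetime import datetime, timezone

# Fields every record must carry to be Attributable and Original (assumed names).
REQUIRED_FIELDS = ("record_id", "value", "recorded_by", "recorded_at", "source")

def alcoa_issues(record, max_entry_delay_hours=24):
    """Return a list of ALCOA-style findings for one record (empty list = pass)."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    # Contemporaneous: recording should promptly follow the observation.
    observed, recorded = record.get("observed_at"), record.get("recorded_at")
    if observed and recorded:
        delay_h = (recorded - observed).total_seconds() / 3600
        if delay_h < 0:
            issues.append("recorded before observation (implausible timestamps)")
        elif delay_h > max_entry_delay_hours:
            issues.append(f"entry delayed {delay_h:.0f}h (not contemporaneous)")
    return issues

good = {
    "record_id": "R-001", "value": 7.2, "recorded_by": "site_12_nurse",
    "source": "eCRF",
    "observed_at": datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc),
    "recorded_at": datetime(2025, 3, 1, 10, 0, tzinfo=timezone.utc),
}
bad = {
    "record_id": "R-002", "value": 5.5, "recorded_by": "",  # no attribution
    "source": "eCRF",
    "observed_at": datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc),
    "recorded_at": datetime(2025, 3, 5, 9, 0, tzinfo=timezone.utc),  # 4 days late
}
print(alcoa_issues(good))  # []
print(alcoa_issues(bad))   # missing attribution + delayed entry
```

Automated screening of this kind can only flag the mechanical facets of data integrity; legibility, accuracy, and provenance judgments still require human and procedural controls.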
Model Bias and Interpretability
Another critical concern is algorithmic bias and the “black box” nature of AI models. Many advanced AI techniques, especially deep learning and reinforcement learning, operate with limited interpretability, making it difficult to fully understand or explain their decision-making processes. This opacity challenges regulatory acceptance, raising concerns that unintended biases may adversely affect patient subgroups and clinical outcomes. Addressing these issues demands the integration of explainable AI (XAI) approaches and robust model validation strategies to ensure fairness, transparency, and accountability within regulatory submissions.20,21,22
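Explainable-AI methods vary widely; as one minimal, model-agnostic illustration, permutation importance scores a feature by how much a model's error grows when that feature's values are shuffled, breaking its link to the outcome. The toy model and data below are entirely hypothetical and stand in for any opaque predictor:

```python
import random

def permutation_importance(predict, X, y, feature_idx, n_repeats=20, seed=0):
    """Mean increase in mean-squared error when column feature_idx is shuffled."""
    rng = random.Random(seed)
    base_err = sum((predict(row) - t) ** 2 for row, t in zip(X, y)) / len(X)
    col = [row[feature_idx] for row in X]
    increases = []
    for _ in range(n_repeats):
        shuffled = col[:]
        rng.shuffle(shuffled)  # break the feature's link to the outcome
        Xp = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, shuffled)]
        err = sum((predict(row) - t) ** 2 for row, t in zip(Xp, y)) / len(X)
        increases.append(err - base_err)
    return sum(increases) / n_repeats

# Toy setup: the outcome depends only on feature 0, never on feature 1.
predict = lambda row: 3.0 * row[0]
data_rng = random.Random(1)
X = [[data_rng.random(), data_rng.random()] for _ in range(200)]
y = [3.0 * x0 for x0, _ in X]
print("feature 0 importance:", permutation_importance(predict, X, y, 0))  # large
print("feature 1 importance:", permutation_importance(predict, X, y, 1))  # ~0
```

Because the method only queries the model's predictions, it applies equally to a deep network or a regulatory "black box"; its limitation is that it explains global behavior, not individual predictions.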
Regulatory Harmonization and Adaptability
The diversity in regional regulatory requirements presents an ongoing barrier to seamless global adoption of AI in drug development. While frameworks such as the FDA’s risk-based credibility assessment and Europe’s AI Act provide thorough approaches, variations in procedural details, terminology, and enforcement timelines create regulatory fragmentation. Effective international harmonization, via mechanisms such as the International Council for Harmonisation’s (ICH) draft M15 guideline, is essential to provide consistent standards and facilitate multinational pharmaceutical innovation.1,19,23 Additionally, the rapid evolution of AI technologies (including generative AI systems, federated learning models, and adaptive algorithms) necessitates flexible and adaptive regulatory frameworks capable of accommodating emerging AI modalities without frequent, disruptive revisions. Regulators face the challenge of balancing the need for certainty and structure with the agility to integrate novel technologies and evidence rapidly.1,19,23
Ethical, Legal, and Social Considerations
AI deployment raises complex ethical and societal questions, including fairness, equity, and patient autonomy. Regulators and developers must ensure AI models do not perpetuate existing healthcare disparities or introduce new biases that compromise care quality for vulnerable populations. Clear guidelines on human oversight, informed consent in AI-driven clinical trials, and mechanisms for addressing accountability in AI-augmented decision-making remain focal ethical concerns.25
Future Directions and Collaborative Approaches
To overcome these challenges, multidisciplinary collaboration between regulatory agencies, pharmaceutical industry stakeholders, academic researchers, technology developers, and ethicists is paramount. Proactive stakeholder engagement ensures that regulatory guidance evolves informed by practical implementation experiences and scientific advances.26 Continued development and refinement of standardization efforts, such as Good Machine Learning Practice (GMLP), validation protocols, and transparency frameworks, will strengthen AI model reliability and regulatory confidence.26 Importantly, advancing real-world validation studies and pilot regulatory sandboxes can provide valuable insights into AI model performance and safety in diverse clinical contexts, informing iterative regulatory improvements.26
CONCLUSION
The regulatory landscape governing artificial intelligence (AI) in drug development and clinical trials is rapidly evolving, with the United States Food and Drug Administration (FDA) and European Union (EU) emerging as global leaders in establishing comprehensive frameworks. Both jurisdictions emphasize risk-based credibility assessments, transparency in AI model functioning, and robust lifecycle management to ensure that AI applications in pharmaceutical development meet stringent standards for safety, efficacy, and quality. While the US and EU regulatory approaches differ in scope, terminology, and procedural detail, with the FDA focusing on a structured seven-step risk assessment and the EU employing a broad risk-classification scheme under the AI Act, both frameworks converge on the critical objective of safeguarding patient welfare alongside fostering innovation. These systems recognize the complexity of AI technologies and the need to accommodate continuous learning and evolution within AI models, thereby promoting adaptive regulatory oversight. Future efforts must prioritize regulatory harmonization to minimize fragmentation and uncertainty for global drug developers and AI innovators. Achieving this will require sustained, collaborative engagement among regulators, industry leaders, academic experts, and ethicists. Such engagement will help refine regulatory guidance, facilitate the development of harmonized technical standards, and ensure mechanisms are in place for ongoing validation and post-market surveillance of AI systems. Moreover, as AI technologies such as generative AI, reinforcement learning, and federated learning grow more sophisticated, regulatory frameworks will need to remain agile and flexible, allowing for rapid incorporation of emerging best practices without sacrificing rigor.
Together, these coordinated initiatives and continuous adaptations will unlock AI’s full potential to accelerate and optimize drug development and clinical trials globally, ultimately improving patient outcomes and advancing public health. The collaborative journey ahead promises a future where AI is seamlessly integrated into pharmaceutical innovation, underpinned by robust, transparent, and patient-centered regulation.
REFERENCES
Anjana T. K.*, Kinjal Bipinkumar Gandhi. Regulations for Artificial Intelligence in Drug Development and Clinical Trials in US and EU. Int. J. of Pharm. Sci., 2025; 3(9): 2596-2605. https://doi.org/10.5281/zenodo.17182583