Guiding Principles of Good AI Practice in Drug Development

Using AI responsibly across the drug product life cycle
AI in Drug Development
Interactive micro-learning module • ~20 minutes
What's in this lesson: You will explore 10 international guiding principles for good AI practice in drug development, see examples, check your understanding, and complete a short assessment.
Why this matters: These principles help ensure AI-enabled drug development remains safe, ethical, and scientifically sound.

Quick thought experiment

Imagine an AI system claims a new oncology drug cuts mortality by 40% versus standard of care. The sponsor wants to fast-track development based on these predictions alone.

Before you read further, select the question you would ask first about this AI system.

From document to daily decisions

The source document, published in January 2026, defines 10 principles for good AI practice in drug development. They span ethics, risk management, data and model quality, and clear communication with users and patients.


Human-centric, risk-based, standard-aligned

  • 1. Human-centric by design – AI aligns with ethical and human-centric values, supporting rather than replacing clinical judgment.
  • 2. Risk-based approach – Validation, risk mitigation, and oversight scale with model risk and context of use.
  • 3. Adherence to standards – AI follows applicable legal, ethical, scientific, cybersecurity, and regulatory standards, including GxP.

These three principles anchor every decision about whether, where, and how to use a particular AI system.

A team wants to deploy an AI tool that prioritizes trial sites purely based on predicted enrollment speed, with no documented evaluation of bias against specific patient groups. Which principle is most clearly at risk?

Defining use and safeguarding data

  • 4. Clear context of use – Role and scope of the AI are explicitly defined.
  • 5. Multidisciplinary expertise – Technical and domain experts are engaged across the AI life cycle.
  • 6. Data governance and documentation – Data provenance, processing, and analytical decisions are traceable and verifiable, with appropriate privacy protections.

A sponsor repurposes an AI toxicity prediction model trained on adult oncology data to guide pediatric dosing without updating documentation. Which principle is most clearly violated?

From model design to clear communication

  • 7. Model design and development – Uses best practices in model design and software engineering; data are fit-for-use.
  • 8. Risk-based performance assessment – Evaluates the full system, including human–AI interactions.
  • 9. Life cycle management – Quality management systems support monitoring, re-evaluation, and handling issues.
  • 10. Clear, essential information – Plain language explains context of use, performance, limitations, data, and updates.

What to remember

  • AI in drug development must be human-centric, risk-based, and aligned with legal and ethical standards.
  • Clear context of use, multidisciplinary expertise, and strong data governance are non-negotiable foundations.
  • Robust model design, system-level performance assessment, and life cycle management protect patients over time.
  • Plain, accessible communication about AI purpose, performance, and limitations is essential for trust.

How this assessment works

You will answer 4 multiple-choice questions covering the 10 guiding principles. There is one best answer per question. A score of at least 80% is required to earn the certificate.

Your responses are saved automatically. Use the navigation buttons at the bottom to move between questions. When you reach the Results page, your score and certificate eligibility will be displayed.
