In January 2026, the European Medicines Agency (EMA), together with the U.S. Food and Drug Administration (FDA), took an important step by publishing the “Guiding Principles of Good AI Practice in Drug Development.” This document is more than a technical checklist — it is a clear signal that regulators are getting serious about how artificial intelligence (AI) should be developed, validated, governed, and, ultimately, trusted across the medicines lifecycle.
While the principles are formally framed around drug development, their implications go well beyond non-clinical and clinical domains. For Health Economics and Outcomes Research (HEOR), this guidance offers something the field has long needed: a credible regulatory blueprint for responsible AI use that could help agencies move from cautious experimentation to structured adoption.
Why this matters now
AI is already being used across HEOR — whether for real-world evidence generation, economic modeling, patient segmentation, or long-term outcome prediction. Yet, despite methodological innovation, acceptance by HTA bodies and payers remains uneven. One of the key barriers is not capability, but confidence: confidence in transparency, robustness, reproducibility, and governance.
By articulating shared principles for AI use, the EMA and its partners are laying the groundwork for that confidence. Importantly, they are doing so in a way that aligns closely with the questions HTA agencies ask every day: What is this model for? What risks does it introduce? Can we trust the outputs? And how do we manage it over time?
A bridge to HEOR: Learning from regulatory leadership
We have already seen how regulatory clarity can accelerate adoption. The UK, for example, has actively explored how AI can be used to support evidence generation and decision-making in health systems. EMA-FDA’s principles create an opportunity to extend this momentum across Europe and beyond — including into HEOR and HTA decision frameworks.
Although all ten principles are relevant, four stand out as particularly transformative for HEOR.
Four principles with outsized impact on HEOR
1. Human-centric by design
This principle explicitly anchors AI development in ethical and human-centric values. For HEOR, this is critical. Economic models and real-world analyses directly influence access, reimbursement, and, ultimately, patient outcomes.
A human-centric approach reinforces that AI in HEOR should support, not replace, expert judgement. It legitimizes hybrid workflows where analysts, clinicians, patients, and decision-makers remain central, while AI enhances scale, speed, and insight. This framing directly addresses common HTA concerns about “black box” decision-making.
2. Risk-based approach
Not all AI use cases carry the same consequences, and this principle explicitly recognizes that. For HEOR, it is particularly powerful.
Using AI to automate literature screening does not pose the same risk as using it to inform long-term survival extrapolations or pricing decisions. A risk-based approach allows proportionate validation, governance, and oversight — making AI adoption more realistic and scalable for both developers and agencies.
This is precisely the kind of nuance HTA bodies need to move beyond binary “acceptable/not acceptable” positions on AI.
3. Risk-based performance assessment
Closely linked, the EMA and FDA emphasize that performance assessment should consider the complete system, including human-AI interaction, and be tailored to the intended context of use.
For HEOR, this reframes validation away from abstract accuracy metrics and toward decision relevance. The key question becomes: Is this AI fit-for-purpose for the policy or reimbursement decision it supports? This aligns naturally with HTA thinking and opens the door to more pragmatic, decision-focused validation frameworks.
4. Life cycle management
Perhaps the most underappreciated principle in HEOR today is life cycle management. The guidance highlights the need for ongoing monitoring, re-evaluation, and management of issues such as data drift.
HEOR models are often treated as static artefacts, yet AI-enabled models evolve as data, clinical practice, and populations change. Recognizing AI as a living system — not a one-off submission — could fundamentally change how HTA agencies think about post-submission evidence generation, managed entry agreements, and reassessment over time.
From drug development to HTA: An opportunity not to miss
This guidance is explicitly focused on drug development, but its principles are intentionally broad and collaborative. They invite extension, adaptation, and harmonization across jurisdictions and evidence domains.
For HEOR, this is an opportunity. By aligning AI methods with regulatory expectations early — rather than waiting for explicit HTA-specific rules — the field can help shape how agencies evaluate AI-enabled evidence. In doing so, HEOR can move from being a passive recipient of regulation to an active contributor to responsible AI adoption.
Looking ahead
AI will not replace HEOR expertise — but it will increasingly shape how evidence is generated, synthesized, and interpreted. These guiding principles offer a shared language to discuss trust, risk, and value. If agencies apply similar thinking to HEOR, we may finally see a path toward consistent, transparent, and confident use of AI in reimbursement and access decisions.
In that sense, this guidance is not just about AI in drug development. It is about preparing the entire evidence ecosystem — including HEOR — for a future where intelligent systems are used responsibly, transparently, and in service of better patient outcomes.
Interested in learning more?
Watch our recent webinar, “AI in HEOR: Case Studies on Navigating Regulatory and HTA Guidance,” on demand, featuring experts Dalia Dawoud, Manuel Cossio, Sheena Singh, and Cale Harrison: