Empowering Patient Engagement in HTA: Lessons from an AI-Generated Plain Language Summary Case Study

The challenge: Making HTA understandable to everyone

Health technology assessments (HTAs) play a critical role in determining which treatments and innovations are adopted within healthcare systems. However, the technical language and complexity of HTA reports often make them inaccessible to patients and caregivers — the very individuals whose lives these decisions affect the most.

Plain Language Summaries (PLS) are designed to close this gap. They can translate HTA findings into clear, patient-friendly language, empowering people to engage meaningfully in healthcare decisions. Yet, producing high-quality PLS documents is a slow and resource-intensive process. Teams must balance scientific rigor with readability, cultural sensitivity, and accuracy — a demanding task that limits scalability.

This is where artificial intelligence (AI) offers a transformative opportunity.

 

The study: Can generative AI help bridge the communication gap?

At ISPOR Europe 2025, we presented a pioneering study exploring whether generative AI can create accurate and patient-friendly summaries from complex HTA documents.

Using a NICE Highly Specialised Technologies (HST) guidance on onasemnogene abeparvovec (a gene therapy for spinal muscular atrophy), the team tested Google Gemini, a large language model, to generate a full PLS automatically.

The AI-generated summary was evaluated across 18 quality measures covering readability, accuracy, relevance, and tone. A “human-in-the-loop” reviewer ensured alignment with patient communication standards and European HTA Regulation principles — integrating transparency and patient empowerment into the assessment.

 

The results: Speed meets substance

The results were striking. The AI produced an eight-page (2,570-word) PLS in just 15 seconds, structured around all key HTA components — disease context, treatment mechanism, clinical effectiveness, safety, and patient impact.

Across 18 evaluation criteria, the PLS achieved an average score of 8.27/10, reflecting strong alignment with plain language and patient-centered communication standards.

  • Mechanism simplicity (9.2/10) and plain language explanation (8.9/10) were top-performing categories, demonstrating Gemini’s ability to simplify complex gene therapy concepts without sacrificing accuracy.
  • The document met CEFR B1 readability, ensuring accessibility for non-specialist audiences.
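Readability targets like these can be checked automatically. The sketch below computes the classic Flesch Reading Ease score in plain Python with a naive syllable counter; it is illustrative only (production pipelines would use a dedicated readability library, and mapping scores to CEFR levels requires a separate tool), and the sample sentences are invented:

```python
import re

def count_syllables(word):
    """Naive syllable count: number of vowel groups, minimum one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text; roughly 60-70 suits general lay readers."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Invented examples: a plain-language phrasing scores far higher (easier)
# than a technical phrasing of similar content.
plain = "Gene therapy gives the body a working copy of a gene. It can help children with this disease."
technical = ("Onasemnogene abeparvovec constitutes an adeno-associated viral "
             "vector mediated gene replacement intervention demonstrating "
             "statistically significant amelioration.")
assert flesch_reading_ease(plain) > flesch_reading_ease(technical)
```

A scripted check like this can flag overly dense passages for the human-in-the-loop reviewer, but it cannot judge accuracy or tone.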

However, the AI struggled with target population clarity (6.8/10) and unmet need articulation (6.5/10) — areas requiring deeper contextual and emotional nuance. These findings underscore the importance of maintaining a human role in refining and validating AI outputs, especially when tailoring content for specific patient groups.

 

The implications: Toward patient-centered HTA with AI

The study demonstrates that AI can accelerate and enhance the creation of patient-friendly HTA communications, promoting inclusivity and transparency in healthcare decision-making. But it also emphasizes that AI should complement, not replace, human expertise.

Generative AI tools like Gemini can help:

  • Scale patient engagement, enabling broader and faster dissemination of accessible HTA information.
  • Support regulatory compliance, aligning with EU HTA Regulation principles of transparency and participation.
  • Enhance health literacy, fostering more equitable and informed patient involvement.

Yet, meaningful adoption requires:

  • Human-in-the-loop systems to verify accuracy, tone, and contextual relevance.
  • Prompt optimization to capture nuances like unmet needs or cultural differences.
  • Ongoing validation to ensure reliability and regulatory alignment.

 

The conclusion: AI as a partner in patient empowerment

This work highlights how AI, when thoughtfully integrated, can make HTA more human-centered, transparent, and inclusive. Rather than automating empathy, it can help scale understanding — bringing patients into the conversation, not leaving them behind.

As HTA continues to evolve under new European regulations, embedding AI into communication workflows may mark a key step toward a truly patient-centered future — where every individual can understand, question, and contribute to the health decisions that shape their lives.

 

Interested in learning more?

Read the abstract published at ISPOR Europe 2025: “Can Generative AI Deliver Patient-Friendly Summaries? A Case Study Using NICE Guidance for Spinal Muscular Atrophy” by Manuel Cossio and Ramiro E. Gilardino.

Enhancing the Reliability of Indirect Treatment Comparisons: The Role of Key Opinion Leaders

Indirect treatment comparisons (ITCs) are essential for comparing two or more interventions when head-to-head randomized controlled trials (RCTs) are unavailable. In cases where patient-level data are available for at least one study, population-adjusted methods, such as matching-adjusted indirect comparison (MAIC) or simulated treatment comparison (STC), can be used to adjust for differences in treatment effect modifiers (TEMs) and prognostic variables (PVs) across study populations. Typically, relevant TEMs and PVs are identified through literature reviews, statistical approaches, and expert opinion.

To ensure the accurate identification and clinical relevance of TEMs and PVs, sponsors often consult with key opinion leaders (KOLs), improving the reliability of the ITC results. However, despite the critical and nuanced insight they can provide, the role of KOLs remains unstructured in formal guidance.

Here, we discuss the need for guidelines that outline when and how to integrate KOL input for better-informed healthcare decision-making.

 

The uncharted role of KOLs in ITCs

Health technology assessment (HTA), specifically in the context of ITCs, is an area driven largely by quantitative methods. Yet, the qualitative nuance provided by clinical experts, or KOLs, is indispensable. KOLs offer insights into TEMs and PVs that might otherwise elude purely data-driven models.

 

Current guidance documents

Limited structured guidance from HTA bodies and professional societies leads to inconsistent KOL engagement. Current guidance documents for conducting ITCs (e.g., the NICE methods manual1 and DSU Technical Support Document 18 [TSD 18]2) emphasize identifying potential TEMs via expert discussion but leave practical KOL engagement strategies underdefined.

 

Guidance from non-payer organizations

The lack of guidance from non-payer organizations is also evident. The Cochrane Handbook3 discusses both the methodological strengths of indirect comparisons and the potential for bias when treatments or populations vary in subtle but clinically important ways. However, the guidance is primarily focused on quantitative data synthesis rather than the integration of qualitative insights.

The PRISMA guidelines4 stress the need for transparent reporting when combining evidence across comparisons, but no tools or frameworks are proposed for capturing qualitative contributions.

A 2023 review5 of methodological approaches for identifying TEMs in ITCs highlighted that available guidance largely focused on statistical methods for adjusting TEMs rather than systematic and comprehensive processes for identifying TEMs. In addition, only 17 of 511 (3.3%) ITCs included in the review presented a description of the selection process for TEMs.

 

Bridging the gap: Methodological and procedural challenges

The current landscape is marked by a reliance on ad hoc approaches. KOL input on ITCs is often collected on an “as-needed” basis — sometimes late in the process — which may result in missed opportunities to refine study protocols early on. Without structured guidance from HTA bodies and professional societies, engagement with KOLs remains inconsistent. This gap underscores a broader tension: the need to honor the nuance of clinical insights while adhering to statistical rigor. It has been suggested that solicitation of KOL input should occur during the early, formative phases of the research process within a pre-specified framework.6 Integrating KOL input post hoc, after the core design and analysis decisions have been made, can introduce bias and risk the integrity of the ITC.

Several methods for structured expert input, such as the Delphi technique or nominal group processes, have been proposed in adjacent fields, but these are rarely applied in the context of ITCs.7 The consequence is a reliance on unstructured interviews or informal consensus-building, which can introduce subjectivity and reduce reproducibility.

 

Looking forward: Harmonizing expert input with methodological standards

The way forward lies in bridging the divide between clinical intuition and methodological precision. The development of clear guidelines that outline when and how to integrate KOL input would be a significant step toward enhancing the reliability of ITCs. One promising approach is the early engagement of KOLs in, for example, structured Delphi panels or advisory boards during the protocol development stage. Codifying this process would ensure that expert insights inform the research from the outset, rather than serving as an afterthought.

HTA bodies, regulatory agencies, and academic methodologists should prioritize the collaborative creation of comprehensive guidelines to address the following key aspects of KOL consultation for ITCs:

  • Selecting KOLs: Defining objective, transparent eligibility criteria to ensure that only the most appropriate clinical experts contribute their insights.
  • Timing: Detailing when during the research process expert opinion should be solicited — ideally early on to influence study design and TEM identification.
  • Question types: Guiding the formulation of questions that address specific knowledge gaps related to TEMs and PVs in the ITC.
  • Integration protocols: Outlining systematic methods for incorporating and reporting KOL insights, ensuring this information is subject to the same transparency standards as quantitative data.

The accurate identification of TEMs can significantly influence the reliability of ITC outcomes. Without consistent frameworks, there’s a risk that KOL input is either underutilized or inconsistently applied, affecting both the credibility and applicability of findings. The lack of guidance highlights the need to explore effective approaches in the absence of standardized methodologies for KOL engagement. Addressing this gap is essential for improving the reliability of ITC results and supporting informed healthcare decision-making.

 

Upcoming webinar

Our Evidence, Value, and Access team will be hosting the upcoming webinar, “Navigating the First Year of EU JCA Implementation: Updates, Methodological Insights, and Bridging Local HTA Realities,” on July 10 at 10 am ET. Register today to reserve your spot!

From Toplines to Triumph: Visualizing the Pathways to Regulatory Approval

Achieving positive topline results in a clinical trial marks a critical milestone in the drug development process, yet it is far from the end of the submission journey. Instead, it signals the start of a complex, fast-paced effort to prepare for regulatory submission and navigate the FDA’s multi-stage review. The final “regulatory defense” stage demands rigorous collaboration, meticulous planning, and adaptability to meet the expectations of regulatory agencies.

Here we discuss the stages of the post-topline journey, exploring key milestones, unexpected challenges, and best practices for ensuring a strong submission and a smooth path to approval.

 

1. The Preparation: Post-topline readiness and strategic planning

The preparation phase begins immediately after topline results are available. During this critical window — often lasting several months — cross-functional teams shift their focus to assembling the final submission package. Statisticians and programmers play a central role here, finalizing the tables, listings, and figures (TLFs) that will populate the Clinical Study Report (CSR) and preparing submission-ready datasets following CDISC standards, including ADaM, SDTM, and associated documentation.

In parallel, a pre-BLA or pre-NDA meeting with the FDA is typically scheduled to align on expectations, identify potential concerns, and set the foundation for a smoother review process. This phase is not just about document generation; it’s about establishing a strategy, anticipating regulatory scrutiny, and ensuring the submission is both complete and compelling. The quality of the groundwork laid here often dictates the ease — or difficulty — of the phases that follow.

 

2. The Submission: Crossing the threshold to regulatory review

Once the submission is filed, the process transitions into a more structured phase governed by the FDA’s review protocols. The agency begins with a 60-day filing review to assess whether the BLA or NDA is complete and acceptable for full review. If so, the sponsor receives a Day 74 Letter, which provides early feedback, flags any immediate concerns, and confirms the Prescription Drug User Fee Act (PDUFA) date — typically 10 months post-filing for standard reviews or 6 months for priority reviews.

Although this phase may seem procedural, its significance is high. A clean, well-organized submission can streamline the review process, limit questions, and reduce the risk of delays. This is also the point where rolling submissions, if applicable under Fast Track designation, can offer a tactical advantage by accelerating document delivery and potentially shortening review timelines.

For statistical and programming teams, this is not a time to sit back and relax — it’s an opportunity to ensure internal alignment and anticipate questions the FDA may raise based on known data complexities. Strong documentation and traceability within datasets and outputs are essential at this point, helping to support any needed follow-up. Proactive communication and readiness during this phase help lay the groundwork for the more intensive regulatory engagement that follows.

 

3. The Regulatory Defense: Responding, clarifying, and defending your data

The regulatory defense phase is where the bulk of agency interaction occurs — and where flexibility and responsiveness become essential. During this time, the FDA may issue multiple information requests (IRs), asking for clarification on statistical methodology, specific data points, or safety and efficacy outcomes. Mid-cycle communications, typically occurring around months 4–5 for standard reviews, offer a formal opportunity to assess the review’s progress and surface any significant concerns.

In some cases, the agency may convene an Advisory Committee (AdCom) meeting to gather expert input, particularly when there are outstanding safety questions or complex benefit-risk considerations. Throughout this phase, the ability to quickly respond to ad hoc requests, provide high-quality data outputs, and maintain close collaboration across functions is critical. It’s a high-stakes stage where well-prepared teams can help preserve timelines and ensure the submission stays on track.

 

4. The Unexpected: Adapting to setbacks and charting a new course

In some cases, the regulatory journey doesn’t lead directly to approval. If the FDA identifies significant deficiencies in the initial submission — whether related to clinical data, statistical interpretation, manufacturing, or safety — it may issue a Complete Response Letter (CRL). This marks a temporary halt in the process, requiring the sponsor to address the concerns before resubmission. Depending on the scope of the deficiencies, the resubmission may fall under Class I (minor issues, reviewed in 2 months) or Class II (major issues, reviewed in 6 months).

For statisticians and programmers, this could mean conducting additional analyses, integrating new data, or adjusting the structure and presentation of the submission package. While a CRL can be a setback, it’s also an opportunity to recalibrate, seek additional guidance from the FDA, and improve the likelihood of approval in the next cycle. The key is to approach this phase with transparency, strategic thinking, and a readiness to adapt and respond.

 

Final takeaways

The path from topline results to regulatory approval is rarely linear. Timelines can range from as little as 12 months in expedited reviews to over 30 months in cases involving major deficiencies and resubmissions. Success in this post-unblinding phase hinges on proactive planning, adaptable resourcing, and the ability to respond quickly and thoroughly to regulatory needs. Equally important is collaboration across functions — clinical, regulatory, biostatistics, programming, and operations must work closely and cohesively to anticipate challenges, align timelines, and respond efficiently to agency requests. Whether following a standard or accelerated route, the shared priority is a comprehensive, high-quality submission that stands up to regulatory scrutiny — and ultimately supports timely access to new therapies for patients.

 

Interested in learning more?

Watch Jasperlynn Kao and Florence Le Maulf’s recent webinar, “From Toplines to Triumph: Visualizing the Pathways to Regulatory Approval.”

Patient Centricity in Comparative Effectiveness Research

Inclusion of the patient voice in clinical research is becoming increasingly important. Oncology trials, for example, may be shifting toward a more patient-centered approach to outcomes, such as quality of life and time to symptomatic progression,1 as endpoints like progression-free survival and response rate may not be meaningful to patients in the absence of symptom alleviation.2

The primary purpose of a patient-centric approach to clinical research is to improve patient outcomes, which include well-being and quality of life.3 As part of that endeavor, here we discuss increasing patient centricity in comparative effectiveness research by including patient-reported outcome data in indirect treatment comparison analyses.

 

What are patient-reported outcomes (PRO)?

One way of increasing patient centricity in clinical trials is by incorporating patient-reported outcomes (PRO) as trial endpoints. A PRO is “any report of the status of a patient’s health condition that comes directly from the patient without interpretation of the patient’s response by a clinician or anyone else.”4 PRO measures assess broad concepts like health-related quality of life (HRQoL) as well as specific symptoms, such as pain or fatigue. However, there are concerns regarding the extent to which PRO results are published.5 Accordingly, in addition to growing interest in measuring PRO, there are also ongoing efforts to promote the reporting of PRO findings from clinical trials. Specifically, the Consolidated Standards of Reporting Trials (CONSORT) PRO extension guidelines recommend that PRO results be reported in peer-reviewed publications, ideally with a trial’s primary endpoints.6

 

Benefits to publishing PRO findings

There are various benefits of publishing PRO findings from clinical trials. These PRO data can be used to:

  • Facilitate shared decision-making between patients and providers, and
  • Influence treatment guidelines and drug approvals.7

Additionally, health technology assessment (HTA) agencies recognize patient-reported HRQoL and treatment satisfaction data when confirming clinical benefits and supporting reimbursement decisions.8,9 Specifically, the inclusion of PRO data in peer-reviewed publications is considered especially influential.8 It has been recommended that the use of PRO data be maximized so that they have the largest possible impact.7 One way to enhance the impact of PRO data from clinical trials is to include them in comparative effectiveness research.

 

What are indirect treatment comparisons (ITC)?

Indirect treatment comparison (ITC) analyses can be used to evaluate the efficacy and safety of interventions that have not been compared in a head-to-head trial. Several ITC methods are available, with network meta-analysis (NMA) being the most common.10 Other techniques include those that adjust for patient differences, such as population-adjusted indirect comparison (PAIC). Specifically, PAIC utilizes individual patient-level data (IPD) from one trial (i.e., the index trial) and published aggregate data for comparator trials.10 Using PAIC as an example, the PRO IPD from the index trial would be utilized along with published aggregated PRO data from comparator trials. This would be contingent upon PRO data being publicly available for the comparator(s) of interest, which is one limitation of conducting comparative effectiveness analyses for PRO data. Moreover, the PRO definitions and assessment criteria would need to be similar across trials.
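To make the population-adjustment idea concrete, here is a minimal sketch of method-of-moments MAIC weighting for a single covariate. The data and the covariate (age) are hypothetical; real analyses balance multiple covariates jointly, typically via numerical optimization in statistical software:

```python
import math

def maic_weights(ipd_cov, target_mean, lo=-2.0, hi=2.0, iters=100):
    """Method-of-moments MAIC weights for a single covariate.

    Finds alpha so that the exp(alpha * x)-weighted mean of the index trial's
    individual patient data (IPD) covariate equals the comparator trial's
    published aggregate mean. Assumes target_mean lies inside the IPD range.
    """
    x = [v - target_mean for v in ipd_cov]      # center on the aggregate mean

    def moment(alpha):                          # strictly increasing in alpha
        return sum(xi * math.exp(alpha * xi) for xi in x)

    for _ in range(iters):                      # bisection on moment(alpha) = 0
        mid = (lo + hi) / 2.0
        if moment(mid) > 0:
            hi = mid
        else:
            lo = mid
    alpha = (lo + hi) / 2.0
    return [math.exp(alpha * xi) for xi in x]

# Hypothetical ages from the index trial's IPD; the comparator publication
# reports a mean age of 60, so the weights shift the IPD toward that mean.
ages = [48, 52, 55, 58, 61, 63, 67, 70, 74, 77]
weights = maic_weights(ages, target_mean=60.0)
weighted_mean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
assert abs(weighted_mean - 60.0) < 1e-6
```

The same reweighting would then be applied to the index trial's PRO scores before comparing them against the comparator trial's published aggregate PRO results.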

 

What does the intersection of PRO and ITC look like?

Examples can be found in the oncology literature. Specifically, ITC analyses of PRO have been published for HR+/HER2- breast cancer. In one study, palbociclib + fulvestrant was associated with better outcomes than abemaciclib + fulvestrant in terms of changes from baseline in several domains of a common oncology PRO measure, the EORTC QLQ-C30 (overall quality of life, emotional functioning, nausea/vomiting, appetite, and diarrhea).11 Another study utilized a matching-adjusted indirect comparison to evaluate time to symptom deterioration, revealing favorable outcomes for ribociclib over abemaciclib in the EORTC QLQ-C30 domains of appetite, diarrhea, and fatigue.12

 

Including PRO data in ITC analyses

Although patient engagement is at the center of conversations about clinical research, incorporating the patient perspective in a meaningful manner remains challenging. As efforts to increase patient centricity continue, including PRO data in ITC analyses may represent an overlooked area of opportunity. Specifically, long before a clinical trial begins, resources are used to select optimal PRO measures, determine when and how they will be administered, and develop an analytical plan. During the trial, patients devote their time and energy to participation, including the completion of PRO measures. When considering all the work that goes into this one component of the clinical trial, it only makes sense that using these data as comprehensively as possible is a way of honoring patients and researchers who share a common goal. As stated earlier, these data should be maximized by including them in comparative effectiveness research.

Including PRO data in ITC analyses both begins and ends with dissemination: that is, only when PRO results from clinical trials are publicly available can they be included in ITC analyses. Along the same lines, the results of the ITC analyses should also be published, ideally in an open-access format, so that patients and providers alike can utilize the findings to develop treatment plans. Additional recommendations beyond dissemination include ongoing collaboration between patients, HTA agencies, and health economics and outcomes research (HEOR) specialists to align on goals and enhance decision-making for the adoption of new technologies.

 

Final takeaway

Although patient-centered comparative effectiveness research is only one piece of the puzzle, it contributes to the larger endeavor of increasing patient centricity in clinical research and may also shed light on other areas previously viewed as separate from these efforts.

Advancing Equity in Health Technology Assessment: Lessons from CAR T-Cell Therapies

Chimeric antigen receptor (CAR) T-cell therapies, classified as advanced therapy medicinal products, have revolutionized the treatment landscape for certain hematological cancers, providing new hope to patients who previously had limited options. Since the U.S. FDA approved tisagenlecleucel (Kymriah) and axicabtagene ciloleucel (Yescarta) in 2017 for relapsed or refractory B-cell precursor acute lymphoblastic leukemia and large B-cell lymphoma, respectively, evidence has suggested that CAR T-cell therapies could offer a potentially curative approach in a range of other hematological conditions.1,2,3,4

However, despite their potential to improve patient outcomes, access to CAR T-cell therapies remains inconsistent due to cost, delivery complexity, and manufacturing challenges. Additionally, disparities in access related to social determinants of health (SDOH) further limit equitable benefits, disproportionately impacting marginalized populations (such as those living in rural areas, individuals with no family or social networks, and older people).

Health technology assessment (HTA) has traditionally focused on clinical outcomes and cost-effectiveness. Although health equity has been recognized as a distinct value element in HTA, and relevant frameworks and guidelines exist, it is not routinely integrated into decision-making. As such, CAR T-cell therapies represent a valuable case study for better understanding and advancing equity considerations in HTA.

 

What are CAR T-cell therapies?

CAR T-cell therapies are a type of immunotherapy in which a patient’s own T-cells are modified to target and attack cancer cells, offering effective options for relapsed or refractory hematological cancers. This process involves extracting, modifying, and reinfusing the cells, followed by close monitoring for severe adverse events. Beyond their current approved indications, CAR T-cell therapies are also being investigated for several other hematological malignancies, as well as in solid tumors and non-cancer indications such as autoimmune conditions.4

Delivering CAR T-cell therapies presents significant challenges for healthcare systems due to their complexity, high cost, and the need for specialized infrastructure and expertise. The treatment requires apheresis, cell manufacturing, conditioning therapy, and intensive post-infusion monitoring, all conducted at accredited centers, often located in major urban areas.5 Successful delivery also requires coordination among a multidisciplinary team of physicians, nurses, and pharmacists, along with investment in treatment center infrastructure, including intensive care unit capacity and specialized training to manage severe adverse events (e.g., cytokine release syndrome and neurotoxicity).5

 

CAR T-cell therapies: Highlighting equity concerns in access to innovative treatments

Ensuring that equitable access to healthcare is considered in HTA decision-making, particularly for high-cost, innovative treatments like CAR T-cell therapy, has become a growing concern. Despite advancements in science, therapeutic applications, and complication management, access to CAR T-cell therapy remains limited, with only a small percentage of eligible patients receiving treatment.6,7 This restricted access stems from challenges specific to CAR T-cell therapy, such as high costs, complex logistics, and manufacturing constraints, which are compounded by factors related to SDOH and equity.

Equity gaps are evident in disease incidence and prevalence, treatment patterns, and outcomes of patients eligible for CAR T-cell therapies. For example, racial and ethnic minorities, particularly Black and Hispanic populations, experience higher rates of certain hematological malignancies, yet are underrepresented in clinical trials that inform CAR T-cell therapy approvals.8,9 This leads to gaps in effectiveness and safety data across populations. Furthermore, differences in diagnosis and referral patterns contribute to inequities, with marginalized groups less likely to be referred to specialized centers due to limited provider awareness or implicit biases. Older adults, who could benefit from CAR T-cell therapies, are often excluded from trials, limiting evidence for their use in this population.10

SDOH, such as geographic remoteness and socioeconomic status, exacerbate inequities in access to CAR T-cell therapies once they are approved. Patients living in rural areas face logistical and financial barriers to reaching treatment centers, while individuals from lower socioeconomic backgrounds struggle with transportation, caregiving responsibilities, and lost wages.11,12 These overlapping disparities create a cumulative burden, limiting equitable access and worsening outcomes for historically underserved groups.

 

Exploring equity factors in HTAs of CAR T-cell therapies and the journey toward inclusive access

Traditional HTA frameworks have historically overlooked equity considerations, prioritizing clinical efficacy and cost-effectiveness while neglecting how SDOH and equity factors affect patient access and outcomes. This gap not only exacerbates disparities but also fails to incentivize health technology developers to commit to systematic evidence gathering and addressing these issues in their evidence submissions. While several modified economic modeling approaches that account for equity considerations exist (e.g., distributional cost-effectiveness analyses, equity-based weighting, multi-criteria decision analysis), there is a lack of consensus on which approach is best and how these methods can systematically be incorporated into HTA.13,14 As a result, HTAs often do not account for the unique burdens faced by underserved populations, such as indirect costs related to travel, caregiving, and lost income, further exacerbating existing inequities.

Recent commitments to equity from HTA bodies present valuable opportunities to ensure fair access to novel, high-cost therapies.15,16 CAR T-cell therapies, with their complex delivery and high cost, serve as a compelling case study for examining how HTA bodies incorporate equity considerations into their assessments. To explore this further, we conducted a review of 18 HTAs from Canada’s Drug Agency and the National Institute for Health and Care Excellence, focusing on six CAR T-cell therapies. Our review found that most submissions acknowledged disparities in disease incidence, treatment, and outcomes based on race, socioeconomic status, diagnosis and referral patterns, and age. These disparities were often linked to financial and geographical barriers that disproportionately affect marginalized groups. However, there were limited and inconsistent efforts to quantify these factors in the economic modeling or in the analysis of the clinical evidence submitted. This likely reflects the fact that HTA bodies do not routinely require sponsors to quantify equity concerns within their submissions, leading both decision-makers and companies to potentially overlook these issues.

Cytel will present the results of this review at the 2025 ISPOR conference in Montreal, Canada, where we will explore how gaps in HTA evaluations can inadvertently perpetuate inequities in access to CAR T-cell therapies. Join us at our podium session to learn more about how incorporating equity considerations into HTA processes can promote more equitable outcomes and ensure that all patients, regardless of their background, can benefit from CAR T-cell therapies. Do not miss this opportunity to engage in the discussion on advancing inclusive access to high-cost, innovative therapies.

 

Addressing equity concerns in CAR T-cell therapies: Strategies for inclusive access

Cytel can support pharmaceutical clients in addressing equity concerns through the following offerings:

  • Innovative trial designs that consider elements of health equity
  • Generation of real-world evidence to supplement trial programs
  • Lifecycle evidence generation to support value in diverse groups of patients
  • Advanced analytics, such as transportability analyses, to maximize the use of evidence generated in other settings
  • Quantifying the impact of inequalities in the value proposition of new health technologies

Maximizing the Potential of Real-World Data with Bayesian Borrowing

Real-world evidence (RWE) generation raises well-known data quality concerns, including bias and small sample sizes that yield imprecise estimates of questionable accuracy and limited interpretability. In response, regulatory submissions have increasingly incorporated advanced methodologies to enhance the robustness of RWE.

Among these methods, Bayesian borrowing stands out as an approach that can significantly increase the scientific potential of real-world data. By combining data from multiple sources, each with its own weaknesses, Bayesian borrowing can strengthen comparisons with trial data beyond what a randomized controlled trial alone provides. It can also be used to create hybrid control arms, enabling a smaller control cohort and thereby addressing ethical concerns and patient availability issues.1

 

The Bayesian borrowing concept

Bayesian borrowing methods make use of external data, potentially from multiple sources, through a prior distribution that accounts for the possibility that the external data come from a different population. While external or historical data can enhance the precision and accuracy of a study's parameter estimates, simply pooling these data with the current study could introduce bias if the external population differs from the current one.2,3,4 To address this, a prior such as the power prior is used to discount the influence of the external data. The resulting prior is more diffuse than complete pooling of the current and external datasets, reducing the potential bias at the cost of some precision in the parameter estimate.
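To make the power prior idea concrete, here is a minimal sketch for a binomial response rate under a conjugate beta-binomial model. The external data are down-weighted by a parameter a0 between 0 (ignore the external data) and 1 (pool it fully). All numbers are hypothetical and purely illustrative; real applications involve more elaborate models and principled choices of a0.

```python
# Minimal power prior sketch for a binomial response rate.
# a0 = 0 ignores the external data; a0 = 1 pools it fully.

def power_prior_posterior(x, n, x0, n0, a0, a=1.0, b=1.0):
    """Beta posterior parameters for the current response rate, given
    current data (x responders out of n), external data (x0 out of n0),
    a discounting weight a0, and a Beta(a, b) initial prior."""
    alpha = a + a0 * x0 + x
    beta = b + a0 * (n0 - x0) + (n - x)
    return alpha, beta

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Hypothetical numbers: small current control arm, larger external cohort.
x, n = 6, 20          # current study: 6/20 responders
x0, n0 = 45, 150      # external data: 45/150 responders

for a0 in (0.0, 0.5, 1.0):
    alpha, beta = power_prior_posterior(x, n, x0, n0, a0)
    print(f"a0={a0:.1f}: posterior mean = {posterior_mean(alpha, beta):.3f}")
```

As a0 increases, the posterior is pulled toward the external response rate and becomes more concentrated, which is exactly the bias-precision trade-off described above.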

In drug development, Bayesian borrowing is primarily applied in situations involving rare diseases, pediatric trials, or when there are no existing approved treatments for the same conditions.5

 

Figure 1. Bayesian borrowing

 

Quantitative bias analysis (QBA) plays a crucial role in supporting studies that employ Bayesian borrowing by assessing how weaknesses in the integrated data affect study results. When leveraging external or historical data through Bayesian methods such as Bayesian borrowing, there is always a risk that the borrowed data introduce bias through elements that cannot be addressed directly in the analysis specification, such as missing or unmeasured data or other quality issues. QBA quantifies the extent of these biases and provides a structured approach to adjusting for them, thereby improving the interpretability of the results and ultimately supporting study validity and scientific integrity.

By applying QBA alongside Bayesian borrowing, researchers can transparently account for uncertainties in the borrowed data and ensure that the final estimates are more robust, credible, and defensible in both regulatory and clinical decision-making contexts.

 

Figure 2. Example of QBA for Bayesian borrowing

 

FDA and HTA submissions incorporating Bayesian borrowing methods

In recent years, the acceptance of Bayesian borrowing approaches has been evolving from both regulatory and Health Technology Assessment (HTA) perspectives.

The FDA has highlighted this shift through initiatives like a podcast discussing the use of Bayesian statistics, including a case where Bayesian methods were used to borrow data from an adult trial to assess an asthma product’s treatment effects in pediatric patients.6 Additionally, the FDA recommended that GSK apply Bayesian dynamic borrowing to integrate adult trial data for a pediatric study for post-marketing activities, and these results were subsequently accepted.7

HTA bodies are also considering Bayesian methods; for example, NICE recommended using Bayesian hierarchical models, which are closely related to Bayesian borrowing, in the technical appraisal of larotrectinib for NTRK-fusion positive solid tumors in 2020.8

Furthermore, the FDA plans to release draft guidance on the use of Bayesian methods in clinical trials for drugs and biologics by the end of 2025.

 

The future of Bayesian borrowing

Although Bayesian methods have garnered increasing attention from regulatory and HTA bodies, their practical implementation has been somewhat limited. Challenges such as organizational resistance to novel approaches, resource constraints, and difficulties in applying these advanced methods effectively can hinder their adoption in regulatory and HTA submissions. However, as awareness grows and best practices are established, these barriers are likely to diminish, paving the way for more widespread use of Bayesian methods.

 

Notes

1 Dron, L., Golchi, S., Hsu, G., & Thorlund, K. (2019). Minimizing Control Group Allocation in Randomized Trials Using Dynamic Borrowing of External Control Data – An Application to Second Line Therapy for Non-Small Cell Lung Cancer. Contemporary Clinical Trials Communications, 16(1).

2 Viele, K., Berry, S., Neuenschwander, B., Amzal, B., Chen, F., Enas, N., Hobbs, B., Ibrahim, J. G., Kinnersley, N., Lindborg, S., Micallef, S., Roychoudhury, S., & Thompson, L. (2013). Use of Historical Control Data for Assessing Treatment Effects in Clinical Trials. Pharmaceutical Statistics, 13(1).

3 Struebing, A., McKibbon, C., Ruan, H., Mackay, E., Dennis, N., Velummailum, R., He, P., Tanaka, Y., Xiong, Y., Springford, A., & Rosenlund, M. (2024). Augmenting External Control Arms Using Bayesian Borrowing: A Case Study in First-Line Non-Small Cell Lung Cancer. Journal of Comparative Effectiveness Research, 13(5).

4 Mackay, E. K. & Springford, A. (2023). Evaluating Treatments in Rare Indications Warrants a Bayesian Approach. Frontiers in Pharmacology, 14(1).

5 Muehlemann, N., Zhou, T., Mukherjee, R., Hossain, M. I., Roychoudhury, S., & Russek‑Cohen, E. (2023). A Tutorial on Modern Bayesian Methods in Clinical Trials. Therapeutic Innovation & Regulatory Science, 57(1).

6 Clark, J. (2023). Using Bayesian Statistical Approaches to Advance our Ability to Evaluate Drug Products. CDER Small Business and Industry Assistance Chronicles, U.S. FDA.

7 Best, N., Price, R. G., Pouliquen, I. J., & Keene, O. N. (2021). Assessing Efficacy in Important Subgroups in Confirmatory Trials: An Example Using Bayesian Dynamic Borrowing. Pharmaceutical Statistics, 20(1).

8 NICE. (2020). Appraisal Consultation Document: Larotrectinib for Treating NTRK Fusion-Positive Solid Tumours.

Artificial Intelligence Applications in HEOR

Written by Reza Jafar, Omar Irfan, and Maria Rizzo

Recent advancements in machine learning (ML) and artificial intelligence (AI) hold tremendous potential for health economics and outcomes research (HEOR), such as in cohort selection, feature selection, predictive analytics, causal inference, and economic evaluation.[1] The use of ML and AI has previously been explored in systematic literature reviews (SLRs), real-world evidence (RWE), economic modeling, and medical writing.[2-4]

In this article, we assess the evolving landscape of evidence and developments in AI for HEOR, reflecting on recent insights presented at the 2024 US conference of The Professional Society for Health Economics and Outcomes Research (ISPOR) in Atlanta. Read more »

Planning Strategies for Externally Controlled Trials: Insights from ISPOR US 2024

External Control Arms (ECAs) provide comparative evidence when recruiting patients is difficult or unethical in randomized controlled trials. ECAs have significant potential to save resources and accelerate access to innovative treatment. In a previous blog, our experts took a deep dive into the concept of ECAs, their acceptable use cases, and the current regulatory guidance.

Existing guidance on the design and conduct of externally controlled trials emphasizes the importance of early engagement with regulatory and HTA bodies to justify using an ECA and discuss the preliminary study design and statistical analyses. With the increasing use of ECAs in regulatory and HTA submissions, the acceptable use cases for ECAs and important design and analytical considerations are becoming clearer. However, sponsors still face key questions about the optimal timing to plan for an ECA and how to prepare for early interactions to address differing regulatory and HTA perspectives.

In the ISPOR US 2024 HEOR Theatre session, Jason Simeone, Evie Merinopoulou, and Grace Hsu delved into these questions, discussing how regulatory and HTA stakeholders appraise ECAs, common issues from both perspectives and proposed practical solutions. In this blog, we ask Evie follow-up questions, highlighting insights from their ISPOR HEOR Theater session.

 

Your ISPOR US 2024 presentation was about early planning strategies for ECA. So, what is the optimal timing to start planning for an ECA?

Ideally, sponsors that need to perform an ECA to support their development program should start planning for the ECA alongside the clinical trial design. This allows them to gain experience with current real-world data (RWD) and make any necessary investment decisions for data improvements, such as additional data collection or infrastructure upgrades. Further, considering an ECA during the trial design provides the opportunity to incorporate real-world endpoints into the clinical trial. This is particularly valuable because defining clinical endpoints in real-world databases can often be challenging, especially when they are not measured consistently between routine practice and clinical trials.

Further, we showed in our presentation that although formal guidance on ECAs from regulatory and HTA bodies is consistent, final decisions when appraising ECAs may differ. This divergence between regulatory and HTA acceptance reflects their differing requirements for ECAs, so both perspectives should be considered when planning one. Therefore, within sponsor organizations, early planning is key for cross-functional alignment (between HEOR/Market Access and Medical Affairs teams) on ECA study objectives and design, leading to more efficient evidence planning. With regard to external engagements with regulators and payers, the optimal timing is highly contextual, but generally, sponsors should engage with decision-makers through available routes such as early advice programs, early enough to incorporate feedback and adjust their RWD strategy and study design before protocol and SAP finalization.

 

Is early planning necessary for all cases? For instance, if a product is being developed for an indication with a rapidly changing treatment landscape and the appropriate comparators may not yet be known, would these early planning activities still be useful?

Yes, absolutely. During the early feasibility assessments that we discussed in our ISPOR presentation, we should evaluate a range of elements to determine the feasibility of an ECA, ranging from the identification of target populations to the reliable capture of confounders and study endpoints, among other factors. Identifying relevant comparators is only one element of those assessments. Even if comparators change over time, becoming familiar with RWD and current gaps helps inform discussions about the appropriate data strategy and design, which should be flexible enough to accommodate some changes in the treatment landscape. For example, we might now want to know whether treatments are well captured, and elements like the patient count on a relevant comparator may need to be refreshed. It is important to ask questions during the early planning stages that are specific yet broad enough to inform ECA feasibility, even if the research question evolves, particularly concerning the RWD strategy.

You’ve recommended that study sponsors should be prepared to discuss certain topics during early engagement meetings, such as the ECA rationale, data source, early design considerations, and feasibility assessment. In a resource-constrained environment, sponsors may not want to invest so much money in these activities before the very first meeting, only to receive a negative response. What topics should be prioritized for that first engagement with an HTA or regulatory agency vs. subsequent meetings?

This is an important point. Ultimately, the most crucial aspect is to clarify the justification for an ECA and assess whether the agencies are open to considering evidence from an ECA. Working with the right experts who understand agency requirements from prior experience is important. Beyond the justification for an ECA being clear, we see that most critiques of ECAs stem from data issues. So, in my opinion, presenting external data source options and discussing anticipated challenges can facilitate a more productive discussion in those early engagements. If resources are constrained, a more targeted review is sufficient rather than a full-blown data landscaping exercise.

 

During your presentation, you emphasized the importance of identifying fit-for-purpose data. However, in some cases, a sponsor may have to submit to an HTA body in a region where such data is not readily available. For instance, if a detailed data landscaping assessment reveals that most fit-for-purpose data is in the US, but the submission is for a European HTA agency, how can sponsors address this challenge in their submission?

First and foremost, sponsors need to demonstrate to the local agency that they have thoroughly attempted to identify a data source that accurately represents the local (target) population of interest. Local agencies are usually quite understanding if sponsors can show that they made the necessary effort and did not cherry-pick data sources, but instead selected the highest-quality source available for the research question, in a transparent and systematic manner. Even so, some residual external validity bias may concern a local decision-maker. For example, a UK or German payer might worry that evidence submitted from a US data-derived ECA is not generalizable to the target population of the decision problem.

At Cytel, we have been engaged in some very interesting work to understand how we could adjust for this potential external validity bias using transportability methods. These are quantitative methods, similar to those adjusting for confounding, that can be reliably used to extend conclusions from one study population to an external target population. Essentially, if core evidence comes from a US data-derived ECA, transportability methods can be applied to adjust the study findings to the measurable patient characteristics of the target population of interest, accounting for prognostic factors or effect modifiers. We recently published a demonstration project on this topic [1]. Additionally, NICE recently updated its RWE framework [2] to include transportability analysis methods.
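The simplest transportability idea can be sketched as direct standardization over a single effect modifier: take stratum-specific outcome estimates from the source population and reweight them by the stratum mix of the target population. This toy example uses hypothetical survival figures and a hypothetical ECOG mix; practical transportability analyses (such as those in the cited demonstration project) typically use richer covariate sets and weighting models.

```python
# Minimal sketch of transporting an outcome estimate from a source
# population (e.g. a US data-derived ECA) to a target population by
# direct standardization over one effect modifier. All numbers are
# hypothetical.

def standardize(source_means, target_props):
    """Weight stratum-specific outcome means from the source by the
    stratum proportions observed in the target population."""
    assert set(source_means) == set(target_props)
    assert abs(sum(target_props.values()) - 1.0) < 1e-9
    return sum(source_means[s] * target_props[s] for s in source_means)

# Hypothetical 12-month survival by ECOG performance status.
source_means = {"ECOG 0-1": 0.70, "ECOG 2+": 0.40}
source_props = {"ECOG 0-1": 0.80, "ECOG 2+": 0.20}    # US source mix
target_props = {"ECOG 0-1": 0.60, "ECOG 2+": 0.40}    # target (e.g. UK) mix

naive = standardize(source_means, source_props)        # source-weighted estimate
transported = standardize(source_means, target_props)  # target-weighted estimate
print(f"naive source estimate: {naive:.2f}, transported: {transported:.2f}")
```

Here the transported estimate is lower than the naive one because the target population carries a larger share of poorer-prognosis patients, which is exactly the kind of external validity gap a local payer would want addressed.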

Alternatively, sponsors could consider designing a prospective study, though this approach requires much higher costs and extensive timelines. If you’re taking this route, you should design data collection with the ECA in mind, aligning patient selection criteria, endpoint definitions, etc., which is why planning early is important.

Overall, at Cytel we encourage sponsors to approach data selection in a transparent and systematic way, as recommended across all existing ECA formal guidance documents, and leverage available analytical approaches to address potential external validity concerns when using non-local data if additional data collection is not feasible.

Which internal stakeholders should be involved in this process of early planning for ECAs, and what should sponsors consider when partnering externally?

Typically, in sponsor organizations, there are clinical development and medical affairs teams that understand regulatory requirements and processes very well. In addition, there are Market Access and HEOR/RWE teams that know RWD and real-world evidence methods very well. These teams may not always work closely together, but in our presentation, we talked about the importance of bringing them together early on in planning for ECAs to align differing regulatory and payer requirements. When selecting external partners, it's important to work with organizations that have strong methodological and technical expertise. They should also have a thorough understanding of the evolving guidance and acceptance criteria of decision-making agencies and be able to provide strategic guidance on important study design decisions and early stakeholder engagements.

 

Interested in exploring further? Download the slides from the ISPOR HEOR Theatre Session presented by Cytel here.

 

Notes

[1] Ramagopalan SV, Popat S, Gupta A, et al. Transportability of Overall Survival Estimates From US to Canadian Patients With Advanced Non–Small Cell Lung Cancer With Implications for Regulatory and Health Technology Assessment. JAMA Netw Open. 2022;5(11):e2239874. doi:10.1001/jamanetworkopen.2022.39874

[2] https://www.nice.org.uk/corporate/ecd9/resources/nice-realworld-evidence-framework-pdf-1124020816837

The Need for a “Living” Approach to HTAs

Read more »

The Need for Structured Tools to Guide HTA Submissions

Read more »