
Simulating Multiple Endpoints While Including External Historical Data in Adaptive Oncology Trial Designs

Multiple endpoints are now the rule, not the exception

In many contemporary Phase III oncology programs, a single primary endpoint is no longer sufficient. While Overall Survival (OS) remains the gold standard and regulators still view it as the most direct measure of clinical benefit, in practice OS takes time to mature, leading to very long and expensive clinical trials. In metastatic settings with multiple subsequent lines of therapy, the signal can also dilute over time. As a result, sponsors frequently structure confirmatory trials with OS alongside an endpoint that is faster to measure, such as Progression-Free Survival (PFS) and sometimes Overall Response Rate (ORR), incorporated either as dual primary endpoints or within a gatekeeping framework.

Consider, for example, a Phase III trial in non-small cell lung cancer (NSCLC) in which PFS is expected to read out at roughly 18 months, while OS may require 36 months of follow-up. The sponsor hopes PFS will support early regulatory interaction, potentially even forming the basis of accelerated approval, while OS continues to mature for full approval. Accelerated approval may conserve the sponsor's resources, or even bring in new ones, while OS data continue to accrue, since regulators still require the OS evidence for the final claim of success.

Although this seems straightforward, the approach fails to account for all the complexities that may affect that final claim. These endpoints are correlated, mature at different rates, and are influenced by post-progression therapy, imaging frequency, and dropout patterns. Designing such a study requires more than separate power computations for each endpoint; it requires understanding how the endpoints behave together. This is where simulation becomes essential.

 

The statistical reality of correlated endpoints

Endpoints such as ORR, PFS, and OS are not independent random variables. They arise from the same underlying disease process. Patients who achieve early tumor shrinkage (i.e., ORR) often experience delayed progression. But that does not guarantee improved OS. Subsequent therapy, crossover, and differential dropout can attenuate survival differences. Many programs begin by assuming independence when calculating sample size or multiplicity adjustments. Unfortunately, that assumption rarely holds once joint behavior is modeled explicitly.

For example:

  • If ORR and PFS have moderate positive correlation (e.g., driven by response durability), the probability of dual success may be higher than naïve calculations suggest.
  • If OS is weakly correlated with PFS due to heavy post-progression treatment, hierarchical strategies may protect alpha but substantially reduce the probability of demonstrating statistical significance on OS.

Note that statisticians usually evaluate a range of correlation coefficients between endpoints to assess their impact on the trial's overall operating characteristics.
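To make the mechanics concrete, here is a minimal sketch of how correlated PFS and OS times can be generated with a Gaussian copula. The exponential marginals, medians, and correlation value below are illustrative assumptions, not recommendations:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

def correlated_pfs_os(n, median_pfs, median_os, rho):
    """Draw correlated PFS and OS times via a Gaussian copula.

    Exponential marginals and all parameter values are illustrative
    assumptions; rho is the latent normal correlation, not the
    resulting event-time correlation.
    """
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = norm.cdf(z)                       # uniform margins, correlated
    lam_pfs = np.log(2) / median_pfs      # exponential rate from median
    lam_os = np.log(2) / median_os
    pfs = -np.log(1.0 - u[:, 0]) / lam_pfs
    os_ = -np.log(1.0 - u[:, 1]) / lam_os
    os_ = np.maximum(os_, pfs)            # crude consistency: OS >= PFS
    return pfs, os_

pfs, os_ = correlated_pfs_os(n=5000, median_pfs=9.0, median_os=21.0, rho=0.5)
print("realized correlation:", np.corrcoef(pfs, os_)[0, 1])
```

Because the copula correlation is specified on the latent scale, the realized event-time correlation should always be checked against the value being claimed in the design.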

The FDA will typically focus first on control of familywise type I error across endpoints. But during review, questions often shift toward interpretability:

  • How was correlation justified?
  • Were joint distributions modelled based on empirical data?
  • How sensitive are conclusions to deviations in event timing?

Those questions are difficult to answer with closed-form approximations alone.

 

Why closed-form calculations fall short

Closed testing procedures, alpha recycling, and parallel gatekeeping frameworks are well-established tools for multiplicity control. From a theoretical standpoint, they provide strong familywise error control under specified assumptions, but operating characteristics become non-intuitive once endpoints are correlated and events accrue at different rates.

For example, consider a hierarchical testing strategy in which OS is tested first. If OS fails narrowly because the data are immature, PFS may never be formally tested, even if the PFS hazard ratio is clinically meaningful.

Alternatively, reversing the order (i.e., PFS tested first followed by OS) may increase the probability of declaring success on PFS, but now OS significance depends on passing through earlier gates. Power becomes conditional in ways that clinical teams often underestimate.

Simulating such designs allows evaluation of:

  • Probability of joint success (OS and PFS both significant)
  • Probability of partial success (e.g., showing significant PFS while OS is not yet mature)
  • Impact of varying correlation assumptions
  • Sensitivity to delayed event accrual
  • Effect of interim analyses on overall power

This helps clinical teams focus on actual operating characteristics under realistic assumptions instead of theoretical power under ideal ones. In some settings, for example, the probability of winning on both endpoints may drop from 75% to around 50% once realistic correlation structures are introduced.
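As a rough illustration of the order-dependence described above, the following sketch draws correlated OS and PFS test statistics and tabulates win probabilities under each testing order. The drift and correlation values are arbitrary placeholders:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def gatekeeping_oc(mu_os, mu_pfs, rho, alpha=0.025, n_sim=200_000):
    """Estimate win probabilities for a two-endpoint hierarchical test.

    mu_os, mu_pfs are expected z-statistics (drifts) for each endpoint;
    rho is the correlation between the test statistics. All values are
    illustrative placeholders, not recommendations.
    """
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([mu_os, mu_pfs], cov, size=n_sim)
    crit = norm.isf(alpha)                    # one-sided critical value
    os_sig, pfs_sig = z[:, 0] > crit, z[:, 1] > crit
    return {
        "OS-first: pass gate (OS)": os_sig.mean(),
        "PFS-first: pass gate (PFS)": pfs_sig.mean(),
        # joint success is the same under either testing order;
        # the chance of a *partial* win is what changes with the order
        "both endpoints significant": (os_sig & pfs_sig).mean(),
    }

for rho in (0.0, 0.4, 0.8):
    print(f"rho={rho}:", gatekeeping_oc(mu_os=2.6, mu_pfs=3.2, rho=rho))
```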

 

Modeling multiple endpoint outcomes

Traditional simulations often generate each endpoint independently from parametric survival distributions (e.g., using Exponential or Weibull curves). This is convenient, but not always clinically realistic. The FDA will often ask how simulation assumptions were calibrated. “We assumed independence” is not persuasive.

Therefore, modelling patient outcomes with a multistate model may generate more credible data that aligns better with what is actually observed in practice. This is certainly not the only approach, but it is one we encourage using alongside the copula approach, in which correlation coefficients between the endpoints must be specified explicitly.
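For example, in a simple illness-death multistate model (stable, progressed, dead), the PFS-OS correlation emerges from the transition structure itself rather than from a pre-specified coefficient. The constant hazards in this sketch are placeholders:

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_patient(h_prog, h_death, h_death_post):
    """Illness-death model: stable -> progressed -> dead, or stable -> dead.

    Hazards are constant within states (an illustrative simplification).
    PFS is the time of the first transition out of 'stable'; OS is the
    time of death.
    """
    t_prog = rng.exponential(1 / h_prog)        # stable -> progressed
    t_death = rng.exponential(1 / h_death)      # stable -> dead
    if t_death < t_prog:
        return t_death, t_death                 # death before progression
    t_post = rng.exponential(1 / h_death_post)  # progressed -> dead
    return t_prog, t_prog + t_post              # (PFS, OS)

sims = np.array([simulate_patient(0.08, 0.01, 0.05) for _ in range(5000)])
pfs, os_ = sims[:, 0], sims[:, 1]
print("median PFS:", np.median(pfs), "median OS:", np.median(os_))
print("PFS-OS correlation:", np.corrcoef(pfs, os_)[0, 1])
```

Here the endpoint correlation is an output of the disease-process assumptions, which is usually easier to defend to a reviewer than a correlation coefficient asserted directly.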

Leveraging prior internal data, particularly standard-of-care arms from earlier studies, can anchor assumptions about:

  • Correlation between endpoints
  • Event-time distributions
  • Dropout rates
  • Missing data mechanisms

Alternatively, external historical data can be used for the same purpose. However, clinical teams must evaluate whether those data are exchangeable with the setting whose assumptions they will inform, especially if disease management has shifted since the data were collected.

 

Multiplicity control considerations

As previously mentioned, testing multiple primary endpoints requires strict familywise type I error control. Common approaches include:

  • Hierarchical gatekeeping
  • Alpha recycling
  • Closed testing procedures
  • Pre-specified adaptive decision rules

Under strong positive correlation, alpha allocation may be conservative relative to realized joint behavior. Under weak correlation, nominal power calculations may overstate the chance of dual success.

One area that is often overlooked is how interim analyses interact with multiplicity. Early looks based on PFS may alter the distribution of OS information at final analysis, particularly if enrollment slows after interim data are reviewed. That secondary impact is unfortunately rarely captured.

Simulations that account for the multiple-endpoint decision rules can help characterize type I error control and power trade-offs under more realistic execution scenarios.

 

Integrating external and historical data

In oncology, prior data are often available, particularly for standard-of-care arms. Including empirically derived components, such as correlation and dropout rate assumptions, in simulation makes projections more defensible.

Regulatory agencies may still require conservative assumptions, but a simulation framework grounded in observed data allows transparent discussion of where assumptions are aggressive, where they are conservative, and why.

 

A practical perspective

Multiple primary endpoints introduce scientific opportunity and statistical complexity at the same time. The trade-offs that must be accounted for include, among others, overcommitting on sample size, conditional power dependencies across endpoints, sensitivity to correlation structures, event-timing uncertainty, and the impact of interim decisions.

Simulation, when built on joint patient-level modelling and calibrated to empirical data, allows these trade-offs to be evaluated prospectively rather than discovered after a database lock.

In our experience, teams that invest early in this level of simulation and endpoint modelling encounter fewer redesign discussions, particularly once regulatory feedback begins. More importantly, cross-functional stakeholders gain a clearer understanding of what “success” actually means across endpoints.

That clarity is often worth as much as the statistical precision itself.

 

Interested in learning more?

Join J. Kyle Wathen, Valeria Mazzanti, and Julija Saltane for their upcoming webinar “Simulating Multiple Endpoints to Drive Late-Stage Oncology Trials” on Thursday, April 2 at 10 AM ET.

Evaluating Safety and Efficacy in Phase III Alzheimer’s Disease Trial: Endpoints and Statistical Analysis Methods

In clinical trials studying Alzheimer’s disease — a complex neurodegenerative condition that gradually impairs cognitive functions — cognitive performance and functional abilities are often assessed together. Understanding these dimensions and how they’re measured in clinical trials is essential in shaping Cytel’s statistical analyses.

Here, we discuss our experience working with a sponsor on a Phase III clinical trial evaluating the safety and efficacy of monotherapy in patients with Alzheimer’s disease and the statistical model we used to analyze the repeated measurements on two co-primary endpoints.

 

Alzheimer’s disease

Alzheimer’s disease is a complex neurodegenerative condition that gradually impairs cognitive functions. Its onset and progression are influenced by a range of risk factors; some of the most well established include age, gender, family history, genetic predisposition, and underlying health conditions.

The disease unfolds in distinct stages, each reflecting a different level of cognitive and functional decline. These stages range from mild cognitive impairment to severe dementia, with symptoms worsening as the disease advances.

 

Evaluation of Alzheimer’s disease in clinical trials

In clinical trials, the severity of impairment is evaluated using various scales, each addressing distinct aspects of cognitive and functional decline. The most effective approach combines both cognitive and functional assessments, as functional abilities are closely tied to cognitive performance.

Understanding these dimensions and how they’re measured in clinical trials is essential in shaping the statistical analyses used. Multiple discussions between stakeholders and the sponsor need to take place to reach a consensus on the appropriate endpoints and statistical methods to be used for the analyses.

 

Investigating safety and efficacy of monotherapy in patients with Alzheimer’s disease

We recently collaborated with a small biotech company specializing in Alzheimer’s research on a Phase III clinical trial investigating the safety and efficacy of monotherapy in participants with Alzheimer’s disease, followed by a 12-month open-label treatment period. This study has been the subject of complementary analyses exploring biomarkers (p-tau181 and p-tau217) and additional comparative effectiveness analyses with external control arms.

 

Two primary endpoints: ADAS-Cog11 and ADCS-ADL23

To evaluate treatment efficacy in the Phase III trial, we focused on two co-primary endpoints: the ADAS-Cog11 and the ADCS-ADL23, measured at multiple timepoints throughout the study.

 

ADAS-Cog11: The cognitive assessment

The ADAS-Cog11 is a cognitive subscale that assesses key domains such as memory, praxis, orientation, and language. Scores range from 0 to 70, with higher scores indicating greater cognitive impairment. A more refined version of the ADAS-Cog11, known as the ADAS-Cog13, includes two additional items that assess memory and attention. This new version provides additional sensitivity to change in cognition at earlier stages of AD.

For the primary analysis, ADAS-Cog11 was retained as the primary endpoint. This decision was guided by its use in previous studies evaluating the same investigational product, ensuring consistency and comparability across trials. The ADAS-Cog13 was also analyzed as an exploratory efficacy variable to provide deeper insights into cognitive outcomes.

 

ADCS-ADL23: The functional perspective

The ADCS-ADL23 scale complements the ADAS-Cog11 by providing a functional perspective that reflects the impact of cognitive decline. It evaluates the ability to perform daily living activities, with scores ranging from 0 to 78, where higher scores reflect better functional ability and thus less impairment.

 

Cytel’s approach: Analysis with Mixed Models for Repeated Measures (MMRM)

To analyze the repeated measurements on the co-primary endpoints, we employed Mixed Models for Repeated Measures (MMRM). This approach allows the comparison of cognitive and functional changes over time across treatment arms in a robust and flexible way.

In our models, several key risk factors are included to ensure a well-adjusted analysis. These include baseline disease severity, as measured by the Mini-Mental State Examination (MMSE), prior use of standard AD treatments, and geographic region, all as fixed effects. Adjustment for the baseline values of the ADAS-Cog11 or ADCS-ADL23 scores accounts for differences between subjects at baseline; this improves the precision of treatment effect estimates and corrects for any imbalances between treatment groups. We also include the treatment group indicator along with its interactions with visit timing to capture whether and how treatment effects evolve over time.
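As a rough sketch of this model structure, the snippet below fits a random-intercept mixed model with Python's statsmodels. Note that a production MMRM typically uses an unstructured covariance (e.g., in SAS PROC MIXED or R's mmrm package) rather than a random intercept, and all file and column names here are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per subject-visit; hypothetical columns: subject_id,
# adas_cog11, baseline_adas, mmse_baseline, prior_ad_tx, region,
# treatment, visit. Load from your own analysis dataset.
df = pd.read_csv("adas_cog_long.csv")

# Random-intercept mixed model approximating the MMRM structure:
# fixed effects for baseline score, MMSE severity, prior treatment,
# and region, plus the treatment-by-visit interaction of interest.
model = smf.mixedlm(
    "adas_cog11 ~ baseline_adas + mmse_baseline + prior_ad_tx "
    "+ region + treatment * C(visit)",
    data=df,
    groups=df["subject_id"],
)
result = model.fit(reml=True)
print(result.summary())
```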

This method is particularly valuable for multiple reasons. First, it controls for variables that could influence the observed outcomes — like known risk factors — so the treatment effect can be estimated more accurately. Additionally, by using mixed-effects models, both between- and within-subject variability over time are accounted for, which is especially important in a heterogeneous condition like Alzheimer’s. Finally, one of the key strengths of MMRM is its ability to handle incomplete data, meaning it can account for missing values without requiring imputation.

The MMRM method supports the generation of individual and group profile graphs over time. These visualizations offer a clear and intuitive way to observe the evolution of treatment effect. They make it easier to compare trends across groups or subjects, and communicate findings in a straightforward manner, both to scientific audiences and to stakeholders who may not be familiar with the statistical details.

 

Final takeaways

Alzheimer’s disease is the most prevalent neurodegenerative disease and remains one of the most complex challenges in clinical research, requiring robust methodologies to capture both cognitive and functional decline over time. Complementary and adapted clinical scales are essential tools for assessing disease progression, and advanced statistical methods offer a robust and flexible interpretation of the treatment effect.

By leveraging adaptive models, mixed-effects approaches, and sensitivity analyses, we help sponsors generate reliable insights that drive decision-making in neurodegenerative drug research.

Launch Communication: Addressing HCPs Effectively and Ensuring Product Success

The year 2025 promises to be an exciting one for pharmaceutical innovation, with an array of new therapies set to reshape the treatment landscape. From oncology to rare diseases, the industry is preparing to deliver innovative solutions across a diverse range of indications. In oncology, Summit Therapeutics’ ivonescimab will target non-small cell lung cancer and later breast cancer, while Daiichi Sankyo’s Enhertu expands its label to treat additional tumor types. In immunology, Johnson & Johnson’s Tremfya seeks approvals for Crohn’s disease and ulcerative colitis, with Amgen’s Uplizna pursuing indications for myasthenia gravis. Novo Nordisk’s CagriSema could redefine obesity treatment with potential weight loss exceeding 20%, while innovations in rare diseases include Elamipretide for Barth syndrome and Upstaza for AADC deficiency, addressing critical unmet needs. While all these projected launches represent significant scientific innovations, the road to successful commercialization in Germany requires more than scientific breakthroughs — it demands a strategic and tailored communication plan.

One of the most critical challenges in the German market is launch timing. Regulatory approvals are often uncertain, leaving companies working in a state of “near readiness.” Materials must be prepared to an advanced stage but remain flexible enough to adapt to last-minute changes. Furthermore, with the EU HTA reforms applicable since January 12, 2025, pharmaceutical companies are facing new dynamics in the evaluation of innovative therapies; the reforms aim to streamline market access and improve patient outcomes.

Here, we’ll explore how to effectively prepare communication materials for a launch, breaking down the key phases.

 

Phase I – Pre-Launch

Step 1: Crafting your story and defining the USP

Each successful pharmaceutical launch starts with a clear story and a well-defined unique selling proposition (USP). With clinical trial data already available at this stage, your team can craft a compelling narrative that resonates with both healthcare professionals (HCPs) and patients.

Defining the USP
The USP should highlight the product’s most valuable features. This is not necessarily an efficacy-related feature. For instance, simplicity in application can be a game-changer. Imagine transitioning from a treatment requiring three daily doses — often missed due to the complexity of adherence — to a single daily dose. This change not only improves adherence but also may enhance therapeutic outcomes and patient satisfaction.

Storytelling: Making your message stick
People remember stories far better than plain facts, a phenomenon known as story bias. Structuring your information as a narrative helps HCPs quickly grasp and retain the key message. Emotional appeal plays a significant role in this, making the message more memorable when tied to an impactful story.

The key elements of an effective story include:

  1. Starting point: The current situation or challenge (e.g., low adherence with multi-dose regimens).
  2. Conflict or problem: The pain points or unmet needs.
  3. Tension: The stakes involved, creating a sense of urgency.
  4. Solution: How the new product resolves these issues (e.g., simpler dosing leading to better outcomes).

Aligning story and message
While the story provides a narrative, your core message delivers the USP explicitly. It’s crucial to distill this into one clear, written statement that connects seamlessly with the story. For example:

  • Message: A once-daily treatment improves adherence and satisfaction.
  • Story: From the struggles of managing multiple daily doses to the simplicity and success of once-daily therapy.

By aligning the message with the story, you ensure that HCPs not only understand but also remember your product’s unique benefits.

 

Step 2: Building a robust publication strategy

A well-planned publication strategy is essential for maintaining momentum and ensuring that your product remains top-of-mind for HCPs over time. This phase requires strategic foresight, long-term planning, and alignment with your product’s unique story and USP.

Phases of a publication strategy
To create a cohesive and impactful presence, publications should be carefully timed to align with the product lifecycle:

  1. Pivotal study results: Highlight key clinical data that underpin your product’s value.
  2. Launch period: Release data and materials that support your key messages during the launch.
  3. Congress presentations: Leverage scientific platforms to amplify your findings.
  4. Case studies: Showcase how your product is used in the everyday work of HCPs and which patients it is intended for.
  5. Review articles: Provide comprehensive overviews of the treatment landscape and background for your product.
  6. Non-interventional studies: Showcase real-world data to support effectiveness, safety and treatment adherence.

Long-term planning: Staying in focus
It’s vital to map out your publication strategy well in advance, especially before launch. The goal is to keep your product in the spotlight by regularly contributing to relevant discussions in the medical community. Some activities, such as planning a non-interventional study, require significant lead time and early involvement of HCPs in study design.

Setting the right topics
Your topics should align with your product’s story and USP. For example:

  • Mechanism of action: Highlight novel mechanisms that set your product apart.
  • Efficacy: Clinically relevant subgroup analyses or responder analyses can highlight efficacy outcomes further.
  • Safety and tolerability: An important topic for clinical practice. HCPs should know what to expect and how to manage it.
  • Quality of life: Shift the conversation towards patient-centric outcomes.

A strong thematic focus helps position your product within the broader medical narrative. For instance, if the USP emphasizes improved quality of life rather than a survival advantage, ensure that this topic is prominently discussed in key publications, congress presentations, and continuing medical education (CME) programs.

 

Phase II – Launch

Step 1: Making the pivotal study known

The results from the pivotal study form the backbone of any pharmaceutical launch. They provide the data that underscores your product’s value, making it essential to disseminate the findings effectively and strategically to the medical community. Alongside this, preparing impactful launch materials ensures your sales and medical teams are equipped to engage with HCPs confidently and consistently.

Your pivotal study data needs to reach the right audience through the right channels:

  • KOL presentations: Engage trusted key opinion leaders (KOLs) to present findings at relevant conferences and symposiums, lending credibility and reach to your data.
  • Reprints and special issues: Distribute reprints of the study in medical journals or as targeted mailings.
  • Secondary publications: Collaborate with thought leaders to craft secondary articles to set the right topics (see above).

 

Step 2: Prepare essential materials

These materials act as a starting point for your field force, giving them something tangible to distribute and discuss during their initial interactions with HCPs. When creating materials, prioritize clarity and relevance.

  1. Quick-access materials
    • One-pagers: Compact, easy-to-read documents summarizing key product benefits and clinical data. These are ideal for quick reference and can be prepared early in the launch process.
    • Handout cards: Provide practical guidance, such as managing side effects or linking to a landing page via QR codes.
  2. Core materials
    • Detail aids: Ensure these materials support the product’s story and facilitate a natural conversational flow. Include elements like case studies to make the narrative relatable and impactful.
    • Slide decks: Develop a comprehensive slide kit that serves as the foundation for all other materials, from sales presentations to training content.
    • Conversation guides/objection handlers: Provide structured guidance for handling objections and addressing key questions effectively.

By focusing on these priorities, your materials will resonate effectively with HCPs, reinforcing your product’s value and fostering trust.

 

Step 3: Preparing the sales force

A well-prepared sales force is critical for effectively communicating a product’s value to HCPs. Training should not only provide in-depth knowledge of the product but also focus on presenting the benefits in a way that resonates with the physician’s daily practice and patient care priorities.

Comprehensive training: Thinking like an HCP
When training your sales team, it’s essential to adopt the HCP’s perspective. What matters most to the physician? Tailor the messaging to address the specific needs and interests of their role, practice, and patients.

  • Key questions for training:
    • What challenges does this product address for HCPs?
    • How does it make their workflow for patient treatment smoother?
    • What benefits can it provide to patients under their care?

To refine these messages and ensure relevance, consider engaging an Advisory Board of HCPs to provide insights during the preparation phase.

Translating product benefits into practical advantages
Effective communication bridges the gap between product features and the everyday concerns of HCPs and patients. Consider the following perspectives:

  • For the physician: How does the product improve treatment efficacy, patient outcomes, or time efficiency?
  • For practice staff: Does it simplify workflows or improve patient management?
  • For patients: What’s the tangible impact on their quality of life, adherence, or treatment experience?

For example, if the product offers once-daily dosing, the message for physicians could emphasize improved adherence and better clinical outcomes, while for patients, it could highlight ease of use and reduced daily burden.

Addressing side effects: A crucial focus
While side effects may not be the most engaging topic from a marketing perspective, they are highly relevant to HCPs. Addressing this aspect thoughtfully can establish trust and confidence:

  • Side effect management: Provide clear, actionable guidance on identifying and managing common side effects.
  • Adherence strategies: Equip HCPs with tools to counsel patients effectively, helping them stay on treatment despite potential challenges.

By emphasizing practical solutions to these concerns, your sales team can engage in meaningful, trust-building conversations with HCPs.

 

Phase III – Post-Launch

The launch may mark the beginning of your product’s presence in the market, but the post-launch phase is where sustained engagement solidifies success. This stage is about expanding your material offerings, deepening HCP and patient interactions, and leveraging diverse communication channels to maximize impact.

Step 1: Expanding materials and strengthening communication channels

As the product becomes established, adding resources tailored to both HCPs and patients ensures continued interest and adoption:

  • Brochures and patient materials: Create informative materials that HCPs can hand directly to patients, addressing key concerns and enhancing their understanding of the treatment.
  • Case studies: Develop clinical case studies to showcase real-world application and outcomes, helping HCPs connect evidence to practice.
  • Interactive study content: Transform pivotal study data into interactive formats, such as e-learning modules, to engage users more effectively.
  • Digital content: Enhance digital engagement with webcasts, podcasts, web content, and CME programs.

 

Step 2: Leveraging multiple communication channels

Different HCPs prefer different modes of communication. To make your message stick, it’s crucial to diversify your channels and formats:

  • Written content: Journals, brochures, and patient handouts for in-depth reading.
  • Audio content: Podcasts and narrated case studies for convenience and accessibility.
  • Visual content: Infographics, videos, and interactive slide decks to illustrate key points vividly.
  • Interactive engagements: Webinars, webcasts, and live Q&A sessions to foster real-time interaction and dialogue.

By continuously expanding resources and strategically using diverse communication formats, your post-launch strategy can maintain momentum, deepen engagement, and ensure long-term success.

 

Final takeaways: Strategic communication for a successful launch

Launching a pharmaceutical product requires meticulous planning, strategic storytelling, and continuous engagement. From pre-launch preparation to post-launch expansion, every phase is an opportunity to address the needs of healthcare professionals, establish your product’s value, and create lasting impact.

By aligning your messaging with the unique demands of the German market, utilizing multi-channel communication strategies, and addressing the priorities of both HCPs and patients, you can ensure a successful launch and a strong market presence.

 

Interested in learning more? Watch our recent webinar “Guide to Successful G-BA Consultations: Practical Tips for Market Access Professionals”.

Vaccine Efficacy Trials: Design Considerations and Simulation Tools

Vaccine efficacy (VE) trials play a critical role in assessing how well vaccines prevent infection or disease. These Phase 3 trials measure VE as the proportionate reduction in infection rates between vaccinated and unvaccinated groups. For decades, VE trials have been instrumental in the development of safe, life-saving vaccines, forming the cornerstone of public health policies. Their importance grew exponentially during the race to develop COVID-19 vaccines.

Designing robust VE trials is essential to generating reliable, actionable results. This is where tools like East Horizon™ – Explore can make a significant impact by empowering researchers to design, simulate, and analyze these types of trials effectively.

 

Commonly used metrics for vaccine efficacy trials

Commonly used metrics to evaluate outcomes in vaccine efficacy trials include risk ratios, hazard ratios, and odds ratios:

  • Risk ratios: The ratio of the risk of an event in one group to the risk of the same event in another group.
  • Hazard ratios: The ratio of the instantaneous event rates in the two groups, estimated from time-to-event data.
  • Odds ratios: The ratio of the odds of an outcome given a particular exposure to the odds of that outcome in the absence of the exposure.

These metrics enable researchers to evaluate outcomes with precision.
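As a back-of-the-envelope illustration of the risk-ratio and odds-ratio calculations, with VE expressed as 1 − risk ratio, consider the sketch below; the counts are invented for illustration only:

```python
def vaccine_efficacy_metrics(cases_vax, n_vax, cases_pbo, n_pbo):
    """Risk ratio, odds ratio, and VE = 1 - RR from 2x2 trial counts."""
    risk_vax = cases_vax / n_vax
    risk_pbo = cases_pbo / n_pbo
    rr = risk_vax / risk_pbo
    odds_vax = cases_vax / (n_vax - cases_vax)
    odds_pbo = cases_pbo / (n_pbo - cases_pbo)
    return {"risk_ratio": rr, "odds_ratio": odds_vax / odds_pbo, "VE": 1 - rr}

# Illustrative counts only (not from any real trial)
print(vaccine_efficacy_metrics(cases_vax=8, n_vax=15_000,
                               cases_pbo=80, n_pbo=15_000))
```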

 

Special considerations for vaccine efficacy trials

VE trials have a unique set of characteristics that set them apart from other late-phase clinical trials: a large number of study participants, often in the tens of thousands; specific follow-up and event requirements; unique testing rules; and super-superiority thresholds, which set a higher standard of efficacy because these trials target a relatively small expected number of events. These features address the ethical, logistical, and public health demands of evaluating vaccines in healthy populations. Many of these aspects are shaped by regulatory guidelines (e.g., FDA, EMA) and global health priorities.

Some examples of these unique aspects include:

 

  • Fixed follow-up times: Standardized observation periods that ensure consistency in data collection and improve the reliability of trial results.
  • Targeted event counts & stopping boundaries: Setting target case numbers and stopping boundaries enhances trial efficiency by focusing resources on meaningful outcomes.
  • Unique testing methods for measuring vaccine efficacy
    • 1 – Ratio of proportions: This approach compares infection rates between vaccinated and unvaccinated groups to estimate VE.
    • 1 – Ratio of Poisson rates: Designed for time-to-event data, this method accommodates varying follow-up times among participants.
  • Super-superiority testing: Evaluate cases where vaccine efficacy significantly exceeds standard expectations (see the sketch after this list).
  • Futility boundaries: Facilitate early termination of trials if interim results indicate the vaccine is unlikely to meet efficacy goals.
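To illustrate the last two testing ideas above, the sketch below combines the Poisson-rate formulation with a super-superiority threshold using the standard conditional binomial device; all counts, person-times, and the 30% threshold are assumptions, and this is not East Horizon's implementation:

```python
from scipy.stats import binomtest

def ve_super_superiority_test(cases_vax, cases_pbo, pt_vax, pt_pbo, ve0=0.30):
    """One-sided conditional test of H0: VE <= ve0.

    With Poisson case counts, conditional on the total number of cases,
    the vaccinated count is Binomial(total, pi), where pi depends on the
    rate ratio and the person-time split. ve0 is the super-superiority
    threshold (e.g., 30%), to be set per protocol.
    """
    rr0 = 1 - ve0                                  # rate ratio under H0
    pi0 = rr0 * pt_vax / (rr0 * pt_vax + pt_pbo)
    total = cases_vax + cases_pbo
    test = binomtest(cases_vax, total, pi0, alternative="less")
    return test.pvalue

p = ve_super_superiority_test(cases_vax=8, cases_pbo=80,
                              pt_vax=2.1e4, pt_pbo=2.1e4)
print(f"one-sided p-value vs VE <= 30%: {p:.2e}")
```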

 

Generating and analyzing VE data using advanced simulation software

East Horizon – Explore enables precise trial design by simulating binary endpoints and time-to-event data, and it offers a powerful tool for analysis and data visualization. The solution allows users to model randomization, enrollment schedules, and infection incidence, replicating realistic trial scenarios.

Binary and time-to-event endpoints allow biostatisticians to model infection risks and represent participants who avoid infection during the trial period. Additionally, East Horizon – Explore supports effect measures and hypothesis testing with ease. Users can apply either 1 – ratio of proportions or 1 – ratio of Poisson rates as straightforward, industry-standard formulas.

Advanced continuity-correction options allow users to address potential type I error inflation at larger event rates, while R integration enables advanced customization through custom code.

 

Financial analysis: Beyond basic efficacy testing

East Horizon – Explore goes beyond traditional VE analysis by incorporating optional financial and operational modeling. Users may incorporate revenue and cost modeling alongside traditional efficacy testing. For example, users can include variables related to potential market share and associated costs, based on expected treatment thresholds, to generate an expected Net Present Value (eNPV) forecast. This enables strategic decision-making with detailed financial forecasts tailored to vaccine development, which is especially sensitive to cost and market access pressures.
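The eNPV idea itself can be sketched in a few lines; this toy calculation is not East Horizon's internal model, and every input is a hypothetical planning assumption:

```python
def expected_npv(p_success, annual_revenue, years, discount,
                 dev_cost, launch_cost):
    """Toy eNPV: discounted revenue stream weighted by success probability.

    All inputs are hypothetical planning assumptions; a real model would
    layer in market-share ramps, exclusivity windows, and cost curves.
    """
    pv_revenue = sum(annual_revenue / (1 + discount) ** t
                     for t in range(1, years + 1))
    return p_success * (pv_revenue - launch_cost) - dev_cost

print(expected_npv(p_success=0.6, annual_revenue=300e6, years=8,
                   discount=0.10, dev_cost=150e6, launch_cost=80e6))
```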

Tailored financial forecasts are particularly important for vaccines because they differ from other clinical products in key ways. Unlike therapeutic drugs, vaccines often require substantial upfront investment for large-scale manufacturing, face shorter market exclusivity periods, and must balance affordability with global accessibility. These unique challenges demand a specialized approach to financial modeling that ensures both economic viability and alignment with public health priorities.

 

Final takeaways

Vaccine efficacy trials are a critical component of public health. A unique set of characteristics, however, sets them apart from other late-phase trials, requiring special consideration. Advanced simulation software, like East Horizon – Explore, can help sponsors optimize trial designs and gain deeper insights into trial outcomes.

 

Interested in learning more?

East Horizon – Explore offers a comprehensive platform tailored for a variety of designs, including VE trials. The platform empowers researchers with flexible design capabilities, rigorous statistical methods, and decision-support tools. From robust VE analysis to financial modeling, Explore facilitates data-driven decisions that advance vaccine research and enhance public health outcomes.

Understanding the Critical Role of DMCs in Oncology Studies

In clinical research, particularly within oncology, Data Monitoring Committees (DMCs) play a pivotal role in ensuring the integrity and safety of clinical trials. With the high volume of oncology studies and the extensive use of DMCs in these trials, it is essential to understand the specific nuances and challenges these committees face. Here, I provide an overview of the critical aspects of DMCs in oncology studies.


Oncology Clinical Trials: Design Trends in Biomarker Research

Oncology research has seen many changes and advances in recent decades, from new therapies in combination with backbone chemotherapy to novel treatments targeting malignancies, and compounds targeting specific disease biomarkers at the genetic mutation level. The latter approach has called into question large, relatively long clinical studies assessing the safety and efficacy of treatments in a broad population defined at the tumor level. Instead, research at the subpopulation or biomarker level has garnered much more interest as targeted treatments are developed.

This focus on subpopulations and biomarkers is changing how researchers approach clinical trials in oncology and helps resolve several issues with larger clinical trials. For example, treatment effects may be diluted in a heterogeneous population, possibly resulting in an underpowered study. Furthermore, a large trial in a heterogeneous population may place patients for whom the drug is ineffective at risk of serious adverse events. On the other hand, restricting enrollment to a target subgroup without sufficient evidence may deny a large segment of the patient population access to a potentially beneficial treatment. This blog post will briefly introduce two statistical approaches addressing the rise of more specific study populations: predefined subpopulation statistical analysis in the context of a larger trial population and population enrichment of the more promising subgroup within an ongoing study. 

Subpopulation Analysis 

Subpopulation testing and analysis is a phase III clinical trial design strategy in which a subset of the study population is selected based on patient characteristics that may be more likely to respond to the treatment under investigation. Identifying and analyzing specific subpopulations allows the researcher to explore whether a treatment leads to different effects in a pre-designated subpopulation. A subpopulation can be defined by any stratification characteristic such as gender or geography, and in oncology clinical trials, specific biomarkers identified within a study population. 

This type of approach addresses several significant issues in oncology studies:

  • A large trial in a heterogeneous population may place patients for whom the drug is ineffective at risk of serious adverse events.
  • In a heterogeneous population, the treatment effect may be diluted, possibly resulting in an underpowered study.
  • Restricting enrollment to the targeted subgroup without sufficient statistical evidence of lack of efficacy in the non-targeted subgroup may eliminate beneficial treatment options for patients.
  • Subpopulation analysis also enables treatment recommendations based on individual characteristics.

As with any novel adaptive design approach, subpopulation analysis requires several considerations at the design stage. These include the specific definition of the subpopulations to be analyzed, the appropriate timing of an interim analysis, the methods used for hypothesis testing and type I error preservation, and the sequence in which the subpopulation and full-population hypotheses are tested.

With these considerations in mind, rigorous planning and testing at the design stage of such a clinical trial is critical. Cytel’s East Horizon adaptive clinical trial design software offers a unique solution for planning and testing a clinical trial design that includes subpopulation analysis. In Cytel’s solution, hypothesis testing for the full population and subpopulations can be performed using graphical multiple comparison procedures (gMCP) with a weighted Bonferroni procedure employed for closed testing. This method uses directed, weighted graphs in which each node corresponds to a single hypothesis. A transition matrix complements the graph by specifying the weights and generating an intuitive diagram. Finally, a simple algorithm sequentially tests the individual hypotheses using the specified weights and hierarchies.
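A bare-bones version of that sequential algorithm, following the Bonferroni-based graphical approach of Bretz et al. (2009), might look like the sketch below. The two-hypothesis example at the end (full population and biomarker subpopulation, each passing its alpha to the other on rejection) is invented for illustration:

```python
import numpy as np

def graphical_test(pvals, weights, transition, alpha=0.025):
    """Bonferroni-based graphical multiple testing (Bretz et al., 2009).

    weights: initial alpha split across hypotheses (summing to <= 1);
    transition[i][j]: fraction of H_i's alpha passed to H_j on rejection.
    Returns the set of rejected hypothesis indices.
    """
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    g = np.asarray(transition, dtype=float)
    active, rejected = set(range(len(p))), set()
    while True:
        candidates = [i for i in active if p[i] <= w[i] * alpha]
        if not candidates:
            return rejected
        j = candidates[0]            # rejection order does not change the result
        active.discard(j)
        rejected.add(j)
        w_new, g_new = w.copy(), g.copy()
        for i in active:
            w_new[i] = w[i] + w[j] * g[j, i]   # recycle H_j's alpha
            for k in active:
                if i == k:
                    continue
                denom = 1.0 - g[i, j] * g[j, i]
                g_new[i, k] = ((g[i, k] + g[i, j] * g[j, k]) / denom
                               if denom > 0 else 0.0)
        w, g = w_new, g_new

# H0 = full population, H1 = biomarker subpopulation, equal alpha split
print(graphical_test(pvals=[0.012, 0.030],
                     weights=[0.5, 0.5],
                     transition=[[0.0, 1.0], [1.0, 0.0]]))
```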

 

Population Enrichment 

Population Enrichment is an adaptive clinical trial approach that includes the prospective use of any patient characteristic to obtain a study population in which detection of a treatment effect is more likely than in the unselected population. There are two types of population enrichment: Prognostic Enrichment, in which a high-risk patient population is identified based on a biomarker, and Predictive Enrichment, in which the researchers identify a patient group more likely to respond to treatment. Industry trends that have contributed to the popularization of this adaptive design method include the soaring costs of clinical trial execution, a move away from a “one-size-fits-all” approach to clinical development, and the rising interest in individualized medicine. This adaptive design approach has several benefits, including the identification of highly responsive patient populations, the efficient detection of a treatment effect in a smaller sample size, and the ability to identify beneficial treatments for a subgroup of patients when a broader population may have failed in a more traditional study design.

Population enrichment can be seen as an extension of the sample size re-estimation (SSR) methodology, which we discussed in more depth in a previous blog post. 

In the enrichment adaptive approach, a pre-specified number of subjects comprising the entire population, designated as cohort 1, is tested at an interim analysis, and a data monitoring committee reviews the results to assess efficacy or futility against predetermined thresholds. If the analysis shows promising results for only a specific subpopulation of interest, that population is “enriched”: the remaining subjects of the study, designated as cohort 2, are enrolled from that subgroup only, enhancing data collection for the subgroup of interest and increasing the overall probability of success of the study. As with any adaptive approach, this method has specific considerations, including closed testing with a p-value combination, type I error preservation, and additional special considerations in event-driven trials, as most oncology trials are.
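To isolate the p-value combination step (in a real enrichment design it sits inside a closed testing procedure), here is a hedged sketch of the inverse-normal combination with pre-specified weights:

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5):
    """Combine stage-wise p-values with pre-specified weights.

    p1: stage-1 (cohort 1) p-value for the selected subpopulation;
    p2: stage-2 (cohort 2) p-value from the enriched cohort.
    Weights (often set by planned information fractions) must be
    fixed before the trial starts.
    """
    w2 = 1.0 - w1
    z = np.sqrt(w1) * norm.isf(p1) + np.sqrt(w2) * norm.isf(p2)
    return norm.sf(z)  # combined one-sided p-value

# Illustrative: borderline stage-1 evidence strengthened after enrichment
print(inverse_normal_combination(p1=0.08, p2=0.01, w1=0.4))
```

Because the weights are fixed in advance, the combination test preserves the type I error even though the stage-2 population was chosen using interim data.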

 

Final Takeaways 

Both subpopulation analysis and population enrichment are adaptive approaches to modern trial designs in oncology that offer great hope for researchers and patients alike. As the focus on specific patient populations narrows, these adaptive design types are gaining industry traction. Software-guided clinical trial design and simulation using tools such as East Horizon ensure adaptive elements are incorporated thoughtfully and are rigorously tested prior to trial launch. 

Learn more about these approaches in our upcoming webinar “Oncology Clinical Trials: Design Trends in Biomarker-Driven Research” with Boaz Adler and Valeria Mazzanti.

Watch out, the FDA Rejection Criteria are Now in Place

In this blog, I share some experiences we recently had during an FDA submission Cytel performed for a sponsor after September 15, 2021. What is special about this date?


Late-Stage Clinical Development Strategy: Trade-Offs and Decision-Making in the Confirmatory Setting

Despite accumulating learnings from early phases, several uncertainties remain to be addressed when designing pivotal trials. Adaptive trials can help mitigate uncertainties; however, the trade-offs and their impact differ in the confirmatory setting. Quantifying uncertainties and risks and planning for mitigating adaptations are necessary to maximize the chances of success while maintaining the required scientific rigor of pivotal trials. Quantitative strategies can help inform decisions and optimize choices.

2015 Highlights – Seamless Adaptive Clinical Trials: Now that we get the statistics, what’s really at stake?

Day three of our 2015 Highlight series. Our third most popular blog post is a look at the realities of implementing a seamless trial.

Seamless adaptive clinical trials have gained popularity for reducing the projected time it takes to complete the drug development process. However, a study by Cuffe et al. shows that despite a tremendous amount of statistical knowledge about seamless trials, sponsors remain unsure how to calculate the financial and operational costs of a seamless clinical development program [1]. This in turn results in many unnecessary risks and missed opportunities.

This post offers advice on what you need to keep in mind in order to implement a successful seamless adaptive clinical study.

What is a Seamless Trial?

The idea behind a seamless trial, also called a combined-phase study, is simple: instead of conducting several separate phases, plan one adaptive trial in which the phases are separated by interim looks. This tends to save time and reduce the number of patients.

For example, the seamless Phase 2/3 ADVENT trial took a clinical program whose traditional design would have been four Phase 2 studies and one large Phase 3 study and combined it into a single four-arm trial, which dropped the two less successful arms after the interim look [1]. The data obtained from the successful active arm prior to the interim look played the dual role of confirming safety and thereafter establishing efficacy. Employing a seamless adaptive late-phase trial reduced the sample size from 520 to 350 [2].

In early phase development, seamless proof-of-concept and dose-finding trials have also become more popular. Their benefit lies in the fact that combining these two trials into one larger trial allows a significant reduction in trial time. Cytel consultants recently reduced expected trial time by 9–12 months (with 100 fewer patients) by employing such a design.

Varieties of Seamless Trials:

Most seamless trials fall into two broad categories. An inferentially seamless trial is one where some of the data used prior to the interim look also plays an inferential role after the interim look. Consider once again the combined Phase 2/3 ADVENT trial. The four-arm trial prior to the interim look was meant to establish safety. This means that the data from the arm chosen to continue to the confirmatory part of the trial played two roles: prior to the interim look it helped establish safety; after the interim look the same data, combined with the data collected post-look, also helped to establish efficacy. As a result, the two-look trial was inferentially seamless.

An operationally seamless trial, by contrast, is one where the data evaluated after the interim look is kept distinct from the data evaluated prior to the interim look. Each set of data has its distinctive purpose.

The Purported Inflexibility of Seamless Studies:

The flexibility of an adaptive design is often touted as one of its greatest advantages. Based on data collected at an interim look, DMCs can decide how to move forward in a manner which gives the new drug, device or biologic the best possible chance to prove its safety or efficacy.

Seamless studies are beneficial for reducing sample size and increasing the speed of the trial. Unlike other adaptive designs, however, seamless studies may be somewhat less flexible. This is particularly true for inferentially seamless trials. Cuffe et al. cite two reasons for this:

  1. In traditional trial designs, clinical trial sponsors are able to look at the entire set of data collected after a given phase and make key decisions about the designs of the phases that follow. By contrast, unless seamless adaptive studies allow for data to be unblinded at an interim look, DMC members must rely on go/no-go decision rules for the second stage of the trial. These rules must be determined even before the first stage of the trial begins, which means it may not be possible to take advantage of all of the information the first stage can provide.
  2. In inferentially seamless trials, the final analysis combines a part of the data that was collected prior to the interim look with data collected post-interim look. In order to use this data, certain constraints must be placed on the entire trial design. As Cuffe et al., explain, “[I]t is worth noting that a seamless study allows only limited changes to the Phase III portion: substantial changes to study conduct can mean that the two portions answer different clinical questions.” [1]

The fact is that in a combined phase study, the basic structure of the post-look portion of the trial has to be determined prior to the interim look. Unlike in a traditional clinical program, a combined Phase 2/3 trial may not allow sponsors to look at all of the unblinded interim data to take advantage of new information which could affect design decisions.

Overcoming the Inflexibility of a Seamless Study: 

Although there is no doubt that seamless trials place certain restraints on late phase trials, sponsors have several reasons to employ a seamless design.

  • Early Phase Advantages: Cuffe et al. find that in practice, many of the restrictions cited above apply only to late phase studies. Their findings reveal that an early phase seamless adaptive study ‘incorporated multiple adaptations and took advantage of safety data from a dose-escalation study to increase the range of doses in the second portion…’ [1]
  • Late Phase Advantages: Although late phase seamless studies might not allow for as much flexibility as other adaptive trials, they have the potential to reduce sample size rather dramatically. In the Phase 2/3 ADVENT trial designed by Cytel, the use of a seamless design reduced the sample size from 520 to 350. Securing a significantly smaller trial made the inflexibility worth it for the sponsors of the ADVENT trial.

 

Notes:

[1]: Cuffe, Robert L., et al. “When is a seamless study desirable? Case studies from different pharmaceutical sponsors.” Pharmaceutical Statistics 13.4 (2014): 229–237.

[2] Operationally Seamless & Inferentially Seamless Adaptive Designs