
FDA’s Bayesian Guidance: Strategic Considerations for Sponsors

The FDA’s January 2026 draft guidance, “Use of Bayesian Methodology in Clinical Trials of Drug and Biological Products,” clarifies how the Agency expects sponsors to justify Bayesian approaches, especially when an informative prior borrows external information to support primary inference. As a draft guidance, it is nonbinding and not for implementation.

This blog highlights strategic considerations that should inform development planning, protocol/SAP design, and FDA engagement.

 

Type I error control is not the only path

The guidance notes that calibrating Bayesian success criteria to a Type I error rate “may not be applicable or appropriate” when borrowing external information. In those settings, sponsors may instead define success using posterior probability criteria (e.g., Pr(d > a | data) > c: the posterior probability that the treatment effect d exceeds a threshold a is greater than some pre-specified value c) and, where appropriate, benefit-risk or decision-theoretic frameworks.

At the same time, the draft guidance also recognizes that Bayesian methods are often used within an overall frequentist framework (e.g., to facilitate complex adaptive designs), where Type I error calibration can remain appropriate. Regardless of the framework, success criteria should be pre-specified and justified.
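As a minimal illustration of such a posterior probability criterion, here is a sketch with a conjugate Beta-binomial model and hypothetical numbers (the prior, sample size, threshold, and cutoff below are assumptions for illustration, not values from the guidance):

```python
import numpy as np
from scipy import stats

# Hypothetical setup: a response-rate endpoint with a weakly informative
# Beta(2, 8) prior (prior mean 0.20), 40 patients, 16 responders.
a_prior, b_prior = 2.0, 8.0
n, x = 40, 16

# Conjugate update: Beta prior + binomial likelihood -> Beta posterior.
a_post = a_prior + x
b_post = b_prior + (n - x)

# Pre-specified success criterion: Pr(response rate > 0.20 | data) > 0.975.
threshold, c = 0.20, 0.975
prob_exceeds = stats.beta.sf(threshold, a_post, b_post)  # posterior tail probability

success = prob_exceeds > c
print(f"Pr(p > {threshold}) = {prob_exceeds:.3f}; success: {success}")
```

The same criterion generalizes to informative priors built from external data; only the prior parameters change, which is exactly why their justification becomes central.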

 

Strategic implication:

When the FDA and sponsor agree that a design does not need to be calibrated to the Type I error rate (often discussed in pediatrics and rare diseases), the draft guidance describes alternative operating characteristics such as Bayesian power (probability of success averaged over a prior) and the probability of a correct decision (akin to positive predictive value). That flexibility increases the premium on a well-justified analysis prior, credible simulations, and early FDA alignment.
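To make “Bayesian power” concrete, here is a minimal Monte Carlo sketch: the probability of meeting a posterior success criterion, averaged over a design prior on the true response rate. All numbers are hypothetical assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2026)

# Hypothetical single-arm binary endpoint: flat Beta(1, 1) analysis prior,
# n = 40, success if Pr(p > 0.20 | data) > 0.95.
n, threshold, c = 40, 0.20, 0.95

def bayesian_power(a_design, b_design, n_sims=20_000):
    """Probability of trial success averaged over a Beta design prior."""
    p_true = rng.beta(a_design, b_design, size=n_sims)      # draw truth from design prior
    x = rng.binomial(n, p_true)                             # simulate trial data
    post_prob = stats.beta.sf(threshold, 1 + x, 1 + n - x)  # Pr(p > 0.20 | data)
    return np.mean(post_prob > c)                           # average success over the prior

# Optimistic design prior Beta(8, 12) (mean 0.40) vs. skeptical Beta(4, 16) (mean 0.20).
print("Bayesian power (optimistic design prior):", bayesian_power(8, 12))
print("Bayesian power (skeptical design prior): ", bayesian_power(4, 16))
```

Reporting this operating characteristic under both optimistic and skeptical design priors is one way to support the early FDA alignment discussed above.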

 

Prior specification is now a regulatory deliverable

The draft guidance recommends that sponsors pre-specify and justify the prior in the protocol, document external information sources (including exclusions), and quantify prior influence metrics. For informative priors, the FDA emphasizes a systematic, transparent review of the totality of relevant evidence — effectively bringing evidence-synthesis discipline into prior construction.

 

Key expectations:

  • Pre-defined source selection criteria before searching for external data
  • Patient-level data preferred over published summary statistics
  • Randomized controlled evidence is generally preferred over single-arm or observational sources
  • Documentation of sources considered and excluded, with rationale

 

Strategic implication:

Prior construction cannot be a post-hoc exercise. Build the evidence base for your prior prospectively — ideally while Phase 2 is ongoing — and then plan early for patient-level data access and any needed re-analyses to align the primary estimand, estimators, and strategies for handling intercurrent events. If patient-level data from prior studies are not accessible, negotiate data-sharing early or design natural history studies with Bayesian use in mind.

 

Dynamic discounting provides protection — with complexity

The draft guidance discusses both static and dynamic discounting approaches for borrowing external information. Dynamic approaches (e.g., commensurate/supervised power priors, mixture priors, Bayesian hierarchical models, elastic priors) can reduce borrowing when prior-data conflict emerges. These approaches can improve robustness but introduce additional parameters and assumptions that need justification. The FDA also notes the applicability of discounting methods is case-by-case and should be discussed with the Agency.

 

Strategic implication:

For rare diseases with uncertain external data relevance, dynamic discounting is often an important safeguard. For common diseases with robust and highly relevant prior data, simpler (static) discounting may suffice and can simplify the regulatory narrative. Either way, determine the discounting approach while still blinded to the results of the trials that will be borrowed — per the guidance’s explicit recommendation — and support the choice with simulations that span plausible degrees of prior data conflict.
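One common dynamic-discounting construction is a robust mixture prior: an informative component built from external data plus a vague component, where the posterior mixture weight of the informative component falls automatically under prior-data conflict. The sketch below uses hypothetical numbers (prior weight, Beta parameters, and trial counts are assumptions for illustration):

```python
import numpy as np
from scipy.special import betaln

# Robust mixture prior: w * Beta(20, 30) + (1 - w) * Beta(1, 1).
# The informative component encodes external data suggesting a rate near 0.40.
w, a_inf, b_inf = 0.8, 20.0, 30.0
a_vag, b_vag = 1.0, 1.0

def posterior_weight_informative(x, n):
    """Updated mixture weight of the informative component given x/n responders."""
    # Log marginal likelihoods under each component (binomial coefficient cancels).
    log_m_inf = betaln(a_inf + x, b_inf + n - x) - betaln(a_inf, b_inf)
    log_m_vag = betaln(a_vag + x, b_vag + n - x) - betaln(a_vag, b_vag)
    num = w * np.exp(log_m_inf)
    return num / (num + (1 - w) * np.exp(log_m_vag))

# Concordant data (16/40 ~ 0.40): the informative component keeps most of its weight.
# Conflicting data (4/40 = 0.10): weight shifts to the vague component -> less borrowing.
print("weight | concordant:", posterior_weight_informative(16, 40))
print("weight | conflict:  ", posterior_weight_informative(4, 40))
```

Simulating this weight trajectory across plausible degrees of prior-data conflict is precisely the kind of evidence the guidance expects to accompany a dynamic-discounting proposal.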

 

Effective sample size is a central metric — not Type I error inflation

The draft guidance recommends against using Type I error inflation to measure prior influence, calling it “philosophically inconsistent.” Instead, it highlights Effective Sample Size (ESS) and other metrics (e.g., the prior-only estimate) as more interpretable ways to quantify borrowing. The guidance also notes that multiple ESS calculation methods exist, and that ESS can exceed the source-study sample size when variability in the target population is higher.

 

Strategic implication:

Quantify and present ESS across a plausible range of outcomes, including summary statistics such as maximum and mean values. For dynamic methods, show how ESS changes with different degrees of prior-data agreement. Be prepared to explain why ESS may differ from the original study’s nominal sample size, and reassess influence after trial completion when dynamic priors are used.
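Two of the simplest ESS calculations can be written down directly; the values below are hypothetical and only illustrate the arithmetic (for dynamic priors, ESS is typically computed by simulation instead):

```python
# Two common effective-sample-size (ESS) calculations, with illustrative values.

# 1) Conjugate Beta(a, b) prior for a response rate: ESS = a + b.
a, b = 20.0, 30.0
ess_beta = a + b  # this prior carries roughly the information of 50 patients

# 2) Normal prior N(mu0, tau^2) for a mean, with per-patient sampling variance sigma^2:
#    the prior precision 1/tau^2 equals the precision of n = sigma^2 / tau^2 observations.
sigma2 = 4.0  # assumed outcome variance in the target population
tau2 = 0.25   # prior variance for the mean
ess_normal = sigma2 / tau2  # ~16 patients' worth of information

# Note: if the target population is more variable than the source (larger sigma2),
# the same prior is worth MORE target-population patients -- which is how ESS can
# exceed the source-study sample size, as the draft guidance notes.
print(ess_beta, ess_normal)
```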

 

Simulation standards are now explicit

The draft guidance recommends providing a comprehensive simulation report (including code, implementation details, and results) across pre-specified, plausible scenarios, including pessimistic assumptions about treatment effect. Simulations should address statistical parameters (e.g., variance, background rate, intercurrent events) as well as operational assumptions such as accrual rate. For MCMC-based analyses, computational settings (warmup/burn-in, iterations, chains, convergence diagnostics) and any other important algorithm-specific settings should be documented for reproducibility.

 

Strategic implication:

Treat simulations and computational reproducibility as submission-grade deliverables, not just internal design exploration. Establish reproducible computational workflows from the start. Pre-specify scenarios and decision rules, and define contingency procedures for implementation issues (e.g., MCMC non-convergence) before the first interim look and before the final analysis.
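As a toy illustration of documenting computational settings and pre-specifying a convergence rule, the sketch below runs multiple Metropolis chains on a stand-in target (a standard normal, not a real trial posterior) and computes a split R-hat diagnostic; the chain count, warmup, and 1.01 cutoff are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis_chain(n_iter, start, step=0.8):
    """Random-walk Metropolis targeting a standard normal (stand-in for a posterior)."""
    x = start
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + rng.normal(0, step)
        # log acceptance ratio for the N(0, 1) target
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        draws[i] = x
    return draws

def split_rhat(chains):
    """Split R-hat: split each chain in half, compare within/between-half variances."""
    halves = np.array([h for c in chains for h in np.split(c[: 2 * (len(c) // 2)], 2)])
    m, n = halves.shape
    within = halves.var(axis=1, ddof=1).mean()
    between = n * halves.mean(axis=1).var(ddof=1)
    var_hat = (n - 1) / n * within + between / n
    return np.sqrt(var_hat / within)

# Pre-specified settings: 4 chains, overdispersed starts, 1,000 warmup + 4,000 kept.
chains = [metropolis_chain(5_000, start)[1_000:] for start in (-4, -1, 1, 4)]
rhat = split_rhat(chains)

# Pre-specified contingency rule: if R-hat exceeds the cutoff, extend sampling
# per the documented procedure rather than improvising at the analysis.
print(f"split R-hat = {rhat:.3f}; converged: {rhat < 1.01}")
```

The point is not this particular sampler but the practice: seeds, warmup, chain counts, diagnostics, and the fallback procedure all written down and reproducible before the data arrive.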

 

Early FDA engagement is essential

The draft guidance states that “the time needed for FDA and the sponsor to align on an appropriate prior should be considered in the development of the intended trial” and recommends submitting information “as early as possible to ensure sufficient time for FDA feedback prior to initiation.” The draft guidance also states that sponsors should have early discussions with the Agency about the planned estimands, estimators, and approaches for handling missing data in the analyses of external data that will be borrowed, and any differences relative to the approaches planned for the prospective trial data.

 

Strategic implication:

Use early interactions (e.g., Pre-IND or End-of-Phase 2 meetings and, where applicable, the Complex Innovative Trial Design (CID) program) to align on prior specification, success criteria, operating characteristics, and simulation strategy before protocol finalization. Include detailed design comparisons in meeting packages — the draft guidance explicitly recommends comparing proposed Bayesian designs against an alternative, including simpler alternatives.

 

Interim analyses: Design the decision points upfront

The guidance emphasizes that in trials with interim decision-making (e.g., group sequential designs), success criteria should be specified for each decision point. When Bayesian success criteria are calibrated to Type I error rate, interim criteria can be constructed to preserve overall control of the family-wise error rate across looks.

For designs not calibrated to Type I error rate, operating characteristics are calculated relative to the prior and can be especially sensitive when the sample size is small — or when an early interim look makes the effective sample size small. The guidance also notes that skeptical (or enthusiastic) priors can be used in adaptive settings to temper early stopping behavior for efficacy (or futility), but the resulting decision framework should be demonstrated via simulation.

 

Key interim analysis considerations:

  • Pre-specify what decisions can be made at each look (e.g., stop for efficacy, stop for futility, adapt) and the exact posterior or predictive criteria that trigger each action.
  • Simulate interim timing under realistic accrual, endpoint maturation, and missing data patterns — not just idealized information fractions.
  • Plan prior sensitivity and robustness checks targeted at early looks, where prior influence is greatest (e.g., alternative priors and alternative borrowing strengths).
  • Operationalize Bayesian computation for interim timelines: reproducible pipelines, diagnostic thresholds, locked code/versioning, and contingency plans for non-convergence.
  • Protect safety and benefit-risk interpretability: consider minimum exposure or follow-up requirements even if an early efficacy threshold is met.
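To make the first bullet concrete, here is a minimal sketch of a posterior predictive probability rule at an interim look, using a Beta-binomial model with hypothetical numbers throughout (total sample size, interim fraction, success criterion, and futility cutoff are all assumptions for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical design: N = 60 total, interim at n1 = 30.
# Final success: Pr(p > 0.20 | all data) > 0.95 with a Beta(1, 1) analysis prior.
N, n1, threshold, c = 60, 30, 0.20, 0.95

def predictive_prob_success(x1):
    """Posterior predictive probability of final success, given x1/n1 at interim."""
    n2 = N - n1
    a1, b1 = 1 + x1, 1 + n1 - x1                       # interim posterior
    y = np.arange(n2 + 1)                              # possible future responder counts
    pred = stats.betabinom.pmf(y, n2, a1, b1)          # predictive distribution for y
    final_success = stats.beta.sf(threshold, a1 + y, b1 + (n2 - y)) > c
    return float(pred[final_success].sum())

for x1 in (4, 8, 12):
    print(f"interim {x1}/{n1}: PP of success = {predictive_prob_success(x1):.3f}")

# A pre-specified futility rule might stop if PP < 0.10 (an assumed cutoff here);
# the second bullet's simulations would then stress-test this rule under
# realistic accrual and missing-data patterns.
```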

 

Strategic implication:

Treat interim analyses as part of the regulatory-facing Bayesian package, with pre-specified decision rules, simulations that stress-test early looks, and an execution plan that can be reproduced under tight timelines.

 

Rare vs. Common Disease Considerations

Justification for borrowing
  • Rare diseases: Often straightforward: document infeasibility of a conventionally powered randomized trial (small populations and/or ethics) and explain how borrowing supports interpretable benefit-risk.
  • Common diseases: Higher burden: efficiency gains alone may not suffice; clearly demonstrate relevance, address potential bias, and explain why non-borrowing alternatives are not adequate.

Prior data availability
  • Rare diseases: Often limited; may rely on natural history studies/registries, small prior trials, and/or structured expert elicitation.
  • Common diseases: Typically richer: Phase II/earlier indications, external trials, and real-world data may be available, but heterogeneity and relevance must be managed.

Recommended approach
  • Rare diseases: Dynamic discounting and robust priors; success criteria not calibrated to Type I error may be appropriate when FDA and sponsor agree; plan extensive sensitivity analyses.
  • Common diseases: Bayesian methods embedded in a Type I error-calibrated framework when appropriate; borrowing (if used) is typically limited and carefully justified; pediatric extrapolation handled via a separate extrapolation plan.

Key success factor
  • Rare diseases: Prospective natural history characterization and early alignment on estimand definition and strategies to make external data relevant.
  • Common diseases: Early data-sharing to enable patient-level review, alignment on estimand definition and strategies, and covariate adjustment, plus a clear relevance narrative and drift/bias mitigation plan.

The bottom line

This draft guidance provides a clearer regulatory pathway for Bayesian methods, but that pathway requires substantial upfront investment in prior construction, estimand definition and strategies for handling intercurrent events, simulations, and submission-quality documentation. The strategic question is not whether Bayesian methods are acceptable in principle — it is whether the efficiency gains justify the additional complexity and review burden for your specific program.

For rare diseases, the answer is often yes. Bayesian borrowing may be the only viable path to interpretable and approvable evidence. For common diseases, the calculus is more nuanced; borrowing typically needs a stronger relevance argument and may be most defensible when embedded in a Type I error-calibrated framework. Either way, the strategic decisions about prior specification, discounting method, and operating characteristics should be made early, documented thoroughly, and aligned with FDA before pivotal trial initiation.

What’s clear is that biostatisticians must now be prepared to operate in both paradigms:

  • To calibrate Bayesian designs to Type I error when appropriate, and
  • To construct and defend fully Bayesian alternatives (including borrowing) when circumstances warrant.

The January 2026 draft guidance does not eliminate the traditional framework; it expands the toolkit. Using that expanded toolkit effectively will require new skills, new conversations, and new ways of thinking about evidence.

The statistical methodology exists. FDA expectations are clearer. The challenge is execution.

 

Interested in learning more?

Cytel invites you to an interactive Office Hours session with Melissa Spann and Savina Jaeger on Wednesday, March 4 at 9 am ET, where you will have the opportunity to ask questions about the FDA’s Draft Guidance for Industry: Use of Bayesian Methodology in Clinical Trials of Drugs and Biological Products.

Improving Efficiency in Oncology Dose-Escalation Trials: A Cautious Bayesian Approach

In the dynamic world of oncology drug development, the complexity of dose-finding studies increases substantially when multiple disease types are evaluated within a single trial. The heterogeneity between cancer types poses a critical challenge: how can we design efficient dose-escalation procedures that account for patient differences across indications, particularly when one indication recruits more quickly than the other?

A new approach, cautious iBOIN (ciBOIN), offers a compelling answer. Built on the foundation of the Bayesian Optimal Interval (BOIN) design and its variant with informative priors (iBOIN), ciBOIN introduces a prudent method for borrowing strength from common cancer types that recruit faster to rarer types with slower recruitment while maintaining separate maximum tolerated dose (MTD) estimation for each cancer type.
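For context, the sketch below computes the standard BOIN escalation/de-escalation boundaries that iBOIN and ciBOIN build on; it does not implement ciBOIN’s borrowing itself, and the target DLT rate and conventional default bounds (phi1 = 0.6·phi, phi2 = 1.4·phi) are illustrative assumptions:

```python
import numpy as np

def boin_boundaries(phi, phi1=None, phi2=None):
    """Standard BOIN interval boundaries for target DLT rate phi."""
    phi1 = 0.6 * phi if phi1 is None else phi1  # highest sub-therapeutic rate
    phi2 = 1.4 * phi if phi2 is None else phi2  # lowest overly toxic rate
    lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(0.30)
print(f"escalate if DLT rate <= {lam_e:.3f}, de-escalate if >= {lam_d:.3f}")

def boin_decision(n_dlt, n_treated, phi=0.30):
    """Dose decision at the current dose: compare the observed DLT rate to the boundaries."""
    rate = n_dlt / n_treated
    lam_e, lam_d = boin_boundaries(phi)
    if rate <= lam_e:
        return "escalate"
    if rate >= lam_d:
        return "de-escalate"
    return "stay"

print(boin_decision(1, 6))  # 1/6 ~ 0.167 -> escalate
```

ciBOIN’s contribution, as described below, is to let the rarer cohort’s decisions be cautiously informed by the faster-recruiting cohort while each cohort keeps boundaries and MTD estimation of its own.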

 

The dose-escalation dilemma in multi-cohort trials

Traditional dose-escalation designs often face a trade-off between safety and efficiency. When trials pool data across disease types, they risk obscuring differences in toxicity profiles.

On the other hand, treating each type entirely independently can lead to missed opportunities to leverage valuable information.

 

Enter ciBOIN: A pragmatic compromise

The ciBOIN method was developed as a compromise between pooling disease types and separate dose-escalation. It allows dose-escalation decisions in the slower-recruiting disease type to be cautiously influenced by data from the faster-recruiting one. The design is particularly appealing in trials where each disease type may require a distinct MTD estimation due to differing patient profiles.

Through extensive simulations, ciBOIN was compared against separate dose-escalation using BOIN over a range of scenarios. The assessed scenarios and results can be classified in three categories:

  • Same toxicity in both disease types: ciBOIN leads to similar or slightly better MTD detection rates with fewer patients overdosed and a lower DLT rate compared to a separate dose-escalation.
  • Higher toxicity in the common disease type: ciBOIN underestimates the MTD for the rare type but achieves improved safety, reducing the number of patients exposed to overly toxic doses and lowering the overall dose-limiting toxicity (DLT) rate compared to a separate dose-escalation.
  • Higher toxicity in the rare disease type: Here, ciBOIN again underestimates the MTD a bit, this time in the common disease type, but again with reduced overdosing rates.

Overall, ciBOIN results in smaller trial sizes. The highest reduction (~3 patients) with ciBOIN compared to separate dose escalation was observed in the highest dose-toxicity profile.

 

A balanced path forward

The findings support ciBOIN as a viable compromise between full pooling and strict separation. It ensures that dose recommendations are never too aggressive, thereby safeguarding patient safety while still achieving gains in operational efficiency.

Notably, ciBOIN enables a nuanced strategy: one that adapts to the heterogeneity of real-world oncology trials without overcomplicating implementation. For sponsors and statisticians navigating increasingly complex pipelines, this approach may offer a timely and practical innovation.

 

Looking ahead

As oncology trials continue to evolve toward platform and umbrella designs, methods like ciBOIN will be instrumental in ensuring both flexibility and rigor. Future work may explore extending the framework to accommodate more than two cohorts or using other approaches than BOIN and iBOIN.

Ultimately, ciBOIN exemplifies how thoughtful design choices, informed by Bayesian thinking and tempered by clinical caution, can help meet the dual mandate of safety and speed in early-phase drug development.

 

Interested in learning more?

Martin Kappler, along with Yuan Ji from the University of Chicago, will present “ciBOIN — A Bayesian-Informed Dose-Escalation Design for Multi-Cohort Oncology Trials with Potentially Varying Maximum Tolerated Doses” at the 46th Annual Conference of the International Society for Clinical Biostatistics (ISCB) on August 24–28, 2025, in Basel, Switzerland.

Maximizing the Potential of Real-World Data with Bayesian Borrowing

In response to concerns about data quality in real-world evidence (RWE) generation, such as bias and small sample sizes that produce low-precision estimates of questionable accuracy and thus pose interpretability challenges, regulatory submissions have increasingly incorporated advanced methodologies to enhance the robustness of RWE.

Among these methods, Bayesian borrowing stands out as an approach that can significantly increase the scientific potential of real-world data. By leveraging data from multiple sources that may each have different weaknesses, Bayesian borrowing can combine them to enhance the power of comparisons with trial data beyond what a randomized controlled trial alone provides. Bayesian borrowing can also be used to create hybrid control arms, enabling a smaller control cohort to address ethical concerns and patient availability issues.1

 

The Bayesian borrowing concept

Bayesian borrowing methods make use of external data, potentially from multiple sources, through a prior distribution that accounts for the possibility that the external data come from a different population. While using external or historical data can enhance the precision and accuracy of parameter estimates in a study, naively pooling these data with the current study could introduce bias if the external population differs from the current one.2,3,4 To address this, priors such as the power prior are used to temper the influence of the external data; the resulting prior is more diffuse than complete pooling of the current and external datasets, reducing the potential bias at the cost of some precision in the parameter estimate.
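A minimal sketch of a fixed-weight (static) power prior for a binomial endpoint, with hypothetical counts for the external and current control data (a Beta(1, 1) initial prior is assumed; dynamic variants would estimate the discount rather than fix it):

```python
from scipy import stats

# Hypothetical data. External control: 18/60 responders; current control: 10/30.
# The power prior raises the external likelihood to a0 in [0, 1]:
# a0 = 0 ignores the external data; a0 = 1 pools it fully.
x_ext, n_ext = 18, 60
x_cur, n_cur = 10, 30

def posterior(a0):
    """Beta posterior under a power prior with discount a0 and Beta(1, 1) initial prior."""
    a = 1 + a0 * x_ext + x_cur
    b = 1 + a0 * (n_ext - x_ext) + (n_cur - x_cur)
    return stats.beta(a, b)

for a0 in (0.0, 0.5, 1.0):
    post = posterior(a0)
    print(f"a0 = {a0}: posterior mean {post.mean():.3f}, sd {post.std():.3f}")
```

Note how increasing a0 tightens the posterior (more borrowed information) while pulling the estimate toward the external rate, which is exactly the bias-precision trade-off described above.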

In drug development, Bayesian borrowing is primarily applied in situations involving rare diseases, pediatric trials, or when there are no existing approved treatments for the same conditions.5

 

Figure 1. Bayesian borrowing

 

Quantitative bias analysis (QBA) plays a crucial role in supporting studies that employ Bayesian borrowing by assessing the impact that weaknesses in the integrated data have on study results. When leveraging external or historical data through Bayesian methods such as Bayesian borrowing, there is always a risk that the borrowed data may introduce bias due to elements that cannot be addressed directly in analysis specifications, such as missing or unmeasured data, or other quality issues. QBA helps to quantify the extent of these biases and provides a structured approach to adjust for them, thereby enhancing the interpretability of the results and ultimately supporting study validity and scientific integrity.

By applying QBA alongside Bayesian borrowing, researchers can transparently account for uncertainties in the borrowed data and ensure that the final estimates are more robust, credible, and defensible in both regulatory and clinical decision-making contexts.

 

Figure 2. Example of QBA for Bayesian borrowing

 

FDA and HTA submissions incorporated with Bayesian borrowing methods

In recent years, the acceptance of Bayesian borrowing approaches has been evolving from both regulatory and Health Technology Assessment (HTA) perspectives.

The FDA has highlighted this shift through initiatives like a podcast discussing the use of Bayesian statistics, including a case where Bayesian methods were used to borrow data from an adult trial to assess an asthma product’s treatment effects in pediatric patients.6 Additionally, the FDA recommended that GSK apply Bayesian dynamic borrowing to integrate adult trial data for a pediatric study for post-marketing activities, and these results were subsequently accepted.7

HTA bodies are also considering Bayesian methods; for example, NICE recommended using Bayesian hierarchical models, which are closely related to Bayesian borrowing, in the technical appraisal of larotrectinib for NTRK-fusion positive solid tumors in 2020.8

Furthermore, the FDA plans to release draft guidance on the use of Bayesian methods in clinical trials for drugs and biologics by the end of 2025.

 

The future of Bayesian borrowing

Although Bayesian methods have garnered increasing attention from regulatory and HTA bodies, their practical implementation has been somewhat limited. Challenges such as organizational resistance to novel approaches, resource constraints, and difficulties in applying these advanced methods effectively can hinder their adoption in regulatory and HTA submissions. However, as awareness grows and best practices are established, these barriers are likely to diminish, paving the way for more widespread use of Bayesian methods.

 

Notes

1 Dron, L., Golchi, S., Hsu, G., & Thorlund, K. (2019). Minimizing Control Group Allocation in Randomized Trials Using Dynamic Borrowing of External Control Data – An Application to Second Line Therapy for Non-Small Cell Lung Cancer. Contemporary Clinical Trials Communications, 16(1).

2 Viele, K., Berry, S., Neuenschwander, B., Amzal, B., Chen, F., Enas, N., Hobbs, B., Ibrahim, J. G., Kinnersley, N., Lindborg, S., Micallef, S., Roychoudhury, S., & Thompson, L. (2013). Use of Historical Control Data for Assessing Treatment Effects in Clinical Trials. Pharmaceutical Statistics, 13(1).

3 Struebing, A., McKibbon, C., Ruan, H., Mackay, E., Dennis, N., Velummailum, R., He, P., Tanaka, Y., Xiong, Y., Springford, A., & Rosenlund, M. (2024). Augmenting External Control Arms Using Bayesian Borrowing: A Case Study in First-Line Non-Small Cell Lung Cancer. Journal of Comparative Effectiveness Research, 13(5).

4 Mackay, E. K. & Springford, A. (2023). Evaluating Treatments in Rare Indications Warrants a Bayesian Approach. Frontiers in Pharmacology, 14(1).

5 Muehlemann, N., Zhou, T., Mukherjee, R., Hossain, M. I., Roychoudhury, S., & Russek‑Cohen, E. (2023). A Tutorial on Modern Bayesian Methods in Clinical Trials. Therapeutic Innovation & Regulatory Science, 57(1).

6 Clark, J. (2023). Using Bayesian Statistical Approaches to Advance our Ability to Evaluate Drug Products. CDER Small Business and Industry Assistance Chronicles, U.S. FDA.

7 Best, N., Price, R. G., Pouliquen, I. J., & Keene, O. N. (2021). Assessing Efficacy in Important Subgroups in Confirmatory Trials: An Example Using Bayesian Dynamic Borrowing. Pharmaceutical Statistics, 20(1).

8 NICE. (2020). Appraisal Consultation Document: Larotrectinib for Treating NTRK Fusion-Positive Solid Tumours.

Oncology Clinical Trials: Design Trends in Biomarker Research

Oncology research has seen many changes and advances in recent decades, from new therapies in combination with backbone chemotherapy to novel treatments targeting malignancies, and compounds targeting specific disease biomarkers at the genetic mutation level. The latter approach has called into question large, relatively long clinical studies assessing the safety and efficacy of treatments in a broad population defined at the tumor level. Rather, research at the subpopulation or biomarker level has garnered much more interest as targeted treatments are being developed.

This focus on subpopulations and biomarkers is changing how researchers approach clinical trials in oncology and helps resolve several issues with larger clinical trials. For example, treatment effects may be diluted in a heterogeneous population, possibly resulting in an underpowered study. Furthermore, a large trial in a heterogeneous population may place patients for whom the drug is ineffective at risk of serious adverse events. On the other hand, restricting enrollment to a target subgroup without sufficient evidence may deny a large segment of the patient population access to a potentially beneficial treatment. This blog post will briefly introduce two statistical approaches addressing the rise of more specific study populations: predefined subpopulation statistical analysis in the context of a larger trial population and population enrichment of the more promising subgroup within an ongoing study. 

Subpopulation Analysis 

Subpopulation testing and analysis is a phase III clinical trial design strategy in which a subset of the study population is selected based on patient characteristics that may be more likely to respond to the treatment under investigation. Identifying and analyzing specific subpopulations allows the researcher to explore whether a treatment leads to different effects in a pre-designated subpopulation. A subpopulation can be defined by any stratification characteristic such as gender or geography, and in oncology clinical trials, specific biomarkers identified within a study population. 

This type of approach to clinical research addresses several significant issues in oncology studies:

  • A large trial in a heterogeneous population may place patients for whom the drug is ineffective at risk of serious adverse events.
  • In a heterogeneous population, the treatment effect may be diluted, possibly resulting in an underpowered study.
  • Restricting enrollment to the targeted subgroup without sufficient statistical evidence of lack of efficacy in the non-targeted subgroup may eliminate beneficial treatment options for patients.
  • Subpopulation analysis also allows for treatment recommendations based on individual characteristics.

As with any novel adaptive design approach, subpopulation analysis requires several considerations at the design stage. These considerations include the specific definition of the subpopulations for analysis in the study, the appropriate timing for an interim analysis, the methods used for hypothesis testing and Type I error preservation, and the sequence of hypothesis testing of the different subpopulations and/or the full study population.

With these considerations in mind, rigorous planning and testing in the design stage of such a clinical trial is critical. Cytel’s East Horizon adaptive clinical trial design software offers a unique solution for the planning and testing of a clinical trial design that includes subpopulation analysis. In Cytel’s solution, hypothesis testing for the full and subpopulations can be performed using graphical multiple comparison procedures (gMCP) with a weighted Bonferroni procedure employed for closed testing. This method of hypothesis testing uses directed, weighted graphs where each node corresponds to a single hypothesis. A transition matrix complements the graph by specifying how weight propagates between hypotheses, yielding an intuitive diagram. Finally, a simple algorithm sequentially tests the individual hypotheses using the specified weights and hierarchies.
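The sequential algorithm can be sketched in a few lines. This is a generic implementation of the graphical weighted-Bonferroni procedure, not East Horizon’s; the two-hypothesis setup (full population H1 and biomarker subpopulation H2 splitting alpha equally, each passing its weight to the other on rejection) is a hypothetical example:

```python
import numpy as np

def graphical_test(p, w, G, alpha=0.025):
    """Graphical weighted-Bonferroni procedure: p-values, initial weights, transition matrix."""
    p, w, G = np.asarray(p, float), np.asarray(w, float).copy(), np.asarray(G, float).copy()
    active = np.ones(len(p), dtype=bool)
    rejected = np.zeros(len(p), dtype=bool)
    while True:
        # Find an active hypothesis testable at its current weighted level.
        cand = np.where(active & (p <= w * alpha))[0]
        if cand.size == 0:
            return rejected
        i = cand[0]
        rejected[i], active[i] = True, False
        # Redistribute the rejected node's weight and rewire the graph.
        for j in np.where(active)[0]:
            w[j] += w[i] * G[i, j]
            for k in np.where(active)[0]:
                if j == k:
                    continue
                denom = 1 - G[j, i] * G[i, j]
                G[j, k] = (G[j, k] + G[j, i] * G[i, k]) / denom if denom > 0 else 0.0
        w[i] = 0.0

# H1 (full) and H2 (subpopulation) split alpha; each passes its weight on rejection.
w0 = [0.5, 0.5]
G0 = [[0.0, 1.0], [1.0, 0.0]]
print(graphical_test([0.011, 0.02], w0, G0))  # H1 rejected at 0.0125, then H2 at full alpha
```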

 

Population Enrichment 

Population Enrichment is an adaptive clinical trial approach that includes the prospective use of any patient characteristic to obtain a study population in which detection is more likely than in the unselected population. There are two types of population enrichment: Prognostic Enrichment, in which a high-risk patient population is identified based on a biomarker, and Predictive Enrichment, in which the researchers identify a patient group more likely to respond to treatment. Some industry trends that have contributed to the popularization of this adaptive design method include the soaring costs of clinical trial execution, a move away from a “one-size-fits-all” approach to clinical development, and the rising interest in individualized medicine. This adaptive design approach has several benefits, including the identification of highly responsive patient populations, the efficient detection of a treatment effect in a smaller sample size, and the ability to identify beneficial treatments for a subgroup of patients that may have failed with a broader population in a more traditional study design.  

Population enrichment can be seen as an extension of the sample size re-estimation (SSR) methodology, which we discussed in more depth in a previous blog post. 

In the enrichment adaptive approach, a pre-specified number of subjects comprising the entire population, designated as cohort 1, is tested in an interim analysis, and a data monitoring committee reviews the results to assess efficacy or futility against predetermined thresholds. If the analysis shows promising results for only a specific subpopulation of interest, this population is “enriched”: the remaining planned subjects, designated as cohort 2, are enrolled from this subgroup alone to enhance data collection for it and increase the overall probability of success of the study. As with any adaptive approach, this method has specific considerations, including closed testing with a p-value combination, the preservation of Type I error, and additional special considerations requiring attention in event-driven trials, as most oncology trials are.

 

Final Takeaways 

Both subpopulation analysis and population enrichment are adaptive approaches to modern trial designs in oncology that offer great hope for researchers and patients alike. As the focus on specific patient populations narrows, these adaptive design types are gaining industry traction. Software-guided clinical trial design and simulation using tools such as East Horizon ensure adaptive elements are incorporated thoughtfully and are rigorously tested prior to trial launch. 

Learn more about these approaches in our upcoming webinar “Oncology Clinical Trials: Design Trends in Biomarker-Driven Research” with Boaz Adler and Valeria Mazzanti.

The Role of External Data in Oncology Drug Development

Randomized controlled trials (RCTs) remain the gold standard for the evaluation of the safety and effectiveness of a new treatment. However, in a number of cases, alternative approaches leveraging external data (i.e., data from outside of a clinical trial) — ranging from single-arm trials to augmented RCTs — can be appropriate. Here, we discuss how to leverage and incorporate external data in drug development, focusing on the use of external control arms and Bayesian borrowing.

Read more »

Advancing Oncology Trials with Bayesian Basket Designs

Written by Yuan Ji, Professor of Biostatistics at The University of Chicago, and Mansha Sachdev, Senior Marketing Manager, Content

 

The need for innovative and efficient trial designs has become increasingly apparent in the evolving landscape of oncology drug development. Traditional clinical trials often focus on a single cancer type, requiring multiple individual trials to assess a treatment’s efficacy across different cancer subtypes. This approach can be both resource-intensive and time-consuming. Basket trials, however, offer an innovative solution by allowing simultaneous evaluation of a single therapy across multiple cancer types or subtypes that share common molecular characteristics. This method promises to enhance precision and efficiency in oncology drug development, particularly when combined with Bayesian statistical methods.

Here, we outline the potential transformative role of Bayesian basket trials in oncology drug development.

Read more »

Use of Rolling-enrollment Designs to Accelerate Clinical Trials

Popular statistical designs, such as CRM (O’Quigley et al., 1990), mTPI-2 (Guo et al., 2017), and i3+3 (Liu et al., 2020), typically enroll patients in cohorts, follow each enrolled cohort for a certain period (e.g., 28 days), and then apply sequential decisions that determine the dose level for each cohort based on the observed toxicity data. Accrual is suspended after enrollment of each cohort of patients until all the patients in the current cohort have been fully followed with definitive dose-limiting toxicity (DLT) or non-DLT outcomes. Cohort-based enrollment can thus slow down dose-finding trials, since the outcomes of the previous cohort must be fully evaluated before the next cohort can be enrolled. Cohort-based designs can also be inefficient, especially if the trial needs to be frequently suspended. 1

To shorten the study duration of phase I trials and reduce the number of accrual suspensions, rolling-enrollment designs are recommended; these allow concurrent patient enrollment that is faster than cohort-based enrollment.

Read more »

Why Are There Not More Bayesian Clinical Trials?

Statistical methods have long been fundamental to drug development, and advancements in the last few decades in computing power have opened the door to more widespread use of Bayesian methods in clinical trials. Interest in Bayesian methods is growing – in particular due to what these approaches enable.

So why aren’t more clinicians using Bayesian methods?

Read more »

The Uses of Bayesian Methods in Late-Phase Clinical Trial Strategy

A number of late-phase clinical trial sponsors remain hesitant to employ Bayesian approaches in confirmatory settings, for fear that such statistical approaches generate obstacles for regulatory acceptance. A new Cytel position paper acknowledges the many strategic uses of Bayesian methods in late-phase trial design, arguing that they help generate more effective, efficient, and ethical clinical trials.

Read more »

Understanding Group Sequential Designs

Group sequential clinical trial designs, a type of adaptive clinical trial design, have emerged as a powerful tool for enhancing the efficiency and ethical conduct of clinical trials, due to the ability to stop a trial early based on accumulating data. Here, I expand on the intricacies of group sequential designs: key design features, applications in clinical trials, their advantages, challenges, and impact on the landscape of clinical trials.

Read more »