Solutions
About Us
Insights
Careers
Contact us
Contact Us
Customer Support
Customer Support

Leveraging RWE Innovations to Inform Clinical Strategy and Strengthen Healthcare Decision-Making

Real-world evidence (RWE) is no longer a supporting actor, but rather a strategic asset that should be embedded across the product lifecycle.

We now have tools that were unimaginable a decade ago: synthetic data that preserves privacy while enabling scenario modeling and early go/no‑go decisions, external control arms (ECAs) to strengthen single‑arm trials and accelerate access in high unmet need settings,
and decentralized long‑term extensions via tokenization that reduce burden while capturing 10+ years of safety and effectiveness across the patients’ real-world journey.

These innovations aren’t just “nice to have.” They are how we accelerate access to needed therapies, demonstrate value with confidence, and build submissions that stand up to today’s scrutiny.

Here, I discuss how these capabilities are reshaping clinical strategy and unlocking smarter, faster, more equitable evidence generation.

 

Generating synthetic data with agentic AI

Synthetic data is artificially generated data that mimics the statistical properties of real data without containing identifiable patient information. Starting with appropriate real-world data (RWD) (patient-level) or randomized controlled trial (RCT) data source(s), sponsors can use an AI-supported pipeline to generate a synthetic dataset, then assess similarities to the original data to gauge success.

Synthetic data can:

  • Inform early go/no-go decisions: A cost-effective approach to optimizing asset strategy before large investments by simulating expected outcomes under various scenarios in Phase I–II.
  • Inform CT design: Model alternative controls and sample sizes and stress-test treatment effects in a cost-effective manner.
  • Build privacy-preserving cost-effective ECAs: Build an ECA partially (+ RWD) or totally through a fully de-identified synthetic cohort. This is not for regulatory purposes yet, but it can inform provider and payer decisions.

RWD has its limitations: it must closely resemble real patient populations and protect patient privacy, and can be costly, time-consuming, and potentially unethical. Synthetic data can help overcome these challenges.

 

Strengthen regulatory submission with an external control arm

External control arms use data from historical RCT or RWD when randomization is not feasible or ethical, or to power / accelerate a study where there is high unmet need.

ECAs can:

  • Strengthen single-arm trials (SAT): Provide contextual information for SAT regulatory submissions, increasing probability of success.
  • Accelerate access to needed therapies: For RCT in high unmet need (e.g., accelerated approval pathway) and/or with slow recruitment, RWD can augment the control arm.
  • Support a lifecycle management approach: Supports label expansions to new populations (e.g., to male breast cancer) or new lines of therapy for decisions by regulators, payers, and providers.

While RCTs are considered the “gold standard,” the FDA in 2023 wrote that “externally controlled studies may be considered” (with strong justification), while in 2025, the EMA guidance stated “in some situations, causal conclusions may be derived from a setting where the investigational medicinal product data was collected under a clinical trial protocol while the control arm was not a randomized arm in that same protocol.”

 

Assess long-term outcomes with long-term extension studies

Decentralized long‑term extensions for RCT assess long-term outcomes (safety and effectiveness) with or without drug provisions. The extension enables follow-up of tokenized trial patients via real-world databases or direct-to-patient data collection.

Long‑term extension studies can:

  • Allow for long-term follow-up: Cost-effective data collection by reducing site and patient burden while collecting key safety and effectiveness endpoints over 10+ years.
  • Enable earlier launch: For breakthrough therapies and high unmet need, launch can occur as soon as clinical efficacy is proven if the sponsor commits to a Phase IV study to collect long-term data.
  • Improve representativeness: Loss to follow-up in long-term studies can lead to confounding, and RCTs often under-represent certain populations. The shift to real-world endpoints makes the insights more relevant to decision-makers.

 

Key takeaways

Consider RWE as a strategic asset: Integrate RWE early and anticipate post-marketing collection of long-term data and adopt causal inference methods to protect ideals of safety and effectiveness.

Invest in robust RWD: Invest in RWD quality and governance to ensure credibility with regulators and payers.

Adopt a comprehensive strategy: Adopt flexible, hybrid evidence strategies that combine synthetic data, ECAs, and long-term real-world data collection approaches.

Ensure cross-functional readiness: Medical, regulatory, biostats, and data science must operate as one evidence engine.

 

Interested in learning more?

Join Nathalie Horowicz Mehler at the CMO Summit as we step beyond the protocol — and into what’s possible.

Nathalie’s talk “Beyond the Protocol: Leveraging RWE to Inform Clinical Strategy and Strengthen Healthcare Decision-Making” will be on Tuesday, April 14, at 9:30 am.

What a New Study on AI Adoption in US Hospitals May Tell Us About the Future of Real-World Data

Artificial intelligence is becoming increasingly common in US hospitals. Nearly half of hospitals surveyed in 2023–2024 reported using AI-based predictive models — but adoption is not evenly distributed across the country. Some regions and health systems are moving quickly, while others — particularly those in healthcare shortage areas — are adopting more slowly.

These findings come from “The Landscape of AI Implementation in US Hospitals,” led by Yeon-Mi Hwang and colleagues and published in Nature Health in 2026.1 The study analyzes data from more than 3,500 hospitals nationwide and maps where predictive AI tools are being implemented — and where they are not.

At first glance, this may seem like a technology adoption story. In reality, it is also a data story.

As healthcare increasingly relies on real-world data (RWD) for research, regulatory decisions, safety monitoring, and value-based payment models, the way hospitals adopt AI could directly influence the quality and coverage of the data being produced across the United States.

 

AI adoption signals digital maturity

Hwang and colleagues found that interoperability — the ability of hospital systems to exchange and integrate data — was the strongest predictor of AI adoption. Hospitals with better health information exchange capabilities and fewer data-sharing barriers were much more likely to implement predictive AI tools.

This matters because AI systems require structured, standardized, and well-integrated data to function effectively. When hospitals invest in AI, they often strengthen their documentation practices, data governance, and system integration in the process. Those same improvements elevate the overall quality of clinical data.

In other words, hospitals that are ready for AI are often also ready to produce higher-quality RWD.

 

Why high-adoption regions may produce richer RWD

Predictive AI systems frequently generate structured outputs such as risk scores, alerts, and time-stamped predictions. These outputs are recorded in electronic health records and become part of the clinical data landscape.

As a result, regions with higher AI adoption may generate data that is more complete, more standardized, and better linked across care settings. Their records may contain clearer severity markers, earlier detection signals, and more consistent documentation of clinical decision points.

This is why high-adoption regions may produce richer RWD. The data is not only documented — it is more granular and more measurable.

Because the study shows that AI adoption clusters geographically, these differences in data richness may also cluster by region.

 

The geography gap

One of the more striking findings in the study is that hospitals in healthcare shortage areas and medically underserved regions were less likely to adopt predictive AI. These areas often include rural and resource-constrained institutions.

If these hospitals have less advanced digital infrastructure, the data they generate may be more fragmented and less standardized. Over time, this could create meaningful differences in data coverage across the country. Regions with strong AI adoption may produce deeper, more analyzable datasets, while underserved areas may remain underrepresented in national RWD pipelines.

That imbalance could influence which populations are most visible in research and regulatory evidence.

 

AI changes the shape of the data

AI adoption does not simply improve data capture — it can also shape how care is delivered and recorded. Predictive systems may trigger alerts, influence documentation patterns, and alter clinical workflows. These changes become embedded in patient records.

As a result, RWD from high-adoption environments may reflect AI-influenced care pathways, while RWD from lower-adoption settings reflects more traditional workflows. Differences in adoption may therefore create differences not only in data volume, but also in data structure and interpretation.

 

Why this matters for real-world evidence

Real-world data increasingly underpins post-market surveillance, comparative effectiveness research, regulatory decision-making, and value-based care arrangements. If richer, more granular data clusters in digitally advanced regions, then the evidence generated from national datasets may disproportionately reflect those environments.

This is not necessarily intentional. It is a structural consequence of uneven infrastructure development. But without attention to digital equity, disparities in AI adoption could gradually translate into disparities in evidence generation.

 

The bottom line

The nationwide analysis by Yeon-Mi Hwang and colleagues offers one of the clearest early views of how AI is spreading across US hospitals. Because AI adoption is closely tied to interoperability, digital maturity, and institutional capacity, it likely influences how real-world data is captured, structured, and represented.

High-adoption regions may produce richer RWD — data that is more complete, more granular, and better connected across care settings. At the same time, uneven adoption raises important questions about representativeness and equity in national datasets.

Understanding how AI adoption is expanding — and where it remains limited — may become a key factor in strengthening the US data ecosystem. If increasing AI adoption leads to more complete and structured RWD, it could significantly enhance the power and reliability of real-world evidence. But ensuring that this digital maturity is broadly distributed will be essential. Otherwise, the strength of future RWE may reflect infrastructure patterns as much as clinical reality.

As AI becomes more embedded in healthcare, how and where it is implemented may quietly shape not only care delivery — but the evidence base that guides it.

Breaking Barriers in Rare Disease Research with Generative AI and Synthetic Data

In healthcare innovation, one of the most pressing challenges lies in rare disease research. There are approximately 7,000 rare diseases affecting over 300 million people worldwide. With only a handful of patients dispersed globally, gathering sufficient data to power robust clinical studies or predictive models is a monumental hurdle. However, a solution is emerging at the intersection of generative AI and real-world data (RWD) — a novel approach with the potential to reshape possibilities and unlock insights to address unmet medical needs in rare diseases.

 

The rare disease data dilemma

In the U.S., rare diseases are defined as conditions affecting fewer than 200,000 people. Despite their low individual prevalence, rare diseases collectively impose a significant burden on both patients and healthcare systems.

Research and development in rare diseases often face a vicious cycle: low prevalence leads to data scarcity. Traditional clinical trials are often infeasible and/or statistically underpowered due to the limited pool of participants.

Meanwhile, RWD sources such as electronic health records (EHRs), insurance claims, registries, and patient-reported outcomes offer valuable, albeit messy and fragmented, glimpses into the patient journey. Yet even RWD struggles to paint a complete picture in rare diseases. This is where generative AI steps in.

 

Enter generative AI: Making data where there is none

Generative AI — especially models like Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and, more recently, large foundation models — has a transformative ability: it can learn patterns from limited datasets and generate synthetic yet realistic datasets.

How it works

  1. Learning from RWD: Even small datasets from rare disease patients can be used to train and fine-tune generative models. These models identify patterns, distributions, and time-dependent relationships present in the data.
  2. Synthesizing patients: Once trained, the model can create new, synthetic patient records that preserve the statistical properties and characteristics of the original data. These “digital patients” simulate disease progression, treatment responses, and comorbidities.
  3. Validating realism: Synthetic data must be validated to ensure it reflects the real-world data it was trained on. Techniques like distributional comparison, propensity scoring, and expert validation are used to ensure accuracy and utility.

 

Why synthetic data matters for rare diseases

Synthetic data can enhance rare disease clinical research in many ways, including:

 

1. Augmenting small cohorts

Synthetic data can boost sample sizes for rare disease studies, enabling:

  • Simulation of clinical trials
  • Development of more robust predictive models
  • Generation of synthetic control arms where traditional controls are ethically or logistically impractical

 

2. Enhancing privacy

In rare diseases, patient re-identification is an increased risk due to unique phenotypes or genetic markers. Synthetic data protects patient privacy, while at the same time preserves the utility of the data.

 

3. Facilitating global collaboration

As synthetic data is deidentified, it facilitates data sharing among researchers, institutions and borders, minimizing regulatory hurdles and fostering cross-collaborative discovery.

 

4. Accelerating drug development

Pharma and biotech companies can use synthetic data to:

  • Test drug targeting strategies
  • Model long-term outcomes
  • Conduct in silico trials in the earliest stages of development

 

Challenges and considerations

While promising, this approach is not without its challenges:

  • Bias amplification: Synthetic data reflects the biases of its training data. If the RWD is incomplete or skewed, so will the synthetic outputs be. Strategies to handle bias are essential.
  • Regulatory acceptance: Regulatory bodies are still evaluating how to incorporate synthetic data into approval pathways.
  • Validation standards: There is a need for consistent benchmarks and best practices for validating synthetic data — both in terms of privacy and utility, as well as broader generative AI applications in healthcare.

 

Looking ahead

The marriage of generative AI and RWD opens new doors for rare disease research. With the ability to synthesize patient data that preserves real-world complexity, we can begin to break free from the constraints of scarcity — generating insights, hypotheses, and interventions that were once out of reach.

As we move forward, interdisciplinary collaboration among clinicians, data scientists, regulatory bodies, and patient advocacy groups will be key to harnessing this potential ethically and effectively.

 

Interested in learning more?

Download our complimentary ebook, Rare Disease Clinical Trials: Design Strategies and Regulatory Considerations:

External Control Arms: A Powerful Tool for Oncology and Rare Disease Research

In clinical research, the randomized controlled trial (RCT) has been considered the gold standard. Yet in many areas — especially in oncology and rare diseases — running an RCT with a balanced control arm is not always possible. Patients, physicians, and regulators often face a difficult reality: how do we evaluate promising new therapies when traditional designs aren’t feasible?

This is where external control arms (ECAs) come into play. By carefully drawing on existing data sources and applying rigorous methodology, ECAs can help provide the context and comparative evidence needed to make better decisions.

Here, we will explore why ECAs are particularly valuable in oncology and rare diseases, how they support decision-making and study design, what data sources they can rely on, and which statistical methods are essential to reduce bias. We will also introduce the concept of quantitative bias analysis and conclude with why experienced statisticians are key to the success of this methodology.

 

Why external control arms matter in oncology and rare diseases

Oncology and rare disease research share several challenges that make traditional RCTs difficult:

  • Small patient populations: In rare diseases, the number of eligible patients is often extremely limited. Asking half of them to enroll in a control arm may make recruitment impossible.
  • High unmet need: In oncology, patients and families are eager for new options. Many consider it unacceptable to randomize patients to placebo or outdated standards of care.
  • Ethical constraints: For life-threatening conditions, denying patients access to an experimental therapy can be ethically challenging.
  • Rapidly changing standards of care: In oncology, new treatments are approved frequently. A control arm that was relevant when a trial began may become outdated by the time results are available.

In such contexts, single-arm studies (where all patients receive the experimental therapy) are common. But single-arm results alone are not sufficient. Without a comparator, how do we know if the observed survival or response rate truly reflects an advance? ECAs provide the missing context.

Even when a trial includes a control arm, unbalanced designs — such as smaller control groups or cross-over to experimental treatment — can limit the ability to make clean comparisons. External controls can augment these designs, helping to stabilize estimates and provide reassurance that results are robust.

 

Supporting internal and regulatory decision-making

ECAs serve multiple purposes:

  1. Internal decision-making:
    • Companies developing new therapies must decide whether to advance to the next trial phase, expand into new indications, or pursue partnerships.
    • ECAs help answer questions like: Is the observed benefit large enough compared to historical data? Do safety signals look acceptable in context?
  2. Regulatory decision-making:
    • Regulatory agencies such as FDA and EMA increasingly accept ECAs as part of submissions, especially in rare diseases and oncology.
    • While not a replacement for RCTs, ECAs can strengthen the evidence package and demonstrate comparative effectiveness in situations where randomization is not feasible.
  3. Helping the medical community:
    • Physicians, payers, and patients need to interpret trial results. An overall survival rate of 18 months in a single-arm study may sound promising, but how does it compare to similar patients receiving standard of care?
    • ECAs help put numbers into perspective, allowing the community to better understand the true value of a new therapy.

 

Designing better studies with ECAs

External controls are not only a tool for analyzing results — they can also improve study design.

  • Feasibility assessments: By examining real-world data or prior trial results, sponsors can estimate expected event rates, patient characteristics, and recruitment timelines. This reduces the risk of under- or over-powered studies.
  • Endpoint selection: Understanding how endpoints behave in historical or real-world settings helps refine choices for the trial, ensuring relevance to both regulators and clinicians.
  • Eligibility criteria: RWD and earlier trial data can reveal which inclusion/exclusion criteria are overly restrictive. Adjusting them can broaden access while maintaining scientific rigor.
  • Sample size planning: By leveraging ECAs, trialists may reduce the number of patients required for an internal control arm, easing recruitment in small populations.

In other words, ECAs can shape trials from the start, rather than being seen only as a “rescue” option after the fact.

 

Sources of external control data

An ECA is only as good as the data it relies on. Broadly, there are three main sources:

  1. Other clinical trials:
    • Prior trials of standard of care treatments can serve as external comparators.
    • Individual patient-level data (IPD) is preferred, but often only summary data is available.
    • These data are typically high quality but may not perfectly match the new study population.
  2. Published studies:
    • Systematic reviews and meta-analyses of the literature can provide comparator data.
    • Useful when IPD is unavailable but limited by reporting standards and heterogeneity across studies.
  3. Real-world data (RWD):
    • Sources include electronic health records, registries, and insurance claims databases.
    • These capture routine clinical practice, reflecting the diversity of real patients.
    • However, RWD often suffers from missing data, variable quality, and lack of standardized endpoints.

Each source has strengths and weaknesses. Often, the best approach is to triangulate across multiple sources, ensuring that conclusions do not rest on a single dataset.

 

The value of earlier clinical trials

Earlier-phase trials (Phase I and II) can be particularly valuable in constructing ECAs. These studies often include control arms, detailed eligibility criteria, and well-captured endpoints.

For rare diseases and oncology, earlier trials may be the only available benchmark. By carefully aligning populations and endpoints, statisticians can extract maximum value from these datasets.

The challenge, of course, is ensuring comparability. Patient populations may differ in prognostic factors, supportive care practices may evolve, and definitions of endpoints may shift over time.

This is where advanced statistical methods become essential.

 

Reducing bias with propensity scoring

One of the key criticisms of ECAs is the risk of bias. Without randomization, patients receiving the experimental therapy may differ systematically from those in the external control.

Propensity score methods are a powerful way to reduce this bias. The idea is simple:

  • For each patient, estimate the probability (the “propensity”) of receiving the experimental treatment based on baseline characteristics.
  • Match or weight patients in the external control group so that their distribution of covariates mirrors that of the trial patients.

This approach creates a “pseudo-randomized” comparison, balancing measured variables. While it cannot eliminate unmeasured confounding, it greatly improves fairness in comparisons.

 

Quantitative bias analysis: Addressing the unmeasured

Even with careful propensity scoring, unmeasured confounding remains a concern. Clinical researchers often ask: What if there are factors we didn’t account for?

This is where quantitative bias analysis (QBA) enters. QBA does not eliminate bias but helps us understand its potential impact.

For example:

  • Analysts can model how strong an unmeasured confounder would need to be to explain away the observed treatment effect.
  • Sensitivity analyses can simulate scenarios with different assumptions about unmeasured variables.

By explicitly quantifying uncertainty, QBA provides transparency. Regulators and clinicians gain confidence that conclusions are robust — or at least, that limitations are clearly understood.

 

The need for experienced statisticians

Constructing an ECA is not a “plug-and-play” exercise. It requires expertise across multiple domains:

  • Data curation: Selecting fit-for-purpose datasets, cleaning and harmonizing variables, and aligning endpoints.
  • Study design: Defining eligibility, follow-up time, and analysis plans that minimize bias.
  • Statistical methodology: Applying techniques like propensity scoring, inverse probability weighting, Bayesian borrowing, and QBA.
  • Regulatory communication: Explaining assumptions, limitations, and sensitivity analyses in language that regulators and clinicians can understand.

In short, ECAs demand both technical skill and strategic judgment. Partnering with experienced statisticians ensures that external controls provide credible, decision-grade evidence rather than misleading comparisons.

 

Final takeaways

External control arms are rapidly becoming an indispensable tool in modern clinical research — especially in oncology and rare diseases, where traditional RCTs often fall short.

They offer:

  • Context for single-arm studies and unbalanced designs.
  • Support for both internal and regulatory decisions.
  • Guidance in study design and feasibility planning.

By leveraging diverse data sources — from earlier trials to real-world evidence — and applying rigorous methods such as propensity scoring and quantitative bias analysis, ECAs can bring clarity and credibility to difficult development programs.

But the value of ECAs depends on how well they are planned and implemented. Done poorly, they risk misleading decisions. Done well, they empower researchers, regulators, and clinicians to make better choices for patients.

As the field evolves, one thing is clear: the expertise of skilled statisticians is the cornerstone of successful ECAs.

 

Interested in learning more?

Join Alexander Schacht, Steven Ting, and Vahe Asvatourian for their upcoming webinar, “Beyond the Standard Clinical Trial in Early Development: When and Why to Consider External Controls” on Thursday, October 16 at 10 a.m. ET:

Breathing Easier: How Wearables Are Revolutionizing Patient-Reported Outcomes in Respiratory Disease

The rise of wearable technology is transforming how clinicians track chronic respiratory diseases like asthma and COPD (chronic obstructive pulmonary disease). Traditionally, managing these conditions has relied heavily on intermittent clinic visits and subjective symptom reports. But what if we could continuously monitor how patients breathe, move, and feel — right from their homes?

Enter wearables: smart devices that collect real-time physiological and behavioral data. These devices typically work in tandem with smartphone apps that prompt patients to complete patient-reported outcome (PRO) measures — allowing for integrated, real-time tracking of a full range of patient-relevant outcomes. When combined, these tools offer a powerful new lens for respiratory health.

 

Why PROs matter in respiratory disease

PROs are essential for understanding the true impact of respiratory disease on daily life. PRO measures like the Asthma Control Test (ACT), COPD Assessment Test (CAT), and modified Medical Research Council (mMRC) Dyspnea Scale help patients communicate their symptoms and limitations. Yet, these snapshots — typically completed during in-clinic visits — often miss the nuances of fluctuating symptoms and the effects of lifestyle or environment.

This is where wearables shine: they offer objective, continuous, real-world data that can complement traditional PROs — typically administered in-clinic on paper or electronically — by adding daily context and physiological insight to self-reported symptoms. By enabling patients to complete PRO measures remotely, often via smartphone apps, paired with real-time wearable data, we gain a fuller, more continuous picture of their health and functioning.

 

What wearables can measure

Modern wearables can track a range of data relevant to respiratory care, including:

  • Physical activity (steps, walking time, exertion)
  • Heart rate and heart rate variability
  • Respiratory rate and breathing patterns
  • Sleep quality and disruptions
  • Environmental exposures (via linked apps or sensors)

While wearables provide continuous physiological data, PROs are typically captured via separate smartphone apps or digital platforms, where patients log symptoms, functioning, or side effects on a scheduled or event-triggered basis.

When patients report increased fatigue or shortness of breath, wearables can confirm whether activity levels dropped, sleep was disrupted, or physiologic stress markers changed — giving clinicians a fuller picture of disease impact and progression.

 

Applications in COPD and asthma

One of the most promising areas for wearables in respiratory care is pulmonary rehabilitation (PR). PR is a cornerstone therapy for COPD and increasingly recommended for severe asthma. However, adherence and engagement outside clinical settings can be challenging.

Wearables like Fitbit or Garmin devices are being used in PR programs to:

  • Monitor daily activity levels
  • Set and track exercise goals
  • Deliver motivational feedback
  • Correlate physical activity trends with PROs such as dyspnea and fatigue

Recent studies suggest that integrating wearables into PR not only boosts patient motivation but also correlates with improved self-reported symptoms and quality of life.

Another area of growth is early detection of exacerbations. New wearable patches and multi-sensor systems can detect subtle changes in respiratory rate, coughing, or oxygen saturation — sometimes days before a patient would seek help. When combined with self-reported symptoms like increased breathlessness or wheezing, these alerts could trigger early intervention and reduce hospitalizations.

 

Case in point: A digital lifeline for COPD patients

In one pilot program, COPD patients were equipped with a wearable sensor that tracked activity, respiratory patterns, and heart rate. They also submitted weekly symptom reports via an app. When wearable data indicated decreased activity and rising respiratory rate, and the patient-reported worsening breathlessness, clinicians were alerted and could intervene early — often adjusting treatment or scheduling a check-in before an exacerbation worsened.

This “digital safety net” approach is gaining traction as a way to personalize care and improve outcomes, especially in vulnerable or remote populations.

 

Challenges to widespread use

Despite their promise, wearables in respiratory care face several hurdles:

  • Data integration: Many devices still don’t seamlessly connect with electronic health records (EHRs).
  • Clinical validation: While feasibility is proven, more large-scale trials are needed to show that wearable-enhanced PRO monitoring improves long-term outcomes.
  • Implementation: Providers may require training in how to teach their patients to utilize wearables and the associated smartphone apps that collect PRO data, meaning that time spent on these activities should be considered billable.
  • Equity and access: Not all patients have smartphones, internet access, or feel comfortable using digital devices — particularly older adults, those in underserved or rural communities, and individuals facing technological or connectivity barriers.
  • Privacy and regulation: Health data from consumer-grade devices must be handled securely, and many wearables are not yet classified as medical devices.

 

The road ahead

With increasing support from healthcare systems, regulators, and tech companies, the future looks bright for wearable-assisted respiratory care. Remote patient monitoring is now reimbursable in countries like the U.S., and smart integration with PRO tools is making these technologies more usable and impactful.

As clinicians and researchers continue to validate these tools, we can expect wearables — and the PRO data they pair with — to become a routine part of respiratory disease management. Smartphone apps are now central to this ecosystem, not just for data capture but for delivering care.

Trustworthy AI in Action: Predicting Stroke Risk Transparently with Claims-Based Machine Learning

In recent years, deep learning and large neural networks have garnered most of the attention in the machine learning (ML) community. Their ability to model complex, high-dimensional data is indeed impressive. But in healthcare — where decisions can have serious consequences and interpretability is paramount — simpler, transparent models like logistic regression still have an important role to play.

Not every problem requires a black box. When it comes to predicting disease risk using structured data, such as insurance claims, traditional models can offer accuracy and insight.

 

Claims databases: An untapped resource for disease risk prediction

Claims databases are an increasingly valuable source of real-world data (RWD). Unlike clinical trial data, which is highly controlled but limited in scale and scope, administrative claims datasets cover millions of lives over multiple years, reflecting real patient behavior and care patterns.

These databases include information on diagnoses, procedures, prescriptions, and demographics — elements that, while lacking granular clinical detail, can still reveal important patterns in disease progression and risk. The scale of these datasets allows for robust statistical modeling, even for rare outcomes.

 

The case for explainable machine learning in claims-based risk prediction

When working with claims data, models like logistic regression, Lasso, or Ridge regression are not just sufficient — they are often ideal. These models:

  • Produce coefficients that quantify the relationship between features and outcomes.
  • Allow for transparent understanding of why a prediction was made.
  • Are easier to validate and communicate to clinicians, payers, and regulators.

In contrast, deep learning models often deliver slightly higher accuracy at the cost of interpretability — a trade-off that may not be acceptable in regulated healthcare environments.

 

A real-world example: Predicting stroke risk with claims data

In a recent study, Cytel used data from over 2.5 million insured individuals to predict the risk of stroke hospitalization. Using only claims-based features such as age, medication use, comorbidities (e.g., diabetes, hypertension), and health service utilization, we compared the performance of several models, including:

  • Logistic Regression
  • Regularized linear models (Lasso and Ridge)
  • XGBoost (a state-of-the-art ML algorithm)

The results? All models achieved similar predictive performance, with area under the ROC curve (AUC) values around 0.81. Logistic regression — simple, explainable, and well-established — performed on par with XGBoost, demonstrating that advanced complexity wasn’t necessary to achieve meaningful predictive power.
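To make the reported metric concrete: the AUC can be computed directly from predicted risks and observed outcomes as a Mann-Whitney statistic, the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The sketch below uses made-up risk scores and outcomes, not the study’s data:

```python
from itertools import product

def auc(labels, scores):
    """Empirical AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs where the positive case scores higher
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Hypothetical predicted stroke risks and observed outcomes (1 = stroke)
outcomes = [1, 0, 1, 0, 0, 1, 0, 0]
risks    = [0.82, 0.30, 0.65, 0.75, 0.20, 0.70, 0.55, 0.10]
print(round(auc(outcomes, risks), 3))  # → 0.867
```

An AUC of 0.5 means the model is no better than chance; 1.0 means perfect ranking of cases over non-cases, so values around 0.81 indicate solid discrimination regardless of which model produced them.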

 

Transparency enables trust and action

What sets models like logistic regression apart is their explainability. Stakeholders can see precisely how risk factors like atrial fibrillation, hypercholesterolemia, or age contribute to predicted stroke risk. This level of clarity is essential not only for clinicians making decisions, but also for data governance, compliance, and patient communication.
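The interpretability claim is easy to demonstrate: a logistic-regression coefficient converts to an odds ratio via exponentiation, which stakeholders can read directly. The coefficients below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Hypothetical fitted logistic-regression coefficients (log-odds scale)
coefficients = {
    "atrial_fibrillation": 0.85,
    "hypercholesterolemia": 0.30,
    "age_per_10_years": 0.45,
}

def odds_ratio(beta):
    # exp(beta) is the multiplicative change in the odds of the outcome
    # for a one-unit increase in the feature, holding the others fixed
    return math.exp(beta)

for name, beta in coefficients.items():
    print(f"{name}: OR = {odds_ratio(beta):.2f}")
```

Under these illustrative values, atrial fibrillation would multiply the odds of stroke hospitalization by roughly 2.3; no comparable per-feature statement falls out of a gradient-boosted ensemble without post-hoc explanation tools.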

In a time when “black box” AI models are under increasing scrutiny, explainable models offer a pragmatic path forward — especially when paired with large-scale real-world datasets like claims data.

 

Keep it simple, keep it transparent

Healthcare doesn’t just need powerful algorithms — it needs trustworthy ones. As our study shows, standard machine learning models remain highly relevant, especially when applied to well-structured real-world data. Claims databases, in particular, offer a rich foundation for developing these models and making preventive healthcare smarter, earlier, and more accessible.

Innovations in Clinical Trial Design for CNS Disorders

Clinical research in central nervous system (CNS) diseases has long been fraught with challenges. High failure rates, complex pathophysiology, variability in disease progression, strong placebo effects, and difficulties in recruitment and outcome measurement have made CNS disorders one of the riskiest areas for drug development. However, recent innovations in trial design — coupled with advances in digital health and statistical modelling — are transforming how we conduct clinical research in diseases like Huntington’s disease (HD), Alzheimer’s disease (AD), and multiple sclerosis (MS). This blog explores three recent trials that exemplify these innovations and proposes statistical advancements to strengthen their impact.

 

Adaptive designs in Huntington’s disease: The PIVOT-HD trial

Traditional fixed designs often struggle to efficiently explore dose-response relationships or adapt to emerging data. Adaptive trial designs offer a dynamic solution, particularly valuable in neurodegenerative diseases like Huntington’s disease, where treatment response and disease progression can vary widely.

Case study: PIVOT-HD trial (NCT05358717)

The PIVOT-HD trial, led by PTC Therapeutics, is a Phase II adaptive study evaluating the safety, pharmacodynamics, and early signs of efficacy of PTC518, a novel small-molecule HTT-lowering therapy. PTC518 modulates mRNA splicing to reduce levels of the mutant huntingtin protein, a key driver of HD pathology.

What sets this trial apart is its seamless adaptive design. The trial is structured to adjust dosing and the randomization ratios based on interim pharmacodynamic and safety readouts. By incorporating planned decision-making, PIVOT-HD minimizes exposure to ineffective doses and accelerates identification of promising therapeutic windows.

 

Digital biomarkers and remote monitoring in Alzheimer’s disease: The DETECT-AD trial

Cognitive decline in AD is insidious and can be difficult to quantify with infrequent clinic visits and subjective tests. Digital health technologies are revolutionizing outcome assessment through continuous, objective, and sensitive data collection.

Case study: DETECT-AD (Digital Evaluations and Technologies Enabling Clinical Translation in Alzheimer’s Disease)

The DETECT-AD initiative, part of a broader effort supported by the NIH and multiple research institutions, is employing wearables, mobile apps, and speech analysis to detect early signs of Alzheimer’s disease in at-risk populations.

In the DETECT-AD observational study, participants use smartphone apps and passive sensors to monitor activities like walking, typing speed, and even voice characteristics. These digital biomarkers are being correlated with traditional cognitive assessments and brain imaging data to predict cognitive decline before clinical symptoms emerge.

 

Platform trials in multiple sclerosis: The OCTOPUS trial

In diseases like MS, where multiple mechanisms may underlie relapses and progression, traditional “one drug, one trial” designs are increasingly inefficient. Platform trials offer a more flexible and scalable solution.

Case study: The OCTOPUS trial (UK MS Society)

The OCTOPUS (Optimal Clinical Trials Platform for Progressive MS) trial is the world’s first multi-arm, multi-stage platform trial in progressive MS. Spearheaded by the UK MS Society, this innovative study aims to test multiple repurposed therapies simultaneously, using a shared control group and adaptive design principles.

OCTOPUS promises faster answers with fewer patients and more efficient use of resources, particularly crucial in progressive MS where effective treatments are lacking.

 

Statistical challenges and opportunities

Despite these advances, several statistical hurdles remain. Novel designs require equally innovative statistical approaches to preserve validity and ensure robust interpretation.

Broader adoption of Bayesian statistical frameworks

Bayesian approaches allow the integration of prior knowledge (e.g., historical control data or early biomarkers) and offer probabilistic interpretations of trial results. In adaptive and platform trials, Bayesian methods facilitate:

  • Interim analyses with posterior probabilities guiding adaptations.
  • Dynamic borrowing from concurrent or historical control arms.
  • Greater flexibility in endpoint modelling across heterogeneous subgroups.

For example, the GBM AGILE platform trial in glioblastoma (a CNS tumor) successfully uses Bayesian methods to adapt enrollment and determine early stopping rules. A similar framework could benefit complex CNS conditions like MS or AD, where responses are highly individualized.
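To make the idea of posterior probabilities guiding adaptations concrete, here is a minimal conjugate Beta-Binomial sketch. It is not the GBM AGILE methodology; the uniform prior, the interim readout, and the 0.30 threshold are all hypothetical:

```python
import math

def beta_pdf(x, a, b):
    # Beta(a, b) density; endpoints return 0 (valid for a, b > 1)
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def posterior_prob_exceeds(successes, n, threshold,
                           prior_a=1.0, prior_b=1.0, steps=100_000):
    """P(true response rate > threshold | data) under a Beta-Binomial model.
    The posterior is Beta(prior_a + successes, prior_b + failures); the
    tail probability is integrated numerically with the trapezoid rule."""
    a = prior_a + successes
    b = prior_b + (n - successes)
    h = (1.0 - threshold) / steps
    total = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        total += w * beta_pdf(threshold + i * h, a, b)
    return total * h

# Hypothetical interim readout: 12 responders out of 30 patients,
# with an adaptation rule keyed to P(true response rate > 0.30)
print(round(posterior_prob_exceeds(12, 30, 0.30), 3))
```

In an adaptive design, a pre-specified rule might expand a dose arm when this posterior probability clears an upper bar and drop it below a lower one; the same quantity updates continuously as interim data accrue.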

Incorporating real-world evidence (RWE) in trial planning and analysis

As clinical trials increasingly occur alongside large electronic health record (EHR) systems, real-world data (RWD) can inform trial design and enhance external validity. Specifically:

  • RWD can help refine eligibility criteria to better represent actual patient populations.
  • Real-world comparators can augment underpowered control groups or offer external validation.
  • Longitudinal RWE provides insight into long-term treatment effects beyond trial duration.

In Alzheimer’s disease, initiatives like the AHEAD 3-45 study are already incorporating observational cohorts and RWE in trial simulation and endpoint modelling.

 

The next generation of neuroscience trials

The future of CNS clinical trials is increasingly adaptive, digital, and data driven. Innovative designs like PIVOT-HD, DETECT-AD, and OCTOPUS illustrate the power of new methodologies to make trials more efficient, sensitive, and patient-centric. However, to fully realize their potential, we must integrate robust statistical techniques such as Bayesian modelling and real-world data frameworks. These tools will help overcome inherent complexities in CNS research and bring transformative treatments closer to patients in need.

As we look ahead, collaboration between statisticians, clinicians, regulators, and technology developers will be essential in shaping the next generation of neuroscience trials — where precision, agility, and real-world relevance are no longer luxuries, but necessities.

 

Interested in learning more?

Register now to watch James Matcham’s on-demand webinar, “Clinical Trial Design Innovation in CNS Disorders.” This webinar features a review of regulatory guidelines and showcases recent successful trials in Alzheimer’s disease and other neurological disorders.

From Toplines to Triumph: Visualizing the Pathways to Regulatory Approval

Achieving positive topline results in a clinical trial marks a critical milestone in the drug development process, yet it is far from the end of the submission journey. Instead, it signals the start of a complex, fast-paced effort to prepare for regulatory submission and navigate the FDA’s multi-stage review. The final “regulatory defense” stage demands rigorous collaboration, meticulous planning, and adaptability to meet the expectations of regulatory agencies.

Here we discuss the stages of the post-topline journey, exploring key milestones, unexpected challenges, and best practices for ensuring a strong submission and a smooth path to approval.

 

1. The Preparation: Post-topline readiness and strategic planning

The preparation phase begins immediately after topline results are available. During this critical window — often lasting several months — cross-functional teams shift their focus to assembling the final submission package. Statisticians and programmers play a central role here, finalizing the tables, listings, and figures (TLFs) that will populate the Clinical Study Report (CSR) and preparing submission-ready datasets following CDISC standards, including ADaM, SDTM, and associated documentation.

In parallel, a pre-BLA or pre-NDA meeting with the FDA is typically scheduled to align on expectations, identify potential concerns, and set the foundation for a smoother review process. This phase is not just about document generation; it’s about establishing a strategy, anticipating regulatory scrutiny, and ensuring the submission is both complete and compelling. The quality of the groundwork laid here often dictates the ease — or difficulty — of the phases that follow.

 

2. The Submission: Crossing the threshold to regulatory review

Once the submission is filed, the process transitions into a more structured phase governed by the FDA’s review protocols. The agency begins with a 60-day filing review to assess whether the BLA or NDA is complete and acceptable for full review. If so, the sponsor receives a Day 74 Letter, which provides early feedback, flags any immediate concerns, and confirms the Prescription Drug User Fee Act (PDUFA) date: typically 10 months post-filing for standard reviews or 6 months for priority reviews.

Although this phase may seem procedural, it is highly consequential. A clean, well-organized submission can streamline the review process, limit questions, and reduce the risk of delays. This is also the point where rolling submissions, if applicable under Fast Track designation, can offer a tactical advantage by accelerating document delivery and potentially shortening review timelines.
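The milestone arithmetic described here is simple enough to sketch. The helper below is illustrative only: it assumes plain calendar-month addition for the PDUFA goal date and day-offsets for the filing-review letters, which is how teams commonly block out planning calendars:

```python
from datetime import date, timedelta

def add_months(d, months):
    # Calendar-month addition, clamping the day so e.g. Jan 31 + 1 month
    # lands on Feb 28/29 rather than overflowing into March
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def review_milestones(filing, priority=False):
    """Key FDA review milestones relative to the filing date, per the
    timelines described above (planning sketch, not regulatory advice)."""
    return {
        "filing_review_end": filing + timedelta(days=60),
        "day_74_letter": filing + timedelta(days=74),
        "pdufa_date": add_months(filing, 6 if priority else 10),
    }

milestones = review_milestones(date(2025, 1, 15))
print(milestones["pdufa_date"])  # 2025-11-15 for a standard review
```

Swapping `priority=True` pulls the goal date in to six months, which is why priority designation so materially compresses the downstream response windows.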

For statistical and programming teams, this is not a time to sit back and relax — it’s an opportunity to ensure internal alignment and anticipate questions the FDA may raise based on known data complexities. Strong documentation and traceability within datasets and outputs are essential at this point, helping to support any needed follow-up. Proactive communication and readiness during this phase help lay the groundwork for the more intensive regulatory engagement that follows.

 

3. The Regulatory Defense: Responding, clarifying, and defending your data

The regulatory defense phase is where the bulk of agency interaction occurs — and where flexibility and responsiveness become essential. During this time, the FDA may issue multiple information requests (IRs), asking for clarification on statistical methodology, specific data points, or safety and efficacy outcomes. Mid-cycle communications, typically occurring around months 4–5 for standard reviews, offer a formal opportunity to assess the review’s progress and surface any significant concerns.

In some cases, the agency may convene an Advisory Committee (AdCom) meeting to gather expert input, particularly when there are outstanding safety questions or complex benefit-risk considerations. Throughout this phase, the ability to quickly respond to ad hoc requests, provide high-quality data outputs, and maintain close collaboration across functions is critical. It’s a high-stakes stage where well-prepared teams can help preserve timelines and ensure the submission stays on track.

 

4. The Unexpected: Adapting to setbacks and charting a new course

In some cases, the regulatory journey doesn’t lead directly to approval. If the FDA identifies significant deficiencies in the initial submission — whether related to clinical data, statistical interpretation, manufacturing, or safety — it may issue a Complete Response Letter (CRL). This marks a temporary halt in the process, requiring the sponsor to address the concerns before resubmission. Depending on the scope of the deficiencies, the resubmission may fall under Class I (minor issues, reviewed in 2 months) or Class II (major issues, reviewed in 6 months).

For statisticians and programmers, this could mean conducting additional analyses, integrating new data, or adjusting the structure and presentation of the submission package. While a CRL can be a setback, it’s also an opportunity to recalibrate, seek additional guidance from the FDA, and improve the likelihood of approval in the next cycle. The key is to approach this phase with transparency, strategic thinking, and a readiness to adapt and respond.

 

Final takeaways

The path from topline results to regulatory approval is rarely linear. Timelines can range from as little as 12 months in expedited reviews to over 30 months in cases involving major deficiencies and resubmissions. Success in this post-unblinding phase hinges on proactive planning, adaptable resourcing, and the ability to respond quickly and thoroughly to regulatory needs.

Equally important is collaboration across functions: clinical, regulatory, biostatistics, programming, and operations must work closely and cohesively to anticipate challenges, align timelines, and respond efficiently to agency requests. Whether following a standard or accelerated route, the shared priority is a comprehensive, high-quality submission that stands up to regulatory scrutiny and ultimately supports timely access to new therapies for patients.

 

Interested in learning more?

Watch Jasperlynn Kao and Florence Le Maulf’s recent webinar, “From Toplines to Triumph: Visualizing the Pathways to Regulatory Approval.”

Career Perspectives: A Conversation with Angie Raad-Faherty

In this latest edition of our Career Perspectives series, we had the pleasure of speaking with Angie Raad-Faherty, Director, EVA Health Economics. Angie shares her journey from a background in applied mathematics and biology to a career in health economics, and discusses emerging trends in health economics and RWA, the expertise that will be essential for the future of drug development, the importance of mentorship, and much more.

 

Can you give us a little background on your career so far? What inspired you to pursue an education in applied mathematics and biology, and how did that lead to a career in health economics?

My career journey has really been shaped by a deep passion for both mathematics and biology. Even back in high school, I loved the logic and problem-solving side of math, but I was equally fascinated by biology, especially in understanding how diseases affect the human body.

After completing my undergraduate degree, I took a graduate course that focused on disease mathematical modeling. This experience was pivotal, as it introduced me to the concept of integrating mathematical techniques with biological applications. I realized that my skills in mathematics could be effectively applied to address complex health-related issues, leading me to the field of health economics. I feel really fortunate that my background in applied mathematics and biology allows me to look at health problems from both a quantitative and biological lens.

 

In your current role, you balance leadership, coaching, and hands-on research. How do you manage this mix, and is it important for your job satisfaction to stay involved in both areas?

In my current role, I’ve found a rhythm that really works for me — balancing leadership, coaching, and hands-on research. Clear communication and thoughtful delegation are key, but I also make it a point to stay close to the actual work. I think it’s really important to empower my team to take ownership of their projects, while also being available to guide and support them when needed. What I’ve realized is that staying involved in the hands-on side of things isn’t just good for the work, it’s important for me personally. It keeps me engaged, helps me stay up to date with what’s happening in the field, and allows me to contribute in a meaningful way. Plus, it helps create a collaborative atmosphere where people feel supported and encouraged to try new things. That balance between leading and doing is what makes my role fulfilling. It not only makes me a more effective leader, but also helps us deliver stronger results as a team.

 

What do you like best about your role, and about working at Cytel?

What I love about my role is the opportunity to make a real difference in patients’ lives. By supporting my clients, I’m able to contribute to bringing innovative, cutting-edge treatments to patients in need.

I also enjoy the unique challenges that come with each project — no two are ever the same. Every new project is a chance to learn something new, whether it’s about a different disease area or an emerging therapy.

And finally, one of the things I truly value about working at Cytel is the people. I get to engage with a variety of clients and work on diverse projects and indications, but just as importantly, I’m surrounded by incredibly smart, driven, and supportive colleagues. It makes the work both meaningful and enjoyable.

 

In your opinion, which skills are critical to a function as a research consultant at Cytel?

I believe both hard and soft skills are critical for a research consultant in the HEOR field in general and at Cytel. On the hard skills side, strong technical skills in HEOR methods, evidence synthesis, and HTA requirements are essential — especially since expectations and requirements differ between HTA bodies.

Soft skills are just as important, like critical thinking, flexibility, and clear communication. In my experience working on HTA submissions for different countries, I learned it’s not just about building strong models but also explaining the results clearly to different audiences and adapting based on local needs.

Success really depends on balancing technical excellence with the ability to collaborate and adjust strategies based on specific client and agency expectations.

 

Given how quickly the field of health economics evolves, continuous learning is crucial. Are there any skills or areas of expertise you’re currently focusing on that you believe will be key to the future of drug development in 2025 and beyond?

Absolutely, continuous learning is not just beneficial, it’s essential in our field to stay ahead of the curve in areas that will define the next decade of drug development and market access. While technical skill development is ongoing, my focus is also on strategic foresight. Right now, I’m particularly focused on three key areas:

  • First, the integration of real-world evidence into economic models. We’re seeing increasing acceptance from HTA bodies to go beyond clinical trial data. As such, building rigorous frameworks for incorporating real-world data while maintaining methodological transparency is a top priority.
  • Second, understanding how machine learning and artificial intelligence can be leveraged in health economics. There’s huge potential here, but it’s critical we align these innovations with HTA standards and ensure that models remain transparent, logically sound, and valid for reimbursement decisions.
  • Finally, I’m very focused on global HTA alignment. As frameworks become more interconnected, strategically aligning value messages and evidence packages across jurisdictions will be key to driving efficiency and access.

 

Are there any emerging trends in health economics and RWA that excite you right now?

Absolutely, there are several exciting developments in health economics that I find particularly inspiring.

One major trend is the increasing use of real-world data earlier in the drug development process to inform trial design, support regulatory decision-making, and identify unmet needs or specific patient populations.

I’m also really encouraged by the growing emphasis on patient-centered outcomes and health equity. There’s a broader recognition that value goes beyond traditional metrics like QALYs or ICERs. Incorporating factors like caregiver burden and access disparities is making economic evaluations more holistic and aligned with real-world impact.

Another area that particularly interests me is the use of surrogate outcomes. Being able to translate clinical endpoints into meaningful modeling endpoints is crucial, especially when long-term outcomes are difficult to measure directly.

Lastly, the advancement of AI and machine learning is a trend I’m closely following. These technologies are opening doors to deeper insights and faster analyses of complex, unstructured data. While we still need to ensure transparency and methodological rigor, the potential to uncover patterns and generate predictive insights is incredibly exciting. Overall, it’s a dynamic time in our field, and these trends are not only transforming how we work but also reinforcing the importance of continuous learning and adaptability.

 

Could you share a project you’ve worked on that you’re particularly proud of, and why?

Every project I’ve worked on in health economics has felt important to me, because each one represents a chance to help patients access innovative therapies. But the project that stands out most is actually the very first HTA submission I worked on early in my career. It was for early prostate cancer therapy. We were able to build a strong case for both the clinical value and the cost-effectiveness of the therapy, and seeing it approved and knowing it would change lives was incredibly powerful. That experience stayed with me and really shaped my passion for HEOR, showing me how our work can directly contribute to patient access and better outcomes.

 

As someone completing a PhD in applied mathematics and with a leadership role in RWA — both areas where women are underrepresented — what advice would you give to young women or girls aspiring to enter STEM fields?

STEM is full of tough questions and complex challenges — but that’s exactly what makes it so rewarding and interesting. Diverse thinking drives better science, more inclusive solutions, and ultimately, stronger outcomes in healthcare. And that’s exactly what our field needs. Also, mentorship and community are important. Throughout my journey, having people around me who believed in my potential, even when I didn’t yet see it myself, made a huge difference. That kind of support helps build the confidence not just to grow, but to lead. STEM needs more women not just participating, but shaping its future, and I encourage every young woman to see herself as part of that transformation.

 

Have you had a female mentor during your education or career? How did that impact you? Would you be open to mentoring women yourself in the future?

Yes, I had the privilege of working with my supervisor, Dr. Jane Heffernan, during my graduate studies. Her guidance and support were instrumental in shaping my academic and professional development. She provided valuable insights and encouragement, which significantly impacted my confidence and skills in my field.

In the future, I would be open to mentoring women myself, as I believe in the importance of supporting the next generation of professionals and fostering an inclusive environment in academia and beyond.

 

As a remote employee, how do you maintain a healthy work-life balance? What strategies work for you, and do you feel supported by Cytel in this regard?

To maintain a healthy work-life balance while working from home, I believe two key factors are essential. First, establishing a consistent routine with clearly defined working hours and scheduled short breaks throughout the day is crucial. This structure helps me stay productive while also allowing time to recharge. Second, creating a designated workspace that is separate from my everyday home activities is vital. This physical distinction not only enhances my focus but also aids in mentally transitioning between my professional and personal life. I appreciate that Cytel supports this balance through its flexible working hours, which further enables me to manage my responsibilities effectively.

 

What are your main interests outside of work?

Outside of work, I really enjoy getting outside for walks and hikes with my dogs — it’s one of my favorite ways to unwind. I also love spending time in the kitchen, whether I’m baking something sweet or trying out a new dish from a different cuisine. It’s my way of relaxing and getting a little creative.

 

Finally, what’s one piece of career advice you wish you had received earlier?

A key piece of career advice that holds significant value is to prioritize networking and relationship-building within your industry. This includes not only connecting with colleagues beyond your immediate team but also engaging with clients. Cultivating a robust professional network can unlock new opportunities, provide essential support, and offer valuable insights that can greatly influence your career path. Additionally, actively engaging with mentors and peers fosters continuous learning and personal growth, making networking an essential component of career development that is often underestimated.

Advancing Equity in Health Technology Assessment: Lessons from CAR T-Cell Therapies

Chimeric antigen receptor (CAR) T-cell therapies, classified as advanced therapy medicinal products, have revolutionized the treatment landscape for certain hematological cancers, providing new hope to patients who previously had limited options. Since the U.S. FDA approved tisagenlecleucel (Kymriah) and axicabtagene ciloleucel (Yescarta) in 2017 for relapsed or refractory B-cell precursor acute lymphoblastic leukemia and large B-cell lymphoma, respectively, evidence has suggested that CAR T-cell therapies could offer a potentially curative approach in a range of other hematological conditions.1,2,3,4

However, despite their potential to improve patient outcomes, access to CAR T-cell therapies remains inconsistent due to cost, delivery complexity, and manufacturing challenges. Additionally, disparities in access related to social determinants of health (SDOH) further limit equitable benefits, disproportionately impacting marginalized populations (such as those living in rural areas, individuals with no family or social networks, and older people).

Health technology assessment (HTA) has traditionally focused on clinical outcomes and cost-effectiveness. Although health equity has been recognized as a distinct value element in HTA, and relevant frameworks and guidelines exist, it is not routinely integrated into decision-making. As such, CAR T-cell therapies represent a valuable case study for better understanding and advancing equity considerations in HTA.

 

What are CAR T-cell therapies?

CAR T-cell therapies are a type of immunotherapy that modifies a patient’s T-cells to target and attack cancer cells, offering effective options for relapsed or refractory hematological cancers. This process involves extracting, modifying, and reinfusing the cells, followed by close monitoring for severe adverse events. Beyond their current approved indications, CAR T-cell therapies are also being investigated for several other hematological malignancies, as well as in solid tumors and non-cancer indications such as autoimmune conditions.4

Delivering CAR T-cell therapies presents significant challenges for healthcare systems due to their complexity, high cost, and the need for specialized infrastructure and expertise. The treatment requires apheresis, cell manufacturing, conditioning therapy, and intensive post-infusion monitoring, all conducted at accredited centers, often located in major urban areas.5 Successful delivery also requires coordination among a multidisciplinary team of physicians, nurses, and pharmacists, along with investment in treatment center infrastructure, including intensive care unit capacity and specialized training to manage severe adverse events (e.g., cytokine release syndrome and neurotoxicity).5

 

CAR T-cell therapies: Highlighting equity concerns in access to innovative treatments

Ensuring that equitable access to healthcare is considered in HTA decision-making, particularly for high-cost, innovative treatments like CAR T-cell therapy, has become a growing concern. Despite advancements in science, therapeutic applications, and complication management, access to CAR T-cell therapy remains limited, with only a small percentage of eligible patients receiving treatment.6,7 This restricted access stems from challenges specific to CAR T-cell therapy, such as high costs, complex logistics, and manufacturing constraints, which are compounded by factors related to SDOH and equity.

Equity gaps are evident in disease incidence and prevalence, treatment patterns, and outcomes of patients eligible for CAR T-cell therapies. For example, racial and ethnic minorities, particularly Black and Hispanic populations, experience higher rates of certain hematological malignancies, yet are underrepresented in the clinical trials that inform CAR T-cell therapy approvals.8,9 This leads to gaps in effectiveness and safety data across populations. Furthermore, differences in diagnosis and referral patterns contribute to inequities, with marginalized groups less likely to be referred to specialized centers due to limited provider awareness or implicit biases. Older adults, who could benefit from CAR T-cell therapies, are often excluded from trials, limiting evidence for their use in this population.10

SDOH, such as geographic remoteness and socioeconomic status, exacerbate inequities in access to CAR T-cell therapies once they are approved. Patients living in rural areas face logistical and financial barriers to reaching treatment centers, while individuals from lower socioeconomic backgrounds struggle with transportation, caregiving responsibilities, and lost wages.11,12 These overlapping disparities create a cumulative burden, limiting equitable access and worsening outcomes for historically underserved groups.

 

Exploring equity factors in HTAs of CAR T-cell therapies and the journey toward inclusive access

Traditional HTA frameworks have historically overlooked equity considerations, prioritizing clinical efficacy and cost-effectiveness while neglecting how SDOH and equity factors affect patient access and outcomes. This gap not only exacerbates disparities but also fails to incentivize health technology developers to gather evidence systematically and address these issues in their evidence submissions. While several modified economic modeling approaches that account for equity considerations exist (e.g., distributional cost-effectiveness analysis, equity-based weighting, multi-criteria decision analysis), there is no consensus on which approach is best or on how these methods can be incorporated systematically into HTA.13,14 As a result, HTAs often do not account for the unique burdens faced by underserved populations, such as indirect costs related to travel, caregiving, and lost income, further exacerbating existing inequities.

Recent commitments to equity from HTA bodies present valuable opportunities to ensure fair access to novel, high-cost therapies.15,16 CAR T-cell therapies, with their complex delivery and high cost, serve as a compelling case study for examining how HTA bodies incorporate equity considerations into their assessments. To explore this further, we conducted a review of 18 HTAs from Canada’s Drug Agency and the National Institute for Health and Care Excellence, focusing on six CAR T-cell therapies. Our review found that most submissions acknowledged disparities in disease incidence, treatment, and outcomes based on race, socioeconomic status, diagnosis and referral patterns, and age. These disparities were often linked to financial and geographical barriers that disproportionately affect marginalized groups. However, there were limited and inconsistent efforts to quantify these factors in the economic modeling or in the analysis of the clinical evidence submitted. This likely reflects the fact that HTA bodies do not routinely require sponsors to quantify equity concerns within their submissions, leading both decision-makers and companies to potentially overlook these issues.

Cytel will present the results of this review at the 2025 ISPOR conference in Montreal, Canada, where we will explore how gaps in HTA evaluations can inadvertently perpetuate inequities in access to CAR T-cell therapies. Join us at our podium session to learn more about how incorporating equity considerations into HTA processes can promote more equitable outcomes and ensure that all patients, regardless of their background, can benefit from CAR T-cell therapies. Do not miss this opportunity to engage in the discussion on advancing inclusive access to high-cost, innovative therapies.

 

Addressing equity concerns in CAR T-cell therapies: Strategies for inclusive access

Cytel can support pharmaceutical clients in addressing equity concerns through the following offerings:

  • Innovative trial designs that consider elements of health equity
  • Generation of real-world evidence to supplement trial programs
  • Lifecycle evidence generation to support value in diverse groups of patients
  • Advanced analytics, such as transportability analyses, to maximize the use of evidence generated in other settings
  • Quantification of the impact of inequalities in the value proposition of new health technologies