
Real-World Data Strategies and Challenges: Making Data Work for Your External Control Arm Study

External control arms (ECAs) are gaining popularity in comparative effectiveness studies, driven by a growing emphasis on robust evidence across disease areas and regulatory body acceptance. ECAs can provide a control group for single-arm studies, complement a larger portfolio of evidence, and enable research for rare or genetic conditions for which randomized controlled trials may be unethical or infeasible.

At the same time, real-world data (RWD) is becoming an essential foundation for building credible ECAs. RWD offers unique advantages: it reflects real clinical practice, captures diverse patient populations, and can provide the basis for robust treatment effect estimates.

However, integrating data from multiple sources, such as historical trials, concurrent trials, patient registries, and cross-population datasets, requires careful methodological planning to ensure validity and regulatory acceptance.

To fully harness the value of external control arms, sponsors must ensure selected data is fit-for-purpose, index dates are aligned with trial eligibility, and rigorous statistical methods are applied to ensure comparable patient profiles. Here, we outline these three essential elements.

 

Choosing the right data source for your external control arm

When building ECAs, different types of external data sources have different strengths.

 

Historical or concurrent randomized trials

Historical or concurrent randomized trials contain systematically collected data and well-defined endpoints, following a detailed protocol. However, they often have small sample sizes, and evolving standards of care or diagnostic criteria can limit comparability over time.

 

Electronic health records and insurance claims

Electronic health records and insurance claims contain large, diverse cohorts and broad population coverage. But they frequently lack clinical details such as out-of-hospital care and non-prescription medications.

 

Patient registries

Patient registries provide systematic, detailed data collection, the potential for linkage, and long-term follow-up. Yet they can have high missingness and over-represent healthier patients, which could reduce the overlap in characteristics with trial populations.

 

Selecting the best data sources should be guided by fit-for-purpose assessments, which examine the availability and missingness of key prognostic characteristics along with practical considerations such as data access and timelines.

 

Defining appropriate eligibility criteria and index dates

Carefully establishing index dates is critical yet challenging when incorporating an ECA. In a trial population, the index date is clearly defined as the point when the patient meets eligibility criteria or is randomized. The same eligibility criteria need to be applied to ECA patients using variables in the external data source, and the index date should reflect the point at which those criteria are met. Misalignment of the index date leads to specific types of selection bias, including immortal time bias, which occurs when periods during which an outcome could not have occurred are misclassified as at-risk time, potentially creating a false treatment benefit.
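As a sketch of how index-date alignment might be operationalized (hypothetical variable names and eligibility thresholds, not any specific protocol), the snippet below assigns each external patient an index date at the first visit on which all trial eligibility criteria are met, so that follow-up cannot start before the patient could actually have entered the trial:

```python
from datetime import date

def assign_index_date(visits, criteria):
    """Return the first visit date on which every eligibility
    criterion is met, or None if the patient never qualifies.

    visits: list of (date, measurements-dict), assumed sorted by date.
    criteria: dict mapping variable name -> predicate on its value.
    """
    for visit_date, values in visits:
        if all(name in values and check(values[name])
               for name, check in criteria.items()):
            return visit_date
    return None

# Hypothetical eligibility rules mirroring a trial protocol:
criteria = {
    "ecog": lambda v: v <= 1,   # performance status 0-1
    "egfr": lambda v: v >= 60,  # adequate renal function
}

visits = [
    (date(2021, 1, 5), {"ecog": 2, "egfr": 72}),  # fails ECOG
    (date(2021, 3, 9), {"ecog": 1, "egfr": 65}),  # first qualifying visit
    (date(2021, 6, 1), {"ecog": 0, "egfr": 70}),
]

print(assign_index_date(visits, criteria))  # 2021-03-09
```

Any person-time before the assigned index date is then excluded from the at-risk period, which is precisely what prevents immortal time bias.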

 

Ensuring treatment and control patients are similar

In RCTs, randomization naturally balances prognostic factors between treatment arms. ECAs, by contrast, require explicit identification and adjustment of these variables. Clinical expertise is essential for determining which characteristics matter most. Comparing the distributions of these variables between the treated and control arms helps to assess similarity. Statistical techniques including propensity score matching and inverse probability of treatment weighting can improve comparability and approximate the balance achieved through randomization. Assessing the pre- and post-adjustment distributions of baseline characteristics quantifies the success of the method.
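One common way to quantify that pre- versus post-adjustment comparison is the standardized mean difference (SMD), where an absolute value below 0.1 is often taken as adequate balance. The sketch below (illustrative numbers, not from any study) computes the SMD for a continuous covariate, optionally applying weights to the control arm:

```python
import math

def smd(x_treat, x_ctrl, w_ctrl=None):
    """Standardized mean difference for a continuous covariate.
    Optional weights on the control arm allow comparing balance
    before and after adjustment; |SMD| < 0.1 is a common threshold.
    """
    if w_ctrl is None:
        w_ctrl = [1.0] * len(x_ctrl)
    m1 = sum(x_treat) / len(x_treat)
    v1 = sum((x - m1) ** 2 for x in x_treat) / (len(x_treat) - 1)
    wsum = sum(w_ctrl)
    m0 = sum(w * x for w, x in zip(w_ctrl, x_ctrl)) / wsum
    v0 = sum(w * (x - m0) ** 2 for w, x in zip(w_ctrl, x_ctrl)) / wsum
    return (m1 - m0) / math.sqrt((v1 + v0) / 2)

trial_age = [60, 62, 64]
ctrl_age = [60, 70, 80]
before = smd(trial_age, ctrl_age)                  # badly imbalanced
after = smd(trial_age, ctrl_age, [3.0, 1.0, 0.1])  # upweight younger controls
print(round(before, 2), round(after, 2))           # approx -1.35 and -0.24
```

In practice the same check would be repeated for every key prognostic variable, before and after weighting or matching.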

 

Final takeaways

Overall, to fully harness the value of external control arms, three elements are essential:

  1. Selecting fit-for-purpose data
  2. Defining index dates that align with trial eligibility
  3. Applying rigorous statistical methods to ensure comparable patient profiles

When executed thoughtfully, ECAs can meaningfully strengthen evidence generation and expand the possibilities for clinical research.

 

Interested in learning more?

Join Deepa Jahagirdar and Vartika Savarna for their upcoming webinar, “Driving Credibility in External Control Arms with Real-World Data,” on Thursday, April 9 at 10 am ET.

External Control Arms: A Powerful Tool for Oncology and Rare Disease Research

In clinical research, the randomized controlled trial (RCT) has been considered the gold standard. Yet in many areas — especially in oncology and rare diseases — running an RCT with a balanced control arm is not always possible. Patients, physicians, and regulators often face a difficult reality: how do we evaluate promising new therapies when traditional designs aren’t feasible?

This is where external control arms (ECAs) come into play. By carefully drawing on existing data sources and applying rigorous methodology, ECAs can help provide the context and comparative evidence needed to make better decisions.

Here, we will explore why ECAs are particularly valuable in oncology and rare diseases, how they support decision-making and study design, what data sources they can rely on, and which statistical methods are essential to reduce bias. We will also introduce the concept of quantitative bias analysis and conclude with why experienced statisticians are key to the success of this methodology.

 

Why external control arms matter in oncology and rare diseases

Oncology and rare disease research share several challenges that make traditional RCTs difficult:

  • Small patient populations: In rare diseases, the number of eligible patients is often extremely limited. Asking half of them to enroll in a control arm may make recruitment impossible.
  • High unmet need: In oncology, patients and families are eager for new options. Many consider it unacceptable to randomize patients to placebo or outdated standards of care.
  • Ethical constraints: For life-threatening conditions, denying patients access to an experimental therapy can be ethically challenging.
  • Rapidly changing standards of care: In oncology, new treatments are approved frequently. A control arm that was relevant when a trial began may become outdated by the time results are available.

In such contexts, single-arm studies (where all patients receive the experimental therapy) are common. But single-arm results alone are not sufficient. Without a comparator, how do we know if the observed survival or response rate truly reflects an advance? ECAs provide the missing context.

Even when a trial includes a control arm, unbalanced designs — such as smaller control groups or cross-over to experimental treatment — can limit the ability to make clean comparisons. External controls can augment these designs, helping to stabilize estimates and provide reassurance that results are robust.

 

Supporting internal and regulatory decision-making

ECAs serve multiple purposes:

  1. Internal decision-making:
    • Companies developing new therapies must decide whether to advance to the next trial phase, expand into new indications, or pursue partnerships.
    • ECAs help answer questions like: Is the observed benefit large enough compared to historical data? Do safety signals look acceptable in context?
  2. Regulatory decision-making:
    • Regulatory agencies such as FDA and EMA increasingly accept ECAs as part of submissions, especially in rare diseases and oncology.
    • While not a replacement for RCTs, ECAs can strengthen the evidence package and demonstrate comparative effectiveness in situations where randomization is not feasible.
  3. Helping the medical community:
    • Physicians, payers, and patients need to interpret trial results. An overall survival of 18 months in a single-arm study may sound promising, but how does it compare to similar patients receiving standard of care?
    • ECAs help put numbers into perspective, allowing the community to better understand the true value of a new therapy.

 

Designing better studies with ECAs

External controls are not only a tool for analyzing results — they can also improve study design.

  • Feasibility assessments: By examining real-world data or prior trial results, sponsors can estimate expected event rates, patient characteristics, and recruitment timelines. This reduces the risk of under- or over-powered studies.
  • Endpoint selection: Understanding how endpoints behave in historical or real-world settings helps refine choices for the trial, ensuring relevance to both regulators and clinicians.
  • Eligibility criteria: RWD and earlier trial data can reveal which inclusion/exclusion criteria are overly restrictive. Adjusting them can broaden access while maintaining scientific rigor.
  • Sample size planning: By leveraging ECAs, trialists may reduce the number of patients required for an internal control arm, easing recruitment in small populations.

In other words, ECAs can shape trials from the start, rather than being seen only as a “rescue” option after the fact.

 

Sources of external control data

An ECA is only as good as the data it relies on. Broadly, there are three main sources:

  1. Other clinical trials:
    • Prior trials of standard of care treatments can serve as external comparators.
    • Individual patient-level data (IPD) is preferred, but often only summary data is available.
    • These data are typically high quality but may not perfectly match the new study population.
  2. Published studies:
    • Systematic reviews and meta-analyses of the literature can provide comparator data.
    • Useful when IPD is unavailable but limited by reporting standards and heterogeneity across studies.
  3. Real-world data (RWD):
    • Sources include electronic health records, registries, and insurance claims databases.
    • These capture routine clinical practice, reflecting the diversity of real patients.
    • However, RWD often suffers from missing data, variable quality, and lack of standardized endpoints.

Each source has strengths and weaknesses. Often, the best approach is to triangulate across multiple sources, ensuring that conclusions do not rest on a single dataset.

 

The value of earlier clinical trials

Earlier-phase trials (Phase I and II) can be particularly valuable in constructing ECAs. These studies often include control arms, detailed eligibility criteria, and well-captured endpoints.

For rare diseases and oncology, earlier trials may be the only available benchmark. By carefully aligning populations and endpoints, statisticians can extract maximum value from these datasets.

The challenge, of course, is ensuring comparability. Patient populations may differ in prognostic factors, supportive care practices may evolve, and definitions of endpoints may shift over time.

This is where advanced statistical methods become essential.

 

Reducing bias with propensity scoring

One of the key criticisms of ECAs is the risk of bias. Without randomization, patients receiving the experimental therapy may differ systematically from those in the external control.

Propensity score methods are a powerful way to reduce this bias. The idea is simple:

  • For each patient, estimate the probability (the “propensity”) of receiving the experimental treatment based on baseline characteristics.
  • Match or weight patients in the external control group so that their distribution of covariates mirrors that of the trial patients.

This approach creates a “pseudo-randomized” comparison, balancing measured variables. While it cannot eliminate unmeasured confounding, it greatly improves fairness in comparisons.
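As an illustration of the two steps above (simulated data with assumed covariates, not any particular study), the sketch below estimates propensity scores with logistic regression and reweights the external controls so their covariate distribution mirrors the trial arm:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated baseline covariates: trial patients (treated=1) are
# younger and fitter than the external control (treated=0).
n = 500
treated = rng.integers(0, 2, n)
age = rng.normal(60 - 5 * treated, 8)
ecog = rng.normal(1.2 - 0.4 * treated, 0.5)
X = np.column_stack([age, ecog])

# Step 1: estimate each patient's propensity of being in the trial arm.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: ATT-style weights -- trial patients keep weight 1, while
# external controls are reweighted by the propensity odds.
w = np.where(treated == 1, 1.0, ps / (1 - ps))

# Balance check: the weighted control mean age should move toward the trial mean.
m_trt = age[treated == 1].mean()
m_ctl_raw = age[treated == 0].mean()
m_ctl_wtd = np.average(age[treated == 0], weights=w[treated == 0])
print(round(m_trt, 1), round(m_ctl_raw, 1), round(m_ctl_wtd, 1))
```

The final comparison confirms, as in the balance assessments described earlier, that weighting pulls the control covariate means toward those of the trial arm; unmeasured confounders, of course, remain unadjusted.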

 

Quantitative bias analysis: Addressing the unmeasured

Even with careful propensity scoring, unmeasured confounding remains a concern. Clinical researchers often ask: What if there are factors we didn’t account for?

This is where quantitative bias analysis (QBA) enters. QBA does not eliminate bias but helps us understand its potential impact.

For example:

  • Analysts can model how strong an unmeasured confounder would need to be to explain away the observed treatment effect.
  • Sensitivity analyses can simulate scenarios with different assumptions about unmeasured variables.

By explicitly quantifying uncertainty, QBA provides transparency. Regulators and clinicians gain confidence that conclusions are robust — or at least, that limitations are clearly understood.
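One widely used QBA summary addressing the first bullet is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed effect. A minimal sketch:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio (VanderWeele & Ding, 2017).
    Larger E-values mean a stronger unmeasured confounder would be
    required to nullify the observed effect.
    """
    rr = 1 / rr if rr < 1 else rr  # protective effects: use the reciprocal
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(2.0), 2))  # 3.41: a confounder associated with both
                               # treatment and outcome at RR ~3.4 could
                               # explain away an observed RR of 2
```

Reporting such a number alongside the main analysis makes the "what if" question concrete for reviewers and regulators.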

 

The need for experienced statisticians

Constructing an ECA is not a “plug-and-play” exercise. It requires expertise across multiple domains:

  • Data curation: Selecting fit-for-purpose datasets, cleaning and harmonizing variables, and aligning endpoints.
  • Study design: Defining eligibility, follow-up time, and analysis plans that minimize bias.
  • Statistical methodology: Applying techniques like propensity scoring, inverse probability weighting, Bayesian borrowing, and QBA.
  • Regulatory communication: Explaining assumptions, limitations, and sensitivity analyses in language that regulators and clinicians can understand.

In short, ECAs demand both technical skill and strategic judgment. Partnering with experienced statisticians ensures that external controls provide credible, decision-grade evidence rather than misleading comparisons.

 

Final takeaways

External control arms are rapidly becoming an indispensable tool in modern clinical research — especially in oncology and rare diseases, where traditional RCTs often fall short.

They offer:

  • Context for single-arm studies and unbalanced designs.
  • Support for both internal and regulatory decisions.
  • Guidance in study design and feasibility planning.

By leveraging diverse data sources — from earlier trials to real-world evidence — and applying rigorous methods such as propensity scoring and quantitative bias analysis, ECAs can bring clarity and credibility to difficult development programs.

But the value of ECAs depends on how well they are planned and implemented. Done poorly, they risk misleading decisions. Done well, they empower researchers, regulators, and clinicians to make better choices for patients.

As the field evolves, one thing is clear: the expertise of skilled statisticians is the cornerstone of successful ECAs.

 

Interested in learning more?

Join Alexander Schacht, Steven Ting, and Vahe Asvatourian for their upcoming webinar, “Beyond the Standard Clinical Trial in Early Development: When and Why to Consider External Controls” on Thursday, October 16 at 10 a.m. ET.

Patient Centricity in Comparative Effectiveness Research

Inclusion of the patient voice in clinical research is becoming increasingly important. Oncology trials, for example, may be shifting toward a more patient-centered approach to outcomes, such as quality of life and time to symptomatic progression,1 as endpoints like progression-free survival and response rate may not be meaningful to patients in the absence of symptom alleviation.2

The primary purpose of a patient-centric approach to clinical research is to improve patient outcomes, which include well-being and quality of life.3 As part of that endeavor, here we discuss increasing patient centricity in comparative effectiveness research by including patient-reported outcome data in indirect treatment comparison analyses.

 

What are patient-reported outcomes (PRO)?

One way of increasing patient centricity in clinical trials is by incorporating patient-reported outcomes (PRO) as trial endpoints. A PRO is “any report of the status of a patient’s health condition that comes directly from the patient without interpretation of the patient’s response by a clinician or anyone else.”4 PRO measures assess broad concepts like health-related quality of life (HRQoL) as well as specific symptoms, such as pain or fatigue. However, there are concerns regarding the extent to which PRO results are published.5 That is, in addition to growing interest in measuring PRO, there are also ongoing efforts to promote the reporting of PRO findings from clinical trials. Specifically, the Consolidated Standards of Reporting Trials (CONSORT) PRO extension guidelines recommend that PRO results be reported in peer-reviewed publications, ideally with a trial’s primary endpoints.6

 

Benefits to publishing PRO findings

There are various benefits of publishing PRO findings from clinical trials. These PRO data can be used to:

  • Facilitate shared decision-making between patients and providers, and
  • Influence treatment guidelines and drug approvals.7

Additionally, health technology assessment (HTA) agencies recognize patient-reported HRQoL and treatment satisfaction data when confirming clinical benefits and supporting reimbursement decisions.8,9 In particular, the inclusion of PRO data in peer-reviewed publications is considered especially influential.8 It has been recommended that the use of PRO data be maximized so that they have the largest possible impact.7 One way to enhance the impact of PRO data from clinical trials is to include them in comparative effectiveness research.

 

What are indirect treatment comparisons (ITC)?

Indirect treatment comparison (ITC) analyses can be used to evaluate the efficacy and safety of interventions that have not been compared in a head-to-head trial. Several ITC methods are available, with network meta-analysis (NMA) being the most common.10 Other techniques adjust for differences between patient populations, such as population-adjusted indirect comparison (PAIC). Specifically, PAIC utilizes individual patient-level data (IPD) from one trial (i.e., the index trial) and published aggregate data for comparator trials.10 In this case, the PRO IPD from the index trial would be utilized along with published aggregated PRO data from comparator trials. This is contingent upon PRO data being publicly available for the comparator(s) of interest, which is one limitation of conducting comparative effectiveness analyses for PRO data. Moreover, the PRO definitions and assessment criteria would need to be similar across trials.
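To make the PAIC mechanics concrete, the sketch below implements the weight-estimation step of a matching-adjusted indirect comparison (MAIC, one form of PAIC) using the method-of-moments approach of Signorovitch et al., on simulated data with hypothetical covariates (age and a biomarker) and an invented aggregate target:

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(ipd_x, agg_means):
    """Method-of-moments MAIC weights: find w_i = exp(a'(x_i - x_agg))
    so that the weighted IPD covariate means match the comparator
    trial's published aggregate means."""
    xc = ipd_x - agg_means  # center IPD at the aggregate means
    # The objective sum(exp(xc @ a)) is convex; its gradient is zero
    # exactly when the weighted sums of the centered covariates vanish.
    obj = lambda a: np.sum(np.exp(xc @ a))
    grad = lambda a: xc.T @ np.exp(xc @ a)
    res = minimize(obj, np.zeros(xc.shape[1]), jac=grad, method="BFGS")
    return np.exp(xc @ res.x)

rng = np.random.default_rng(1)
ipd = rng.normal([62.0, 0.45], [8.0, 0.2], size=(300, 2))  # age, biomarker
target = np.array([65.0, 0.50])  # comparator trial's reported means

w = maic_weights(ipd, target)
reweighted = np.average(ipd, axis=0, weights=w)
print(np.round(reweighted, 2))  # weighted means now approx the target
```

After weighting, outcomes in the index-trial arm are compared against the published comparator results; the same machinery applies to PRO endpoints whenever the aggregate PRO summaries have been published.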

 

What does the intersection of PRO and ITC look like?

Examples can be found in the oncology literature. Specifically, ITC analyses of PRO have been published for HR+/HER2- breast cancer. In one study, palbociclib + fulvestrant was associated with better outcomes than abemaciclib + fulvestrant in terms of changes from baseline in several domains of a common oncology PRO measure, the EORTC QLQ-C30 (overall quality of life, emotional functioning, nausea/vomiting, appetite, and diarrhea).11 Another study utilized a matching-adjusted indirect comparison to evaluate time to symptom deterioration, revealing favorable outcomes for ribociclib over abemaciclib in the EORTC QLQ-C30 domains of appetite, diarrhea, and fatigue.12

 

Including PRO data in ITC analyses

Although patient engagement is at the center of conversations about clinical research, incorporating the patient perspective in a meaningful manner remains challenging. As efforts to increase patient centricity continue, including PRO data in ITC analyses may represent an overlooked area of opportunity. Specifically, long before a clinical trial begins, resources are used to select optimal PRO measures, determine when and how they will be administered, and develop an analytical plan. During the trial, patients devote their time and energy to participation, including the completion of PRO measures. When considering all the work that goes into this one component of the clinical trial, it only makes sense that using these data as comprehensively as possible is a way of honoring patients and researchers who share a common goal. As stated earlier, these data should be maximized by including them in comparative effectiveness research.

Including PRO data in ITC analyses both begins and ends with dissemination: that is, only when PRO results from clinical trials are publicly available can they be included in ITC analyses. Along the same lines, the results of the ITC analyses should also be published, ideally in an open-access format, so that patients and providers alike can utilize the findings to develop treatment plans. Additional recommendations beyond dissemination include ongoing collaboration between patients, HTA agencies, and health economics and outcomes research (HEOR) specialists to align on goals and enhance decision-making for the adoption of new technologies.

 

Final takeaway

Although patient-centered comparative effectiveness research is only one piece of the puzzle, it contributes to the larger endeavor of increasing patient centricity in clinical research and may also shed light on other areas previously viewed as separate from these efforts.

Career Perspectives: A Conversation with Victor Laliman-Khara

In this latest edition of our Career Perspectives series, we had the pleasure of speaking with Victor Laliman-Khara, Research Principal at Cytel. Victor shares his journey from a background in biostatistics and health economics to a career focused on comparative effectiveness within health research. In this interview, he shares his passion for data-driven decision-making in healthcare, discusses the evolving landscape of analytical methodologies, and reflects on how flexibility in work has shaped both his professional and personal life.

 

Can you give us a little background on your career so far? What inspired you to pursue a degree as a statistical engineer and how did that lead to a career as a research analyst in health economics?

I am originally from France, where I earned a degree in biostatistics and health economics. Before that, I completed prépas, an intense two-year program focused on mathematics, physics, and robotics, where I discovered my keen interest in statistics. This led me to join ENSAI, a French school specializing in statistics.

During my studies, I had the opportunity to interview with a pharmaceutical company, where I was first introduced to the world of clinical trials and their real-world applications. I became incredibly passionate about the field — seeing how the design of an experiment could lead to groundbreaking treatments that truly change patients’ lives. However, I also recognized the critical importance of integrity and scientific rigor in ensuring we make the right decisions.

In my final year at ENSAI, I realized that health economics was the perfect fit for me. It allowed me to work closely on clinical trials, focusing on understanding patient populations while also exploring market dynamics through indirect treatment comparisons (ITCs). I found it incredibly fulfilling to not only meet efficacy thresholds but also to define a treatment’s place in the market, identify target prescription populations, and guide clients through the rigorous HTA process.

I spent five years in consulting, moving from France to London and then to Toronto in 2016 — a city that was rapidly growing in innovation and technology. From there, I transitioned to pharma, joining Roche for three years to gain firsthand experience in designing clinical trials, particularly adaptive trial designs.

Then the pandemic hit. For the first time in my career, I had to pivot entirely to remote work. As the pandemic ended and offices reopened, I realized my situation had changed: my son had been born during the pandemic, and I had found it easier to balance work and life while working remotely, so I knew this new way of working was what I was looking for. I also missed the HTA work and ITCs. So I explored companies dedicated to this model in the consulting and HTA space, and that’s when Cytel caught my attention. Even better, they had an open position in the EVA department, where I could once again work on ITCs, strategic advising, and HTA submissions. That’s how I ended up here, and I’ve been enjoying myself ever since.

 

What do you like best about your role, and about working at Cytel?

I appreciate the diversity of projects we work on at Cytel and the numerous opportunities to collaborate across project teams. Recently, I was involved in a multi-team project where I had the chance to work alongside the Systematic Literature Review (SLR) and Comparative Effectiveness (CE) teams. This experience has been particularly exciting, as it allows me to see the full process — from evidence generation (SLR) to synthesis (my work) and its application in modeling (Health Economics, HE). Working on such projects is highly engaging as it provides the opportunity to collaborate with experts from different fields and discuss the best strategic approaches.

Additionally, Cytel’s flexible hours policy is ideal for both my work and personal life, allowing me to balance professional commitments with family responsibilities. With my wife working as a doctor in a remote area of Northern Manitoba and my little four-year-old at home, this flexibility is invaluable.

 

Your work involves advanced Indirect Treatment Comparisons (ITC) analytical techniques — could you walk us through some of the key methodologies you specialize in and their impact on clinical research?

ITC is a broad term that encompasses various methods used to better understand a product’s comparative effectiveness against others. Including all available treatments in a clinical trial is often not feasible, which is where ITCs provide value. They offer a methodological framework for conducting comparisons between all available treatments, accompanied by a thorough review of the risk of bias in such comparisons. The goal is to provide a comprehensive overview of how different treatments are performing, and direct comparisons between them, benefiting health review bodies, physicians, and researchers alike.

Having this summary also helps to identify populations with unmet needs, uncovering areas where specific subgroups may have lower treatment performance or where evidence (such as clinical trials) may be lacking for certain therapeutic options.

On a personal note, I’ve seen my wife, a doctor, discuss ITC results in osteoporosis at her university hospital, underscoring how critical this type of work is in guiding physicians to better understand their therapeutic options when supporting patients. Unfortunately, it wasn’t my research, though!

 

Given how fast analytics and data science evolve, how do you stay up to date with new methodologies and industry best practices? Are there particular resources, strategies, or mentors you rely on?

First, it’s important to recognize that the pharmaceutical industry, particularly health outcomes research, is an area where data science and the methodologies we apply are evolving at a rapid pace. The way we conduct comparative effectiveness and ITC now is vastly different from when I started my career. I’m a firm believer in self-learning to stay at the forefront of these changes. Thanks to platforms like LinkedIn, PubMed, and various conferences, there are always opportunities to discover new methodologies and engage in discussions to learn from others in the industry. By actively participating in these communities and keeping an open mind, I’ve managed to keep myself up to date with the latest developments — or at least, I hope so.

 

You’ve been deepening your expertise in machine learning (Python & R) and RShiny. How do you see these tools shaping the future of Comparative Effectiveness research?

Clinical research is advancing through tools like RShiny and machine learning, revolutionizing the field. Historically, there was often a significant delay between evidence generation (data collection) and evidence synthesis (such as ITCs and trial summaries). However, these new tools are helping to bridge that gap, bringing us closer to real-time evidence synthesis. For example, RShiny allows us to explore different model specifications in real-time and assess their impact on ITCs, providing a framework to build dynamic dashboards. Machine learning is also enhancing our capabilities in SLRs, significantly reducing the time required to locate relevant publications and extract key information.

Despite these advancements, challenges remain. For instance, generative AI tends to produce “hallucinations,” making the output occasionally unreliable, and RShiny demands substantial upfront work. However, I am confident that these issues can be overcome, and such tools will ultimately shape the future of comparative effectiveness.

 

What makes Comparative Effectiveness so important, and how does it influence clinical research outcomes?

In my view, comparative effectiveness is a critical component of health research, alongside clinical trials and health economics modeling. Without this component, understanding the full range of treatment options becomes challenging, often leading to qualitative choices rather than evidence-based decisions. As research and increasing life expectancy have demonstrated, evidence-based decision-making is essential — not only to make the right choice but to do so consistently. In this context, comparative effectiveness provides valuable insights by helping us understand how treatments rank, identify the most appropriate treatment strategies based on patient characteristics, and model the costs and opportunities of new treatments. This enables the development of sustainable healthcare strategies that are grounded in replicable science, rather than individual decisions, and ensures that clinical research can effectively translate into real-world applications.

Could you share a project you have worked on that you feel particularly proud of, and why?

This is particularly challenging for me to answer, as I take great pride in many of the projects I have worked on. However, one project that stands out is our recent work on Multilevel Network Meta-Regression (ML-NMR) in an indication that is still in its early stages. The unique aspect of the ML-NMR approach was its ability to provide deeper insights into how the efficacy of different treatments varies with patients’ baseline characteristics. Given that this disease area is still emerging, there is a significant unmet need among patients, and it was crucial for us to understand how their baseline traits might influence the effectiveness of the various treatment options, especially as multiple treatment classes are available.

Through this work, we were able to identify specific subgroups of patients who would benefit most from certain treatments, while suggesting alternative treatment classes for others. Our client was exceptional in their commitment — not only did they want to understand the comparative effectiveness of their product, but they also collaborated closely with the academic community to ensure that the guidelines evolved to reflect their new findings. Despite the complex methodology involved, the insights we gained were invaluable. Collaborating with the client to enhance guidelines and support their product is a source of immense pride for me. It also highlights the high caliber of clients that Cytel is privileged to work with.

 

As a remote employee, how do you maintain a healthy work-life balance? What strategies work for you, and do you feel supported by Cytel in this regard?

First, I think it’s important to define what a healthy work-life balance means for me. I am passionate about my work, but I also live in the great north, where nature can be unpredictable. For me, a healthy work-life balance means maintaining a productive day while also being present for my family when needed. For example, I want to be available if my child needs to come home early due to a snowstorm. If my wife needs to travel to a nearby community due to an outbreak or to help manage the aftermath of a wildfire, I want the flexibility to travel to her over the weekend.

In this context, Cytel provides many options to help maintain a work-life balance, offering flexible work hours and a management team that listens. For instance, I can block time to pick up my child or adjust my schedule to finish early on Fridays if I need to travel. This flexibility has truly helped me balance my family and professional life. Additionally, I make sure to have a dedicated workspace, which allows me to quickly get back into work mode and stay focused.

 

What are your main interests outside of work?

I have been a traveler for many years, having left my parents’ house to study very early, and I have never settled in one place for more than five years since. As a result, I have always been passionate about discovering the cultures of the places I’ve lived in. You can often find me spending a lot of time in museums — especially now that my child can handle longer visits.

Beyond that, I am also deeply interested in supporting the communities I’ve been a part of. This has led me to actively participate in local clean-up initiatives, social events for the elderly, and various activities that allow me to give back to the communities that have welcomed me.

 

Finally, what’s one piece of career advice you wish you had received earlier?

The piece of advice I was given was, “You work to live, not live to work.” Most people misinterpret this as saying we shouldn’t be excited about our work. However, as my manager explained to me, it’s perfectly fine to be passionate about work but we should also take time to live, whether that means resting, pursuing hobbies, or spending time with family. Work will always be there; new projects and opportunities will come. By cultivating balance, we can excel and build a long, sustainable career by learning to manage all aspects of life effectively.

 

Thank you, Victor, for sharing your experience!

New CADTH Guidance on RWE Is Now Available, but Critical Aspects Are Still Missing

By Grammati Sarri, Evie Merinopoulou, Vinusha Kalatharan, and Jason Simeone

The Canadian Agency for Drugs and Technologies in Health’s (CADTH) long-expected guidance on real-world evidence (RWE) is now in the public domain; however, it solely provides standardized (core) reporting practices for manufacturers submitting RWE studies to support their drug applications, and important aspects of the incorporation of RWE in decision-making are missing.

How Target Trial Emulation Can Take the Guesswork Out of Comparative Effect Estimates in Medicare Drug Price Negotiation

An interview with Miguel Hernán, Harvard University Kolokotrones Professor of Biostatistics and Epidemiology

 

On March 15, 2023, the United States Center for Medicare and Medicaid Services (CMS) issued draft guidance on the implementation of the Drug Price Negotiation Program, established under the Inflation Reduction Act (IRA) of 2022. This program includes a definition of maximum fair price, based on key elements including comparative effectiveness data and information about a drug’s impact on specific Medicare populations. To inform those topics, the CMS intends to review existing literature and real-world evidence, conduct internal analytics, and consult subject matter and clinical experts.
