
Women’s Health Is Society’s Wealth: Unlocking Economic Value When Bridging the Gender Health Gap

The facts are loud and clear: investing in women’s health could unlock $1 trillion in global gross domestic product annually by 2040, prevent 24 million female life years lost to disability, and yield substantial returns to economies for every investment across obstetrics and gynecology, female and maternal health, immunology, neurology, cardiology, and oncology.

 

Improving global health equity has been increasingly recognized as a strategic priority for different stakeholders in healthcare,1 including policymakers, industry, governments, investors, and global health organizations. Beyond an ethical and human rights imperative, reducing health inequities and ensuring that everyone has a fair opportunity to achieve their full health potential — independent of socioeconomic status, sex, gender, geography, or race/ethnicity — leads to economic and societal benefits and resilient healthcare systems.2 Although progress continues to be made toward improving general health outcomes, Cytel researchers have found that this has not translated equally for men and women.3

 

The gender health gap: A global health crisis

Women’s health inequities, and the potential economic impact of closing this health gap, remain largely unnoticed. The gender health gap — the long-standing, unfair differences in health outcomes between women and men — has only recently been recognized as a medical and healthcare issue. The underinvestment in female health research, the absence of systematic data collection to understand and document the unique biological needs of females and assess disparities, and biases in male-dominated clinical trial programs have all contributed to the neglect of women’s health issues. The survival paradox is well documented: women outlive men but experience poorer general health, including mental health. Recent data showing that women live approximately five years longer than men does not adequately capture the fact that women spend more than one-quarter of their lives in poor health.4, 5 This health gap is a global health crisis that affects women of all ages, to varying extents depending on geography and income level.4

The gender health gap is generally defined by the conditions that affect women uniquely, differently, or disproportionately, and these are not limited to conditions related to sexual and reproductive health.4 For example, women in the general population are at significantly greater risk of mental health disorders (e.g., depression, suicide) than men, and women with type 2 diabetes mellitus have a disproportionately higher risk of adverse cardiac events, including mortality.6 Men, on the other hand, are significantly more likely to have adverse events after specific types of surgeries and higher mortality after COVID-19.6 Despite the fact that cardiovascular disease is the top cause of death for women in the US, males outnumber females two to one in related clinical trials.7

 

Quantifying the economic benefit of closing the women’s health gap

Quantifying the economic benefit of closing the women’s health gap for societies and economies is important for several reasons and makes visible the “invisible” topic of women and their health. By attaching measurable economic gains — such as productivity gains, increased workforce participation, and reduced healthcare spending — policymakers and investors can grasp the tangible impact on global economies and growth. As financial pressures constrain healthcare spending, areas where interventions yield the greatest returns to economies, such as women’s health, may move higher on the list of investment priorities.7 Therefore, systematic efforts to quantify the economic value of bridging the gender health gap will help reframe health equity as a driver of inclusive and sustainable growth, making it a strategic imperative for governments and businesses, and help overturn the negligible investment in women’s health (only 5% in 2020).8

 

Our findings: The value of investment to improve women’s health

We conducted a comprehensive literature review that aimed to systematically investigate and summarize quantitative evidence on the economic impact of investments to close the women’s health gap globally. We identified robust evidence demonstrating that investment in improving women’s health pays off, returning higher value to economies.

A recent report jointly published by the World Economic Forum and the McKinsey Health Institute, for example, highlights that investments in addressing the women’s health gap could not only extend life years and healthy life years but also boost the global economy by $1 trillion annually by 2040.4 These findings were supported by an additional impact analysis conducted by Women’s Health Access Matters across three indications: rheumatoid arthritis, coronary artery disease, and Alzheimer’s disease. Key findings indicated that an investment of $300 million in women’s health research across these three diseases would conservatively result in a $13 billion return to the US economy.7

Over the past 70 years, the influx of women into the workforce has been closely linked to economic growth.4 Since nearly half of the health burden affecting women falls in their working years, the gap has serious consequences for women’s income-earning potential, with ripple effects across society.4 Simulation studies documented economic benefits in the same direction in other countries, such as the United Kingdom, whereas limited data were identified for low- and middle-income countries.

We are committed to standing at the forefront of assessing public policy trends and critical policy matters that highlight emerging challenges and seize opportunities for improving public health. Some examples include our environmental scan of publicly available data repositories to address disparities in healthcare decision-making,9 an umbrella review of the impacts of climate change on maternal health and birth outcomes,10 and blueprints for collective action to close the women’s health gap.

 

 

Interested in learning more?

Grammati Sarri, Lilia Leisle, and Jeffrey M. Muir will be at the upcoming ISPOR Europe conference in Glasgow, Scotland, where they will present “The Economic Case for Gender Equity: How Closing the Women’s Health Gap Benefits Healthcare Systems and Economies” on Wednesday, November 12, 2025, from 9 to 11:30 a.m. Register below to book a meeting or visit us at Booth #1024 to connect with our experts:

The “M” in CMC: Manufacturing as the Engine Room of Drug Development

In the acronym CMC (Chemistry, Manufacturing, and Controls), each element represents a critical pillar supporting pharmaceutical development. If chemistry is the molecule’s origin story, then manufacturing is the bridge between scientific innovation and real-world application. It’s where an idea becomes a product, where theory meets reproducibility, and where a single successful synthesis must evolve into a global-scale operation.

Too often, manufacturing is treated as a downstream task, something to be sorted once the drug product is developed. In reality, early, strategic attention to manufacturing can make or break a drug candidate’s path to patients.

 

What manufacturing really means in CMC

Manufacturing in the CMC context refers to the entire process of scaling, reproducing, and standardizing the production of a drug, starting with the active pharmaceutical ingredient (API) and ending with the final drug product. It spans chemical synthesis or bioproduction, formulation, process optimization, packaging, and integration with quality systems.

Where chemistry defines “what” the drug is, manufacturing defines “how” it gets made: reliably, safely, and at scale.

 

Key challenges and pillars of CMC manufacturing

Scale-up: From flask to factory

One of the biggest shifts in CMC occurs when a lab process needs to be expanded. Processes that work at a 1-liter scale can behave radically differently at 1,000 liters. Engineers and chemists must navigate:

  • Heat transfer and mixing limitations
  • Solvent and waste management
  • Safety and containment protocols

This scale-up process requires engineering foresight, not just chemical knowledge. It’s a multidisciplinary effort that balances yield, purity, cost, and safety, all under regulatory scrutiny.

 

Process robustness and reproducibility

Manufacturing is about repetition without deviation. Every batch must meet exacting standards. To achieve this, companies develop and validate:

  • Critical Process Parameters (CPPs)
  • In-Process Controls (IPCs)
  • Standard Operating Procedures (SOPs)

Without tight control of variability, especially in biologics, manufacturers risk batch failures, product recalls, or regulatory penalties.

 

Formulation and final dosage form

The active ingredient is only part of the medicine. Manufacturing must also address:

  • Excipient selection
  • Drug delivery format
  • Stability

These considerations aren’t just technical; they’re deeply connected to patient experience and adherence.

 

Facility and supply chain readiness

Manufacturing also involves practical, logistical, and geopolitical factors:

  • Are facilities GMP-compliant and inspected?
  • Are raw materials available from reliable, audited suppliers?
  • Can the manufacturing strategy scale to meet global demand?

This is why companies often invest in tech transfer strategies, moving manufacturing knowledge and processes between sites or external CMOs (contract manufacturing organizations).

 

Biologics: A whole new manufacturing frontier

Biological drugs, including monoclonal antibodies, mRNA therapies, and cell and gene therapies, introduce even more complexity. Their manufacturing often involves:

  • Living cells as factories
  • Complex purification processes
  • Rigorous cold chain and contamination control

Unlike chemical synthesis, biologics can vary subtly between batches due to the inherent variability of biological systems. Here, process control becomes process science, and robust analytics are essential.

 

Regulatory expectations: Manufacturing under the microscope

Regulatory bodies such as the FDA, EMA, and WHO expect clear documentation demonstrating that a manufacturer can produce consistent, high-quality product batches over time. Requirements include:

  • Detailed process descriptions and flow diagrams
  • Batch records and deviation reports
  • Validation of equipment, cleaning, and personnel training

Manufacturing documentation is also central to regulatory submissions, including the quality sections of the IMPD (Investigational Medicinal Product Dossier), the IND (Investigational New Drug) application, the MAA (Marketing Authorisation Application), and the NDA (New Drug Application).

 

Manufacturing is the beating heart of CMC

Without robust, scalable, and compliant manufacturing, no drug product, no matter how effective, can reach a patient’s hand. It’s not glamorous, but it is essential. And it requires a fusion of science, engineering, logistics, and regulatory strategy.

So the next time we talk about “CMC,” let’s remember that manufacturing is not just a step; it’s a continuous commitment to quality, reproducibility, and impact at scale. Coming up is the last “C” in CMC, which stands for controls — stay tuned.

 

Interested in learning more?

Read Bengt’s series on CMC, where he discusses each of Chemistry, Manufacturing, and Controls:

CMC in Drug Development: The Bridge from Lab to Market

The “C” in CMC: Why Chemistry Is the Cornerstone of Drug Development

Generative AI in Evidence Synthesis: Harnessing Potential with Responsibility

The integration of AI into the healthcare research landscape is accelerating, with one obvious area of application being evidence synthesis. From early scoping reviews to comprehensive systematic literature reviews (SLRs), AI promises to reduce manual burden and save time. However, it is crucial to understand both the strengths and limitations of using AI in this broad context to ensure compliance, reliability, and scientific rigor.

 

Knowing where it works: A targeted approach

Artificial intelligence, including generative AI models, shines when used for targeted literature reviews (TLRs) or when generating summaries of scientific articles to support evidence-based decision-making at an early development stage. AI can synthesize large volumes of information quickly, offering valuable insights during exploratory or early-phase research.

However, it’s critical to distinguish these from regulatory-facing systematic literature reviews, especially those intended for payer or health technology assessment (HTA) submissions. In this context, SLR extractions have traditionally been completed by two independent human reviewers. This human oversight ensures objectivity and reproducibility, key elements of regulatory compliance.

 

Expertly trained models vs. generalist giants

The current landscape is filled with large generalist language models trained on diverse internet-scale data. While impressive, these models often exhibit hallucinations — the generation of plausible but incorrect or fabricated content — particularly in domain-specific applications like evidence synthesis.

This is why domain-trained expert models are preferred. These models are fine-tuned on biomedical and scientific corpora, ensuring higher reliability and reducing the risk of misinterpretation or erroneous conclusions. They understand field-specific terminology, data structures, and compliance requirements far better than their generalist counterparts.

 

The imperative of data traceability

In evidence synthesis, transparency is non-negotiable. Any AI-generated output must allow users to:

  • Highlight the exact source (i.e., sentence or section) of the original scientific article from which a conclusion or data point was extracted.
  • Compare the model’s interpretation with the source text to identify discrepancies or nuances that could affect meaning or validity.

Using structured tags to annotate key terms, qualifiers, and relationships not only makes these comparisons clearer and more systematic but also informs advanced search and retrieval activities. By surfacing subtle differences, tagging supports expert review, preserves contextual integrity, and strengthens the reliability and defensibility of the synthesized evidence.

 

Measuring what matters: Precision and beyond

Traditional evaluation metrics like precision, recall, and F1 score (the harmonic mean of precision and recall) remain foundational when assessing AI model performance in literature screening and data extraction.

But in generative contexts — where the task may be summarization, paraphrasing, or abstract reasoning — additional measures become valuable:

  • Answer correctness: Does the output convey a factual, verifiable point?
  • Semantic similarity: How closely does the AI output align in meaning with the ground truth?
  • BLEU, ROUGE, and BERTScore: These natural language processing metrics offer quantitative insights into the quality of generated text, especially for summarization and content generation tasks.

Selecting the right mix of these metrics provides a comprehensive view of model performance and reliability.
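
As a concrete toy illustration, the sketch below computes precision, recall, and F1 for a screening task in Python, using made-up labels in place of decisions adjudicated by dual human review:

```python
# Toy evaluation of an AI screening model against human-adjudicated labels.
# 1 = include the study, 0 = exclude. All labels here are made up.
human = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # adjudicated reviewer decisions
model = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # AI screening decisions

tp = sum(h == 1 and m == 1 for h, m in zip(human, model))  # true positives
fp = sum(h == 0 and m == 1 for h, m in zip(human, model))  # false positives
fn = sum(h == 1 and m == 0 for h, m in zip(human, model))  # false negatives

precision = tp / (tp + fp)   # share of AI inclusions that are correct
recall = tp / (tp + fn)      # share of true inclusions the AI finds
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```

For screening specifically, recall is usually the metric to protect: a missed eligible study is typically costlier than an extra full-text review.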

 

Where AI makes a difference: Screening and beyond

One of the most promising applications of generative AI in evidence synthesis is in literature screening, or the ability to assess whether a publication (abstract or full text) meets the criteria for inclusion. Studies and pilot implementations suggest that AI can reduce screening time by up to 40%, making it a powerful ally for research teams.

AI tools have been leveraged to assign a probability of inclusion to a title, abstract, or full text to guide the screening process, but also to allow researchers to quickly understand the impact of modifying search strategies on yield. By automating this repetitive and time-consuming phase, organizations can reallocate expert human resources to higher-value tasks, such as:

  • Resolving ambiguous or context-dependent data extractions
  • Validating nuanced findings and offering insights into implications of these findings
  • Ensuring alignment with HTA submission standards

In this way, AI doesn’t replace human reviewers but augments them, driving efficiency without compromising accuracy.

 

AI with guardrails

Generative AI is reshaping the landscape of evidence synthesis, but its integration must be strategic, measured, and compliant. By combining domain-trained models, robust traceability, appropriate evaluation metrics, and human oversight, organizations can unlock the true value of AI — accelerating workflows without sacrificing quality or compliance.

When used thoughtfully, generative AI becomes more than just a tool — it becomes a partner in advancing scientific research.

 

Meet with us at ISPOR 2025!

Manuel Cossio and Nathalie Horowicz-Mehler will be in Glasgow for ISPOR Europe 2025! Click the link below to book a meeting, or stop by Booth #1024 to connect with our experts:

Finding the Optimal Biological Dose with New PKBOIN-12 Method

With the rise of targeted and immunotherapies, we have recently seen a shift away from finding a drug’s maximum tolerated dose (MTD) in early-phase dose-finding studies and toward identifying the optimal biological dose (OBD): the dose that optimally balances safety, tolerability, and early efficacy. A new method, PKBOIN-12, extends the BOIN12 framework to integrate pharmacokinetic (PK) parameters to refine dose finding and final OBD selection.

Here, we discuss PKBOIN-12, recent regulatory shifts regarding dose finding, including the FDA’s Project Optimus, and Cytel’s East Horizon™ dose-finding module.

 

What is PKBOIN-12?

PKBOIN-12, developed by Dr. Hao Sun of Bristol Myers Squibb and Jieqi Tu of the University of Illinois Chicago, is an innovative dose-finding method that enhances the established BOIN12 algorithm by incorporating pharmacokinetic (PK) information into the optimal biological dose (OBD) determination process. In recent years, particularly with the rise of targeted and immunotherapies, the focus in early-phase dose-finding studies has shifted away from finding the maximum tolerated dose (MTD) and toward identifying the OBD, the dose that optimally balances safety, tolerability, and early efficacy.

BOIN12 is one such method that assesses both safety and efficacy, but, like many dose-finding designs, it typically does not formally use auxiliary data. Researchers routinely collect PK measurements to characterize drug exposure at the various tested dose levels, but these are not usually incorporated into the risk-benefit analysis when designing clinical trials. PKBOIN-12 addresses this by extending the BOIN12 framework to integrate the collected PK data to refine dose finding and final OBD selection.

Indeed, simulation results comparing PKBOIN-12 and BOIN12 demonstrate that the former more effectively identified the OBD and allocated a greater proportion of patients to that optimal dose.

 

Project Optimus: A regulatory shift toward the OBD

In addition to the general industry trend in collecting and considering a broader set of data in early-phase dose-finding oncology studies, we have seen a real shift in regulatory interest in this area, encapsulated in the FDA’s Project Optimus.

In a previous blog post, James Matcham and Michael Fossler highlight how a recognition of the changing nature of oncology therapies — away from chemotherapies and towards more advanced biologics — necessitated a change in how these products are developed and assessed for efficacy and safety.

Project Optimus posits that the dose-finding paradigm must shift away from safety and tolerability alone, and towards incorporating efficacy considerations at this stage. An ideal dose-finding study under the Project Optimus lens emphasizes the determination of a dose range that does not focus on the MTD, but rather the OBD, or the dose range that considers efficacy, tolerability, safety, and pharmacokinetics.

PKBOIN-12 is therefore well-suited to meet the challenges presented by Project Optimus and is indeed at the forefront of both industry trends and regulatory expectations.

 

Dose finding with the East Horizon™ platform

Cytel’s software development teams will soon be launching the dose-finding module, the sixth installment of the East Horizon platform. This module completes an almost two-year journey of migrating Cytel’s flagship software heritage, East, into a cloud-native, modern, and updated East Horizon platform. Over these months, our teams worked tirelessly to select, from our wide repertoire of software solutions, the features, methods, and tests most relevant to our user base, and thoughtfully curated additional frequentist and Bayesian methods that are completely new to Cytel software. One such method is the new PKBOIN-12 dose-finding method.

 

Interested in learning more?

On November 18, 2025, Cytel will host Dr. Hao Sun for a webinar to discuss this new method in depth, and to highlight the technical as well as tactical aspects of implementing this method. Register today and join us for a fascinating conversation:

Breaking Barriers in Rare Disease Research with Generative AI and Synthetic Data

In healthcare innovation, one of the most pressing challenges lies in rare disease research. There are approximately 7,000 rare diseases affecting over 300 million people worldwide. With only a handful of patients dispersed globally, gathering sufficient data to power robust clinical studies or predictive models is a monumental hurdle. However, a solution is emerging at the intersection of generative AI and real-world data (RWD) — a novel approach with the potential to reshape possibilities and unlock insights to address unmet medical needs in rare diseases.

 

The rare disease data dilemma

In the U.S., rare diseases are defined as conditions affecting fewer than 200,000 people. Despite their low individual prevalence, rare diseases collectively impose a significant burden on both patients and healthcare systems.

Research and development in rare diseases often faces a vicious cycle: low prevalence leads to data scarcity, and scarce data makes robust studies harder to run, which in turn limits the evidence available for future research. Traditional clinical trials are often infeasible and/or statistically underpowered due to the limited pool of participants.

Meanwhile, RWD sources such as electronic health records (EHRs), insurance claims, registries, and patient-reported outcomes offer valuable, albeit messy and fragmented, glimpses into the patient journey. Yet even RWD struggles to paint a complete picture in rare diseases. This is where generative AI steps in.

 

Enter generative AI: Making data where there is none

Generative AI — especially models like Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and, more recently, large foundation models — has a transformative ability: it can learn patterns from limited datasets and generate synthetic yet realistic datasets.

How it works

  1. Learning from RWD: Even small datasets from rare disease patients can be used to train and fine-tune generative models. These models identify patterns, distributions, and time-dependent relationships present in the data.
  2. Synthesizing patients: Once trained, the model can create new, synthetic patient records that preserve the statistical properties and characteristics of the original data. These “digital patients” simulate disease progression, treatment responses, and comorbidities.
  3. Validating realism: Synthetic data must be validated to ensure it reflects the real-world data it was trained on. Techniques like distributional comparison, propensity scoring, and expert validation are used to ensure accuracy and utility (see the sketch after this list).
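
Condensed into a runnable toy, the loop above might look like the following; all data are simulated, and a simple Gaussian model stands in for a real GAN or VAE:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical "real" cohort: age and a biomarker for 50 rare-disease patients.
real = rng.multivariate_normal(mean=[42.0, 3.1],
                               cov=[[81.0, 4.5], [4.5, 0.64]], size=50)

# Steps 1-2 (learning + synthesis), reduced to the simplest possible form:
# fit a multivariate Gaussian to the real data and sample "digital patients".
# Real generative models learn far richer structure, but the
# train-then-sample pattern is the same.
mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=500)

# Step 3 (validating realism): per-variable distributional comparison via a
# two-sample Kolmogorov-Smirnov test; a large p-value means no detectable
# mismatch between the real and synthetic marginal distributions.
for j, name in enumerate(["age", "biomarker"]):
    ks = stats.ks_2samp(real[:, j], synthetic[:, j])
    print(f"{name}: KS statistic={ks.statistic:.3f}, p={ks.pvalue:.3f}")
```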

 

Why synthetic data matters for rare diseases

Synthetic data can enhance rare disease clinical research in many ways, including:

 

1. Augmenting small cohorts

Synthetic data can boost sample sizes for rare disease studies, enabling:

  • Simulation of clinical trials
  • Development of more robust predictive models
  • Generation of synthetic control arms where traditional controls are ethically or logistically impractical

 

2. Enhancing privacy

In rare diseases, patient re-identification risk is heightened due to unique phenotypes or genetic markers. Synthetic data protects patient privacy while preserving the utility of the data.

 

3. Facilitating global collaboration

As synthetic data is deidentified, it facilitates data sharing among researchers and institutions and across borders, minimizing regulatory hurdles and fostering collaborative discovery.

 

4. Accelerating drug development

Pharma and biotech companies can use synthetic data to:

  • Test drug targeting strategies
  • Model long-term outcomes
  • Conduct in silico trials in the earliest stages of development

 

Challenges and considerations

While promising, this approach is not without its challenges:

  • Bias amplification: Synthetic data reflects the biases of its training data. If the RWD is incomplete or skewed, the synthetic outputs will be too. Strategies to handle bias are essential.
  • Regulatory acceptance: Regulatory bodies are still evaluating how to incorporate synthetic data into approval pathways.
  • Validation standards: There is a need for consistent benchmarks and best practices for validating synthetic data — both in terms of privacy and utility, as well as broader generative AI applications in healthcare.

 

Looking ahead

The marriage of generative AI and RWD opens new doors for rare disease research. With the ability to synthesize patient data that preserves real-world complexity, we can begin to break free from the constraints of scarcity — generating insights, hypotheses, and interventions that were once out of reach.

As we move forward, interdisciplinary collaboration among clinicians, data scientists, regulatory bodies, and patient advocacy groups will be key to harnessing this potential ethically and effectively.

 

Interested in learning more?

Download our complimentary ebook, Rare Disease Clinical Trials: Design Strategies and Regulatory Considerations:

Analyzing Endpoints in Multiple Sclerosis Clinical Trials: Statistical Considerations

Clinical trials studying multiple sclerosis (MS) — a chronic, inflammatory, progressive, autoimmune disease affecting the central nervous system — employ various common endpoints. These typically target frequency of relapses, progression of disability, and MRI activity, as well as “no evidence of disease activity” (NEDA), a composite endpoint combining the prior three components. Analyzing these can present several statistical challenges.

Here, we provide an overview of the common endpoints (including their definitions) in MS clinical trials, the key statistical considerations and modeling techniques used to analyze them, and ways to overcome several statistical challenges we have encountered in this indication.

 

Frequency of relapses

A key clinical feature of MS is the occurrence of relapses, i.e., episodes of new or progressing neurological dysfunction, lasting for a period, followed by periods of remission. Distinguishing a relapse from other clinical conditions may not be straightforward; therefore, an accurate definition should be included in the protocol. A typical endpoint here is the number of relapses occurring within one year, i.e., the annualized relapse rate (ARR).

From a statistical point of view, this constitutes count data, and thus we analyze it using:

  • Poisson regression model, or
  • Negative binomial model, which better accommodates a high number of zero counts (i.e., zero inflation) and overdispersion (i.e., greater variability than expected, with the variance larger than the mean).

Both models are often adjusted for MS prognostic factors.

In recent years, we’ve seen a decrease in the number of relapses, largely due to earlier MS diagnoses and the widespread use of high-efficacy disease-modifying therapies. In the absence of relapses, derivation of the ARR can go wrong: when performing quality checks of sponsors’ analyses, especially where an unadjusted approach was used for patients with no relapses, we find that the exposure or observation time in the study is mistakenly not accounted for. The sketch below shows how a log-exposure offset handles this.
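
As a minimal illustration (simulated data, hypothetical variable names), a negative binomial model for the ARR with a log-exposure offset might look like this in Python with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

# Simulated analysis dataset (all variable names are hypothetical).
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),                # 0 = control, 1 = treated
    "baseline_edss": rng.uniform(0, 6.5, n).round(1),
    "years_on_study": rng.uniform(0.5, 2.0, n),  # exposure / observation time
})
rate = np.exp(-0.2 + 0.1 * df["baseline_edss"] - 0.5 * df["arm"])
df["relapses"] = rng.poisson(rate * df["years_on_study"])

# Negative binomial regression for the ARR; the log-exposure offset is what
# keeps patients with zero relapses and short follow-up handled correctly.
fit = smf.glm(
    "relapses ~ arm + baseline_edss",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["years_on_study"]),
).fit()
print(fit.summary())  # exp(arm coefficient) = relapse-rate ratio vs. control
```

The offset ensures that a patient followed for six months contributes half the exposure of a patient followed for a year, which is exactly the adjustment missed in the QC findings described above.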

Another common approach is time to first relapse, analyzed using survival analysis methods. At Cytel, our teams have also explored recurrent event analysis using the mean cumulative function (MCF) method, though such analyses are limited by the evolving nature of relapse patterns.

 

Progression of disability

The most widely used measurement tool to describe disease progression in patients with MS is the Expanded Disability Status Scale (EDSS).1 The EDSS includes a neurological evaluation of seven functional systems (plus “other”) in conjunction with observations and information concerning gait and use of assistive devices to rate the level of disability, resulting in a single score.

While EDSS is a widely accepted measure, it has been criticized for certain limitations. For example, a 1-point increase at lower EDSS levels (e.g., 2.0 to 3.0) may reflect different functional implications than the same increase at higher levels (e.g., 6.0 to 7.0).

To address these limitations, we commonly use Confirmed Disability Progression (CDP), which is based on an increase in the EDSS score (e.g., 0.5, 1.0, or 1.5 points) that is confirmed after a specified period (e.g., 3 or 6 months), depending on the baseline EDSS value.

It’s important to note that neither the terminology nor the definition is standardized; there are several variations in its application across different sponsors.

When Cytel analyzes CDP as a binary endpoint (yes/no), we typically use logistic regression adjusting for relevant MS prognostic factors. One of the challenges encountered with this approach is the presence of incomplete data when attempting to obtain a relevant assessment to confirm the disease progression:

  • Some patients withdraw from the study.
  • In other cases, the EDSS assessments are not frequent or consistent enough to confirm progression. For example, if confirmation of CDP is required at 6 months, the study protocol should define the corresponding visit frequency, and the statistical analysis plan should specify the minimum time interval required for the confirmatory assessment (e.g., 6 months × 30 days − 14 days), so that confirmation is not missed by a few days.

Both scenarios result in patients being classified with “unknown” status, which can complicate the interpretation and robustness of the analysis.
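
For illustration, here is a toy derivation of confirmed progression from longitudinal EDSS data. The thresholds follow one common (but, as noted, not standardized) convention, and the 14-day tolerance mirrors the windowing example above; all names are hypothetical:

```python
def cdp_threshold(baseline_edss: float) -> float:
    """EDSS increase required for progression under one common convention."""
    if baseline_edss == 0:
        return 1.5
    if baseline_edss <= 5.5:
        return 1.0
    return 0.5

def is_confirmed_progression(baseline_edss, visits,
                             confirm_days=180, tolerance_days=14):
    """visits: chronologically ordered list of (study_day, edss) tuples.

    A progression counts as confirmed if a later visit, at least
    confirm_days - tolerance_days afterwards, still meets the threshold.
    """
    threshold = cdp_threshold(baseline_edss)
    for i, (day_i, edss_i) in enumerate(visits):
        if edss_i - baseline_edss >= threshold:
            for day_j, edss_j in visits[i + 1:]:
                if (day_j - day_i >= confirm_days - tolerance_days
                        and edss_j - baseline_edss >= threshold):
                    return True
    return False

# Baseline EDSS 2.0; increase to 3.0 at day 90, still present ~6 months later.
print(is_confirmed_progression(2.0, [(0, 2.0), (90, 3.0), (270, 3.5)]))  # True
```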

Another approach is to analyze time to first CDP via survival data analysis methods.

 

Magnetic resonance imaging (MRI)

Relapses and EDSS alone may not be sufficient indicators of MS activity. The inflammation caused by MS does not always result in a relapse or any visible symptoms and may only be seen on an MRI.

The most common MRI-related endpoints are T2 lesion count or volume, active (i.e., new or enlarging) T2 lesion count, and T1 (Gd+ or hypointense) lesion count or volume.

The expectation for treatment with an MS drug is that MRI activity is also “suppressed” (broadly speaking, we do not observe new or enlarging T2 lesions, new T1 Gd+ lesions do not appear, etc.). This usually happens early in the study, in line with the expected onset of action for a given drug. However, new lesions may eventually appear, or existing ones may grow in size.

For the statistical analysis, MRI-derived endpoints reflect MRI activity, such as counts of new or enlarging lesions. The counts can be further classified on a:

  • binary scale, where at least one new or enlarging lesion is present vs. none (coded as 1/0 for lesion count >0 / =0).
  • continuous scale, e.g., changes in MRI activity, such as counts of new or enlarging lesions compared to baseline (or any other relevant visit).

Such data are challenging but can be handled using parametric approaches, including:

  • counts via Poisson regression or, in case of zero inflation, via a generalized linear model assuming a negative binomial distribution, adjusted for MS prognostic factors.
  • the binary scale via the McNemar test, since a shift in lesion status across visits (from present to none) is frequently of interest.
  • the continuous endpoint (change from baseline) via a linear regression model (mixed, if random effects are accounted for), adjusted for MS prognostic factors.

(The non-parametric methods are not discussed here.)

The analysis is, however, much more complex due to the nature of data collection for lesion counts: the assessment of the lesions may be performed multiple times per visit (such as for T1 Gd+ lesions), or in reference to a previous MRI scan to detect new or enlarging lesions. This is reflected in the analysis by random effects, standardization, or an offset in the count models, accounting for the number of scans or the time since baseline.
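
As a small illustration of the binary shift analysis mentioned above, here is a McNemar test on hypothetical paired lesion data, using statsmodels:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired data for 100 patients: presence of >=1 active lesion
# (1) vs. none (0) at baseline (rows) and at a post-treatment visit (columns).
table = np.array([[12, 25],   # active at baseline: 12 still active, 25 resolved
                  [4, 59]])   # clear at baseline: 4 newly active, 59 stay clear

# McNemar's test asks whether the discordant counts (25 resolved vs. 4 newly
# active) are compatible with no systematic shift between the two visits.
result = mcnemar(table, exact=True)
print(f"statistic={result.statistic}, p-value={result.pvalue:.4f}")
```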

 

NEDA

No evidence of disease activity (NEDA), also referred to as freedom from disease activity, is one of the composite endpoints taking clinical and imaging endpoints into account.

NEDA is defined by the absence of:

  • Relapse
  • CDP
  • MRI activity

NEDA is typically analyzed as a binary outcome (yes/no), using logistic regression adjusted for MS prognostic factors.
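
A minimal sketch of this analysis follows (simulated data with hypothetical variable names; the component flags are random here, so no real treatment effect should be expected):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300

# Hypothetical analysis dataset with component flags already derived.
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),
    "baseline_edss": rng.uniform(0, 6.5, n).round(1),
    "relapse": rng.integers(0, 2, n),
    "cdp": rng.integers(0, 2, n),
    "mri_activity": rng.integers(0, 2, n),
})

# NEDA = absence of all three components.
df["neda"] = ((df["relapse"] == 0) & (df["cdp"] == 0)
              & (df["mri_activity"] == 0)).astype(int)

# Logistic regression adjusted for a prognostic factor (baseline EDSS).
fit = smf.logit("neda ~ arm + baseline_edss", data=df).fit(disp=0)
print(fit.summary())
```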

Since the goal of most MS disease-modifying therapies is to reduce relapse frequency, relapses tend to be less common in treated people with MS. As a result, the two remaining components of NEDA, CDP and MRI activity, may have a greater impact on its outcome.

However, there is no crisp definition of the CDP and MRI endpoints (frequency and parameter selection), and thus reports of NEDA from different studies may not be comparable:

  • We observe that studies present 3-month, 6-month, or even 12-month CDP,
  • MRI activity may be defined in various ways, including
    • frequency of MRI scans,
    • selection of relevant MRI readouts (e.g., presence of T1 Gd+ lesions or new or enlarging T2 lesions), etc.

The more frequently the measures are taken, the higher the likelihood of identifying progression. In post-marketing studies that reflect real-world clinical practice (half-yearly or yearly visits), the EDSS measures necessary for the CDP definition, as well as MRI scans, are often not collected at the necessary frequency.

In addition, for assessments scheduled relatively infrequently, each missing assessment may heavily affect NEDA, and in our experience there is no harmonization across companies in how such missingness is handled. In such situations, NEDA might be analyzed differently, including via time-to-event analyses of NEDA using survival analysis methods. Overall, the potential challenges (whether related to data collection or analytical methods) need to be carefully considered at the early stages of study planning.

 

Final takeaways

While these endpoints provide valuable frameworks for assessing disease progression and treatment efficacy in patients with MS, statistical challenges remain. Addressing these challenges, in close collaboration with medical experts, is essential to ensure that the analyses remain both scientifically sound and clinically meaningful.

From Metadata to Submission: Rule-Based Robotic Process Automation for Statistical Programming Excellence

In the race to modernize data operations in clinical research and regulatory submissions, Robotic Process Automation (RPA) powered by rule-based systems has emerged as a dependable and high-impact solution. These systems offer clarity, control, and reproducibility — critical traits for industries like biopharma where regulatory compliance and data integrity are non-negotiable.

Here, we discuss rule-based RPA as the foundation for a scalable and auditable standards automation pipeline.

 

Rule-based automation: Transparent, trusted, and tunable

Unlike probabilistic models, rule-based systems operate on deterministic logic. Every output is traceable back to an explicit rule, which enhances trust and simplifies troubleshooting. This transparency is particularly valuable when processes must be easily explained to stakeholders and auditors.

Key strengths of rule-based RPA include:

Transparency

Each step in the workflow is rule-driven, making the logic easy to inspect, validate, and justify. This ensures regulatory reviewers can clearly understand how data was transformed or outputs generated — vital in submission contexts.

Consistency

Standard rules applied across studies generate consistent outputs. For example, Cytel’s ALPS system creates SDTM and ADaM code from structured specifications, producing reliable results that hold up across different projects and teams.

Customizability

Rule-based systems are modular. Teams can easily adapt existing rules to accommodate study-specific needs without overhauling the entire system. Tools like Prism allow this by applying both generic rules and study-specific layers for enriched metadata processing.
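
To illustrate the pattern (and only the pattern; this is a toy, not Cytel’s ALPS or Prism), here is a minimal rule engine that layers a study-specific rule over generic ones and emits deterministic, traceable code lines from a metadata specification:

```python
# Generic rules: deterministic templates keyed by variable type.
GENERIC_RULES = {
    "char": "length {name} ${length}.;",
    "num": "length {name} 8.;",
}

def study_layer(var):
    """Study-specific rule layered on top of the generic ones."""
    if var["name"] == "AVAL":
        return "format AVAL 8.2;"
    return None

def generate(metadata):
    """Apply generic rules, then the study layer; every output line is
    traceable to exactly one explicit rule."""
    lines = []
    for var in metadata:
        lines.append(GENERIC_RULES[var["type"]].format(**var))
        extra = study_layer(var)
        if extra:
            lines.append(extra)
    return "\n".join(lines)

# A two-variable metadata specification (hypothetical).
spec = [{"name": "USUBJID", "type": "char", "length": 40},
        {"name": "AVAL", "type": "num", "length": 8}]
print(generate(spec))
# length USUBJID $40.;
# length AVAL 8.;
# format AVAL 8.2;
```

Because the mapping from rule to output is one-to-one, an auditor can point at any generated line and name the rule that produced it, which is the transparency property described above.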

 

Cytel’s metadata-driven RPA workflow in action

Our internal automation pipeline demonstrates the power of rule-based RPA. It’s built on a modular architecture where each tool performs a specific, rules-driven task:

  • ALPS: Converts metadata specifications into ready-to-run SAS code for SDTM and ADaM datasets, reducing manual programming and minimizing error risks.
  • Lighthouse: Enables biostatisticians to build mock shells using reusable templates, ensuring consistency in table and listing structures.
  • Prism: Extracts metadata from mock shells and transforms it into XML-format ARMs (Analysis Results Metadata), enriching it through rules and generating code for up to 60% of standard safety outputs.
  • TAB Macros and CytelDocs: Automate the creation of summary tables and documentation, saving hours of effort and ensuring compliance with standardized formats.

This end-to-end pipeline reduces manual touchpoints, maintains high quality, and boosts team efficiency.

 

Where generative AI complements RPA

While rule-based systems are ideal for tasks requiring consistency and auditability, generative AI can complement these systems — particularly in areas where variability is acceptable and outputs don’t require deterministic reproducibility. For example, Gen AI can assist with:

  • Drafting exploratory narratives or documentation
  • Suggesting code for non-critical outputs
  • Enhancing user interfaces with intelligent prompts
  • Enriching the set of study-specific rules to be used

However, these AI-driven capabilities are best applied where hallucinations won’t compromise integrity, and outputs don’t demand rigid consistency.

 

Business and quality benefits of rule-based RPA

By relying on rule-based RPA for core data workflows, we’ve realized several tangible gains:

  • Time efficiency: Standard code is generated automatically, freeing time for custom analysis.
  • Reduced redundancy: Developers no longer rewrite common code across projects.
  • Improved QA: Outputs are independently validated and built on rigorously tested rule sets.
  • Collaboration at scale: Uniform rules simplify onboarding and knowledge transfer.
  • Focus on what matters: Teams can concentrate on non-standard elements that require expertise.

 

Final takeaways

Rule-based RPA systems provide the transparency, structure, and adaptability required for high-stakes data environments. At Cytel, we’ve found them indispensable in our mission to expedite regulatory submissions without compromising on quality or compliance. As AI continues to evolve, generative technologies may enrich this foundation — but rule-based automation remains the core engine that ensures accuracy, accountability, and speed.

External Control Arms: A Powerful Tool for Oncology and Rare Disease Research

In clinical research, the randomized controlled trial (RCT) has been considered the gold standard. Yet in many areas — especially in oncology and rare diseases — running an RCT with a balanced control arm is not always possible. Patients, physicians, and regulators often face a difficult reality: how do we evaluate promising new therapies when traditional designs aren’t feasible?

This is where external control arms (ECAs) come into play. By carefully drawing on existing data sources and applying rigorous methodology, ECAs can help provide the context and comparative evidence needed to make better decisions.

Here, we will explore why ECAs are particularly valuable in oncology and rare diseases, how they support decision-making and study design, what data sources they can rely on, and which statistical methods are essential to reduce bias. We will also introduce the concept of quantitative bias analysis and conclude with why experienced statisticians are key to the success of this methodology.

 

Why external control arms matter in oncology and rare diseases

Oncology and rare disease research share several challenges that make traditional RCTs difficult:

  • Small patient populations: In rare diseases, the number of eligible patients is often extremely limited. Asking half of them to enroll in a control arm may make recruitment impossible.
  • High unmet need: In oncology, patients and families are eager for new options. Many consider it unacceptable to randomize patients to placebo or outdated standards of care.
  • Ethical constraints: For life-threatening conditions, denying patients access to an experimental therapy can be ethically challenging.
  • Rapidly changing standards of care: In oncology, new treatments are approved frequently. A control arm that was relevant when a trial began may become outdated by the time results are available.

In such contexts, single-arm studies (where all patients receive the experimental therapy) are common. But single-arm results alone are not sufficient. Without a comparator, how do we know if the observed survival or response rate truly reflects an advance? ECAs provide the missing context.

Even when a trial includes a control arm, unbalanced designs — such as smaller control groups or cross-over to experimental treatment — can limit the ability to make clean comparisons. External controls can augment these designs, helping to stabilize estimates and provide reassurance that results are robust.

 

Supporting internal and regulatory decision-making

ECAs serve multiple purposes:

  1. Internal decision-making:
    • Companies developing new therapies must decide whether to advance to the next trial phase, expand into new indications, or pursue partnerships.
    • ECAs help answer questions like: Is the observed benefit large enough compared to historical data? Do safety signals look acceptable in context?
  2. Regulatory decision-making:
    • Regulatory agencies such as FDA and EMA increasingly accept ECAs as part of submissions, especially in rare diseases and oncology.
    • While not a replacement for RCTs, ECAs can strengthen the evidence package and demonstrate comparative effectiveness in situations where randomization is not feasible.
  3. Helping the medical community:
    • Physicians, payers, and patients need to interpret trial results. An overall survival of 18 months in a single-arm study may sound promising, but how does it compare to similar patients receiving standard of care?
    • ECAs help put numbers into perspective, allowing the community to better understand the true value of a new therapy.

 

Designing better studies with ECAs

External controls are not only a tool for analyzing results — they can also improve study design.

  • Feasibility assessments: By examining real-world data or prior trial results, sponsors can estimate expected event rates, patient characteristics, and recruitment timelines. This reduces the risk of under- or over-powered studies.
  • Endpoint selection: Understanding how endpoints behave in historical or real-world settings helps refine choices for the trial, ensuring relevance to both regulators and clinicians.
  • Eligibility criteria: RWD and earlier trial data can reveal which inclusion/exclusion criteria are overly restrictive. Adjusting them can broaden access while maintaining scientific rigor.
  • Sample size planning: By leveraging ECAs, trialists may reduce the number of patients required for an internal control arm, easing recruitment in small populations.

In other words, ECAs can shape trials from the start, rather than being seen only as a “rescue” option after the fact.

 

Sources of external control data

An ECA is only as good as the data it relies on. Broadly, there are three main sources:

  1. Other clinical trials:
    • Prior trials of standard of care treatments can serve as external comparators.
    • Individual patient-level data (IPD) is preferred, but often only summary data is available.
    • These data are typically high quality but may not perfectly match the new study population.
  2. Published studies:
    • Systematic reviews and meta-analyses of the literature can provide comparator data.
    • Useful when IPD is unavailable but limited by reporting standards and heterogeneity across studies.
  3. Real-world data (RWD):
    • Sources include electronic health records, registries, and insurance claims databases.
    • These capture routine clinical practice, reflecting the diversity of real patients.
    • However, RWD often suffers from missing data, variable quality, and lack of standardized endpoints.

Each source has strengths and weaknesses. Often, the best approach is to triangulate across multiple sources, ensuring that conclusions do not rest on a single dataset.

 

The value of earlier clinical trials

Earlier-phase trials (Phase I and II) can be particularly valuable in constructing ECAs. These studies often include control arms, detailed eligibility criteria, and well-captured endpoints.

For rare diseases and oncology, earlier trials may be the only available benchmark. By carefully aligning populations and endpoints, statisticians can extract maximum value from these datasets.

The challenge, of course, is ensuring comparability. Patient populations may differ in prognostic factors, supportive care practices may evolve, and definitions of endpoints may shift over time.

This is where advanced statistical methods become essential.

 

Reducing bias with propensity scoring

One of the key criticisms of ECAs is the risk of bias. Without randomization, patients receiving the experimental therapy may differ systematically from those in the external control.

Propensity score methods are a powerful way to reduce this bias. The idea is simple:

  • For each patient, estimate the probability (the “propensity”) of receiving the experimental treatment based on baseline characteristics.
  • Match or weight patients in the external control group so that their distribution of covariates mirrors that of the trial patients.

This approach creates a “pseudo-randomized” comparison, balancing measured variables. While it cannot eliminate unmeasured confounding, it greatly improves fairness in comparisons.
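
A minimal sketch of the weighting variant (simulated data, hypothetical covariates), estimating propensity scores by logistic regression and reweighting external controls toward the trial population:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)

# Pooled data: 100 trial patients (treated=1) and 300 external controls
# (treated=0), with deliberately shifted baseline covariates.
df = pd.DataFrame({
    "treated": np.r_[np.ones(100), np.zeros(300)].astype(int),
    "age": np.r_[rng.normal(55, 8, 100), rng.normal(62, 9, 300)],
    "ecog": np.r_[rng.integers(0, 2, 100), rng.integers(0, 3, 300)],
})

# Step 1: estimate each patient's propensity to be in the trial arm.
ps_model = smf.logit("treated ~ age + ecog", data=df).fit(disp=0)
df["ps"] = ps_model.predict(df)

# Step 2: inverse-probability weights in the ATT form, so external controls
# are reweighted to resemble the trial population (trial patients keep w=1).
df["w"] = np.where(df["treated"] == 1, 1.0, df["ps"] / (1 - df["ps"]))

# Balance check: weighted covariate means should now be close across groups.
for col in ["age", "ecog"]:
    m_trial = df.loc[df.treated == 1, col].mean()
    m_ctrl = np.average(df.loc[df.treated == 0, col],
                        weights=df.loc[df.treated == 0, "w"])
    print(f"{col}: trial={m_trial:.2f}, weighted control={m_ctrl:.2f}")
```

The outcome comparison would then use these weights (for example, in a weighted survival model), and, as noted above, only measured confounders are balanced this way.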

 

Quantitative bias analysis: Addressing the unmeasured

Even with careful propensity scoring, unmeasured confounding remains a concern. Clinical researchers often ask: What if there are factors we didn’t account for?

This is where quantitative bias analysis (QBA) enters. QBA does not eliminate bias but helps us understand its potential impact.

For example:

  • Analysts can model how strong an unmeasured confounder would need to be to explain away the observed treatment effect.
  • Sensitivity analyses can simulate scenarios with different assumptions about unmeasured variables.

By explicitly quantifying uncertainty, QBA provides transparency. Regulators and clinicians gain confidence that conclusions are robust — or at least, that limitations are clearly understood.
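
One widely used tool of this kind is the E-value (VanderWeele and Ding), sketched here: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed effect:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio (point estimate)."""
    if rr < 1:              # protective effects: invert to the >1 scale first
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Example: an observed risk ratio of 0.60, treated vs. external control.
print(round(e_value(0.60), 2))  # 2.72
```

In this example, a confounder would need a risk-ratio association of roughly 2.7 with both treatment and outcome to explain the effect away entirely; weaker confounding could only attenuate it.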

 

The need for experienced statisticians

Constructing an ECA is not a “plug-and-play” exercise. It requires expertise across multiple domains:

  • Data curation: Selecting fit-for-purpose datasets, cleaning and harmonizing variables, and aligning endpoints.
  • Study design: Defining eligibility, follow-up time, and analysis plans that minimize bias.
  • Statistical methodology: Applying techniques like propensity scoring, inverse probability weighting, Bayesian borrowing, and QBA.
  • Regulatory communication: Explaining assumptions, limitations, and sensitivity analyses in language that regulators and clinicians can understand.

In short, ECAs demand both technical skill and strategic judgment. Partnering with experienced statisticians ensures that external controls provide credible, decision-grade evidence rather than misleading comparisons.

 

Final takeaways

External control arms are rapidly becoming an indispensable tool in modern clinical research — especially in oncology and rare diseases, where traditional RCTs often fall short.

They offer:

  • Context for single-arm studies and unbalanced designs.
  • Support for both internal and regulatory decisions.
  • Guidance in study design and feasibility planning.

By leveraging diverse data sources — from earlier trials to real-world evidence — and applying rigorous methods such as propensity scoring and quantitative bias analysis, ECAs can bring clarity and credibility to difficult development programs.

But the value of ECAs depends on how well they are planned and implemented. Done poorly, they risk misleading decisions. Done well, they empower researchers, regulators, and clinicians to make better choices for patients.

As the field evolves, one thing is clear: the expertise of skilled statisticians is the cornerstone of successful ECAs.

 

Interested in learning more?

Join Alexander Schacht, Steven Ting, and Vahe Asvatourian for their upcoming webinar, “Beyond the Standard Clinical Trial in Early Development: When and Why to Consider External Controls” on Thursday, October 16 at 10 a.m. ET:

Agentic Autonomy: How Multi-Agent Systems Could Orchestrate the Future of Clinical Development

In recent years, artificial intelligence has evolved beyond basic pattern matching to become capable of autonomous reasoning, multi-step planning, and even delegation. This transition — from passive tools to goal-driven, reasoning agents — marks the rise of agentic AI.

For the life sciences sector, and especially clinical development, this evolution arrives at a critical time. Clinical trials are increasingly complex, cross-functional, and data-intensive. Agentic AI offers not just faster tools, but the possibility of autonomous collaboration — teams of agents working in harmony to reduce burden, increase efficiency, and shorten timelines.

Here we explore the evolution of agentic AI and how higher levels of autonomy could transform clinical development from reactive execution to proactive, intelligent orchestration.

 

The evolution of agentic AI

Agentic AI evolves through distinct levels of capability. Each stage unlocks new functionality — from static models to ecosystems of communicating agents. Here’s a clear breakdown of the five major levels:

[Figure: the five levels of agentic AI, from static models (Level 1) to multi-agent ecosystems (Level 5)]

Each level builds toward intelligent autonomy. The transition from Level 3 to Levels 4 and 5 introduces intentional behavior, goal-setting, and inter-agent collaboration — the foundations of autonomous operations in clinical development.

 

Agentic AI in clinical development: A new operating model

Clinical development is not just complex — it’s interdependent. Every milestone relies on the seamless handoff and integration of data, code, documents, and decisions. Agentic AI, particularly at Levels 4 and 5, promises to re-architect this model.

 

Level 4: Planning and reasoning agents

These agents can independently break down goals, design execution paths, and adapt to changing environments. Here’s how they can drive value:

  • Medical writing agents
    • What they do: Generate drafts for protocols, CSRs, and patient narratives.
    • How they help: Understand document structures, integrate real-time data, and adapt language for regulatory or clinical audiences.
    • Outcome: Faster document turnaround, reduced rework, and scalable writing support.

 

  • Statistical programming agents
    • What they do: Develop and validate analysis code in SAS, R, or Python.
    • How they help: Plan logical sequences, debug outputs, and dynamically update based on protocol amendments.
    • Outcome: Accelerated code generation with built-in quality assurance.

 

  • Information synthesis agents
    • What they do: Retrieve and synthesize information from multiple domains — scientific literature, regulatory guidelines, real-world data, health system policies, and reports on unmet medical needs.
    • How they help: Prioritize and contextualize inputs to support clinical design, indication selection, and risk-benefit assessments.
    • Outcome: Broader strategic alignment and better-informed cross-functional planning.

 

Level 5: Multi-agent systems

At this level, clinical development becomes an ecosystem of agents, each with a specialized role, working under the coordination of orchestrator agents that function like project managers.

  • Orchestrator agents
    • What they do: Assign tasks, monitor progress, and realign workflows in real time.
    • How they help: Adjust deliverables dynamically as inputs change or downstream agents complete their tasks.
    • Outcome: Continuously managed, self-optimizing trial execution.

 

  • Agent networks
    • Example: A data management agent processes raw datasets and hands outputs to a statistical agent, which triggers a writing agent to draft updated narratives — all autonomously.
    • Value: End-to-end automation with minimal human handoffs.
    • Outcome: Real-time trial updates and agility under pressure (a toy sketch of this hand-off pattern follows below).
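
A minimal sketch of that pattern (agent names and behaviors are hypothetical stand-ins; a real system would wrap LLM calls, tools, and human review gates behind each step):

```python
# Toy orchestration loop: an orchestrator dispatches work to specialist
# agents in sequence and chains their outputs. Purely illustrative.

def data_management_agent(work):
    work["dataset"] = sorted(work.pop("raw"))      # stand-in for data cleaning
    return work

def statistical_agent(work):
    ds = work["dataset"]
    work["result"] = sum(ds) / len(ds)             # stand-in for an analysis
    return work

def writing_agent(work):
    work["narrative"] = (f"Mean outcome was {work['result']:.1f} "
                         f"(n={len(work['dataset'])}).")
    return work

PIPELINE = [data_management_agent, statistical_agent, writing_agent]

def orchestrator(raw):
    """Assign tasks, monitor progress, and hand off outputs downstream."""
    work = {"raw": raw}
    for agent in PIPELINE:
        work = agent(work)
        print(f"{agent.__name__}: done")           # progress monitoring
    return work

print(orchestrator([3, 5, 4, 6])["narrative"])     # Mean outcome was 4.5 (n=4).
```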

 

The benefits of the agent ecosystem

[Figure: benefits of the agent ecosystem]

From automation to autonomy

Agentic AI reflects an evolution from “AI that assists” to “AI that takes initiative” — supporting actions, learning from experience, and extending expertise across domains. In clinical development, where complexity continues to rise and efficiency is critical, this shift offers a meaningful opportunity rather than just an advantage.

As we look toward Levels 4 and 5, we can imagine a future where trials increasingly manage themselves, where teams are supported by networks of intelligent agents, and where human professionals gain more space to focus on innovation, thoughtful oversight, and meaningful patient outcomes.

 

Meet with us at ISPOR 2025!

Manuel Cossio will be in Glasgow for ISPOR Europe 2025! Click the link below to book a meeting, or stop by Booth #1024 to connect with our experts:

Career Perspectives: A Conversation with Naydene Slabbert

In this edition of our Career Perspectives series, we are delighted to feature Naydene Slabbert, Principal Clinical Data Manager at Cytel. Naydene shares insights from her career journey, discusses the critical role of early-stage clinical trial setup in ensuring the delivery of high-quality, actionable data, and reflects on the evolving role of data managers in clinical trials.

 

Can you give us a little background on your career so far? What led you to clinical data management, and how has your path evolved over the years?

My journey in clinical data management started over 23 years ago, and it’s been such a rewarding experience filled with growth, learning, and a lot of exciting challenges. I began my career at Quintiles (now IQVIA), where I started as an Assistant Data Coordinator and eventually became a Data Team Lead. Those early years gave me a solid foundation in clinical trial operations and sparked my interest in data quality and process improvement.

In 2021, I moved to DF/Net Research, where I led several high-profile studies and contributed to infrastructure and software development. That role helped me expand my technical and strategic skills, especially in managing complex, multi-site trials.

Now, I’m proud to be part of Cytel as a Principal Clinical Data Manager. My focus is on enhancing end-to-end data management processes, working closely with cross-functional teams, and making sure our systems support both scientific excellence and regulatory success. Over the years, my role has evolved from hands-on data work to strategic leadership, and I continue to be inspired by the impact that well-managed clinical data can have on public health and patient outcomes.

 

You’ve been supporting the lead on a major study that went live in September. What did your day-to-day work look like at this stage of the project?

During the go-live phase of the study I’m working on, my daily focus was to make sure our data management systems and processes were running smoothly and in sync across teams. It’s a crucial time where accuracy, quick thinking, and strong teamwork really matter.

I partnered closely with the study lead and various cross-functional teams to validate the Electronic Data Capture (EDC) system, double-checking that all edit checks and Case Report Forms (CRFs) were working as expected. We held daily huddles and status meetings to keep everyone aligned and moving forward, which made it easier to spot and tackle any issues early on.

This stage demanded a lot of agility, collaboration, and attention to detail — all with the goal of setting the study up for long-term success.

 

From preparing documents to getting the database ready for data collection — how do these early tasks set the foundation for a successful study?

The early stages of a clinical study really lay the groundwork for everything that follows. It’s where we take the scientific goals outlined in the protocol and turn them into practical, workable data processes. Getting this part right is key to the trial’s overall success.

A big part of this involves preparing core documents like the Data Management Plan, validation guidelines, and Standard Operating Procedures (SOPs). These aren’t just paperwork — they’re the playbook that keeps everyone aligned on exactly how data will be collected, reviewed, and reported. They help ensure consistency, compliance, and quality from start to finish.

At the same time, building and testing the database, from CRF design to edit checks and system integrations, is just as critical. This is where we make sure the tools for capturing data are user-friendly, accurate, and fully aligned with the protocol. A well-designed database helps reduce errors, speeds up query resolution, and supports faster decision-making.
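
To give one illustration of what an automated edit check might look like, the sketch below flags out-of-range and inconsistent values in a small, invented CRF dataset. In a real study these checks are configured in the EDC system and tailored to the protocol; the column names and limits here are hypothetical:

```python
# A hypothetical edit check, for illustration only. Real checks live in the
# EDC system; the column names and plausibility limits below are invented.
import pandas as pd

crf = pd.DataFrame({
    "subject_id": ["001", "002", "003"],
    "visit_date": pd.to_datetime(["2025-09-02", "2025-09-03", "2025-08-15"]),
    "consent_date": pd.to_datetime(["2025-09-01", "2025-09-05", "2025-08-10"]),
    "systolic_bp": [118, 260, 125],  # mmHg
})

def run_edit_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per failed check, mimicking an auto-generated query."""
    queries = []
    for _, row in df.iterrows():
        if row["visit_date"] < row["consent_date"]:
            queries.append((row["subject_id"], "Visit precedes informed consent"))
        if not 60 <= row["systolic_bp"] <= 200:
            queries.append((row["subject_id"], "Systolic BP outside plausible range"))
    return pd.DataFrame(queries, columns=["subject_id", "query_text"])

print(run_edit_checks(crf))  # subject 002 fails both checks in this toy data
```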

By putting in time and care upfront, we’re able to minimize potential risks, boost efficiency, and set the stage for a study that’s not only regulatory-ready but also delivers high-quality, actionable data. In my experience, a strong launch phase really sets the tone for everything that follows.

 

Now that the study has gone live, you're overseeing the data. What does that oversight involve, and how do you ensure data quality and consistency throughout the trial?

Once a study goes live, my role shifts into a proactive oversight phase where the focus is on maintaining data integrity, consistency, and compliance across all participating sites and systems.

Ultimately, my goal is to create a system of continuous quality assurance. By fostering strong communication, leveraging technology for real-time insights, and maintaining rigorous documentation, I help ensure that the data collected is accurate, timely, and meaningful. This supports both scientific outcomes and regulatory success, and ultimately, the patients.

 

What do you like best about your role, and about working at Cytel?

What I enjoy most about my role is the opportunity to lead complex studies that have real-world impact, while collaborating with talented teams across disciplines. I thrive on problem-solving and ensuring data quality from start to finish, and I appreciate the autonomy and trust I’m given to manage projects effectively.

As for Cytel, I value the supportive culture and global collaboration. The company encourages continuous learning and innovation, and I’ve found the environment to be both respectful and intellectually stimulating. It’s rewarding to be part of an organization that’s committed to advancing clinical research through data-driven solutions.

 

Is there a particular project or initiative you’ve worked on recently that you’re especially proud of?

One project I’m especially proud of is the trial I mentioned earlier, which went live recently. It’s a high-profile study with complex data requirements, and I’ve been deeply involved from the early planning stages through to go-live. I helped translate the protocol into robust data collection tools, oversaw database setup and testing, and now manage ongoing data oversight. What makes this project stand out is the level of collaboration and precision required. It’s been incredibly rewarding to see our preparation pay off in a smooth launch!

 

You’ve held leadership roles across several organizations. What’s one piece of career advice you wish you had received earlier?

If I could go back and give myself one piece of advice early in my career, it would be: “Don’t shy away from getting your hands dirty.” I used to think leadership was mostly about strategy and oversight, but some of the most valuable lessons I’ve learned, and the biggest impact I’ve made, came from jumping into the details.

Whether it’s troubleshooting a tricky data issue, reviewing CRFs, or helping build out a database, being hands-on keeps you sharp and connected to the work. It also builds trust with your team. They can see you’re not just directing from the sidelines, but genuinely in it with them. That kind of involvement helps you lead with more empathy, insight, and credibility.

 

How has your approach to managing clinical data changed over time, especially as you’ve moved into more strategic roles?

Over time, my approach to managing clinical data has shifted from task execution to strategic oversight. Early in my career, I focused on operational details such as CRF design, data cleaning, and query resolution. As I moved into leadership roles, I began shaping data strategies, aligning them with protocol goals, regulatory requirements, and sponsor expectations. I now prioritize proactive planning, cross-functional collaboration, and system optimization to ensure data quality and efficiency across the entire study lifecycle.

In short, my approach today is about seeing the bigger picture and guiding teams toward smarter, scalable solutions.

 

Clinical trials can be complex, especially when managing data across different regions and systems. What are some of the biggest challenges you’ve faced in data management, and how did you tackle them?

One of the biggest challenges in clinical data management is keeping data consistent and reliable across multiple regions, especially in large, global studies. Each site often has its own workflows, varying levels of experience, and different infrastructure, which can lead to inconsistencies in how data is captured and handled.

To manage this, I focus on creating clear, well-structured documentation and providing centralized training to ensure everyone is on the same page. I also put strong validation processes in place to catch issues early. Working closely with vendors and site teams is key — it allows us to resolve problems in real time and keep the data aligned across systems.

Strategic planning and open communication play a big role too. By staying connected with all stakeholders and anticipating potential challenges, we’re able to maintain high-quality, harmonized data throughout the trial. It’s all about building trust, being proactive, and keeping the bigger picture in mind.

 

The field is evolving quickly. How do you see the role of data managers changing with the rise of AI, machine learning, and decentralized trials?

The role of data managers is indeed evolving rapidly with the rise of AI, machine learning, and decentralized trials. We’re moving from purely operational roles to more strategic ones, where we not only manage data but also help shape how it’s collected, interpreted, and used.

One trend I’m particularly excited about is the integration of AI and machine learning into data cleaning and query management. These tools help us move from reactive to proactive data oversight, identifying patterns and anomalies much earlier in the process. Decentralized trials are also reshaping how we collect and manage data — requiring more flexible systems and real-time validation strategies. As a data manager, I now focus more on system integration, data governance, and ensuring that new technologies align with regulatory standards and study goals.
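
As a simple illustration of the shift from reactive to proactive oversight, the sketch below flags statistical outliers in a lab variable using a basic z-score rule. The variable, values, and threshold are all hypothetical, and real ML-driven cleaning would use far richer models and study context:

```python
# Illustrative only: flag anomalous lab values with a simple z-score rule.
# Real AI/ML-driven data cleaning would use richer models and study context.
import statistics

hemoglobin = {"001": 13.8, "002": 14.1, "003": 6.2, "004": 13.5, "005": 14.4}

mean = statistics.mean(hemoglobin.values())
sd = statistics.stdev(hemoglobin.values())

# Flag values more than 1.5 standard deviations from the mean (a deliberately
# loose threshold for this tiny example) and route them for manual review.
flagged = {sid: val for sid, val in hemoglobin.items() if abs(val - mean) / sd > 1.5}
print(flagged)  # {'003': 6.2} -> would become a potential query for a data manager
```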

These innovations are pushing us to become more strategic, tech-savvy, and collaborative, which I find both challenging and energizing. It’s an exciting shift that requires both adaptability and a strong foundation in data quality principles.

 

What skills do you think will be essential for future data managers entering the field?

I think future professionals in this space will need a mix of technical know-how, strategic thinking, and flexibility to really thrive.

For starters, being comfortable with data is going to be key: knowing how to interpret it, analyze it, and use the tools that support automation and predictive insights. With AI, machine learning, and real-time data becoming more common, data managers will need to be confident working with more complex systems and datasets.

Technical skills will always be important. You’ll still need to work with EDC platforms, understand coding and data standards, and know how to manage data integrations. But we’re also seeing a growing need to understand APIs, interoperability, and data governance, especially as decentralized trials become more widespread.
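
As a minimal sketch of the kind of API-based integration work this involves, the snippet below pulls subject records over a REST API. The endpoint, token, and JSON shape are entirely hypothetical; real EDC vendors each expose their own documented APIs and authentication schemes:

```python
# A minimal sketch of pulling study data over a REST API, for illustration.
# The endpoint, token, and response shape are hypothetical placeholders.
import requests

BASE_URL = "https://edc.example.com/api/v1"  # hypothetical endpoint
TOKEN = "replace-with-a-real-api-token"

def fetch_subjects(study_id: str) -> list[dict]:
    """Retrieve subject records for one study and return them as dicts."""
    resp = requests.get(
        f"{BASE_URL}/studies/{study_id}/subjects",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()  # surface integration failures early
    return resp.json()

# subjects = fetch_subjects("STUDY-001")  # e.g. [{"subject_id": "001", ...}]
```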

Just as important are the soft skills. Strong communication, collaboration, and leadership are essential because data managers often act as the link between clinical, statistical, and operational teams. Being able to bring people together and keep everyone aligned makes a huge difference.

And finally, I’d say curiosity and a willingness to keep learning are vital. The field is changing fast, and those who stay open to new ideas and keep building their skills will be best positioned to lead the way.

 

As a remote employee, how do you maintain a healthy work-life balance? What strategies work for you, and do you feel supported by Cytel in this regard?

Working remotely definitely has its perks, but maintaining a healthy work-life balance takes a bit of intention. For me, it starts with having a clear plan for the day. I like to set goals, block out time for focused work, and make sure I take regular breaks. I also try to stick to a consistent “log-off” time, which helps me mentally switch from work mode to personal time.

One thing that’s really helped is having a dedicated workspace that’s separate from my living space. It makes it easier to stay focused during the day and disconnect in the evenings. I also make time for walks, family, and activities that help me recharge as those are just as important as meetings and deadlines.

Cytel has been incredibly supportive when it comes to flexibility and balance. There’s a lot of trust and autonomy, and the culture really respects personal time. Leadership encourages us to take care of ourselves, which makes remote work not only manageable but genuinely enjoyable.

 

You have been with Cytel for around 6 months now. What aspects of Cytel’s culture stood out to you when you joined?

What really stood out for me when I joined Cytel was how collaborative and welcoming the culture is. From day one, I felt like part of a team. People are generous with their time, open to new ideas, and genuinely invested in working together to achieve shared goals. It’s not just about getting the job done; it’s about how we support each other along the way.

I also really appreciate the company’s focus on quality and innovation. There’s a strong drive for continuous improvement, and strategic thinking is encouraged. That’s something I value deeply in my own work, especially when it comes to refining processes and contributing to cross-functional initiatives.

Another thing that impressed me is how well remote employees are supported. Even though I’m based in South Africa, I’ve felt fully connected to the global team. Communication is seamless, and there’s a real effort to make sure remote staff feel included and empowered.

Overall, Cytel fosters a culture that supports both professional growth and personal well-being, and that’s something I truly appreciate.

 

Finally, what are your main interests outside of work? What helps you recharge and stay inspired?

When I’m not working, you’ll probably find me out in the beautiful South African bushveld, book in hand, or enjoying coffee in the sun — my personal reset button. I love getting creative in the kitchen (even if some dinners end up as “learning experiences”) and tackling home improvement projects just for the fun of it.

I’m also a mom to teenagers, which means my life is a mix of deep chats, dramatic eye rolls, and trying to keep up with slang that changes weekly. They keep me laughing, grounded, and constantly on my toes.

Spending time with family and friends is what really recharges me. It’s the fuel that keeps everything else running smoothly.

Thank you, Naydene, for sharing your experience with us!

Naydene Slabbert