Interim Decision-Making in Clinical Trials: A Focus on Sample Size Re-Estimation and Population Enrichment

In the evolving landscape of clinical trial design, flexibility and efficiency have become essential for success. Sample size re-estimation (SSR) and population enrichment — both adaptive trial design methods — use interim data to make informed mid-trial adjustments. While they address different aspects — SSR focusing on how many patients to enroll and population enrichment focusing on which patients to include — both approaches aim to optimize trial outcomes, reduce unnecessary exposure, and make better use of limited resources.

This blog explores how these two methods work, their statistical underpinnings, and how they can be used to build more ethical, targeted, and cost-effective trials.

 

Sample size re-estimation

Sample size re-estimation is a type of clinical trial design adaptation in which the sample size can be reassessed at an interim look, based on accumulated data. Over the years, this method has grown in popularity for several reasons:

  1. SSR designs address variability in an observed treatment effect when the treatment shows some promise, but the effect size is not as pronounced as originally expected.
  2. SSR designs produce more ethical trials, as they limit the number of patients exposed to treatment until sufficient efficacy evidence is collected.
  3. These designs provide flexibility in trial implementation in cases of hard-to-recruit patient populations or rare disease.
  4. They allow for gatekeeping of investment by biotech companies that may face additional scrutiny to justify further R&D spend.
  5. They limit the pursuit of relatively small treatment effects that may not be clinically meaningful.

 

The CHW and CDL statistical methods for SSR

Following the seminal work on adaptive interim analysis by Bauer and Kohne (1994) and others, Cui, Hung, and Wang (1999) proposed a method, now known as CHW, that combines stage-wise test statistics with pre-specified weights to preserve the Type I error rate; it is widely accepted in the field of biostatistics today. A second method, proposed by Chen, DeMets, and Lan (2004) and known as CDL, provides an alternative to the weighted statistic in a confirmatory two-arm, two-stage design in which the sample size of the second stage is increased based on an unblinded analysis of the first-stage data.

Both CHW and CDL are accepted by regulatory bodies such as the FDA in cases where such an adaptation is deemed appropriate. The CHW method applies a lower weight to the contributions of the second stage of the design relative to those of the first stage, and the CDL method permits the use of conventional statistics for testing the primary endpoint at the end of the study while still preserving Type I error.
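To make the weighting concrete, here is a minimal sketch of the CHW combination test in Python. The stage sizes, z-statistics, and significance level are illustrative assumptions, not values from the original papers.

```python
from math import sqrt
from statistics import NormalDist

# Planned stage sizes fixed at the design stage -- illustrative values.
n1, n2 = 100, 100
N = n1 + n2

# CHW weights are pre-specified from the *planned* information fractions,
# so they stay fixed even if the second-stage sample size is later increased.
w1, w2 = sqrt(n1 / N), sqrt(n2 / N)  # w1**2 + w2**2 == 1

# Stage-wise z-statistics computed from the two independent cohorts (assumed).
z1, z2 = 1.10, 1.75

# Weighted combination statistic: standard normal under the null hypothesis,
# which is what preserves the Type I error after the adaptation.
z_chw = w1 * z1 + w2 * z2

alpha = 0.025  # one-sided
z_crit = NormalDist().inv_cdf(1 - alpha)
print(f"Z_CHW = {z_chw:.3f} vs critical value {z_crit:.3f}")
print("Reject H0" if z_chw > z_crit else "Fail to reject H0")
```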

 

Population enrichment

Population enrichment is a clinical trial design adaptation that allows data from an ongoing trial to be used to adjust the sample size of the entire study population or of a promising subpopulation defined by a specific biomarker or other characteristics. At the outset, the overall trial population is enrolled, regardless of biomarker status or other subgroup attributes. At the interim analysis, a decision can be made to continue enrolling the overall population, to restrict enrollment to a subgroup that is showing promise, or to terminate the entire study for futility. Restricting enrollment to a specific subgroup enriches the data collected for that subpopulation.
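The interim logic can be sketched schematically in Python. This is an illustrative decision rule only; the function and thresholds below are assumptions, not a validated design.

```python
def interim_decision(z_full: float, z_subgroup: float,
                     promise_bar: float = 1.0,
                     futility_bar: float = 0.0) -> str:
    """Schematic enrichment decision at the interim look.

    z_full     -- interim z-statistic in the overall population
    z_subgroup -- interim z-statistic in the biomarker-positive subgroup
    The thresholds are illustrative placeholders, not recommended values.
    """
    if z_full >= promise_bar:
        return "continue enrolling the overall population"
    if z_subgroup >= promise_bar:
        return "restrict enrollment to the promising subgroup (enrich)"
    if max(z_full, z_subgroup) <= futility_bar:
        return "terminate the entire study for futility"
    return "continue per the pre-specified adaptation rules"

print(interim_decision(z_full=0.4, z_subgroup=1.3))  # -> enrich
```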

There are several benefits to this adaptation, including:

  • Optimizing resource allocation by enriching promising subpopulations while avoiding continued investment in less successful ones.
  • Allowing investigators to examine a larger population while reducing the risk of trial failure or unnecessary drug exposure due to heterogeneity among the study’s subpopulations.
  • Increasing the probability of study success by expanding the sample size of promising subgroups.

 

How to model SSR and population enrichment

Both the CDL and CHW methods for sample size re-estimation, as well as population enrichment, are adaptations that can be modeled using Cytel’s East Horizon™ platform. Find out more by booking a product demonstration.

 

Final takeaways

Sample size re-estimation and population enrichment approaches are powerful adaptations in the biostatistician’s toolbox for advanced, cost-effective, and ethical clinical trial design. They empower sponsors to allocate R&D resources more appropriately towards promising treatments, while limiting exposure of patients to potentially ineffective or harmful treatments.

Smartwatches Are Transforming Clinical Trials: Insights from Digital Primary Endpoints

The landscape of clinical research is continually evolving, with a growing emphasis on leveraging digital technologies to enhance efficiency and data quality. Among these innovations, wearable devices like the Apple Watch have emerged as promising tools for continuous and remote patient monitoring.

We recently analyzed the current application of smartwatches in clinical trials, focusing on their role in capturing digital primary endpoints across a variety of therapeutic areas. Here, I share some of our key findings, including major application areas as well as the benefits and challenges associated with their wider adoption in clinical research.

 

Digital primary endpoints

One way smartwatches are being used in clinical trials is to collect digital primary endpoints — sensor-generated data often collected outside a clinical setting.

To understand the potential impact of smartwatches in this context, we analyzed 87 completed or terminated clinical trials listed on ClinicalTrials.gov that used Apple Watch technology, examining key variables such as therapeutic focus, endpoint types, geographic distribution, and study design. Here is what we found:

 

Key Findings

  • High completion rate: 93.1% of the trials were completed successfully.
  • Top therapeutic areas: Cardiology led with 28.7% of studies, followed by neurology (21.8%) and oncology (11.5%).
  • Common endpoints: ECG changes (18.4%), heart rate variability (12%), and oxygen saturation (10%) were the most frequently measured.
  • Study design: Interventional trials dominated (64%), with high recruitment rates across the board.
  • Geographic trends: North America hosted the majority of trials (55%), followed by Europe (30%).

 

Importantly, validation studies confirmed the diagnostic accuracy of these devices, supporting their potential for regulatory approval.

Figure 1. Leveraging Consumer-Grade Wearables in Clinical Trials: Insights From Digital Primary Endpoints

 

Cossio, M. & Gilardino, R. (2025, May 15). Leveraging Consumer-Grade Wearables in Clinical Trials: Insights From Digital Primary Endpoints [Conference presentation]. ISPOR 2025, Montreal, Canada.

 

Why wearables matter in clinical trials

In clinical trials, smartwatches offer several unique advantages:

  • Continuous, remote monitoring: Smartwatches enable continuous, remote monitoring of patients, reducing the need for in-person visits and enhancing data collection.
  • Scalability: Smartwatch use is ideal for decentralized or hybrid trials, where flexibility and patient engagement are key, enabling participation across wide geographies.
  • Reduced costs: Smartwatches can also help reduce trial costs by requiring fewer site visits, enabling decentralized trials, and providing real-time data collection with automated uploads.
  • Improved patient adherence and engagement: Smartwatches often include reminders, notifications, and user-friendly interfaces that help patients stay compliant with treatment schedules, data input, and study protocols.
  • Objective, high-frequency data: Smartwatches gather physiological metrics (e.g., heart rate, activity levels, sleep patterns) with high frequency and objectivity, reducing reliance on subjective self-reporting.
  • Increased accessibility and inclusivity: Smartwatches can broaden trial access for populations who may face barriers to travel or mobility, thereby enhancing demographic diversity and generalizability of trial findings.

 

The growing use of wearables and the future of clinical trials

The growing use of wearables in clinical trials signals a shift toward more scalable, cost-effective, and patient-friendly research models. However, challenges remain — particularly around technical reliability and patient adherence. Future research should focus on integrating wearables into value-based healthcare and global trial frameworks.

How CDISC and CDASH (CRF Standards) Streamline Clinical Trials

In today’s global research landscape, clear and consistent communication is more than a necessity — it’s a strategic advantage. It is particularly critical in clinical trials, where data must speak a universal language across teams, geographies, and regulatory frameworks.

The CDISC (Clinical Data Interchange Standards Consortium) and CRF (Case Report Form) standards serve as the universal language of clinical trials, ensuring consistency, clarity, and collaboration across the entire study lifecycle. By implementing these essential frameworks, organizations can optimize data collection, management, and submission — driving cost efficiency and accelerating medical advancements.

Here, we discuss CDISC and CRF standards and how they support the design, execution, and analysis of clinical trials.

 

The need for standardization

Overall, ensuring consistent and reliable data across multiple clinical studies requires the standardization of processes, procedures, and data collection methods. This uniformity can improve data quality, facilitate data sharing and analysis, and ultimately enhance the efficiency and validity of clinical research.

There are many benefits to utilizing CDISC and CRF standards, such as:

  • Improved data quality and reliability
  • Enhanced data sharing and integration
  • Increased efficiency
  • Improved communication and collaboration
  • Support for regulatory compliance
  • Scalability and repeatability

Let’s take a closer look at how CDISC and CDASH standards help create a foundation for data collection, presentation, and submission in clinical trials.

 

CDISC Foundational Standards

CDISC (Clinical Data Interchange Standards Consortium), a global non-profit organization, develops and promotes standards for data exchange in clinical research. The CDISC Foundational Standards support end-to-end clinical and non-clinical research processes, focusing on the core principles for defining data standards, and include models, domains, and specifications for data representation.

 

FDA guidance on CDISC standards

In recent years, the FDA has clearly stated its preference for receiving both clinical and analysis data formatted in compliance with CDISC standards. This has been communicated through a series of guidance documents, correspondence with sponsors, and presentations at conferences. As a result, CDISC models have become the de facto standard for submitting data to the FDA.

As of today, the FDA requires the following CDISC standards:

  • Controlled terminology
  • SEND (Standard for Exchange of Nonclinical Data)
  • SDTM (Study Data Tabulation Model)
  • ADaM (Analysis Data Model)
  • Define-XML

 

CDASH: Maximizing data quality

CDASH (Clinical Data Acquisition Standards Harmonization), a foundational standard developed by CDISC, focuses on harmonizing data collection in clinical trials, providing guidance on how to design and populate case report forms (CRFs) to ensure consistent data collection across studies. These standards help maximize data quality in order to streamline processes across the entire spectrum of medical research, from crafting clinical research protocols to reporting and regulatory submissions.

CDASH Model v1.3 — the latest version — was released in September 2023.

 

Key features of CDASH

  • Provides guidance on designing and populating CRFs/eCRFs, covering all therapeutic areas and phases of clinical trials
  • Specifies standard field names, meanings, and how to fill them
  • Characterizes fields as highly recommended, conditional, or optional
  • Includes a CDASH Model and CDASH Implementation Guide
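As a simplified illustration of what a CDASH-aligned field specification can look like, consider the sketch below. The variable names follow CDASH conventions for the adverse event domain, but the question text and designations are assumptions for this example, not an excerpt from the standard.

```python
# Hypothetical excerpt of a CDASH-style CRF specification for adverse events.
ae_crf_fields = [
    # (CDASH variable, question text, core designation)
    ("AETERM",  "What is the adverse event term?",       "Highly Recommended"),
    ("AESTDAT", "What was the start date of the event?", "Highly Recommended"),
    ("AEONGO",  "Is the adverse event ongoing?",         "Conditional"),
]

for variable, question, core in ae_crf_fields:
    print(f"{variable:8} | {core:18} | {question}")
```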

 

The benefits of CDASH

Instead of following bespoke standards, CDASH’s guidelines for CRFs/eCRFs help sponsors collect data consistently across studies. This in turn aids in producing data in SDTM format for submission purposes and allows regulators to review data submission packages more accurately and efficiently, identifying concerns or granting approvals faster. In addition, consistent collection standards reduce duplication across trials and post-marketing evaluations, improving patient centricity.

CDASH standards also provide guidance for developing data collection tools that are clear, understandable, and precise. Following CDASH standards ensures traceability of trial data from the time it is collected at the site until it is ready for final analysis and regulatory submission, maintaining the integrity of source data to support the trial’s findings.

Sponsors can further save time when setting up new studies by following CDASH standards, as most data collection and associated programming can be standardized across studies.

 

CRF libraries

A Case Report Form (CRF) library in clinical trials is a collection of standardized, reusable CRFs designed to streamline data collection and management. These libraries, whether electronic or physical, offer templates and guidelines for collecting data across different trials and therapeutic areas. They ensure uniformity, accuracy, and efficiency in data collection, ultimately benefiting trial conduct and analysis.

CRF libraries can reduce the cost and time budgeted for the clinical trial database preparation by:

  • Streamlining processes
  • Reducing training burden
  • Accelerating clinical trials
  • Using resources more efficiently
  • Improving adaptability and consistency
  • Allowing teams to focus on design

 

Final Takeaways

CDISC standards, including CDASH and CRF standards, have revolutionized the way clinical data is managed, presented, and submitted, enhancing its integrity and efficiency in clinical research and drug development. Conformance to these standards is thus a critical aspect of clinical studies to ensure uniform data collection and submission processes, ultimately bringing quality treatments to patients faster.

 

Interested in learning more?

Watch our on-demand webinar, “Boosting Efficiency with CRF and CDISC Standards.”

Career Perspectives: A Conversation with Joe Maginnity

In this edition of our Career Perspectives series, we had the pleasure of speaking with Joe Maginnity, Biostatistician II at Cytel. With a background in biological sciences, Joe shares insights into his professional journey, the collaborative nature of his role as a biostatistician supporting Data Monitoring Committees (DMCs), and how biostatistics is evolving alongside advances in AI and machine learning. He also reflects on the importance of communication, remote work strategies, and the value of maintaining balance beyond the screen.

 

Can you give us a little background on your career so far? What inspired you to pursue a degree in biostatistics and a career as a biostatistician?

After graduating with a degree in Biological Sciences from the University of California, Davis, I originally considered pursuing a career as a physician, but ultimately discovered the great field of biostatistics. I wanted to apply my knowledge of both medicine and mathematics, and biostatistics was the perfect fit. I graduated from the Ohio State University with my MS in Biostatistics in 2020 and was hired by Cytel in March 2021 as a Biostatistician I. The following year, I was promoted to Biostatistician II. Over the past four years, I have grown into a more independent role within the DMC and have been the lead biostatistician on multiple projects.

 

Can you walk us through what a typical day looks like in your role? What kinds of tasks do you usually focus on, and how closely do you work with clients?

I am based in Seattle, Washington, and my clients are spread across the United States and Europe. I usually start my workday early to stay in contact with clients in Europe, with the remainder of my morning reserved for meetings. Then I arrange my day around my high-priority work. In addition to daily tasks such as QC reports, report deliveries, and minutes reviews, I also attend DMC meetings, working very closely with clients beforehand to ensure everything runs smoothly and all bases are covered.

 

Are there any common misconceptions about being a biostatistician in clinical trials?

I think a common misconception is that biostatisticians only work on data analysis and statistics. However, to be a successful biostatistician in clinical trials, communication is very important. It is a huge part of this job. You have to complete many time-sensitive tasks to ensure that you are producing high-quality deliverables and providing insightful statistical knowledge for many different clients. Without the ability to communicate effectively and perform tasks in a timely manner, you would not be able to execute the tasks required of a biostatistician here.

 

What makes for a successful collaboration between statisticians and other members of a clinical trial team?

Successful collaboration is built primarily on great communication. Having a complete understanding of what work is expected of us, and communicating with the clinical trial team when we need more clarification or more statistical input, goes a long way. I always try to be as communicative and clear as possible with all the clinical trial teams and DMCs I work with in order to build strong and successful partnerships.

 

In your thesis research, you used machine learning methods and statistical model building. How do you see the role of biostatistics evolving in the next 5–10 years, especially with the increased use of AI and machine learning?

I think in the next 5–10 years, biostatistics will likely become more intertwined with AI and machine learning, leading to new biostatistics roles and the redefinition of existing ones. The increasing demand for AI-powered tools and data analysis will most likely require biostatisticians to expand their expertise in these areas. This includes using AI to improve risk prediction, identify patterns in large datasets, and personalize treatment plans. By using machine learning, biostatisticians may become more proficient in analyzing complex data and making statistical predictions.

 

As a remote employee, how do you maintain a healthy work-life balance? What strategies work for you, and do you feel supported by Cytel in this regard?

My home is my office, so I enjoy creating a fun workspace that keeps me motivated and focused. I have a standing desk where I do most of my work, and it is located next to my record player. Throughout the day — when I am not in a meeting, of course — I like to listen to different types of records, as it requires me to take breaks when one side of a record is done playing. It helps me stay focused while also reminding me to take small breaks away from the computer screen.

Being remote also allows me the privilege of working while traveling. This has let me visit friends and family in many different cities while saving vacation time for when I want to travel without working. I feel very supported by my manager and team; I just need to give them enough notice of where I will be working remotely from, especially when the time zones are very different.

 

What are your main interests outside of work?

Being in Seattle, there are so many amazing activities in this lively city. I really enjoy going to live music concerts. I probably attended 50 concerts last year alone! I also enjoy baking for my friends — and they all enjoy eating baked goods, especially my chocolate chip cookies. Seattle also has many different record stores, and I like browsing all their different varieties of music. And as you may have noticed earlier, I especially love traveling, both within the United States and internationally. I recently visited Japan, and this summer I plan to travel to Europe for 6 weeks, visiting places like London, Dublin, Oslo, Copenhagen, and Amsterdam.

 

Finally, what’s one piece of career advice you wish you had received earlier?

Set boundaries early and stick to them. I give 100% of myself when I’m at work, and I give 100% of myself to me, my family, and friends after work.

 

New Guidelines Aim to Improve the Quality of Pharmaceutical Suspensions

The United States Pharmacopeia (USP) is developing a new general chapter, <1003> Resuspendability and Redispersability, to standardize the evaluation of these characteristics in pharmaceutical suspensions. This initiative addresses critical quality attributes in suspension formulations to ensure consistent dosing and therapeutic efficacy. The proposal is scheduled for publication in Pharmacopeial Forum 51(4) on July 1, 2025, with final comments due September 30, 2025.

Here, I discuss why resuspendability matters, its challenges, and how this new chapter can improve the quality of pharmaceutical suspensions and facilitate compliance with regulatory standards.

 

Why resuspendability matters

Pharmaceutical suspensions are biphasic systems in which solid particles are dispersed within a liquid medium. Over time, these particles may agglomerate and settle, leading to sedimentation. The ease with which this sediment can be re-suspended and re-dispersed upon agitation — known as resuspendability and redispersability — is vital for maintaining dose uniformity. Poor resuspendability and redispersability may result in inconsistent dosing, potentially compromising patient safety and treatment outcomes.

 

Challenges in formulating a good suspension

Developing a suspension with optimal resuspendability and redispersability involves addressing several formulation challenges:

Particle size distribution

Smaller, uniformly sized particles tend to remain suspended longer and are easier to resuspend. However, achieving this uniformity requires precise manufacturing processes, and variations can affect sedimentation behavior.

 

Suspending agents

Appropriate suspending agents can increase the viscosity of the medium, reducing sedimentation rates. However, excessive viscosity may hinder pourability and reduce patient acceptability.

 

Wetting agents

Used to improve the dispersion of hydrophobic drug particles within the liquid medium, wetting agents are crucial to prevent particle agglomeration, which can impair redispersability.

 

Ionic strength

The ionic strength of the suspension can influence particle interactions. Proper electrolyte balance is necessary to prevent flocculation or deflocculation, both of which can negatively impact resuspendability.

 

How resuspendability and redispersability are tested

Assessing resuspendability and redispersability is a multifaceted process that combines both qualitative and quantitative methods:

Visual inspection

This simple method involves observing the sediment’s behavior upon agitation. While it offers a basic assessment, it lacks precision and reproducibility.

 

Settled sediment to total suspension volume ratio

This quantitative metric measures the ratio of settled sediment volume to the total suspension volume over time, offering insights into sediment characteristics.
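For example, the conventional sedimentation-volume ratio from pharmaceutics expresses this measurement; the volumes below are illustrative:

\[
F \;=\; \frac{V_u}{V_0} \;=\; \frac{20\ \text{mL settled sediment}}{100\ \text{mL total suspension}} \;=\; 0.2
\]

Values of F closer to 1 indicate a loose, voluminous sediment that is generally easier to redisperse.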

 

Rheological measurements

These assessments evaluate the flow properties of the suspension, offering insight into its structural integrity and resuspension behavior under different shear conditions.

 

Implications of USP’s new chapter

The introduction of USP’s general chapter <1003> will provide standardized methodologies for assessing resuspendability and redispersability across various suspension types and administration routes. These new guidelines may help:

Enhance consistency

Clear evaluation guidelines will support manufacturers in producing suspensions with reliable resuspendability profiles, ensuring consistent therapeutic outcomes.

 

Facilitate regulatory compliance

Standardized assessment frameworks can simplify the review process for regulatory agencies, potentially expediting product approvals.

 

Looking ahead

The forthcoming USP chapter on resuspendability and redispersability represents a significant advancement in the quality assessment of pharmaceutical suspensions. It’s anticipated to aid compliance with international guidelines such as ICH Q6A “Specifications: Test Procedures and Acceptance Criteria for New Drug Substances and New Drug Products: Chemical Substances,” as well as existing USP chapters such as USP <1> “Injections and Implanted Drug Products (Parenterals) — Product Quality Tests” and USP <2> “Oral Drug Products — Product Quality Tests,” with respect to resuspendability and redispersability.

By addressing formulation challenges and standardizing evaluation methods, this initiative aims to ensure that suspensions deliver consistent, safe, and effective therapeutic outcomes. Formulators and manufacturers should proactively adapt to these guidelines, fostering innovation and maintaining high-quality standards in suspension-based drug products.


Best Practices for Ensuring Data Quality in Clinical Trials

Good data is essential for successful clinical trials. It helps ensure accurate analysis, guides important decisions, and supports the approval and safe use of new treatments. As trials become more complex with remote setups, many data sources, and stricter rules, keeping data quality high is more important than ever.

In this post, we’ll look at simple, effective ways to protect the accuracy and trustworthiness of data in clinical trials.

 

Create a strong data management plan

A good Data Management Plan (DMP) is the first step to quality data. It explains how data will be collected, checked, cleaned, and stored during the trial. It also helps everyone involved know their role.

A strong DMP should include:

  • Clear roles and responsibilities
  • Information about study setup, including the electronic data capture (EDC) system used and the audit trail
  • Step-by-step instructions for entering and handling data
  • Data cleaning processes and details
  • Management of Serious Adverse Event (SAE) reconciliation and medical coding within the study

If you start your DMP early and keep it up to date, it will help avoid confusion and keep the trial consistent.

Infographic: How to create a strong data management plan

Use standardized data collection methods

Data collection should follow a consistent approach. It starts with designing a smart Case Report Form (CRF) that only asks for the necessary information and matches the trial goals. Using standard forms (like CDASH) across studies makes data easier to manage and review.

Other ways to keep data collection consistent:

  • Use standard medical terms (e.g., MedDRA, WHO Drug Dictionary)
  • Train staff on correct data entry
  • Use reliable electronic systems with built-in checks to catch errors
  • Avoid free-text and comment fields as much as possible
  • Avoid collecting the same data twice (duplicated data)

These strategies will reduce mistakes and save time fixing issues later.
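For instance, a built-in range and completeness check might look like the following minimal Python sketch; the field names and limits are assumptions for illustration.

```python
# Minimal illustration of automated edit checks at data entry.
RANGE_CHECKS = {
    "systolic_bp": (70, 250),  # mmHg -- illustrative limits
    "weight_kg":   (30, 250),
}
REQUIRED_FIELDS = ["subject_id", "visit_date", "systolic_bp"]

def check_record(record: dict) -> list[str]:
    """Return a list of issues to raise as queries for one CRF record."""
    issues = [f"Missing required field: {field}"
              for field in REQUIRED_FIELDS if not record.get(field)]
    for field, (low, high) in RANGE_CHECKS.items():
        value = record.get(field)
        if value is not None and not low <= value <= high:
            issues.append(f"{field}={value} outside expected range [{low}, {high}]")
    return issues

print(check_record({"subject_id": "1001", "visit_date": "2025-06-01", "systolic_bp": 300}))
```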

 

Monitor data actively

To keep data quality high, you need to prevent problems and catch them early. Active monitoring, whether remote or risk-based, can help spot problems before they get worse.

Examples of active monitoring:

  • Dashboards that show missing or unusual data
  • Review of top priority data for the primary analysis
  • Regular review of key items like side effects or medication use
  • Focus monitoring on high-risk sites and processes

Finding and fixing issues early keeps your data reliable. Moreover, fixing problems as early as possible helps sites avoid recurring issues.
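A dashboard of this kind can be driven by simple site-level metrics. Here is a small sketch; the rates and threshold are invented for illustration.

```python
# Flag sites whose missing-data rate suggests a process problem.
site_missing_rates = {"Site 01": 0.02, "Site 02": 0.11, "Site 03": 0.04}
RISK_THRESHOLD = 0.05  # illustrative cutoff

for site, rate in sorted(site_missing_rates.items(), key=lambda item: -item[1]):
    status = "REVIEW" if rate > RISK_THRESHOLD else "ok"
    print(f"{site}: {rate:.0%} missing  [{status}]")
```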

 

Handle queries quickly and clearly

Resolving data queries (questions or issues) takes time, so it’s important to manage this well.

Tips for efficient query handling:

  • Use automated checks to catch simple issues
  • Focus manual review on complex or safety related data
  • Keep clear records of how each query is resolved by adding a comment
  • Follow up with sites on queries that have remained open for several days to understand the reason why

Good query management keeps the trial moving and ensures the data is clean and complete.
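As a small illustration of tracking query aging, the sketch below flags long-open queries; the dates and cutoff are assumptions.

```python
from datetime import date

# Open queries with the date each was raised -- illustrative data.
open_queries = {"Q-101": date(2025, 5, 20), "Q-102": date(2025, 6, 1)}
AGING_CUTOFF_DAYS = 7
today = date(2025, 6, 5)

for query_id, opened in open_queries.items():
    age_days = (today - opened).days
    if age_days > AGING_CUTOFF_DAYS:
        print(f"{query_id}: open {age_days} days -- follow up with the site")
```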

 

Combine data from different sources carefully

Today’s trials often use data from many places like labs, apps, devices, and imaging systems. Keeping this data consistent is key.

Best practices include:

  • Creating a Data Transfer Agreement (DTA) detailing data transfer specifications, such as how the data will be transmitted, the frequency of transfers, and which data will be included
  • Checking and validating all incoming data
  • Setting up checks to make sure sources agree (e.g., comparing lab data with system data)

Good data integration helps you understand results more clearly and trust the final data.
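A cross-source consistency check can be as simple as the sketch below; the record layout and tolerance are assumptions for illustration.

```python
# Reconcile central-lab values against what was entered in the EDC system.
edc_values = {("1001", "VISIT1", "HGB"): 13.2, ("1002", "VISIT1", "HGB"): 11.0}
lab_values = {("1001", "VISIT1", "HGB"): 13.2, ("1002", "VISIT1", "HGB"): 12.4}
TOLERANCE = 0.1  # illustrative agreement tolerance

for key, lab_value in lab_values.items():
    edc_value = edc_values.get(key)
    if edc_value is None:
        print(f"{key}: present in lab transfer but missing from EDC")
    elif abs(edc_value - lab_value) > TOLERANCE:
        print(f"{key}: EDC={edc_value} vs lab={lab_value} -- raise a query")
```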

 

Follow regulatory guidelines

High data quality also means following the rules. Agencies like the FDA and EMA expect clean, traceable, and well-documented data.

To be compliant:

  • Have clear procedures and test your systems
  • Run regular data audits
  • Make sure your data follows ALCOA+ principles (Attributable, Legible, Contemporaneous, Original, Accurate, and more)

Meeting these rules protects your study and shows the data is trustworthy.
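To make the ALCOA+ idea concrete, an audit-trail entry typically records who changed what, when, and why. A minimal sketch follows; the field names are assumptions.

```python
from datetime import datetime, timezone

# One audit-trail entry illustrating several ALCOA+ principles.
audit_entry = {
    "user": "jdoe",                                       # Attributable
    "timestamp": datetime.now(timezone.utc).isoformat(),  # Contemporaneous
    "field": "systolic_bp",
    "old_value": 300,                                     # Original value preserved
    "new_value": 130,                                     # Accurate correction
    "reason": "transcription error corrected per source document",
}
print(audit_entry)
```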

 

Staff training and communication

Even with great tools, skilled people are essential. Train all team members regularly so they understand their role in protecting data quality, and make sure communication is clear between sites, teams, and vendors. Write eCRF Completion Guidelines and record a training video for sites and investigators, explaining how the system works and how to perform data entry.

Sharing knowledge and working together helps build a culture where quality comes first.

 

Final thoughts

Keeping data quality high in clinical trials takes planning, careful checks, and teamwork. By following best practices like clear data collection, active monitoring, and smart integration, you can ensure your data is accurate and ready for review.

As clinical trials continue to evolve, one thing stays the same: quality data is key to faster approvals and better treatments for patients.

Moving to Agile: A New Approach to Statistical Programming

Traditional software development has been characterized by rigid methods that required teams to follow pre-defined processes. The advent of Agile programming revolutionized this model by shifting the focus to flexibility, collaboration, and continuous improvement. Unlike traditional methods, Agile embraces change and enables teams to respond quickly to new requirements.

Now, the Agile approach has moved from software development into statistical programming, allowing teams to work in small increments rather than following a linear, pre-planned process. Instead of extensive upfront planning, Agile encourages adaptability and frequent reassessment of project goals.

Here, I discuss Agile methodologies and their benefits and challenges, and invite readers to learn more through our new case study on implementing Agile and Scrum for SAS programming in clinical development.

 

What is Agile programming?

Agile is an iterative project management and development approach that prioritizes flexibility, collaboration, and responsiveness to change. Though originally developed for software engineering, Agile has since gained widespread adoption across various industries, including healthcare and clinical research.

At the heart of Agile is the concept of breaking down complex projects into smaller, manageable units of work, called “sprints,” typically lasting one to four weeks. At the end of each sprint, the team delivers a functional product increment, ensuring continuous feedback and the ability to adjust course as needed.

Key tenets of Agile in statistical programming include:

  • Prioritizing individuals and interactions over processes and tools to foster teamwork and effective communication.
  • Prioritizing customer collaboration over contract negotiation to involve stakeholders throughout the process.
  • Prioritizing responding to change over following a plan to support remaining flexible to evolving needs.

These tenets support incremental delivery of outputs, frequent feedback loops to all programmers, and overall team collaboration.

 

Benefits of Agile programming

Agile methodologies offer numerous advantages, making them a preferred choice for modern development teams:

 

Faster delivery times

Agile focuses on small, manageable iterations (sprints), allowing teams to release interim deliverables frequently rather than waiting for the entire product to be complete.

 

Higher customer satisfaction

Continuous delivery and ongoing stakeholder involvement ensure products align with user needs, leading to better adoption and positive feedback.

 

Reduced risk of project failure

By regularly assessing project goals, teams can detect potential issues early and make adjustments before they become costly problems.

 

Agile methodologies

Agile methodologies come in different flavors, each tailored to unique team dynamics and project needs.

 

Scrum

Scrum is one of the most widely used Agile frameworks. It divides development into short cycles called sprints (typically 2 weeks), during which teams work on prioritized tasks. Scrum incorporates daily stand-up meetings and reviews to track progress and remove obstacles.

 

Kanban

Kanban is a visual workflow management system that emphasizes continuous delivery. Teams use a Kanban board to track tasks in various stages (To-Do, In Progress, Completed), ensuring transparency and limiting work in progress to prevent bottlenecks.
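The core Kanban mechanic, a work-in-progress limit on each column, can be sketched in a few lines of Python; the board contents and limit are illustrative.

```python
# Toy Kanban board enforcing a work-in-progress (WIP) limit.
board = {"To-Do": ["QC ADSL", "Draft TLF shells"], "In Progress": [], "Completed": []}
WIP_LIMIT = 2  # illustrative limit for the In Progress column

def start_task(task: str) -> None:
    """Move a task from To-Do to In Progress unless the WIP limit is reached."""
    if len(board["In Progress"]) >= WIP_LIMIT:
        raise RuntimeError("WIP limit reached -- finish a task before starting another")
    board["To-Do"].remove(task)
    board["In Progress"].append(task)

start_task("QC ADSL")
print(board)
```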

 

XP (Extreme Programming)

XP focuses on high-quality development practices like test-driven development (TDD) and continuous integration (CI). It encourages pair programming and frequent code reviews to enhance software quality.

 

Challenges to adopting Agile

While Agile offers many benefits, teams may face challenges when adopting Agile practices. Rapid development cycles can lead to frequent scope changes, making it hard to maintain focus. This can be avoided by clearly defining priorities and using backlog refinement sessions to keep scope manageable.

Additionally, Agile relies heavily on collaboration, but without proper communication, misunderstandings can arise. Strategies for preventing this include encouraging daily stand-ups, using standard project management tools, and fostering a culture in which open commentary is encouraged.

Finally, transitioning to Agile can be difficult, especially in organizations accustomed to traditional methods. But a gradual approach to this new methodology is warranted: provide Agile training, start with pilot projects, and celebrate early wins to build confidence.

 

Final takeaways

Agile programming is more than just a methodology — it’s a mindset that promotes adaptability, efficiency, and collaboration. By embracing Agile, teams can deliver high-quality software faster while continuously improving their processes. Whether you’re a startup or an enterprise, adopting Agile can lead to better productivity and customer satisfaction.

 

Interested in learning more?

Download our new white paper that provides a detailed case study on implementing Agile and Scrum for SAS programming in clinical development.

From Data Standards to Open Source and Beyond with AI

Key takeaways from CDISC EU Interchange and PHUSE-CSS

As clinical data science evolves rapidly, the CDISC EU Interchange and PHUSE-CSS conferences offer a glimpse into the future of regulatory submissions, standardization, and the rise of open-source tools and AI in drug development. In May, I had the privilege of attending both events, in Geneva and Utrecht. I’d like to share here some highlights from both conferences.

 

Data submission in Europe: EMA delays

As anticipated in my previous blog, we were waiting for further announcements from the EMA regarding the outcome of their pilot raw data submission project, for which an interim report was published last year.

Those, like me, who were expecting a final announcement were likely disappointed. The requirement for data submission to the EMA in support of drug approval has been postponed to 2028. The EMA, which was well represented at PHUSE-CSS, needs to further evaluate factors such as tools and broader technological impacts. At PHUSE-CSS, they showed particular interest in topics such as Dataset-JSON, BIMO, the use of tools such as R-Shiny and the {teal} framework, as well as more advanced topics still under development, such as the “Analysis Concept.” The pilot continues, and the EMA is seeking more volunteers. It was guaranteed that submitting data will not negatively affect or delay your drug approval!

It was clear, while speaking with EMA representatives, that a number of stakeholders within the agency still need to be convinced of the benefits of receiving datasets in addition to PDF documents and reports. Some appeared concerned about the additional time and effort required to assess submitted datasets. As we all know, updating regulations and releasing new standards requires a great deal of “diplomacy” and consensus among multiple stakeholders.

 

Open source: {teal} and R-Shiny adoption

The “open source” revolution continues to gain momentum in our industry. At PHUSE-CSS, I attended the “{teal} Success Stories” workshop, where various sponsors, including J&J, Sanofi, Novartis, Boehringer, and Roche, shared their experiences.

I was fascinated by the solutions those sponsors have already implemented using {teal}, and how straightforward it seems to develop R-Shiny applications using the framework provided by this R package, which was originally developed by Roche.

For a deeper insight into the capabilities of this package, I recommend reading the paper presented by Roche at PHUSE US 2024.

 

Dataset-JSON pilot update

Another interesting workshop I attended at PHUSE-CSS was on Dataset-JSON, where we reviewed and contributed to a consolidated set of comments and feedback in response to the “FDA Requests for Public Comment on CDISC Dataset-JSON Standard,” which closes next week on June 9, 2025.

While the benefits of such a standard were widely acknowledged, particularly in accelerating drug approval and improving overall interoperability, the discussions also highlighted potential risks and implementation challenges. These included concerns about numeric precision when importing Dataset-JSON to and from SAS, as well as handling special characters.

We therefore emphasized the need for the FDA to provide additional guidance to support future adoption; there was also interest in possible future extensions of Dataset-JSON, such as the inclusion of more metadata and the potential to embed define.xml.
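The numeric-precision concern is easy to demonstrate in miniature. This is a generic floating-point illustration, not a statement about any particular Dataset-JSON implementation; the variable name LBORRES is used illustratively.

```python
import json

# A decimal lab value that has no exact binary floating-point representation.
raw = "0.1"
as_float = float(raw)                      # stored approximately in binary
round_trip = json.loads(json.dumps(as_float))

print(repr(round_trip))                    # 0.1 -- repr hides the approximation...
print(f"{round_trip:.20f}")                # ...but prints 0.10000000000000000555
# One mitigation discussed for data exchange: carry values as strings and
# convert with full control at the destination system.
print(json.loads(json.dumps({"LBORRES": raw}))["LBORRES"])  # exact text preserved
```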

 

BIMO

BIMO was the focus of another PHUSE-CSS workshop. Among the various topics discussed, such as the presentation of the PHUSE BIMO template reviewer guide, it was particularly interesting to learn that PHUSE will soon develop a dedicated FAQ to support sponsors and CROs on gray areas of the BIMO requirements, such as the definition of the “major” studies to which they currently apply.

 

CDISC 360i reboot: Toward an end-to-end digital pipeline

The CDISC 360 initiative is back, and stronger than before, with a major shift toward a fully digitalized, standards-driven clinical development lifecycle. The goal is to break down silos through the application of standards such as the USDM and Biomedical Concepts. The mission is ambitious, but unlike when CDISC 360 was first launched, we now have more mature standards and technology to support it.

 

Use of AI to support clinical standards

AI remains a hot topic, and as in 2024, a full session was dedicated to it at the CDISC EU Interchange. The common theme across most presentations was the use of generative AI to support the implementation of data standards, such as AI acting as a subject matter expert (SME) for study teams. Although many of the showcased solutions from Argenx, SGS, and AstraZeneca are still in beta, they clearly demonstrate how proper model training can enhance search and navigation within complex data standards libraries, or help manage complex, multidimensional data (e.g., omics, wearable biosensors). Other AI use cases were also featured in several posters at PHUSE-CSS; for example, the application of AI to generate synthetic data or automate local lab ranges.

 

Other topics

For topics such as Digital Data Flow and USDM, I’ll refer you to the LinkedIn newsletter “View From The Coffee Shop,” curated by my friend Dave Iberson-Hurst. In it, he regularly shares insightful thoughts and updates on the ongoing digitalization efforts within our industry. He also summarized some key takeaways from both the CDISC EU Interchange and PHUSE-CSS.

I also had the opportunity to see good use cases of the Analysis Results Standard (ARS) at the CDISC EU Interchange, showing that this relatively new standard has been well received by sponsors as well as vendors.

On the regulatory side, aside from the news from the EMA, I found the presentation from Sanofi and GSK particularly interesting. It covered a cross-industry initiative aimed at harmonizing vaccine regulatory submissions to FDA-CBER by sharing experience with this unique division, which often has its own set of sometimes unexpected requirements (see also my previous blog on submission experience with FDA-CBER).

For other topics, see also the official CDISC posts covering the other conference sessions.

 

Ongoing innovation

Overall, both conferences continued to showcase ongoing innovation in our industry. It’s clear that change is happening at a pace I have never seen before in my 30-year career, and that’s good for patients, as well as an exciting time for those of us working in biometrics.

 

Interested in learning more?

Download Angelo’s new ebook, The Good Data Submission Doctor on Data Submission and Data Integration to the FDA, a collection of Angelo’s most critical insights on clinical data standards submission to the FDA, including key updates from the new FDA Study Data Technical Conformance Guide.