
Finding the Optimal Biological Dose with New PKBOIN-12 Method

With the rise of targeted and immunotherapies, we have recently seen a shift away from finding a drug’s maximum tolerated dose (MTD) in early-phase dose-finding studies and toward identifying the optimal biological dose (OBD): the dose that optimally balances safety, tolerability, and early efficacy. A new method, PKBOIN-12, extends the BOIN12 framework by integrating pharmacokinetic (PK) parameters to refine dose finding and final OBD selection.

Here, we discuss PKBOIN-12, recent regulatory shifts regarding dose finding, including the FDA’s Project Optimus, and Cytel’s East Horizon™ dose-finding module.

 

What is PKBOIN-12?

PKBOIN-12, developed by Dr. Hao Sun of Bristol Myers Squibb and Dr. Jieqi Tu of the University of Illinois Chicago, is an innovative dose-finding method that enhances the established BOIN12 algorithm by incorporating pharmacokinetic (PK) information into the optimal biological dose (OBD) determination process. In recent years, particularly with the rise of targeted and immunotherapies, the focus in early-phase dose-finding studies has shifted away from finding the maximum tolerated dose (MTD) and toward identifying the OBD: the dose that optimally balances safety, tolerability, and early efficacy.

BOIN12 is one such method that assesses both safety and efficacy, but, like many dose-finding designs, it typically does not formally use auxiliary data. Researchers routinely collect PK measurements to characterize drug exposure at the various tested dose levels, but these data are not usually incorporated into the risk-benefit analysis when designing clinical trials. PKBOIN-12 addresses this by extending the BOIN12 framework to integrate collected PK data, refining both dose finding and final OBD selection.

Indeed, simulation results comparing PKBOIN-12 and BOIN12 demonstrate that the former more effectively identifies the OBD and allocates a greater proportion of patients to that optimal dose.
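To make the idea of utility-based OBD selection concrete, here is a minimal, illustrative sketch — not the published PKBOIN-12 or BOIN12 algorithm — of ranking doses by a utility that trades off estimated efficacy against toxicity. The utility weights, admissibility thresholds, and probabilities below are hypothetical.

```python
# Illustrative sketch of utility-based OBD selection (hypothetical weights and
# thresholds; NOT the published BOIN12/PKBOIN-12 design).

def obd_utility(p_tox, p_eff,
                u_eff_only=100.0, u_neither=40.0, u_both=60.0, u_tox_only=0.0):
    """Expected utility of a dose, assuming toxicity and efficacy are independent."""
    return (p_eff * (1 - p_tox) * u_eff_only
            + (1 - p_eff) * (1 - p_tox) * u_neither
            + p_eff * p_tox * u_both
            + (1 - p_eff) * p_tox * u_tox_only)

def select_obd(doses, p_tox, p_eff, tox_cap=0.35, eff_floor=0.20):
    """Pick the admissible dose (safe enough, active enough) with highest utility."""
    admissible = [(d, obd_utility(t, e))
                  for d, t, e in zip(doses, p_tox, p_eff)
                  if t <= tox_cap and e >= eff_floor]
    return max(admissible, key=lambda x: x[1])[0] if admissible else None

# Toy example: the highest dose is too toxic and the lowest is inactive, so a
# mid-range dose maximizes the risk-benefit utility.
doses = [10, 25, 50, 100]
p_tox = [0.05, 0.10, 0.25, 0.45]
p_eff = [0.15, 0.35, 0.50, 0.55]
print(select_obd(doses, p_tox, p_eff))  # → 50
```

This captures why an OBD need not be the MTD: past some dose, added toxicity outweighs the marginal efficacy gain. PKBOIN-12's contribution is to sharpen the efficacy and exposure estimates feeding such a decision with PK data.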

 

Project Optimus: A regulatory shift toward the OBD

In addition to the general industry trend in collecting and considering a broader set of data in early-phase dose-finding oncology studies, we have seen a real shift in regulatory interest in this area, encapsulated in the FDA’s Project Optimus.

In a previous blog post, James Matcham and Michael Fossler highlight how a recognition of the changing nature of oncology therapies — away from chemotherapies and towards more advanced biologics — necessitated a change in how these products are developed and assessed for efficacy and safety.

Project Optimus posits that the dose-finding paradigm must shift away from safety and tolerability alone, and towards incorporating efficacy considerations at this stage. An ideal dose-finding study under the Project Optimus lens emphasizes the determination of a dose range that does not focus on the MTD, but rather the OBD, or the dose range that considers efficacy, tolerability, safety, and pharmacokinetics.

PKBOIN-12 is therefore well suited to meet the challenges presented by Project Optimus and sits at the forefront of both industry trends and regulatory expectations.

 

Dose finding with the East Horizon™ platform

Cytel’s software development teams will soon launch the dose-finding module, the sixth installment of the East Horizon platform. This module completes an almost two-year journey of migrating Cytel’s flagship software heritage, East, into the cloud-native, modern East Horizon platform. Over these months, our teams worked tirelessly to select, from our wide repertoire of software solutions, the features, methods, and tests most relevant to our user base, and thoughtfully curated additional frequentist and Bayesian methods that are completely new to Cytel software. One such method is PKBOIN-12.

 

Interested in learning more?

On November 18, 2025, Cytel will host Dr. Hao Sun for a webinar to discuss this new method in depth, and to highlight the technical as well as tactical aspects of implementing this method. Register today and join us for a fascinating conversation:

From Metadata to Submission: Rule-Based Robotic Process Automation for Statistical Programming Excellence

In the race to modernize data operations in clinical research and regulatory submissions, Robotic Process Automation (RPA) powered by rule-based systems has emerged as a dependable and high-impact solution. These systems offer clarity, control, and reproducibility — critical traits for industries like biopharma where regulatory compliance and data integrity are non-negotiable.

Here, we discuss rule-based RPA as the foundation for a scalable and auditable standards automation pipeline.

 

Rule-based automation: Transparent, trusted, and tunable

Unlike more probabilistic models, rule-based systems operate on deterministic logic. Every output is traceable back to an explicit rule, which enhances trust and simplifies troubleshooting. This transparency is particularly valuable when the processes must be easily explained to stakeholders and auditors.

Key strengths of rule-based RPA include:

Transparency

Each step in the workflow is rule-driven, making the logic easy to inspect, validate, and justify. This ensures regulatory reviewers can clearly understand how data was transformed or outputs generated — vital in submission contexts.

Consistency

Standard rules applied across studies generate consistent outputs. For example, Cytel’s ALPS system creates SDTM and ADaM code from structured specifications, producing reliable results that hold up across different projects and teams.

Customizability

Rule-based systems are modular. Teams can easily adapt existing rules to accommodate study-specific needs without overhauling the entire system. Tools like Prism allow this by applying both generic rules and study-specific layers for enriched metadata processing.

 

Cytel’s metadata-driven RPA workflow in action

Our internal automation pipeline demonstrates the power of rule-based RPA. It’s built on a modular architecture where each tool performs a specific, rules-driven task:

  • ALPS: Converts metadata specifications into ready-to-run SAS code for SDTM and ADaM datasets, reducing manual programming and minimizing error risks.
  • Lighthouse: Enables biostatisticians to build mock shells using reusable templates, ensuring consistency in table and listing structures.
  • Prism: Extracts metadata from mock shells and transforms it into XML-format ARMs (Analysis Results Metadata), enriching it through rules and generating code for up to 60% of standard safety outputs.
  • TAB Macros and CytelDocs: Automate the creation of summary tables and documentation, saving hours of effort and ensuring compliance with standardized formats.

This end-to-end pipeline reduces manual touchpoints, maintains high quality, and boosts team efficiency.
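The layered, traceable rule application described above can be sketched in a few lines. This is a minimal illustration of the general pattern — deterministic rules applied in order, with an audit trail recording which rule produced each change — not the actual implementation of Prism or ALPS; the rule names and metadata fields are hypothetical.

```python
# Minimal sketch of deterministic, auditable rule-based metadata enrichment
# (generic rules first, then study-specific overlays). Rule names and fields
# are illustrative, not any production system's.

def apply_rules(record, rules):
    """Apply rules in order; log which rule fired so every output is traceable."""
    audit = []
    for name, condition, action in rules:
        if condition(record):
            record = action(dict(record))
            audit.append(name)
    return record, audit

generic_rules = [
    ("upcase-domain",
     lambda r: r["domain"] != r["domain"].upper(),
     lambda r: {**r, "domain": r["domain"].upper()}),
]
study_rules = [
    ("study-lab-units",
     lambda r: r["domain"] == "LB" and "unit" not in r,
     lambda r: {**r, "unit": "US conventional"}),
]

record, audit = apply_rules({"domain": "lb", "test": "ALT"},
                            generic_rules + study_rules)
print(record, audit)
```

Because every transformation is an explicit rule and the audit list records exactly which rules fired, an auditor can reproduce any output by replaying the rule set — the transparency property the section above emphasizes.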

 

Where generative AI complements RPA

While rule-based systems are ideal for tasks requiring consistency and auditability, generative AI can complement these systems — particularly in areas where variability is acceptable and outputs don’t require deterministic reproducibility. For example, Gen AI can assist with:

  • Drafting exploratory narratives or documentation
  • Suggesting code for non-critical outputs
  • Enhancing user interfaces with intelligent prompts
  • Enriching the set of study-specific rules to be applied

However, these AI-driven capabilities are best applied where hallucinations won’t compromise integrity, and outputs don’t demand rigid consistency.

 

Business and quality benefits of rule-based RPA

By relying on rule-based RPA for core data workflows, we’ve realized several tangible gains:

  • Time efficiency: Standard code is generated automatically, freeing time for custom analysis.
  • Reduced redundancy: Developers no longer rewrite common code across projects.
  • Improved QA: Outputs are independently validated and built on rigorously tested rule sets.
  • Collaboration at scale: Uniform rules simplify onboarding and knowledge transfer.
  • Focus on what matters: Teams can concentrate on non-standard elements that require expertise.

 

Final takeaways

Rule-based RPA systems provide the transparency, structure, and adaptability required for high-stakes data environments. At Cytel, we’ve found them indispensable in our mission to expedite regulatory submissions without compromising on quality or compliance. As AI continues to evolve, generative technologies may enrich this foundation — but rule-based automation remains the core engine that ensures accuracy, accountability, and speed.

Strategies to Streamline the MHRA Inspection Process

The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) plays a critical role in ensuring the safety, quality, and efficacy of medicines and medical devices. MHRA inspections are often a key step in bringing new therapies to market, but poor preparation can result in delays or regulatory setbacks.

Here, we outline the types of MHRA inspections, provide an overview of the essential steps and documents, and discuss challenges and how to overcome them to streamline your MHRA inspection process.

 

Types of MHRA inspections

There are two main types of MHRA inspection:

 

  • Statutory Good Clinical Practice (GCP) Inspection (a “routine” inspection): This inspection is performed as part of the risk-based compliance program and can be either systems-based or trial specific. Inspectors examine how an organization’s trial procedures are applied, considering previous inspection history, organizational changes, or intelligence from other external sources. Sponsors are usually notified six months ahead of time.

 

  • Triggered Inspection: Triggered inspections are initiated by concerns regarding a clinical trial’s conduct, often from sources like serious breach notifications, whistleblowers, or other MHRA departments. The nature of the information determines the level of notice provided, which may be short or none at all.

 

Essential steps and documents

The MHRA inspection process consists of three primary phases: planning, inspection, and reporting.

 

  • In the planning phase, sponsors receive an “Advance Notice of Statutory Inspection” notification, prepare an Inspection Dossier, and develop an Inspection Plan.

 

  • The inspection phase involves the main site inspection.

 

  • The reporting phase comprises issuing an Inspection Report and identifying Corrective and Preventive Actions (CAPA).

 

Strategies to streamline the inspection process

Apply the framework for engagement

Define clear duties and responsibilities, emphasize collaboration, and foster productive dialogue. This approach includes four considerations: steering group and communication; resource management and flexibility; documentation; and preparation and strategic output. (We will discuss this framework in detail in our upcoming webinar; click the link below to register.)

 

Prepare supportive documents

Create supportive documents that enable quick access to detailed and precise information during high-pressure situations, enhance understanding of the procedures and trial materials under review, and prepare tools and responses for anticipated critical discussion points.

 

Ensure staff is adequately trained

Mandatory training on current and historical versions of quality assurance documents, including SOPs, WIs, and related tools, is required to ensure understanding of both current processes and those in use at the time the deliverables for each inspected study were produced.

 

Use live demos or “show and tell” sessions

Live demos can help visualize the process, ensure the inspector understands it, and provide an opportunity to delve into the details.

 

Final takeaways

Navigating MHRA inspections requires a proactive approach, strategic preparation, and a deep understanding of evolving regulatory expectations. By leveraging innovative strategies, organizations can streamline their inspection readiness and enhance compliance outcomes. Equally crucial is the establishment of a well-defined framework that fosters effective collaboration among all stakeholders — sponsors and key vendors alike — ensuring a streamlined and coordinated approach to the inspection process.

 

Want to learn more?

Join Stephanie Dontenville and Nicolas Rouillé for their upcoming webinar to gain practical insights from Cytel’s MHRA inspection experience. Learn how to prepare thoroughly, execute precisely, and turn post-inspection feedback into innovation.

From Toplines to Triumph: Visualizing the Pathways to Regulatory Approval

Achieving positive topline results in a clinical trial marks a critical milestone in the drug development process, yet it is far from the end of the submission journey. Instead, it signals the start of a complex, fast-paced effort to prepare for regulatory submission and navigate the FDA’s multi-stage review. The final “regulatory defense” stage demands rigorous collaboration, meticulous planning, and adaptability to meet the expectations of regulatory agencies.

Here we discuss the key stages in the post-topline journey, exploring key milestones, unexpected challenges, and best practices for ensuring a strong submission and a smooth path to approval.

 

1. The Preparation: Post-topline readiness and strategic planning

The preparation phase begins immediately after topline results are available. During this critical window — often lasting several months — cross-functional teams shift their focus to assembling the final submission package. Statisticians and programmers play a central role here, finalizing the tables, listings, and figures (TLFs) that will populate the Clinical Study Report (CSR) and preparing submission-ready datasets following CDISC standards, including ADaM, SDTM, and associated documentation.

In parallel, a pre-BLA or pre-NDA meeting with the FDA is typically scheduled to align on expectations, identify potential concerns, and set the foundation for a smoother review process. This phase is not just about document generation; it’s about establishing a strategy, anticipating regulatory scrutiny, and ensuring the submission is both complete and compelling. The quality of the groundwork laid here often dictates the ease — or difficulty — of the phases that follow.

 

2. The Submission: Crossing the threshold to regulatory review

Once the submission is filed, the process transitions into a more structured phase governed by the FDA’s review protocols. The agency begins with a 60-day filing review to assess whether the BLA or NDA is complete and acceptable for full review. If so, the sponsor receives a Day 74 Letter, which provides early feedback, flags any immediate concerns, and confirms the Prescription Drug User Fee Act (PDUFA) date — typically 10 months post-filing for standard reviews or 6 months for priority reviews. Although this phase may seem procedural, its significance is high. A clean, well-organized submission can streamline the review process, limit questions, and reduce the risk of delays. This is also the point where rolling submissions, if applicable under Fast Track designation, can offer a tactical advantage by accelerating document delivery and potentially shortening review timelines.

For statistical and programming teams, this is not a time to sit back and relax — it’s an opportunity to ensure internal alignment and anticipate questions the FDA may raise based on known data complexities. Strong documentation and traceability within datasets and outputs are essential at this point, helping to support any needed follow-up. Proactive communication and readiness during this phase help lay the groundwork for the more intensive regulatory engagement that follows.

 

3. The Regulatory Defense: Responding, clarifying, and defending your data

The regulatory defense phase is where the bulk of agency interaction occurs — and where flexibility and responsiveness become essential. During this time, the FDA may issue multiple information requests (IRs), asking for clarification on statistical methodology, specific data points, or safety and efficacy outcomes. Mid-cycle communications, typically occurring around months 4–5 for standard reviews, offer a formal opportunity to assess the review’s progress and surface any significant concerns.

In some cases, the agency may convene an Advisory Committee (AdCom) meeting to gather expert input, particularly when there are outstanding safety questions or complex benefit-risk considerations. Throughout this phase, the ability to quickly respond to ad hoc requests, provide high-quality data outputs, and maintain close collaboration across functions is critical. It’s a high-stakes stage where well-prepared teams can help preserve timelines and ensure the submission stays on track.

 

4. The Unexpected: Adapting to setbacks and charting a new course

In some cases, the regulatory journey doesn’t lead directly to approval. If the FDA identifies significant deficiencies in the initial submission — whether related to clinical data, statistical interpretation, manufacturing, or safety — it may issue a Complete Response Letter (CRL). This marks a temporary halt in the process, requiring the sponsor to address the concerns before resubmission. Depending on the scope of the deficiencies, the resubmission may fall under Class I (minor issues, reviewed in 2 months) or Class II (major issues, reviewed in 6 months).

For statisticians and programmers, this could mean conducting additional analyses, integrating new data, or adjusting the structure and presentation of the submission package. While a CRL can be a setback, it’s also an opportunity to recalibrate, seek additional guidance from the FDA, and improve the likelihood of approval in the next cycle. The key is to approach this phase with transparency, strategic thinking, and a readiness to adapt and respond.

 

Final takeaways

The path from topline results to regulatory approval is rarely linear. Timelines can range from as little as 12 months in expedited reviews to over 30 months in cases involving major deficiencies and resubmissions. Success in this post-unblinding phase hinges on proactive planning, adaptable resourcing, and the ability to respond quickly and thoroughly to regulatory needs. Equally important is collaboration across functions — clinical, regulatory, biostatistics, programming, and operations must work closely and cohesively to anticipate challenges, align timelines, and respond efficiently to agency requests. Whether following a standard or accelerated route, the shared priority is a comprehensive, high-quality submission that stands up to regulatory scrutiny — and ultimately supports timely access to new therapies for patients.

 

Interested in learning more?

Watch Jasperlynn Kao and Florence Le Maulf’s recent webinar, “From Toplines to Triumph: Visualizing the Pathways to Regulatory Approval”:

Data Submission to Health Authorities: Current Practices and Future Directions

How far is 2041? Update on data submission to health authorities

Back in the summer of 2023, I was invited to present “Standards and Open-Source Hand-in-Hand: Leveraging Automation to Expedite Drug Market Request Review Process” at PharmaSUG-China. I tried to imagine the future of data submission, travelling to 2041 and envisioning how AI could support and expedite the regulatory drug submission process, and how it could enhance the preparation and review of data submission packages. I then brought the discussion back to the present, sharing some reflections on the journey ahead — a journey that will inevitably require better use of standards, open-source adoption and solutions, and collaborative industry initiatives.

About 18 months later, the topic of AI became predominant in our industry. This is clearly reflected by the growing number of AI-related presentations at conferences, including the recent PHUSE US Connect Conference held last March in Orlando, and the upcoming CDISC-EU Interchange this May, just a few steps from our offices here in Geneva.

Here, I would like to provide a brief overview of the latest updates on data submission requirements, as well as industry initiatives aimed at improving how we create clinical data packages for submission to health authorities in support of market drug approval.

 

FDA data submission requirements update

Regulatory data submission requirements, specifically those of the US FDA, have been refined through successive updates to the agency’s guidance. Since my January 2024 summary of the latest changes, the following requirements have been added:

 

  • Submit an LC dataset, a copy of LB with US conventional units as the standard units (March 2024)
  • Viral load results should be placed in the MB domain, confirming that laboratory-related data domains (e.g., LB, IS, and MB) are still sometimes misused (October 2024)
  • The US conventional unit requirement was recently extended to ADaM, via the ADLC dataset (March 2025)

 

See the latest, March 2025 version of the FDA Study Data Technical Conformance Guide here.

It’s also worthwhile to mention the FDA’s “Protocol Deviations for Clinical Investigations of Drugs, Biological Products, and Devices,” which provides various recommendations around the management of protocol deviations. This includes some specific recommendations for SDTM mapping, such as including a variable in the DV domain that provides the sponsor’s determination of whether the protocol deviation was important.

 

EMA data submission requirements

While the European Medicines Agency (EMA) has not made data submission mandatory, nor specified a required data format, it launched the “Raw Data” pilot proof-of-concept project about two years ago. In this initiative, selected applicants were invited to submit structured clinical trial data as part of their initial applications and post-authorization procedures. Clinical trial data in this context refers to individual patient-level data, including:

 

  • Clinical laboratory results
  • Images
  • Medical records

 

The aim of the pilot is to assess whether the use of structured clinical trial data can help speed up and improve the drug assessment process.

An initial outcome of the project was published in a report released last October. It summarizes lessons learned from five data submissions received between September 2022 and December 2023, out of the ten originally planned. Among the key learnings and outcomes, CDISC standards, namely SDTM and ADaM with define.xml and a data reviewer’s guide, were confirmed as suitable formats for data review. The software tools being explored included SAS and R for statistical analysis, and SAS JMP Clinical for visualization. While SAS XPT files were required, other transport formats such as XML or JSON were also accepted, upon mutual agreement between EMA and the applicant.
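To illustrate why JSON-based transport formats are attractive alongside the legacy SAS XPT format, here is a generic columns/rows rendering of a small dataset. This is purely illustrative and is NOT the official CDISC Dataset-JSON schema; the field names are hypothetical.

```python
# Illustrative only: a generic column/row JSON representation of a tiny
# dataset, showing the self-describing, text-based nature of JSON transport.
# This is NOT the official CDISC Dataset-JSON schema.

import json

columns = [{"name": "USUBJID", "type": "string"},
           {"name": "LBTESTCD", "type": "string"},
           {"name": "LBSTRESN", "type": "float"}]
rows = [["001-001", "ALT", 23.0],
        ["001-002", "ALT", 31.5]]

payload = {"dataset": "LB", "columns": columns, "rows": rows}
print(json.dumps(payload, indent=2))
```

Unlike XPT, such a structure carries its column metadata inline, is not bound to fixed-width record limits, and can be parsed by any modern toolchain — properties that motivate the industry pilots discussed below.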

Although these standards and formats are not yet mandatory, additional guidance has been provided in a Q&A document (e.g., regarding maximum data package size). Since then, the EMA has decided to extend the project’s duration. Final recommendations are expected in 2025 — potentially with some early updates to be shared at the upcoming CDISC EU Interchange in May.

 

Industry initiatives update

Since my speech at PharmaSUG-China, the industry initiatives I discussed there have progressed quite rapidly:

 

•   The R Pilot Submission Experience: All four planned pilots have been completed, and a fifth pilot was announced in February. This time, the goal is to establish the new Dataset-JSON format as a CDISC standard for clinical data submissions (see here a report from a successful pilot submitting data in the new format to the FDA).

•   R Packages for SDTM and ADaM: Both the SDTM (oak) and ADaM (admiral) R packages are now widely used in our industry for submission projects.

•   The Analysis Results Standard (ARS): The first version of the ARS was released in April 2024, along with a new initiative, the eTFL Portal, which shares examples and templates for the most common TFLs.

•   The CORE Project: The project continues its mission to develop Open Conformance Rules, alongside a growing number of Open Source initiatives.

 

Interested in learning more?

Get your copy of Angelo Tinazzi’s latest ebook, “The Good Data Submission Doctor on Data Submission and Data Integration to the FDA”:

Expediting the Regulatory Submission Process with Automated Tools

In the biopharmaceutical industry, expediting regulatory submissions is crucial for timely access to life-saving medications. As a statistical programming team, our role involves accelerating the drug approval process by meticulously preparing Electronic Common Technical Document (eCTD) packages, including the statistical review and programming process of mapping SDTM, deriving ADaM, and generating TLFs.

Here we discuss the process and benefits of the metadata-driven approach. From mapping to reporting, this approach enhances efficiency and enables submission packages to be generated promptly by reducing manual interventions.

 

What are eCTD packages and how are they prepared?

The eCTD is the “standard format for submitting applications, amendments, supplements, and reports to FDA’s Center for Drug Evaluation and Research (CDER) and Center for Biologics Evaluation and Research (CBER).”1 It facilitates the electronic submission of dossiers for market approval requests, such as for a new drug (NDA).

Among files stored in the eCTD, there are some key components related to Biometrics deliverables:

  • SDTM Dataset: The Study Data Tabulation Model (SDTM) is one of the most important CDISC data standards. It’s a framework used for organizing source data collected in human clinical trials.
  • ADaM Datasets: Analysis datasets are created to enable statistical and scientific analysis of the study results. CDISC Analysis Data Model (ADaM) specifies the fundamental principles and standards to ensure that there is clear lineage from data collection to analysis.
  • TLF: Analytical outputs, in the form of tables or figures, summarize the analyses required for submission to regulatory agencies. These outputs are supported by listings that display the underlying data points.

 

The need for automation

When working on any project or analysis, certain elements remain unchanged regardless of the study design. Standardizing and automating their production can therefore improve efficiency, ensure consistency, and reduce the overall time required for submission. Automating these items also reduces manual intervention, minimizing the chance of human error.

This approach has several benefits, including:

  • Efficiency: Since the team can focus more on the non-standard parts of the outputs, the overall efficiency of the team is increased.
  • Consistency: Since automated tools generate standard code based on a set of rules, the resulting code remains highly consistent across various projects. This makes it easier to understand and debug (in case of any updates).
  • Quality: Since the tools have been rigorously tested, they produce extremely high-quality and reliable outputs.
  • Reduced manual intervention: Since manual intervention is limited, the possibility of human error is minimized. As long as the specifications are correctly drafted, the output generated by the standard code should be error-free.

A metadata-driven approach

Many companies, including Cytel, have adopted a metadata-driven approach to accelerate tasks such as SDTM, ADaM, and TLF code generation. The goal of this approach is not to automate 100% of the final code but rather to generate as much standardized and structured code as possible. This approach enhances efficiency while simplifying modifications when needed.

While a Metadata Repository (MDR) can maximize automation in the long run, currently available MDR tools remain cumbersome.2 For this reason, while still assessing the benefit of MDR solutions, Cytel has taken a different approach — extracting metadata from existing documents that statistical programmers already use in their daily work. Without adding extra workload, this metadata is stored in a structured format, allowing us to apply automated rules to enrich it. From there, we can generate SDTM, ADaM, and TLF code efficiently.

For example, metadata can be extracted from ODM.xml files or raw datasets to streamline SDTM specification mapping. These specifications can then be leveraged to generate SAS or R code automatically. Similarly, metadata from study mock shells — such as titles, footnotes, table headers, and table body structure and content — can drive the creation of TLFs with minimal manual intervention.
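As a toy illustration of the extraction step, the snippet below pulls variable metadata from a simplified ODM-style XML fragment to seed SDTM mapping specification rows. The fragment is a deliberately stripped-down, hypothetical example, not a complete CDISC ODM document, and the spec-row fields are illustrative.

```python
# Sketch: extract variable metadata from a simplified, ODM-style XML fragment
# to seed an SDTM mapping specification. Illustrative only; a real ODM file
# has namespaces and far more structure.

import xml.etree.ElementTree as ET

odm_snippet = """
<MetaDataVersion>
  <ItemDef OID="IT.AGE" Name="AGE" DataType="integer"/>
  <ItemDef OID="IT.SEX" Name="SEX" DataType="text"/>
</MetaDataVersion>
"""

spec_rows = [
    {"oid": item.get("OID"),
     "variable": item.get("Name"),
     "type": item.get("DataType")}
    for item in ET.fromstring(odm_snippet).iter("ItemDef")
]
print(spec_rows)
```

Because the metadata already exists in documents programmers use daily, this kind of extraction adds structure without adding workload — the premise of the approach described above.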

Another key advantage of this metadata-driven approach is its language agnosticism. By structuring metadata independently of the programming language, the same metadata can be used to generate both SAS and R code. This ensures consistency, facilitates the transition for SAS programmers moving to R, and maintains quality without impacting project timelines.
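The language-agnostic idea can be sketched as one metadata structure driving two code generators. The metadata fields and generated snippets below are hypothetical illustrations, not Cytel's actual specification format or templates.

```python
# Hedged sketch: one language-neutral metadata record generating both SAS and
# R code for a simple summary. Field names and templates are illustrative.

tlf_meta = {
    "title": "Summary of Adverse Events",
    "dataset": "adae",
    "by_var": "TRT01A",
}

def to_sas(meta):
    """Render the metadata as a SAS PROC FREQ step."""
    return (f'title "{meta["title"]}";\n'
            f'proc freq data={meta["dataset"]};\n'
            f'  tables AEDECOD*{meta["by_var"]};\n'
            f'run;')

def to_r(meta):
    """Render the same metadata as an equivalent R cross-tabulation."""
    return (f'# {meta["title"]}\n'
            f'table({meta["dataset"]}$AEDECOD, {meta["dataset"]}${meta["by_var"]})')

print(to_sas(tlf_meta))
print(to_r(tlf_meta))
```

Because the metadata, not the code, is the source of truth, switching output languages is a rendering decision rather than a reprogramming effort — which is what eases the SAS-to-R transition mentioned above.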

 

Final takeaways

In line with the premise that “one solution does not fit all,” CROs can maximize the value of metadata within clinical trial delivery by leveraging the metadata already inherent in study artifacts. If you can extract as much metadata as possible from the documents you already use, and transform that metadata into real deliverables, you can unlock substantial value.

This metadata-driven approach is sensitive to the fact that CROs must accommodate a multitude of sponsor standards and delivery requirements, without sacrificing the benefits of automation in an ecosystem rich in interdependencies between regulatory authorities, industry consortia, sponsors, CROs, and other third-party technology vendors.

 

1 US FDA. (4 October, 2024). Electronic Common Technical Document (eCTD).

2 PHUSE White Paper (2 October, 2024). Best Practices in Data Standards Implementation Governance.

 

Interested in learning more?

Watch Manish Deole and Sebastià Barceló’s on-demand webinar, “Expediting Regulatory Submissions through Automation”:

FDA Guidance on Integrating RCTs into Clinical Practice and the Growing Potential of RWE

Patient recruitment remains one of the most challenging and costly parts of clinical trials. One approach to tackle this has been partnering with healthcare providers to capture data gathered during routine clinical practice for use in clinical trials.

As part of the U.S. FDA’s Real-World Evidence (RWE) Program, the agency has issued a new draft guidance on the integration of randomized controlled trials (RCTs) into routine clinical practice.1 The draft is open to public comment until December 17, 2024.

Here we discuss what you need to know about the new draft guidance and the implications for the future of clinical trials.

 

Bringing together clinical research and clinical practice

Traditional randomized controlled trials gather a large amount of patient information, some of which is also collected during routine clinical care. Considering this overlap, data for a clinical trial could potentially also be gathered from patients in other clinical settings with health care providers.

Such integrated RCTs, often referred to as point-of-care or large simple trials, are designed to be more convenient and accessible for participants since they can reduce the need for trial sites, and thus can ultimately lead to more representative and generalizable results.

Additionally, there has been increasing interest in incorporating real-world data (RWD) similarly collected during routine clinical care into clinical studies.

While integrating clinical trials into routine clinical practice is not new (efforts to do so have been going on for decades), recent tools, such as electronic health records (EHRs), have made such trials and the use of RWD far more feasible.

 

Integrating RCTs into routine clinical practice: What to know

The new draft guidance, “Integrating Randomized Controlled Trials for Drug and Biological Products into Routine Clinical Practice: Guidance for Industry,” aims to “support the conduct of randomized controlled drug trials (RCTs) with streamlined protocols and procedures that focus on essential data collection, allowing integration of research into routine clinical practice.”

The guidance emphasizes a few key points:

The role of established healthcare institutions and existing clinical expertise

The guidance highlights the importance of leveraging established healthcare institutions and existing clinical expertise, discussing the roles of sponsors, clinical investigators, and healthcare providers. This can reduce start-up times and speed up enrollment, making the trial process more efficient.

 

Streamlining RCTs to align with clinical practice

Here the guidance emphasizes that trials will be most successfully integrated with clinical practice when the needed data are collected routinely and do not require additional procedures or visits. Where this is not possible, a hybrid approach should be considered.

 

A quality by design approach

Successful integration will rely on designing trials according to a set of quality-by-design principles, considering aspects such as eligibility criteria, the choice of suitable investigational drugs, and study endpoints.

 

Implications for the future of clinical trials

Enhanced accessibility and efficiency

By integrating RCTs into routine clinical practice, this guidance aims to make participation easier for patients, which could lead to higher enrollment rates and more diverse participant pools. This will be especially important for rare diseases, as the overall pool of patients is smaller, and trials compete for enrollment.

Furthermore, streamlined protocols and procedures are expected to reduce administrative burdens and costs, making trials more efficient and potentially accelerating the development of new therapies.

 

Improved generalizability of results

The use of RWD and the integration of trials into everyday clinical settings can produce findings that are more applicable to real-world patient care. This can enhance the external validity of trial results and improve their utility in clinical decision-making.

 

Faster innovation cycles

The ability to conduct trials more quickly and efficiently can shorten the time from discovery to market for new treatments. This can foster a more dynamic and responsive healthcare innovation ecosystem.

 

Integrating clinical trials with clinical practice: Challenges and perspectives

Although the guidance emphasizes integrating clinical trials with clinical practice, such integration may face challenges in the short term. Over time, infrastructure and scientific advances could help overcome them.

 

Quality, integrity, and accuracy of trial data

The guidance emphasizes that sponsors must ensure the quality, integrity, and accuracy of trial data. However, sponsors may encounter inconsistencies with how data is collected by healthcare providers and find that some study procedures cannot be performed within routine clinical practice without causing significant disruption.

Additionally, the lack of standardized formats and terminologies across different data sources can make it difficult to integrate and analyze data uniformly. Although standards and common data models (CDMs) are converging, progress toward harmonization has been slow.

 

Obtaining informed consent

When conducting a traditional clinical trial, sponsors must obtain consent from trial participants, but doing so within routine clinical practice may present additional hurdles. To overcome this when integrating an RCT into clinical practice, the guidance suggests that one solution is to embed informed consent documents into EHRs.

However, when using RWD retrospectively, obtaining appropriate consent remains a significant challenge and may require future changes to the rules governing consent waivers in some jurisdictions.

 

Controlling for bias

When incorporating a clinical trial into clinical practice, blinding may be difficult to ensure. According to the guidance, blinding may add complexity to trial implementation, require greater resources, increase costs, and lengthen timelines. When blinding is not possible, identifying potential sources of bias and incorporating measures to address them into the trial design presents additional challenges.

 

Data privacy and security

Ensuring the privacy and security of patient data is crucial. The use of RWD must comply with stringent data protection regulations, which can complicate data sharing and integration.

Facilitating secure and compliant data sharing between institutions and across borders also remains a significant hurdle.

 

Methodological challenges

Developing robust methodologies to analyze RWE and integrate it with traditional clinical trial data is essential to ensure the reliability and validity of the results.

RWD may also be subject to biases that can affect the validity and generalizability of findings. Ensuring that the data accurately represents the broader patient population is crucial.

 

Addressing these challenges requires collaboration among stakeholders, including researchers, healthcare providers, regulatory bodies, and technology developers. By overcoming these hurdles, the integration of RWD into clinical trials can enhance the relevance and efficiency of clinical research, ultimately leading to better patient outcomes.

 

Final takeaways

The FDA’s new draft guidance represents a significant step toward modernizing clinical trial methodologies, making them more patient-centric and reflective of real-world conditions. This evolution is poised to enhance the relevance, efficiency, and impact of clinical research in the coming years.

This approach is also consistent with the growing trend of considering the “entirety of evidence.” The conventional hierarchy of evidence, which pits clinical trials against real-world evidence study designs, may need to be revisited in favor of a more holistic view that weighs a variety of evidentiary needs on the one hand and a continuum of study designs and data sources on the other.

 

Notes

1 U.S. FDA. (September 2024). Integrating Randomized Controlled Trials for Drug and Biological Products into Routine Clinical Practice: Draft Guidance for Industry.

Pediatric Development Plans: Key Considerations

Historically, many drugs have been prescribed to children even though this patient population has largely been excluded from clinical trials. Authorities worldwide have therefore implemented regulations to address the gap in drug research involving children and to promote efforts that increase knowledge of pediatric pharmaceutical use.

There is an obvious logic. If medicines are to be used in children, they need to be studied in pediatric populations to ensure they are safe and effective. Here, we share important considerations for your pediatric development plan, including the US pediatric study plan (PSP) and the EU pediatric investigation plan (PIP).

 

When do sponsors need to conduct pediatric studies and when are they exempt?

Whether you need to include children in your clinical studies will partly depend on which disease you are targeting and what type of medicine you are studying. If your drug targets a condition that does not affect children, such as Alzheimer’s disease, you will be granted a waiver. A waiver may also be granted for specific age groups based on safety concerns, lack of efficacy, the condition not occurring in that age group, or other age-related reasons. Sometimes a deferral may be granted, meaning that the pediatric studies can be postponed until after you have shown that the drug is safe and effective in adults. However, outlining a PIP/PSP for your drug is mandatory, regardless of whether you expect to receive a waiver or deferral for the pediatric studies.

 

The challenge of harmonizing across national borders

Harmonizing pediatric study plans across different parts of the world is complex because authorities in different regions have varying recommendations about when to initiate pediatric study plans and what they should include. For example, in the EU it is preferred to submit a PIP early in development, when pharmacokinetic data become available, whereas in the US the FDA requests a PSP after the completion of Phase II trials. These differences in timing make it challenging to coordinate pediatric studies globally. To manage this effectively, the best practice is to set a strategy for the global pediatric plan early in the development process; without this proactive approach, the pediatric plans could delay the entire development project.

 

The contents of a PSP or PIP

The purpose of a PIP/PSP is to gather comprehensive information about the use of a drug in pediatric populations. Below are examples of what it should contain:

  • An overview of the disease, diagnosis, and treatment, highlighting differences between children and adults.
  • An assessment of the need for the drug in children across all age groups from birth to adolescence.
  • A summary of available chemical, preclinical, and clinical data on the drug.
  • A proposed strategy for any required preclinical studies and measures to adapt the drug’s formulation for use in children.
  • A proposed plan for potential clinical studies in children, including the timing of these studies in relation to those conducted in adults.

 

Financial benefits of conducting pediatric studies

Conducting pediatric studies not only ensures the safety and efficacy of a medicine in children but may also introduce new market opportunities in the pediatric population. In addition, following your pediatric plan can yield significant financial benefits in the form of a six-month patent extension (additional protection). It may seem short, but a six-month extension provides valuable exclusivity on the market and helps developers maximize the commercial lifespan of their product.

Regulatory incentives for pediatric oncology drugs: The RACE for Children Act

The Research to Accelerate Cures and Equity (RACE) for Children Act, passed by the U.S. Congress in 2017 and implemented in August 2020, significantly reformed the landscape of pediatric oncology drug development. The Act mandates that new cancer drugs developed for adults must also be evaluated for pediatric use if the molecular target of the drug is relevant to pediatric cancers. This requirement extends to drugs with orphan drug designation, which were previously exempt from such studies. Prior to the RACE Act, pharmaceutical companies were not obligated to conduct pediatric studies for oncology drugs developed for adult cancers, leading to a significant gap in treatment options for children.

Early findings are promising, showing a clear rise in the number of oncology drugs being studied for pediatric use. Between August 2020 and August 2022, 32 initial pediatric study plans were submitted to the FDA due to the RACE Act, indicating a promising shift towards more inclusive drug development practices. [1]

 

Key Takeaways

Integrating pediatric patients into clinical trials can help ensure the safe and effective use of medicines for children. This is emphasized by global regulatory requirements and incentivized initiatives. However, navigating diverse sets of regulatory guidelines across countries and regions presents challenges in harmonizing and coordinating pediatric development plans on a global scale. With careful planning and considerations of the key factors outlined here, sponsors can minimize delays and expedite the approval process, ensuring timely access to safe and effective drugs for both adults and children.

 

Have questions? Get in touch with our experts: Erika Spens, Director, Regulatory Affairs; Sofie Broberg, Senior Consultant, Regulatory Affairs; Anna Törner, VP, Strategic Regulatory Affairs; and Linda Nord, Senior Consultant, Regulatory Affairs: Contact Our Strategic Consulting Team

Notes

[1] Children’s Cancer Cause. (2023, February 8). First Two Years of the RACE Act Evaluated in New GAO Report. https://www.childrenscancercause.org/blog/race-act-gao-report

Writing a Successful Study Protocol for Real-World Evidence Studies

Real-world evidence studies are becoming increasingly popular in pharmaceutical development. But to ensure such studies are feasible and of high scientific quality, a well-written study protocol is essential. Let’s take a closer look at how to write a successful study protocol for real-world evidence studies: Read more »

Ulrika Andersson on First-in-Human Clinical Trial Development

The first-in-human trial, which aims to show the safety and tolerability of a new drug, is a major milestone for any drug development project. For this edition of the Industry Voices series, Ulrika Andersson, the new Director of Drug Development with the Therapeutics Development Team, discusses the route to the first-in-human clinical trial, planning nonclinical studies, and how to approach regulatory guidelines.

Read more »