Platform trials are randomized clinical trials that allow simultaneous comparison of multiple interventions, usually against a common control. Arms to test experimental interventions may enter and leave the platform over time. This implies that the number of experimental intervention arms in the trial may change as the trial progresses. Determining optimal rates at which patients are allocated to the treatment and control arms in platform trials is challenging because the optimal allocation depends on the number of arms in the platform, and the latter typically varies over time. In addition, the optimal allocation depends on the analysis strategy used and the optimality criteria considered. In this article, we derive optimal treatment allocation rates for platform trials with shared controls, assuming that a stratified estimation and a testing procedure based on a regression model are used to adjust for time trends. We consider both analysis using concurrent controls only and analysis methods using concurrent and non-concurrent controls, and assume that the total sample size is fixed. The objective function to be minimized is the maximum of the variances of the effect estimators. We show that the optimal solution depends on the entry time of the arms in the trial and, in general, does not correspond to the square root allocation rule used in classical multi-arm trials. We illustrate the optimal allocation and evaluate the power and type 1 error rate compared to trials using one-to-one and square root allocations by means of a case study.
Platform trials compare multiple experimental treatments to a control. They are multi-arm multi-stage trials with the additional feature of allowing arms to enter and leave the trial over time.1–4 As in multi-arm trials, the common trial infrastructure permits shortening the required time and reducing the costs to evaluate new interventions. In addition, the shared control group increases the statistical efficiency compared to separate parallel group trials and requires fewer patients to be allocated to the control group. However, due to the additional flexibility of platform trials, their design and analysis are more complex.
A major concern when designing and analyzing platform trials is the potential presence of time trends, due to, for instance, changes in the patient population being recruited.5 In particular, if the allocation rates between each of the active treatment arms and the control group vary over time, such time trends can lead to biased treatment effect estimates and inflation of the type 1 error rate in hypothesis tests.6–9 This issue has also been discussed for response-adaptive designs.10 To address such biases, time period-adjusted analyses based on regression models have been proposed,11 where the time periods are defined as the time spans in which the allocation ratios stay constant. Simon and Simon12 considered randomization-based inference as a solution to avoid the potential bias and to control the type 1 error rate in the presence of time trends.
Time trends are an even larger concern when so-called non-concurrent controls are used for treatment-control comparisons. Here, for a specific experimental treatment arm, non-concurrent controls refer to the patients allocated to the control group before the arm under evaluation enters the platform trial. In contrast, concurrent controls are the control group patients randomized concurrently (in time) to those in the treatment arm. While including non-concurrent controls in the estimation of treatment effects can increase the power of testing treatment-control differences and reduce the variance of the estimates, they can also introduce bias in the estimates due to time trends if not adjusted for.13–16 Also in this context, time period-adjusted analyses based on regression models have been proposed. They can adjust for potential time trends, and thus control the type 1 error rate and give unbiased estimates, if time trends in all treatment arms are equal and additive on the model scale.17
In this article, we derive optimal treatment allocation rates for platform trials under the assumption that a time period-adjusted analysis based on regression models is used. To understand the principles of optimal allocation strategies in platform trials, we focus on the simple setting of a platform trial with two treatment arms and a shared control, where one of the treatment arms enters when the trial is already ongoing. For this platform trial design, we aim to clarify the design elements on which the optimal allocation ratios depend and to compare the optimal platform trial design with the optimal conventional multi-arm trial design. For the latter, it is well known that for K experimental treatments (and under some additional assumptions), the standard error of the treatment effect estimates is minimized when sqrt(K) times as many patients are allocated to the control group as to each experimental arm, the so-called square root allocation.18
Several authors have discussed the problem of adding a new treatment arm during an ongoing trial using different optimality criteria and statistical analysis procedures. Cohen et al.19 reviewed statistical methodologies and examples of trials with newly added treatment arms. Choodari-Oskooei et al.20 focused on the family wise type 1 error rate when new arms are added. Elm et al.21 evaluated the operating characteristics of pairwise comparisons in trials adding a new arm during the trial under different approaches. Ren et al.22 described statistical considerations with respect to type 1 error and power in three-arm umbrella trials. They also discussed the optimal allocation ratio for the control arm in periods in which treatment arms overlap, when minimizing the sum of variances of the treatment effect estimators. However, to the best of our knowledge, Bennett and Mander23 is the first article in which the optimal allocation rates in platform trials are investigated. They optimized the allocation rates to maximize the probability of finding all treatments that are better than the control, while assuming that the expected treatment effects were equal for all treatment arms. In their approach, treatment comparisons are based on simple group comparisons with t-tests, using concurrent controls. Concurrent data from different periods (where other treatments may have entered or left the platform and the allocation ratios may have changed) are pooled in this approach.
However, in platform trials with time trends and changing allocation ratios over time, this approach can lead to an inflation of the type 1 error rate and biased estimates.12 More recently, Pan et al.24 addressed the modification of the critical boundaries to control the family wise error rate and re-estimation of the sample sizes when new arms are added, and, similarly to Bennett and Mander,23 provided the optimal allocation ratios when minimizing the total sample size to achieve a desirable marginal power level, using only concurrent controls and without adjusting for potential time trends.
Here, we optimize allocation rates for a testing procedure based on a regression approach, which includes a "period" effect to account for changing allocation ratios when using period-wise treatment effect estimators, and we focus attention on the marginal power. We also consider a different optimization criterion. Instead of the power to reject all null hypotheses corresponding to effective treatments, we minimize the maximum of the standard errors of the treatment effect estimators resulting from the regression model. Assuming equal targeted treatment effects, this is asymptotically equivalent to maximizing the minimum power across the investigated treatments. This implies (under some regularity assumptions) that, under the optimal design, the power of the different arms will be equal. In particular, for multi-sponsor platform trials, this is a reasonable feature, as all sponsors should get the same chance to demonstrate the efficacy of their treatments in the platform. In addition, besides platform trials that only use concurrent data to compare treatments against control, we consider trials that incorporate non-concurrent controls.
The article is structured as follows. In Section 2, we introduce the platform trial designs considered and introduce the notation. In Section 3, we derive optimal allocations for trials using concurrent controls only and, in Section 4, do so for trials also incorporating non-concurrent controls. In Section 5, we illustrate the application of different allocation rules (optimal compared to equal allocation and square root allocation) by a simulation study based on an example in the context of a hypercholesterolemia trial. In Section 6, we discuss optimal allocations under unequal variances (Section 6.1) and optimal allocations when minimizing the sum of the variances (Section 6.2). We conclude the article with a discussion.
Trial designs and optimality criterion
Consider a platform trial evaluating the efficacy of two experimental arms (a = 1, 2) against a shared control (a = 0). The trial design allows the sequential entry and exit of the experimental arms, such that the trial initially starts with arm 1 and the control, and arm 2 may enter the platform trial at a later time point. In addition, recruitment to arm 1 may end before the recruitment to arm 2 ends. Thus, the platform trial consists of three periods (j = 1, 2, 3), defined according to the sets of actively recruiting treatment arms A_j. In period 1, patients are recruited to treatment 1 and control (A_1 = {0, 1}), in period 2 to both experimental treatments and control (A_2 = {0, 1, 2}) and, in period 3, to treatment 2 and control (A_3 = {0, 2}).
The total sample size of the trial is denoted by N, which is partitioned into the three periods with sample sizes n_j, j = 1, 2, 3. We refer to the corresponding proportions of patients by t_j = n_j/N. In each period j, patients are allocated to the arms a in A_j with the allocation ratios p_{j,a}, such that the ratios sum to 1 within each period. See Figure 1 for an illustration of the trial design.
Platform trial with two active treatments (arms 1 and 2) and a shared control (arm 0). The trial duration is divided into three periods. p_{j,a} denotes the allocation ratio in period j (j = 1, 2, 3) to arm a (a = 0, 1, 2); t_j = n_j/N refers to the proportion of patients in period j, where n_j and N are the sample sizes in period j and in the overall trial, respectively.
Let y_{i,a} denote the observation of patient i on treatment arm a (a = 0, 1, 2), distributed as N(mu_a, sigma^2), where sigma^2 is assumed to be known and where we dropped the period subindex in the mean for simplicity. Let theta_a = mu_a - mu_0 denote the treatment effect for treatment a (a = 1, 2), and consider the null hypotheses H_{0,a} of no treatment effect for arm a.
The simple mean differences between the treatment and control groups are biased estimators of the treatment effects if there are time trends (i.e. the means in the treatment arms change over time) and the allocation rates change across periods.5 Therefore, we use stratified estimators, stratified by period, that adjust for potential time trends.9,25,26 The stratified estimators are given by

  theta_hat_a = sum_{j in S_a} w_{a,j} theta_hat_{a,j},  a = 1, 2,      (1)

with S_1 = {1, 2} and S_2 = {2, 3} denoting the periods in which arm a recruits,
where theta_hat_{a,j} = y_bar_{a,j} - y_bar_{0,j} is the treatment effect estimate in period j for arm a (a = 1, 2), where y_bar_{a,j} and y_bar_{0,j} are the sample means in period j for the experimental and control arms, and the weights are given by

  w_{a,j} = (1/v_{a,j}) / ( sum_{j' in S_a} 1/v_{a,j'} ),      (2)
where v_{a,j} = sigma^2 (1/n_{j,a} + 1/n_{j,0}) denotes the variance of the estimate in period j, and the per-period arm sample size is given by n_{j,a} = p_{j,a} n_j. This choice of weights minimizes the variance of the stratified treatment effect estimator. Then, the variance of the stratified effect estimator is given by

  Var(theta_hat_a) = ( sum_{j in S_a} 1/v_{a,j} )^(-1) = sigma^2 ( sum_{j in S_a} (1/n_{j,a} + 1/n_{j,0})^(-1) )^(-1).      (3)
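The inverse-variance weighting in (1) to (3) can be illustrated with a minimal numerical sketch. The Python code below is not from the article; the period sample sizes, the effect size of 0.5, and the random seed are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
# hypothetical period sample sizes for arm 1 and control in periods 1 and 2
n_trt, n_ctl = [40, 60], [40, 30]

# simulate one trial (no time trend, control mean 0, treatment effect 0.5)
y_trt = [rng.normal(0.5, sigma, n) for n in n_trt]
y_ctl = [rng.normal(0.0, sigma, n) for n in n_ctl]

# period-wise estimates and their variances v_j = sigma^2 (1/n_{j,a} + 1/n_{j,0})
est = [yt.mean() - yc.mean() for yt, yc in zip(y_trt, y_ctl)]
v = [sigma**2 * (1/na + 1/nc) for na, nc in zip(n_trt, n_ctl)]

# inverse-variance weights (2) and stratified estimator (1)
w = [(1/vj) / sum(1/vk for vk in v) for vj in v]
theta_hat = sum(wj * ej for wj, ej in zip(w, est))

# variance (3) of the stratified estimator
var_theta = 1 / sum(1/vj for vj in v)
```

Note that the stratified variance is smaller than each period-wise variance, since the estimator pools information across periods.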
A few comments: (i) The stratified estimator is equivalent to the non-stratified estimator if the allocation ratios are equal across periods. (ii) Expression (3) for the variances applies if the above distributional assumptions hold, that is, if there are no time trends. However, as discussed above, the stratified treatment effect estimate used is also unbiased if time trends are present. (iii) The stratified estimator (and corresponding test) corresponds to the treatment effect estimators in the linear models:

  y_i = eta + theta_1 1(a_i = 1) + theta_2 1(a_i = 2) + tau 1(j_i = 2) + e_i,      (4)
  y_i = eta + theta_1 1(a_i = 1) + theta_2 1(a_i = 2) + tau 1(j_i = 3) + e_i,      (5)
where 1(.) denotes the indicator function, and a_i and j_i denote the arm and period of patient i. The first model (to test H_{0,1}) is fit with observations from periods 1 and 2 of the control and treatment arms 1 and 2, and the second model (to test H_{0,2}) with observations from periods 2 and 3 of the control and arms 1 and 2. Here tau denotes the period effect, which adjusts for potential time trends. The treatment effects are assumed to be constant in time (no time-by-treatment interaction).
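The equivalence between the period-adjusted regression estimator and the inverse-variance weighted stratified estimator can be checked numerically. The Python sketch below uses hypothetical cell sizes and simulates only the control and arm 1 (arm 2 appears in a single period of each model, so its data do not affect the arm 1 estimate); it is an illustration, not code from the article.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0
# hypothetical cell sizes: (period, arm) with arm 0 = control, arm 1 = treatment
n = {(1, 0): 50, (1, 1): 50, (2, 0): 30, (2, 1): 60}

y, trt, per2 = [], [], []
for (j, a), cnt in n.items():
    y.append(rng.normal(0.3 * a, sigma, cnt))   # effect 0.3, no time trend
    trt.append(np.full(cnt, float(a)))
    per2.append(np.full(cnt, float(j == 2)))
y, trt, per2 = map(np.concatenate, (y, trt, per2))

# regression with period effect: y ~ 1 + treatment + period
X = np.column_stack([np.ones_like(y), trt, per2])
theta_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# stratified estimator: inverse-variance weighted period-wise differences
d, h = [], []
for j in (1, 2):
    yt = y[(trt == 1) & (per2 == float(j == 2))]
    yc = y[(trt == 0) & (per2 == float(j == 2))]
    d.append(yt.mean() - yc.mean())
    h.append(1.0 / (1.0 / len(yt) + 1.0 / len(yc)))
theta_strat = sum(hj * dj for hj, dj in zip(h, d)) / sum(h)
```

The two estimates agree exactly (up to floating point error), reflecting the standard fixed-effects result that the regression coefficient is a precision-weighted average of the within-period differences.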
The optimality criterion we aim to minimize is the maximum of the variances of the stratified effect estimators across experimental treatment arms. Thus, given a fixed overall sample size N, we aim to find the allocation probabilities that minimize the objective function

  max( Var(theta_hat_1), Var(theta_hat_2) ).      (6)
If there is no unique solution to this optimization problem, we choose, among all allocation probabilities attaining the same optimized maximum variance (6), the solution with the smallest minimum variance.
Note that if the expected effect sizes are equal for both experimental arms, minimizing (6) will also maximize the minimum individual power. Furthermore, as sigma^2 is only a scaling factor in (3) that is considered fixed, the optimal allocation does not depend on sigma^2. Similarly, assuming sigma^2 to be known is not a limitation, because the optimal allocation does not depend on the variances, as long as they are equal across arms. The optimization under unequal variances will be discussed in Section 6.1. Finally, as mentioned above, the analysis adjusts for biases due to time trends, but the optimization is performed under the assumption that there is no time trend. This is because at the planning stage it is usually unknown whether time trends will be present in the trial or what pattern they will follow.
Optimal designs
We derive optimal designs minimizing the objective function (6) for three different cases:
Unrestricted optimization. In this setting, the entry time for treatment 2 (corresponding to t_1) is a design parameter that is determined by the trial design rather than being externally governed, for example, by the time a new treatment becomes available to be included in the trial. Similarly, the time from entry of treatment 2 to completion of the sub-trial corresponding to treatment 1, corresponding to t_2, is a design parameter in this case, which can also be chosen. We therefore optimize over t_1 and t_2 (t_3 is then given by 1 - t_1 - t_2), in addition to the allocation ratios within periods.
Fixed sample size in period 1. We fix t_1 and optimize t_2 and the allocation ratios within periods. In this case, the entry time of arm 2 is not subject to optimization.
Fixed sample sizes in all three periods. In addition to t_1, we also fix t_2 (and thus also t_3). Therefore, this case corresponds to a design in which both the entry time of arm 2 (given by t_1) and the completion time of arm 1 (given by t_1 + t_2) are predefined, and we optimize over the allocation ratios within periods only.
The three cases differ with regard to the parameters to be optimized. In Case 3, the proportions of patients in periods 1, 2, and 3 are fixed, and the allocation ratios in each period are optimized. In Case 2, only the proportion of patients in period 1 is fixed, and, in addition to the allocation ratios within periods, the proportion of patients recruited in period 2 is optimized. Finally, in Case 1, the proportions of patients recruited in periods 1, 2, and 3 and the allocation ratios within the periods are optimized. Assuming a constant recruitment rate, the proportions of patients in the different periods correspond to the entry and exit times of the experimental treatment arms. Instead of assuming the times to be deterministic, one could consider the entry times to be random. This consideration is discussed in Section 7. As we will see, the three cases give qualitatively different optimal designs, which differ in the number of periods. Table 1 provides a summary of the design parameters that are assumed to be fixed and the parameters that are optimized, and depicts the structure of the optimal solution for each case.
Summary of considered designs and optimal solutions.

  Case   | Design assumptions                                  | Optimized parameters | Solution
  Case 1 | (None)                                              | t_1, t_2, p_{j,a}    | One-period multi-arm design
  Case 2 | Fixed entry time (t_1)                              | t_2, p_{j,a}         | Two-period design (t_3 = 0)
  Case 3 | Fixed entry time (t_1), fixed exit time (t_1 + t_2) | p_{j,a}              | Three-period design

The first column indicates the case (see Section 3), the second specifies the assumptions regarding the entry and exit times of arms in the trial, the third lists the optimized parameters, and the fourth column illustrates the optimal design. The solutions for the allocation ratios in period 2 in Case 2 with t_1 < 1/2 and Case 3 with t_1 < 1/2 and t_3 < 1/2 can be found in Sections 3.2 and 3.3 for concurrent controls, and Sections 4.2 and 4.3 for non-concurrent controls. The code to obtain the solutions is shown in Supplemental Material D.
In the subsections below, we present the derivations of the optimal allocations in each of the three cases. In the derivations, we assume that the sample sizes are positive real numbers such that the variance of the estimators (3) is a differentiable function of the sample sizes. This is a reasonable approximation for large sample sizes. The derivations were performed in Mathematica (see Section D in the Supplemental Material and GitHub: https://github.com/MartaBofillRoig/Allocation). In calculations resulting in several solutions, we selected those resulting in real values between 0 and 1 based on numerical examples only.
Furthermore, since stratified test statistics are used, the optimal allocation in periods 1 and 3 is equal allocation, that is, p_{1,0} = p_{1,1} = 1/2 and p_{3,0} = p_{3,2} = 1/2. This can be directly seen from equation (3): the allocation ratios in period 1 only affect the period 1 variance v_{1,1} and, since n_{1,1} + n_{1,0} = n_1 is fixed, the term (1/n_{1,1} + 1/n_{1,0})^(-1) is maximized for n_{1,1} = n_{1,0}. A similar argument holds for the optimal allocation in period 3. Thus, it remains to determine the optimal allocation in period 2 and, depending on the case considered, the optimal partition between periods by means of the period proportions t_j.
Note that if the optimal allocation in period 2 allocates positive proportions of subjects to both experimental arms (i.e. if p_{2,1} > 0 and p_{2,2} > 0), then the optimal allocation satisfies the constraint

  Var(theta_hat_1) = Var(theta_hat_2).      (7)
This follows by the following argument: Assume the variances at the optimal allocation are not equal, for example, that Var(theta_hat_1) < Var(theta_hat_2). Then, the objective function can be further decreased by moving a small fraction of the sample size from treatment 1 to treatment 2 such that the inequality still holds. This shift reduces Var(theta_hat_2) and thereby the maximum of the two variances. This is a contradiction to the assumption that the allocation is optimal.
Case 1: Unrestricted optimization
In this case, the optimal design satisfies t_2 = 1 (and t_1 = t_3 = 0) such that all patients are recruited in period 2. In this design, there is only a single period and the effect estimates reduce to the non-stratified mean differences. The one-period design is optimal because then all control observations are shared and are used in both treatment effect estimates, and a non-stratified estimate is applied which, under the model assumption of no time trend, also reduces the variances. The resulting trial is a classical multi-armed trial with many-to-one comparisons.18 For this design, it is well known that the optimal allocation is p_{2,0} = sqrt(2)/(2 + sqrt(2)) and p_{2,1} = p_{2,2} = 1/(2 + sqrt(2)), that is, sqrt(2):1:1 allocation between the control and the two experimental arms.
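The square root rule for the one-period design can be verified by a direct grid search. The Python sketch below is illustrative (the sample size N = 300 is arbitrary; the optimum does not depend on N or sigma^2): it minimizes the common variance over the control allocation p0, with each experimental arm receiving (1 - p0)/2.

```python
import numpy as np

N, sigma2 = 300, 1.0
# candidate control allocations p0; each experimental arm gets (1 - p0)/2
p0 = np.linspace(0.01, 0.99, 9801)
# common variance of both effect estimators in a one-period three-arm trial:
# sigma^2 (1/n_arm + 1/n_control) with n_arm = (1 - p0) N / 2, n_control = p0 N
max_var = sigma2 * (2.0 / (1.0 - p0) + 1.0 / p0) / N
p0_opt = p0[np.argmin(max_var)]
```

The minimizer agrees with the analytical value sqrt(2)/(2 + sqrt(2)) (approximately 0.414) up to the grid resolution.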
Case 2: Fixed sample size in period 1
Assume that the time point when the second treatment enters the platform trial is given. Thus, the size of period 1 (t_1) is fixed and we optimize (6) over t_2 and the allocation ratios. As noted above, the optimal allocation in the first and third periods is equal allocation. We derive the optimal allocation in period 2 for all t_1, considering first the scenarios where t_1 >= 1/2 and then the scenarios where t_1 < 1/2.
If t_1 >= 1/2, period 1 contains at least half the total sample size. Therefore, for any choice of allocation ratios within periods 2 and 3, we have Var(theta_hat_1) <= Var(theta_hat_2) (where equality can hold only for t_1 = 1/2), even if all observations in periods 2 and 3 are allocated to arm 2 and the control. As the objective function is the maximum of the two variances, it is minimized in this case if all patients after period 1 are equally allocated to treatment 2 and control, that is, p_{j,0} = p_{j,2} = 1/2 and p_{j,1} = 0 for j = 2, 3. This optimal solution corresponds to two separate consecutive trials, where in the first period we compare arm 1 versus control, there is no second period, and in the third period, we use the remaining sample size to compare arm 2 versus control. Note that all designs with arbitrary t_2 in which, in periods 2 and 3, patients are allocated only to arms 0 and 2 with equal allocation minimize Var(theta_hat_2) as well, and are therefore also optimal solutions. Moreover, if Var(theta_hat_1) < Var(theta_hat_2) under optimal allocation (as is the case for t_1 > 1/2), the maximum variance does not change if the allocation ratios in period 1 slightly deviate from equal allocation. Therefore, the optimal design is not unique when the objective function is just the maximum of the variances. However, with the additional condition of the smallest minimum variance among all minmax solutions (see the condition below (6)), equal allocation in the first period is optimal.
Otherwise, if t_1 < 1/2, one can see that under the optimal design in period 2, patients are allocated to both experimental treatment arms. Therefore, as discussed at the beginning of Section 3, under the optimal design the variances of the effect estimators are equal, cf. constraint (7). Furthermore, for the optimal design we have t_3 = 0 and t_2 = 1 - t_1. This can be confirmed by numerical optimization. Figure 3 shows the maximum variance under optimal allocation compared to the maximum variance of optimized separate trials with the same total sample size, as a function of t_1. One can see that the maximum variance is minimized for t_3 = 0. This is due to the following argument: If the period 3 observations in arm 2 and control are all moved to the respective arms in period 2, both variances of arms 1 and 2 will decrease. For arm 1, this holds because the concurrent control group sample size increases. For arm 2, this is the case because a non-stratified estimator is used instead of a stratified estimate, the non-stratified estimator having lower variance under the model assumption of no time trend.
The optimal allocations p_{2,0}, p_{2,1}, and p_{2,2} can be obtained numerically as a special case of Case 3, setting t_3 = 0, outlined below. Hence, the optimal design is a two-period trial in which arm 2 enters later, but both arms finish at the end of the trial. In this case, the numerical examples suggest that the optimal allocation is unique.
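A brute-force illustration of this two-period optimization is given below. The Python grid search is a sketch (the values t_1 = 0.3 and N = 1000 are illustrative, and this is not the article's Mathematica code); at the optimum, the two variances are approximately equal, in line with constraint (7).

```python
import numpy as np

sigma2, N, t1 = 1.0, 1000.0, 0.3
n1, n2 = t1 * N, (1 - t1) * N
h = lambda a, c: 1.0 / (1.0 / a + 1.0 / c)   # per-period information term

best = None
for q0 in np.arange(0.05, 0.95, 0.005):          # control allocation in period 2
    for q1 in np.arange(0.05, 0.95, 0.005):      # arm 1 allocation in period 2
        q2 = 1.0 - q0 - q1                        # arm 2 allocation in period 2
        if q2 <= 0.0:
            continue
        # arm 1: stratified over periods 1 (equal allocation) and 2
        v1 = sigma2 / (h(n1 / 2, n1 / 2) + h(q1 * n2, q0 * n2))
        # arm 2: recruits in period 2 only
        v2 = sigma2 * (1.0 / (q2 * n2) + 1.0 / (q0 * n2))
        m = max(v1, v2)
        if best is None or m < best[0]:
            best = (m, q0, q1, q2, v1, v2)

m, p0_opt, p1_opt, p2_opt, v1, v2 = best
```

In this example, arm 2 (which has no period 1 data) receives a larger share of period 2 than arm 1, and the control receives more than arm 1.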
Case 3: Fixed sample sizes in all three periods
Assume that, in addition to t_1, the proportion of recruited patients when treatment 2 enters, also t_2, the proportion of patients in period 2, is fixed. To derive the optimal allocation in period 2, we distinguish three scenarios: (i) t_1 >= 1/2; and, for t_1 < 1/2: (ii) t_3 >= 1/2, and (iii) t_1 < 1/2 and t_3 < 1/2.
In scenario (i), if t_1 >= 1/2, as in Case 2, the optimal allocation assigns all patients after period 1 to treatment arm 2 and control, with equal allocation between the two arms. This is achieved, for example, for p_{2,0} = p_{2,2} = 1/2, p_{2,1} = 0 and p_{3,0} = p_{3,2} = 1/2. With the additional optimality condition to achieve the smallest minimum variance among all minmax solutions (see the condition below (6)), equal allocation in period 3 is optimal.
In scenario (ii), if t_3 >= 1/2, then, for any allocation in period 2, Var(theta_hat_2) <= Var(theta_hat_1). Analogously to scenario (i), the optimal design allocates patients in period 2 to treatment arm 1 and control such that p_{2,0} = p_{2,1} = 1/2, p_{2,2} = 0. This design minimizes the variance of theta_hat_1, which is the maximum of the variances in this case.
In scenario (iii), if, on the other hand, t_1 < 1/2 and t_3 < 1/2, one can see that under the optimal design, patients are allocated to both experimental treatment arms in period 2. Therefore, as discussed at the beginning of Section 3, the variances of the two treatment effect estimators are equal under optimal allocation, cf. equation (7). To optimize under this constraint, we use the method of Lagrange multipliers, which shows that the solution satisfies
Now, the allocation ratios in period 2 can be obtained as the numerical solution of (9) together with (8). As the sum of the allocation ratios is 1, the allocation ratio for the control results as well. Note that by (9) the optimal solution depends on the period proportions t_1 and t_3 only via the ratio t_3/t_1. The numerical examples suggest that the optimal allocation is unique in this case.
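The solution of (8) and (9) can also be approximated by optimizing (6) directly. The Python sketch below (illustrative period proportions t_1 = t_3 = 0.25, t_2 = 0.5; Nelder-Mead instead of the article's Mathematica routines) minimizes the maximum variance and, in this symmetric case t_1 = t_3, recovers the square root allocation in period 2.

```python
import numpy as np
from scipy.optimize import minimize

sigma2, N = 1.0, 1000.0
t1, t2, t3 = 0.25, 0.5, 0.25      # fixed period proportions (Case 3)
h = lambda a, c: 1.0 / (1.0 / a + 1.0 / c)   # per-period information term

def variances(p):
    q0, q1 = p
    q2 = 1.0 - q0 - q1
    if min(q0, q1, q2) <= 0.0:
        return np.inf, np.inf
    # equal allocation in periods 1 and 3 (optimal for the stratified estimator)
    v1 = sigma2 / (h(t1 * N / 2, t1 * N / 2) + h(q1 * t2 * N, q0 * t2 * N))
    v2 = sigma2 / (h(q2 * t2 * N, q0 * t2 * N) + h(t3 * N / 2, t3 * N / 2))
    return v1, v2

res = minimize(lambda p: max(variances(p)), x0=[0.4, 0.3], method="Nelder-Mead")
p0_opt, p1_opt = res.x
```

At the optimum the two variances are equal, as required by constraint (7), and the control allocation in period 2 is close to sqrt(2)/(2 + sqrt(2)).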
Figure 2 shows the optimal allocation probabilities in period 2 as functions of t_3/t_1 for different t_2. The optimal allocation ratio for the control is not monotone in t_3/t_1 but has a minimum at t_3/t_1 = 1, that is, where t_1 = t_3. For this special case, we can derive an explicit solution for the optimization problem. As in this case the objective function is symmetric in the two treatments, it follows that the optimum is attained at an allocation with equal variances. Therefore, to satisfy constraint (7), the period 2 variances of the two treatment effect estimates also need to be equal. The latter, however, implies that in period 2, the allocation ratios for both treatment arms have to be equal, such that p_{2,1} = p_{2,2}. Furthermore, by results for the many-to-one setting,18 we know that the variances are minimized for square root allocation, such that p_{2,0} = sqrt(2)/(2 + sqrt(2)) and p_{2,1} = p_{2,2} = 1/(2 + sqrt(2)). This can be also verified by substituting the solution in (9).
Optimal allocation probabilities in period 2 as functions of t_3/t_1 for different values of t_2 in trials with three periods as shown in Figure 1. Black lines: control (arm 0), red lines: arm 1, and blue lines: arm 2. The vertical line at t_3/t_1 = 1 indicates where equal allocation between treatments 1 and 2 is optimal for the design with concurrent controls only. The horizontal line marks the value sqrt(2)/(2 + sqrt(2)). Solid lines correspond to the optimal allocations in trials utilizing concurrent controls only (Section 3), and dashed lines correspond to the optimal allocations in trials utilizing concurrent and non-concurrent controls (Section 4).
Screenshot from the shiny app (available at https://github.com/MartaBofillRoig/Allocation). The figure on the top shows the optimal solutions for the allocation ratios in period 2 as shown in Figure 2. The figure on the bottom compares the maximum of the two variances to the variance using separate trials. Solid lines indicate the solutions for trial designs using concurrent controls only, while dashed lines correspond to those for trials using concurrent and non-concurrent controls.
In order to illustrate further cases apart from those included in the article, we built a shiny app (available at https://github.com/MartaBofillRoig/Allocation) that allows visualizing the allocation rates with respect to the sample sizes per period. The app also shows the variance in the optimum as a function of t_1. Figure 3 includes one of the figures that can be obtained in the app. On the bottom, we see the variance with respect to the proportion t_1 compared to the variance when running two separate trials. We can observe that the decrease in variance becomes larger as the shared period 2 becomes larger.
Optimal allocation in trials using non-concurrent controls
Suppose we want to use also non-concurrent controls to compare arm 2 against the control. To this end, we consider the regression model estimate of the treatment effect of treatment 2 from the model

  y_i = eta + theta_1 1(a_i = 1) + theta_2 1(a_i = 2) + tau_2 1(j_i = 2) + tau_3 1(j_i = 3) + e_i.      (10)
We fit this model based on all observed data in periods 1, 2, and 3. This model provides unbiased treatment effect estimators if time trends affect all arms equally and are additive on the model scale.17 For treatment 1, we suppose that the efficacy is evaluated after treatment arm 1 ends, as is common in platform trials. Thus, we do not use the estimate from this model, but the stratified treatment effect estimator defined in (4), based on data from treatment 1 and the control group from periods 1 and 2 only.
Using the regression model (10) to incorporate non-concurrent controls implies that the treatment effect of arm 2 compared to control is estimated by the stratified treatment effect estimator
where the pooled means per period enter with weights that are functions of the sample sizes. As a consequence, the variance of the treatment effect estimator for arm 2 can also be written as a function of the allocation rates and the total sample size N. This variance is given by
See Section A2 in the Supplemental Material for the derivation and the expression of the weights. For arm 1, for which the treatment effect is estimated using only concurrent controls, the treatment effect estimator and its variance are given by (1) and (3), respectively.
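The variance of the arm 2 estimator under model (10) can be computed directly from the information matrix sigma^2 (X'X)^(-1), without simulation. The Python sketch below (hypothetical cell sizes, not from the article) does this and confirms that, under the model assumptions, using non-concurrent controls cannot increase the variance relative to the concurrent-only stratified estimator (3).

```python
import numpy as np

sigma2 = 1.0
# hypothetical sample sizes per (period, arm), with arm 0 = control
n = {(1, 0): 60, (1, 1): 60,
     (2, 0): 40, (2, 1): 30, (2, 2): 50,
     (3, 0): 30, (3, 2): 40}

# design matrix of model (10): intercept, I(a=1), I(a=2), I(j=2), I(j=3)
rows = []
for (j, a), cnt in n.items():
    rows += [[1, a == 1, a == 2, j == 2, j == 3]] * cnt
X = np.array(rows, float)

# covariance of the OLS coefficients with known sigma^2
cov = sigma2 * np.linalg.inv(X.T @ X)
var_ncc = cov[2, 2]          # variance of the arm 2 effect using NCC

# concurrent-only stratified variance (3) for arm 2 (periods 2 and 3)
var_cc = sigma2 / sum(1.0 / (1.0 / n[(j, 2)] + 1.0 / n[(j, 0)]) for j in (2, 3))
```

Since the concurrent-only estimator is linear and unbiased under model (10) while the OLS estimator is best linear unbiased, var_ncc <= var_cc holds for any cell sizes.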
As in Section 3, we aim to minimize the maximum of the variances (6) and discuss the optimal allocations for unrestricted optimization (Case 1), for the case where the entry time of treatment 2 is fixed (Case 2), and for the case where both the entry time of treatment 2 and the completion time of treatment 1 are given (Case 3).
It is evident from (3) and (12) that for all t_1 and t_2 (and thus for all three considered cases), the optimal allocation in periods 1 and 3 is equal allocation, that is, p_{1,0} = p_{1,1} = 1/2 and p_{3,0} = p_{3,2} = 1/2. This is due to the fact that we consider a stratified estimator. Therefore, it remains to determine the optimal allocation ratios in period 2 in the three cases.
Case 1: Unrestricted optimization
As for the setting using concurrent controls only, without restrictions with respect to the time at which arm 2 enters the platform, the optimal allocation rates correspond to a multi-arm design where all treatment arms start and end at the same time, such that there are no non-concurrent controls. This follows because then all control patients are used in direct comparisons for both treatment arms. Then, the optimal allocation is the square root allocation as in Section 3, that is, p_{2,0} = sqrt(2)/(2 + sqrt(2)) and p_{2,1} = p_{2,2} = 1/(2 + sqrt(2)).
Case 2: Fixed sample size in period 1
Suppose we aim at optimizing a design where the time at which arm 2 enters is given. Therefore, as in Case 2 for trials using concurrent controls only (Section 3.2), we optimize assuming that t_1 is fixed.
As for concurrent controls, we derive the optimal allocation considering first scenarios where t_1 >= 1/2 and then scenarios where t_1 < 1/2. If t_1 >= 1/2, as in the case of concurrent controls, the optimal allocation allocates all patients after period 1 to treatment arm 2 and control, with equal allocation between the two arms. This is achieved, for example, for t_2 = 0, p_{3,0} = p_{3,2} = 1/2, and p_{3,1} = 0, such that the trial consists of periods 1 and 3 only. Note that this design has only two periods and the model (10) is fitted without the period 2 effect, as there are no data to estimate this factor in this case. In addition, the non-concurrent controls do not contribute to the treatment effect estimator (only the weight of the concurrent control mean in (11) is larger than zero) because only the control treatment is present in both periods. Therefore, the period effect can only be estimated from the control group data and consequently the estimate of the control group mean (and also the treatment effect estimate for treatment 2) cannot be improved by estimates from other treatments.
To see that the above allocation is optimal, note that the variance of the arm 2 estimator defined in (12) is minimized, for fixed t_1, by allocating all patients after period 1 equally to arm 2 and control, and for this allocation the variance does not depend on the allocation ratios in period 1. Furthermore, because t_1 >= 1/2 and assuming equal allocation between the control arm and arm 1 in period 1, we have Var(theta_hat_1) <= Var(theta_hat_2). Therefore, under an allocation ratio that minimizes Var(theta_hat_2), the maximum of the variances of the treatment effect estimates is the variance of treatment 2. It follows that this allocation ratio also minimizes the objective function.
If t_1 < 1/2, as in Case 2 in Section 3, the optimal design satisfies t_3 = 0, and therefore leads to a two-period platform trial, where treatments 1 and 2 complete recruitment at the same time and there is no period 3. While this can be seen by inspection of Figure 3, there is also an argument for this result. First, note that for any given three-period platform trial (with t_3 > 0), we can achieve the same sample sizes for each arm in a two-period platform trial, shifting all observations from period 3 to period 2 such that the new period 2 contains all patients recruited after period 1. Now, after period 1, recruiting all observations of the control arm in period 2 instead of splitting them between periods 2 and 3 increases the sample size of the concurrent control group for the estimate of theta_1 and, therefore, reduces its variance. Also, the variance of theta_hat_2 is not increased, as the number of concurrent controls for arm 2 is not affected and, as t_3 = 0, the model (10) is fitted with one parameter less (without the period 3 effect) in this case.
It remains to determine the optimal allocation in period 2 for the resulting two-period platform trial. In this case, the effect size estimate of treatment 2 using non-concurrent controls (11) can be written as
where and its variance simplifies to
where .
We now minimize the maximum of Var(theta_hat_1) and Var(theta_hat_2), where the two variances are defined by (3) and (14), respectively. For given t_1 < 1/2, with a similar argument as at the beginning of Section 3, one can see that (7) holds, and the variances under the optimal allocation are equal. Using this constraint, we obtain an explicit expression for the optimal allocation by minimizing the common variance subject to (7). The resulting formulas for the allocation ratios in period 2 can be found in the Appendix.
Figure 4 displays the optimal allocations as functions of t_1, both for trials using non-concurrent controls (dashed lines) and using only concurrent controls (solid lines). Comparing the period 2 allocation rates of the optimal designs with non-concurrent and only concurrent controls, one sees that the latter allocates more patients to the control group.
Optimal allocation probabilities , in period 2 as a function of for two-period trials (). Black, red, and blue lines represent the allocation rates to control, arm 1, and arm 2, respectively. Solid lines refer to analysis with concurrent controls only, and dashed lines to analysis with concurrent and non-concurrent controls.
Case 3: Given sample sizes in periods 1 and 2
We now assume that and are fixed (and not optimized) corresponding to a three-period platform trial in which arm 2 enters later and arm 1 finishes before arm 2 does (Figure 1). As for concurrent controls, we distinguish three scenarios.
If , as in Case 2, the optimal design allocates all patients equally to treatment 2 and control, that is, for , and .
As for concurrent controls, if and in addition , then for all allocations, the maximum variance is the variance of . The inclusion of non-concurrent controls can only reduce the variance of . Therefore, the optimal design allocates all observations in period 2 to treatment 1 and control.
If and , we minimize by minimizing in for fixed and . In this case, as in (ii), the two variances are equal under the optimal allocation. As we could not obtain an analytical solution for the optimization problem in this case, we minimized in the remaining free variables under the constraint of equal variances with the Mathematica function FindMinimum or the R package nloptr27 using the sequential quadratic programming algorithm for nonlinearly constrained gradient-based optimization.28 See Section D in the Supplemental Material. Numerical solutions for the optimal allocations in period 2 are shown as dashed lines in Figure 2. As expected, the larger , the larger the difference between designs utilizing non-concurrent controls and designs with concurrent controls only, because the non-concurrent control group is then larger. The pattern of the optimal allocation rates is similar to the scenario where only concurrent controls are used. However, the proportion of patients assigned to the control group is lower when non-concurrent controls are used. Furthermore, there may be fewer control patients than patients assigned to arm 1 or 2 (which does not occur if only concurrent controls are used).
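A simplified sketch of this kind of numerical minimization is given below, for the concurrent-controls-only stratified estimator with inverse-variance period weights. The period totals (32/30/32) follow the symmetric three-period case study design, for which the text reports that the optimum coincides with the square root of rule; everything else (the estimator form, `sigma2`, the optimizer) is an assumption of this sketch, which uses Python/scipy rather than the paper's FindMinimum/nloptr code.

```python
from scipy.optimize import minimize

sigma2 = 1.0
N1, N2, N3 = 32, 30, 32  # patients per period; arms split 1:1 in periods 1 and 3

def strat_var(pairs):
    # variance of the stratified treatment-control estimate across periods,
    # pairs = [(n_control_s, n_treatment_s), ...], inverse-variance weights
    info = sum(1.0 / (sigma2 * (1.0 / c + 1.0 / t)) for c, t in pairs)
    return 1.0 / info

def max_var(x):
    p0, p1 = x                # period-2 allocation to control and arm 1
    p2 = 1.0 - p0 - p1        # remainder goes to arm 2
    if min(p0, p1, p2) <= 0.01:
        return 1e6            # penalty keeps the search inside the simplex
    v1 = strat_var([(N1 / 2, N1 / 2), (N2 * p0, N2 * p1)])  # arm 1: periods 1-2
    v2 = strat_var([(N2 * p0, N2 * p2), (N3 / 2, N3 / 2)])  # arm 2: periods 2-3
    return max(v1, v2)

res = minimize(max_var, x0=[0.4, 0.3], method="Nelder-Mead")
p0, p1 = res.x
# in this symmetric case the control share approaches sqrt(2) - 1 ~ 0.414
```

Under these assumptions the optimizer recovers the square root of rule in period 2 (control share about 0.414, treatment shares about 0.293 each), consistent with the symmetric case discussed in the text.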
Example and simulation study
We illustrate the optimal allocations in platform trials by means of a phase II placebo-controlled trial in primary hypercholesterolemia.29 The goal of this trial was to evaluate the efficacy of 80 mg of atorvastatin combined with the antibody SAR236553, as compared to atorvastatin alone. Additionally, there was interest in evaluating other doses and combinations, in particular, the efficacy of 10 mg of atorvastatin plus SAR236553 compared to atorvastatin alone. The primary endpoint was the percent change in calculated LDL cholesterol from baseline. Patients were randomly assigned according to a 1:1:1 allocation to receive 80 mg of atorvastatin plus SAR236553, 10 mg of atorvastatin plus SAR236553, or 80 mg of atorvastatin plus placebo.
We revisit the design of this trial to discuss three allocation strategies: equal allocation (1:…:1), square root of (1:…:) and the proposed optimal allocations.
We assume the total sample size as in the original trial, , and mean responses and variances in the control group as actually observed in the trial (i.e. a mean of with variance in the control), and consider means equal to and variance for the experimental treatment arms, which lead to power at significance level in a multi-arm design setting using the square root of allocation. Furthermore, we considered three designs depending on the entry and end of arm 2 in the trial, that is,
Design with one period only (i.e. multi-arm design), and thus with sample sizes per period and as Case 1 (in Section 3.1).
Design with two periods (arm 2 starts later, but arms 1 and 2 finish at the same time), as Case 2 (in Section 3.2), assuming a sample size of in period 1 and in period 2.
Design with three periods (arm 2 starts later and finishes after arm 1 does), as Case 3 (Section 3.3), considering two situations: , and , and .
After rounding, we obtain, for example, for the three-period design, , and . We consider the analysis for comparing groups based on concurrent data only. For each design configuration and allocation strategy, we describe the sample size per period and arm. Furthermore, we compare the individual statistical power for each treatment-control comparison and also evaluate the variance of the estimates by simulating 100,000 trials. In the main manuscript, we report simulation results where no time trends were assumed. In the Supplemental Material (Section B.3), results for trials with time trends are presented.
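The power simulation for a single treatment-control comparison can be sketched as follows, assuming a stratified (inverse-variance weighted) estimate of the treatment effect. The sample sizes follow the square root of allocation three-period design for arm 1 (Table 3, panel b); the effect size, standard deviation, and one-sided significance level are hypothetical stand-ins, since the trial's observed values are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)
n_ctrl, n_trt = [16, 12], [16, 9]      # concurrent controls / arm 1, periods 1-2
delta, sigma, alpha = 0.8, 1.0, 0.025  # hypothetical effect, SD, one-sided level

# precision weights and standard error of the stratified estimator
info = [1.0 / (sigma ** 2 * (1.0 / c + 1.0 / t)) for c, t in zip(n_ctrl, n_trt)]
se = (1.0 / sum(info)) ** 0.5
crit = norm.ppf(1 - alpha)

def simulate_power(n_sim=4000):
    rejections = 0
    for _ in range(n_sim):
        # simulate the period-wise group means directly
        est = sum(i * (rng.normal(delta, sigma / t ** 0.5)
                       - rng.normal(0.0, sigma / c ** 0.5))
                  for c, t, i in zip(n_ctrl, n_trt, info)) / sum(info)
        rejections += est / se > crit
    return rejections / n_sim

power = simulate_power()
```

With these hypothetical inputs, the analytical power is Phi(delta/se - z_{0.975}), about 0.83, which the simulation reproduces up to Monte Carlo error.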
We start with the standard multi-arm design (Case 1). In this case, as discussed before, the optimal allocation coincides with the square root of rule (see Table 1 in the Supplemental Material). In such a design, the power of the treatment-control comparison for treatment 1 is the same as that for treatment 2. When comparing the power of the design with optimal allocations against the design with 1:1 allocation, we see that power increases under the optimal design and the confidence intervals (CIs) are narrower (see Table 2). Note, however, that in this example the difference is small.
Power for each design according to the allocation strategy.

Design     Period 1  Period 2  Allocation   Power A1  Power A2  CI width A1  CI width A2
1-period   1.000     0.000     one          0.798     0.798     1.106        1.106
1-period   1.000     0.000     opt (=sqrt)  0.805     0.805     1.100        1.100
2-period   0.250     0.750     one          0.844     0.671     1.006        1.006
2-period   0.250     0.750     opt          0.772     0.757     0.998        0.998
2-period   0.250     0.750     sqrt         0.851     0.681     0.997        0.997
3-period   0.337     0.326     one          0.720     0.722     1.026        1.151
3-period   0.337     0.326     opt (=sqrt)  0.725     0.724     1.085        1.096
3-period   0.337     0.446     one          0.782     0.687     0.947        1.173
3-period   0.337     0.446     opt          0.737     0.730     1.038        1.055
3-period   0.337     0.446     sqrt         0.784     0.682     0.939        1.157

Here, the "Period 1" and "Period 2" columns give the proportion of patients allocated to periods 1 and 2, respectively; "one" denotes one-to-one allocation, "opt" the optimal allocation, and "sqrt" the square root of allocation; "Power Ai" is the estimated power when testing Ai against control (), and "CI width Ai" is the width of the confidence interval for the treatment effect of arm Ai compared to control. Note that the optimal allocation coincides with the square root of in the 1-period design and in the 3-period design with .
Suppose now that the second arm enters while the trial is ongoing, but is assumed to end together with the first arm. This situation leads to a two-period design. Table 2 in the Supplemental Material shows the sample sizes per arm and period. We observe that the designs using the optimal allocation and the square root of rule differ. The optimal strategy allocates fewer patients to the first arm and more to the second arm in the second period, while maintaining an equal allocation between arm 1 and control in the first period, as is the case for the one-to-one and square root of allocations. When comparing power, the results are as expected: the power of testing the efficacy of arm 1 versus control is larger than the power of the comparison regarding arm 2 when using 1:1 allocation and square root of allocation (see Table 2). Under the optimal allocation strategy, the power and standard errors of effect estimates are the same for both treatment-control comparisons. As we minimize the maximum variance, the maximum power is smaller under the optimal allocation. Similarly, the (maximum) width of the CIs for the optimal design is smaller than the maximum width of the CIs of the other designs.
Finally, consider a design in which arm 2 enters after arm 1 and, in addition, the time at which arm 1 ends is fixed at some point before the end of the trial. For the resulting three-period trial, we consider two scenarios. In the first, the total sample sizes for periods 1 and 3 are assumed to be equal. Note that this also implies that the total sample sizes for arms 1 and 2 are equal. Then, in the second period, the optimal allocation is the square root of allocation. When increasing the sample size of period 2 (such that ), this is no longer the case. Moreover, as the period in which the control arm is shared grows, the power of the optimal design also increases for larger (see Table 2). Under both choices of , the power of the arm that achieves the lowest power under the optimal design is larger than the lowest power using the 1:1 and square root of allocation strategies. See Tables 3 and 4 for the sample size per arm and period assumed for each allocation strategy.
To summarize, in Design 1, the multi-arm trial, as well as in the symmetric three-period design, the optimal allocation gives only a small improvement in minimum power (across treatments) compared to equal allocation. However, in the 2-period and non-symmetric 3-period designs, the minimum power of the optimized design is substantially larger than for the uniform or square root of allocation. This is mainly because the latter lead to different variances of the treatment effect estimates (and therefore also different power values) for the two treatment arms.
We also evaluated the type 1 error under the different allocation strategies. In all the considered designs, the type 1 error is controlled (see Table 5 in the Supplemental Material), even if there are time trends (see Section B.3 in the Supplemental Material). However, in the latter case, the variances might slightly deviate because the time trends may increase the variability in the treatment groups.
Sample size distribution per arm and period according to the allocation strategy for a three-period design where arm 2 starts later and finishes after arm 1 does, and with .
(a) One-to-one allocation

         Period 1  Period 2  Period 3
Arm 2    0         10        16
Arm 1    16        10        0
Control  16        10        16

(b) Square root of allocation

         Period 1  Period 2  Period 3
Arm 2    0         9         16
Arm 1    16        9         0
Control  16        12        16

(c) Optimal allocation

         Period 1  Period 2  Period 3
Arm 2    0         9         16
Arm 1    16        9         0
Control  16        12        16
Sample size distribution per arm and period according to the allocation strategy for a three-period design where arm 2 starts later and finishes after arm 1 does, and with , , and .
(a) One-to-one allocation

         Period 1  Period 2  Period 3
Arm 2    0         14        10
Arm 1    16        14        0
Control  16        14        10

(b) Square root of allocation

         Period 1  Period 2  Period 3
Arm 2    0         12        10
Arm 1    16        12        0
Control  16        17        10

(c) Optimal allocation

         Period 1  Period 2  Period 3
Arm 2    0         16        10
Arm 1    16        8         0
Control  16        17        10
Optimal allocation under unequal variances and when minimizing the sum of variances
In the previous sections, we assumed that the responses were distributed with equal variance across arms and optimized the allocation rates to minimize the maximum of the variances of the stratified effect estimators. Below, we discuss optimal allocations under unequal variances (Section 6.1) and optimal allocations when minimizing the sum of the variances (Section 6.2). In both sections, we consider a design using concurrent data only and where the entering time of arm 2 and exit time of arm 1 are given by design, that is to say, under Case 3 (see Section 3).
Optimal solutions under unequal variances across arms
Consider the stratified estimators per period in (1) and weights in (2), where is the observation of patient on treatment arm (, ), distributed as , and denotes the variance of the period-wise treatment effect estimates. Then, in analogy to formula (3), the variance of the stratified effect estimator is given by
As above, we aim to find the allocation rates that minimize . The objective function depends on the ratios .
Through numerical optimization, we found that the optimal solution gives rise to the Neyman allocation in periods 1 and 3 (see Supplemental Material D). The optimal solution also varies in period 2 compared to the case of equal variances across arms. Figure 1 in the Supplemental Material illustrates the optimal allocation rates in a design with equal versus unequal variances across arms. The pattern is similar to the optimal solutions with equal variances (in Figure 2) with respect to the dependence on and : as increases, the allocation ratio for arm 2 also increases, while the allocation ratio for arm 1 decreases. Similarly, for , the allocation ratios increase for arm 1 and decrease for arm 2 as increases. Moreover, compared to the case of equal variances, more subjects are allocated to the arm with greater variability.
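The Neyman allocation within periods 1 and 3 can be checked numerically with a short sketch: in a period containing one treatment and control only, minimizing the variance of the mean difference over the treatment share recovers the allocation proportional to the standard deviations. The standard deviations `s0`, `s1` and the period size `n` below are hypothetical.

```python
from scipy.optimize import minimize_scalar

s0, s1 = 1.0, 2.0   # hypothetical SDs of control and treatment responses
n = 100             # patients recruited in the period

def diff_var(p):
    # variance of the treatment-control mean difference when a share p of
    # the period is allocated to the treatment arm
    return s0 ** 2 / (n * (1 - p)) + s1 ** 2 / (n * p)

res = minimize_scalar(diff_var, bounds=(0.01, 0.99), method="bounded")
# Neyman allocation: p* = s1 / (s0 + s1) = 2/3
```

Setting the derivative of `diff_var` to zero gives p/(1-p) = s1/s0, i.e. the Neyman split, which the bounded optimizer reproduces.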
Optimal solutions when minimizing the sum of variances
As in Section 6.1, we consider a design using concurrent data only, the stratified estimators per period in (1), and weights in (2). Instead of minimizing the maximum of the variances of the stratified effect estimators, we now use the sum of the variances as the objective function. Thus, given a fixed overall sample size , we aim to find the allocation probabilities that minimize . The optimal designs can be derived by numerical optimization (see Supplemental Material D).
As in the optimization using the maximum of the variances as the objective function, the optimal design leads to equal allocation between the active arm and the control in periods 1 and 3. Furthermore, in period 2, both optimal designs coincide when , where the solution gives equal allocation between the treatment arms and square root of allocation for the control. In the remaining scenarios, where , the optimal solutions do not coincide (see Figure 2 in the Supplemental Material). In fact, for , when minimizing the sum of the variances, fewer subjects are allocated to arm 1 and more to arm 2 compared to the optimal solutions minimizing the maximum of the variances, while the opposite happens for . In addition, more subjects are allocated to the control when minimizing the sum of the variances instead of the maximum.
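The sum-of-variances objective can be explored with the same stratified-variance sketch used for the maximum, assuming inverse-variance period weights and concurrent controls only; the period totals and `sigma2` are again hypothetical illustration values. In the symmetric case with equal period 1 and period 3 sizes, the optimizer should reproduce the coincidence with the square root of allocation noted above.

```python
from scipy.optimize import minimize

sigma2 = 1.0
N1, N2, N3 = 32, 30, 32  # symmetric case: equal period 1 and period 3 totals

def strat_var(pairs):
    # inverse-variance-weighted stratified estimator across periods
    info = sum(1.0 / (sigma2 * (1.0 / c + 1.0 / t)) for c, t in pairs)
    return 1.0 / info

def sum_var(x):
    p0, p1 = x                # period-2 shares for control and arm 1
    p2 = 1.0 - p0 - p1        # remainder to arm 2
    if min(p0, p1, p2) <= 0.01:
        return 1e6            # stay inside the simplex
    v1 = strat_var([(N1 / 2, N1 / 2), (N2 * p0, N2 * p1)])
    v2 = strat_var([(N2 * p0, N2 * p2), (N3 / 2, N3 / 2)])
    return v1 + v2            # sum instead of max

res = minimize(sum_var, x0=[0.4, 0.3], method="Nelder-Mead")
p0, p1 = res.x
```

Under these assumptions the sum-optimal period-2 allocation again gives equal treatment shares and a control share near sqrt(2) - 1, matching the max-variance optimum in the symmetric setting.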
Discussion
We derived optimal allocation rules for platform trials, minimizing the maximum variance of the treatment effect estimators of treatment-control comparisons. The most efficient design is the multi-arm trial, where all treatments start and end at the same time. However, this may not be feasible if, for example, not all treatments are available at the start. Under the assumption that some treatments enter the trial at a later time point, we showed that in the optimal design, all treatments finish at the end of the platform trial. This again, however, will in general not be practical. Therefore, we considered optimal designs under constraints on the entry and exit times of treatment arms.
In contrast to earlier work, we performed the optimization assuming that the analyses to compare the efficacy of treatments against control are adjusted with the categorical factor time period. These periods are defined by the time intervals in which no arm enters or leaves the study. Adjusted analyses are recommended in this case to avoid potential biases caused by time trends. Note that the optimal allocation depends on the chosen analysis model.
As an objective function, we considered the maximum of the variances of the two treatment effect estimators (6). As a consequence, under the optimal allocation the variances of the treatment effect estimates are either equal, or as similar as possible given the design restrictions with regard to the size of the periods. If the treatment effects are equal, this also results in equal power for each of the arms. This is a desirable property, especially in multi-sponsor platform trials, as it gives equal chances to the different treatments. In Section 6, we considered the sum of the variances of the treatment effect estimates as an alternative objective function. The resulting optimal treatment allocation is the same in symmetric settings, where . In all other settings, the proportion of patients allocated to the control group is larger compared to the optimal allocation for the objective function (6). Other objective functions that have been considered are the probability to reject at least one null hypothesis and the sum (or average) of the individual rejection probabilities.
In this work, we assumed that the total sample size of the platform trial is known. Although the total sample size is only a scaling factor in the optimization problem, this assumption might be unrealistic. However, an equivalent strategy would be to consider a targeted minimum precision (corresponding to the maximum variance) for the estimates of the treatment effects, assuming these are equal, and to minimize the total sample size of the trial. To this end, we search for the total sample size such that the minimum precision of the corresponding optimal design equals the target precision. As the optimized minimum precision is monotonic in , this corresponds to a simple numerical root finding.
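This root-finding step can be sketched as follows: since the optimized maximum variance scales as c/N for a constant c determined by the allocation and period structure, a bracketing root finder recovers the required total sample size. The constant `c` and the target variance below are hypothetical values, not derived from the case study.

```python
from scipy.optimize import brentq

c = 4.6             # hypothetical scaling constant of the optimized max variance
target_var = 0.05   # targeted maximum variance (i.e. minimum precision 1/0.05)

# c/N - target_var is monotone in N, so a bracketing root finder applies
N_star = brentq(lambda N: c / N - target_var, 1.0, 1e6)  # = c / target_var = 92
```

In this linear-scaling case the root is simply c divided by the target variance; the bracketing search carries over unchanged when the optimized variance is available only numerically.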
Changing the allocation ratio in a clinical trial has been discussed controversially in the context of response-adaptive randomization.30–33 The main concern is the need to adjust for potential time trends, which leads to statistical inefficiency. Also for platform trials, we saw that the statistically most efficient design is a multi-arm trial, in which allocation ratios stay constant over time. However, if it is not feasible to start and complete all arms at the same time, but some arms enter the trial at a later time point or complete recruitment before the end of the platform trial, a change in the allocation ratios is optimal, even for analysis procedures that adjust for time trends.
The testing approach considered in this article, which adjusts for time trends by including a period effect, is in line with the recent Food and Drug Administration (FDA) draft guidance on master protocols,34 which suggests the use of stratified analyses to avoid bias caused by time trends if the allocation ratios are modified during the trial. Our findings also support the recommendations regarding the choice of allocation ratios: the draft guidance likewise recommends a randomization process that assigns more subjects to the control arm than to each individual treatment arm. This can increase the power for treatment-control comparisons for a given total sample size.
In this article, we optimized the allocation ratios under the assumption that the times when the second treatment enters or leaves the platform are known. In many realistic settings, these times may, however, be unknown at the start of the trial. In the resulting optimal trial designs, however, the allocation ratio in the first period is always 1:1 and thus does not depend on and . The optimal allocation ratios for period 2 can be computed once treatment 2 enters the platform and the time at which treatment 1 completes recruitment is fixed. This holds both for the analysis including non-concurrent controls and for the analysis using concurrent controls only. If it is desirable to use equal allocation among the experimental treatment arms, one can choose such that the optimal design satisfies this condition (see Figure 2). For the analysis using concurrent controls only, the optimal allocation in period 2 is then a allocation.
Our article aims to advance the understanding of optimal allocation principles in platform trials. To this end, we focused on a trial design with two experimental arms and a shared control arm without interim analyses. The inclusion of interim analyses has two consequences. First, the analysis model needs to be modified, as the size of the periods can depend on the interim results. Second, optimization under the assumption of a fixed overall sample size is no longer applicable because, due to early stopping, the overall sample size depends on the trial outcomes. Instead, one could, for example, optimize the allocation ratios fixing the expected overall sample size, averaged over a prior on the effect sizes. Even without interim analyses, optimizing the allocation ratios in platform trials with more than two experimental treatment arms becomes more complex due to the larger number of parameters. In addition, in larger platform trials, a feasible optimal allocation rule cannot depend on the sample sizes in all future periods, because they may be unknown in advance (as they depend on the entry times of future treatment arms). A simplified strategy could be to set the allocation ratios between treatment arms to 1 (equal allocation) and to optimize the allocation rate to the control arm only. The exit time of the treatment arms could be specified such that the variance of the respective treatment effect estimate reaches a certain threshold. The derivation of optimal allocation rules for platform trials with more than two arms and interim analyses is an open problem and subject to further research.
Supplemental Material
sj-pdf-1-smm-10.1177_09622802241239008 – Supplemental material for Optimal allocation strategies in platform trials with continuous endpoints by Marta Bofill Roig, Ekkehard Glimm, Tobias Mielke and Martin Posch in Statistical Methods in Medical Research.
Acknowledgements
The authors are members of the EU Patient-cEntric clinicAl tRial pLatforms (EU-PEARL) project. EU-PEARL has received funding from the Innovative Medicines Initiative (IMI) 2 Joint Undertaking (JU) under grant agreement No. 853966. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation program, EFPIA, the Children’s Tumor Foundation, the Global Alliance for TB Drug Development, and Springworks Therapeutics Inc. This publication reflects the authors’ views. Neither IMI nor the European Union, EFPIA, or any Associated Partners are responsible for any use that may be made of the information contained herein. We thank the reviewers of this article for their useful comments and suggestions, which resulted in an improvement of the manuscript.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
ORCID iDs
Marta Bofill Roig
Ekkehard Glimm
Supplemental material
Supplemental material for this article is available online. Supplemental Material includes the derivations of the variance of the treatment effect estimators and additional results from the case study. Mathematica and R code to reproduce the results are available at .
For trials using non-concurrent controls and assuming a fixed sample size in period 1 (Case 2 in Section 4.2), the formulas for the optimal allocation ratios in period 2 are
where and .
References
1. Berry SM, Connor JT, Lewis RJ. The platform trial: an efficient strategy for evaluating multiple treatments. JAMA 2015; 313: 1619–1620.
2. Meyer EL, Mesenbrink P, Dunger-Baldauf C, et al. The evolution of master protocol clinical trial designs: a systematic literature review. Clin Ther 2020; 42: 1330–1360.
3. Saville BR, Berry SM. Efficiencies of platform clinical trials: a vision of the future. Clin Trials 2016; 13: 358–366.
4. Woodcock J, LaVange LM. Master protocols to study multiple therapies, multiple diseases, or both. N Engl J Med 2017; 377: 62–70.
5. Viele K. Allocation in platform trials to maintain comparability across time and eligibility. Stat Med 2023; 42: 2811–2818.
6. Altman DG, Royston JP. The hidden effect of time. Stat Med 1988; 7: 629–637.
7. Getz KA, Campo RA. Trial watch: trends in clinical trial design complexity. Nat Rev Drug Discov 2017; 16: 307–308.
8. Julious SA, Wang SJ. How biased are indirect comparisons, particularly when comparisons are made over time in controlled trials? Drug Inf J 2008; 42: 625–633.
9. Senn S. Hans van Houwelingen and the art of summing up. Biometrical J 2010; 52: 85–94.
10. Villar SS, Bowden J, Wason J. Response-adaptive designs for binary responses: how to offer patient benefit while being robust to time trends? Pharm Stat 2018; 17: 182–197.
11. Jennison C, Turnbull BW. Group sequential methods with applications to clinical trials. New York: CRC Press, 1999.
12. Simon R, Simon NR. Using randomization tests to preserve type I error with response adaptive and covariate adaptive randomization. Stat Probab Lett 2011; 81: 767–772.
13. Dodd LE, Freidlin B, Korn EL. Platform trials – beware the noncomparable control group. N Engl J Med 2021; 384: 1572–1573.
14. Lee KM, Brown LC, Jaki T, et al. Statistical consideration when adding new arms to ongoing clinical trials: the potentials and the caveats. Trials 2021; 22: 203.
15. Saville BR, Berry DA, Berry NS, et al. The Bayesian time machine: accounting for temporal drift in multi-arm platform trials. Clin Trials 2022; 19: 490–501.
16. Sridhara R, Marchenko O, Jiang Q, et al. Use of nonconcurrent common control in master protocols in oncology trials: report of an American Statistical Association Biopharmaceutical Section open forum discussion. Stat Biopharm Res 2021; 1–5. DOI: 10.1080/19466315.2021.1938204.
17. Bofill Roig M, Krotka P, Burman CF, et al. On model-based time trend adjustments in platform trials with non-concurrent controls. BMC Med Res Methodol 2022; 22: 1–16.
18. Dunnett CW. A multiple comparison procedure for comparing several treatments with a control. J Am Stat Assoc 1955; 50: 1096–1121.
19. Cohen DR, Todd S, Gregory WM, et al. Adding a treatment arm to an ongoing clinical trial: a review of methodology and practice. Trials 2015; 16: 179.
20. Choodari-Oskooei B, Bratton DJ, Gannon MR, et al. Adding new experimental arms to randomised clinical trials: impact on error rates. Clin Trials 2020; 17: 273–284.
21. Elm JJ, Palesch YY, Koch GG, et al. Flexible analytical methods for adding a treatment arm mid-study to an ongoing clinical trial. J Biopharm Stat 2012; 22: 758–772.
22. Ren Y, Li X, Chen C. Statistical considerations of phase 3 umbrella trials allowing adding one treatment arm mid-trial. Contemp Clin Trials 2021; 109: 106538.
23. Bennett M, Mander AP. Designs for adding a treatment arm to an ongoing clinical trial. Trials 2020; 21: 251.
24. Pan H, Yuan X, Ye J. An optimal two-period multiarm platform design with new experimental arms added during the trial. New Engl J Stat Data Sci 2022; 1–18. DOI: 10.51387/22-NEJSDS15.
25. Jennison C, Turnbull BW. Group sequential methods with applications to clinical trials. New York: Chapman and Hall/CRC, 1999. DOI: 10.1201/9780367805326.
26. Senn S. The many modes of meta. Drug Inf J 2000; 34: 535–549.
27. Ypma J, Johnson SG, Stamm A. nloptr: R interface to NLopt. R package version 2.0.3, 2022.
28. Kraft D. A software package for sequential quadratic programming. Forschungsbericht, Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt, 1988.
29. Roth EM, McKenney JM, Hanotin C, et al. Atorvastatin with or without an antibody to PCSK9 in primary hypercholesterolemia. N Engl J Med 2012; 367: 1891–1900.
30. Korn EL, Freidlin B. Time trends with response-adaptive randomization: the inevitability of inefficiency. Clin Trials 2022; 19: 158–161.
31. Proschan M, Evans S. Resist the temptation of response-adaptive randomization. Clin Infect Dis 2020; 71: 3002–3004.
32. Robertson DS, Lee KM, López-Kolkovska BC, Villar SS. Rejoinder: response-adaptive randomization in clinical trials. Stat Sci 2023; 38: 233–239.
33. Robertson DS, Lee KM, López-Kolkovska BC, Villar SS. Response-adaptive randomization in clinical trials: from myths to practical considerations. Stat Sci 2023; 38: 185–208.
34. FDA/CDER/CBER/OCE. Guidance for industry: master protocols for drug and biological product development, 2023.