Discussion paper on ethical considerations in adaptive platform trials

Lead author: Katherine Lee – 10 April 2023

Author list: Amir Zayegh, Arlen Wilcox, David Cook, Julie Marsh, Lynda Whiteway, Mitch Messer, Rob Mahar, Roberta Littleford, Steve Webb, Steven Tong, Vicki (Rui Dan) Xie


Adaptive trials refer to those in which the design changes based on accumulating data and pre-specified adaptation criteria (1, 2). Such trials can, for example, adjust the allocation ratio so more participants are randomised to more promising interventions (known as response adaptive randomisation), or stop recruitment if an efficacy signal is detected (known as early stopping), or stop allocating participants to interventions that appear futile (known as arm dropping) (3). Adaptive platform trials (APTs) use adaptive designs to compare the efficacy of multiple interventions across multiple domains (treatment modalities) simultaneously within different subgroups of participants under a single “master” protocol. They have the ability to add interventions and to share information across subgroups (4). The Australian Clinical Trials Alliance (ACTA)’s document Adaptive Platform Trials: Efficiencies & Complexities provides more information on features of adaptive platform designs, including a discussion of when such a design may be beneficial.

The complex and flexible nature of APTs raises several unique ethical issues compared with conventional (i.e. fixed design and fixed sample size) trials. In this paper, we describe some of the ethical issues that should be considered when planning and conducting APTs. These are discussed with regard to the values laid out in the Australian Government’s National Statement on Ethical Conduct in Human Research (5): research merit and integrity, beneficence, respect for human beings, and justice.

This document has been developed by the Innovative Trial Designs Working Group convened by ACTA. Although the scope of this working group includes all types of innovative trial design, this document is limited to APTs given the unique ethical considerations they raise.


Research merit and integrity

The first value outlined in the National Statement is merit and integrity. That is, the research is justifiable by its potential benefit, the design is appropriate for achieving the aims of the study, respect for participants is not compromised by the research, and the research is conducted by persons with experience, qualifications and competence, and using facilities and resources, that are appropriate for the research (5). APTs raise several issues regarding research merit and integrity. We first outline issues related to the use of adaptations before considering those specific to platform trials.


Incorporating adaptive properties into a trial can offer efficiency benefits over conventional study designs in terms of cost, time and exposure to harmful or suboptimal interventions (3, 6). Allowing for early stopping for efficacy (e.g. superiority or non-inferiority) based on the results of regular scheduled analyses enables interventions within the trial, or the trial as a whole, to be stopped as soon as a conclusion is reached. This means, on average, fewer participants may be required to answer the research question, accelerating the time to treatment approval or policy changes in disease management, and reducing costs (3). Similarly, incorporating early stopping of the trial, or an arm within the trial, for futility can reduce research waste and participant exposure to ineffective interventions, and can allow funding to be redirected to new research questions. Finally, as the sample size is typically not fixed in advance and therefore not reliant on pre-enrolment assumptions, there is less risk of inconclusive findings due to an inappropriately small sample size.
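The efficiency gain from early stopping can be illustrated with a small simulation. The sketch below is illustrative only: the response rates, look schedule and fixed z-boundary are hypothetical choices (a real trial would use a formally derived boundary, e.g. O’Brien–Fleming), but it shows how interim efficacy looks reduce the average sample size.

```python
import random

def simulate_trial(p_control=0.30, p_treat=0.45, max_n=500,
                   look_every=100, z_stop=2.5, seed=None):
    """Simulate one two-arm trial with interim efficacy looks.

    Stops early when the z-statistic for the difference in response
    proportions crosses a fixed, illustrative boundary. All
    parameters here are hypothetical.
    """
    rng = random.Random(seed)
    events = {"control": 0, "treat": 0}
    n = {"control": 0, "treat": 0}
    for i in range(1, max_n + 1):
        arm = "control" if i % 2 else "treat"  # alternating allocation
        n[arm] += 1
        p = p_control if arm == "control" else p_treat
        if rng.random() < p:
            events[arm] += 1
        if i % look_every == 0:
            # Pooled two-proportion z-test at the interim look
            p1 = events["treat"] / n["treat"]
            p0 = events["control"] / n["control"]
            pbar = (events["treat"] + events["control"]) / i
            se = (pbar * (1 - pbar) * (1 / n["treat"] + 1 / n["control"])) ** 0.5
            if se > 0 and (p1 - p0) / se > z_stop:
                return i  # stopped early for efficacy
    return max_n

# Average sample size over many simulated trials; early stopping
# pulls the average below the 500-participant maximum
sizes = [simulate_trial(seed=s) for s in range(2000)]
print(sum(sizes) / len(sizes))
```

Under the assumed effect size, most simulated trials conclude before the maximum sample size is reached, which is the efficiency argument made above.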

Adaptive properties can, however, threaten the integrity and validity of trial results if they are not implemented and governed appropriately (7). Some considerations regarding whether an adaptive design is appropriate to achieve the aims of the study are given below:

  1. For adaptations that are guided by accruing outcome data, the outcome needs to be short term relative to length of recruitment (7).
  2. In general, response adaptive randomisation (whereby the allocation probabilities to interventions are regularly updated in proportion to the relative efficacy of the interventions) (8, 9) is not as beneficial in terms of statistical power as equal allocation if there are only two interventions being compared (10), although there are counterexamples, e.g. Sirkis et al. (2022) (11). Response adaptive randomisation can also lead to biased results if not applied appropriately, for example in Bartlett et al., who used a play-the-winner algorithm which assigned one participant to conventional treatment (who died) and 11 participants to the intervention (all of whom survived) (12), or if there are temporal trends which are not accounted for in the analysis (13-15). These risks should be weighed against the benefits of response adaptive randomisation, e.g. greater recruitment to the better performing interventions, potentially leading to answers more rapidly than fixed allocation designs, where the allocation probabilities are pre-specified and do not change during the trial (11, 16). Finally, it has been shown that, on average, the final estimate of the treatment effect following response adaptive randomisation may slightly underestimate the true treatment effect, which should be addressed in the interpretation of the trial results (13).
  3. Designs that involve early stopping for efficacy can inhibit the trial’s ability to provide convincing or sufficient information on secondary outcomes, safety, or subgroup effects. It is noteworthy that this can also be true with conventional trial designs, which are typically powered for the primary outcome only. To mitigate this, endpoints and the timing of adaptations should be chosen based on their sufficiency to change practice or policy. There is also scepticism that early stopping for efficacy might reflect a random high in the estimation of the treatment effect and the treatment effectiveness might not be as great as suggested (17). This should be acknowledged in the interpretation of the trial results.
  4. Designs that involve early stopping for futility may miss advantages provided by the treatment in a participant subgroup or for an important secondary outcome. This is also a feature of conventional trial designs.
  5. The use of non-concurrent controls (that is, controls recruited over a different time period from intervention participants) can produce biased effect estimates if not accounted for in the statistical analysis (15, 18). This can usually be mitigated by explicitly modelling a time trend, which should be pre-specified in the protocol and explicitly detailed in the statistical analysis plan (1, 14, 19).
  6. Where surrogate outcomes are used to guide adaptations (for example, a laboratory measurement or a physical sign used as a substitute for a clinically meaningful outcome), the validity of the measure as a predictor of the clinical outcome must be well justified from previous research or a systematic review, and an improvement in the surrogate must translate to patient benefit. For example, see the STAMPEDE Study, an adaptive platform trial of therapy for advancing or metastatic prostate cancer (20), and the literature around the assessment of immune checkpoint inhibitors as a surrogate for progression-free and overall survival (21, 22). Where appropriate, the surrogate can be replaced by the patient-centred endpoint within the analysis model once sufficient time has accrued (23).
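The response adaptive randomisation described in point 2 above can be sketched concretely. The code below shows one common Bayesian formulation (a Thompson-sampling style rule with Beta posteriors for binary outcomes); this is an illustration of the general idea, not the algorithm of any particular trial cited here, and the interim data are hypothetical.

```python
import random

def rar_allocation_probs(successes, failures, draws=5000, seed=0):
    """Estimate response-adaptive allocation probabilities.

    Allocates to each arm in proportion to its posterior probability
    of being the best arm, using Beta(1 + successes, 1 + failures)
    posteriors for a binary outcome. Illustrative sketch only.
    """
    rng = random.Random(seed)
    arms = list(successes)
    wins = {a: 0 for a in arms}
    for _ in range(draws):
        # Draw one response rate per arm from its Beta posterior
        sample = {a: rng.betavariate(1 + successes[a], 1 + failures[a])
                  for a in arms}
        wins[max(sample, key=sample.get)] += 1
    return {a: wins[a] / draws for a in arms}

# Hypothetical interim data: arm B is performing better than arm A,
# so it receives a higher allocation probability
probs = rar_allocation_probs(
    successes={"A": 10, "B": 20},
    failures={"A": 20, "B": 10},
)
print(probs)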

To maintain the integrity of an adaptive trial, it is imperative that any adaptations are clearly pre-specified in the protocol, including the rationale for the adaptations, when adaptations will be made and by whom, and the statistical methodology used to determine each adaptation (24). This process should be overseen by an experienced biostatistician, and additional trial governance is required to manage how the results from the scheduled analyses remain confidential, and when and how to publish domain-specific results.

Platform trials

Using a platform design provides the opportunity to investigate an array of interventions across multiple domains within multiple subgroups of participants under a single master protocol, which may be supplemented by separate domain-specific or subgroup-specific appendices. This offers cost and time savings compared to conducting separate trials for each research question (24-27), and can allow sharing of information across subgroups (28-30). It also enables research to be concentrated for a disease and can reduce recruitment competition across studies. These factors do, however, need to be balanced against the time and resources needed to establish a platform trial (27, 31). For example, platform trials are typically designed using simulation, which can be a complex and time-consuming process (32). Additionally, custom-built randomisation systems may be necessary for trials involving response adaptive randomisation, further adding to the up-front investments of time and resources. Addressing multiple questions within a single study raises issues of multiplicity, increasing the risk of falsely concluding superiority in the frequentist paradigm (33); accounting for multiplicity in the analysis can in turn increase the required sample size.
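The multiplicity issue can be made concrete with a small simulation: comparing several arms to a shared control, each with an unadjusted test, inflates the chance of at least one false-positive conclusion. The sketch below uses entirely hypothetical parameters (four arms, known unit variance, unadjusted two-sided z-tests at the 5% level) and estimates the family-wise error rate under the null.

```python
import random
import statistics

def familywise_error(n_arms=4, n_per_arm=100, n_sims=2000, seed=1):
    """Estimate family-wise type I error by simulation.

    All arms are simulated under the null (same mean as control) and
    each is compared to a shared control with an unadjusted z-test.
    Parameters are illustrative only.
    """
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% critical value, no multiplicity adjustment
    any_false_positive = 0
    for _ in range(n_sims):
        control = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        m0 = statistics.fmean(control)
        se = (2 / n_per_arm) ** 0.5  # known unit variance assumed
        rejected = False
        for _ in range(n_arms):
            arm = [rng.gauss(0, 1) for _ in range(n_per_arm)]
            if abs(statistics.fmean(arm) - m0) / se > z_crit:
                rejected = True
        any_false_positive += rejected
    return any_false_positive / n_sims

# Chance of at least one false-positive conclusion across the four
# comparisons; noticeably above the per-comparison 0.05 level
fwer = familywise_error()
print(fwer)
```

Controlling this inflated error rate (e.g. by adjusting the per-comparison significance level) is what drives the increase in required sample size noted above.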

Another benefit of platform designs is that new interventions or new research questions can be added into the trial after it has commenced recruitment (34). They also permit the progressive addition of sub-studies for molecular or other investigations (35). These additions are typically accomplished through amendments to, or addition of, appendices to the master protocol. The dynamic nature of platform trials means the trial remains contemporary and is responsive to evolving research needs, such that these additions to the trial can be established in a timely manner, without requiring a new protocol, ultimately accelerating evidence generation. The following should, however, be considered when making an addition: (i) the statistical implications of the addition (36), (ii) the control group should ideally remain contemporary and comparable to the intervention group(s) if this is acceptable to consumers and stakeholders, otherwise statistical approaches requiring strong assumptions may have to be implemented (36, 37), (iii) the population of available participants should be adequate to support the addition, (iv) writing a new research question into the trial and supporting documentation can require just as much work as populating a new protocol template, and (v) the introduction of the intervention/research question should be supported by an expert-informed, transparent, and robust process (38).

The complex and dynamic nature of platform trials raises several challenges during design and conduct to ensure the integrity of the trial. Platform trials typically require a complex governance structure consisting of various committees overseeing different aspects of the trial. This would typically include a Trial Steering Committee, the central decision-making body, a Central Management Team for the day-to-day running of the platform, a (blinded) statistical committee (with some trial-independent membership) overseeing the study design and new additions to the platform, an (unblinded) analytic team responsible for producing the reports from the scheduled analyses, and a committee (or committees) overseeing the trial safety, conduct and oversight of adaptations (e.g. a Data Safety Monitoring Board [DSMB]). It is critical to ensure that all members of all these committees have a clear understanding of the protocol and the planned amendments, and collectively have the knowledge and experience to establish and conduct the trial. In particular, the Central Management Team needs to have sufficient experience to manage the complexity of the design, which may, for example, include multiple interventions that vary by site (38-40) and separate eligibility criteria and consents for the different domains.

Platform trials need a dedicated and experienced DSMB to oversee participant safety and study adaptations (41, 42). This role demands significant investment of time and effort, coupled with meticulous attention to detail and a comprehensive understanding of adaptive designs. As a result, identifying suitable members for this task can prove to be challenging. It is critical to ensure that all members of the DSMB have a clear understanding of the protocol and the planned amendments.

Platform trials also have additional administrative and logistical challenges over conventional trial designs, which can be a threat to the integrity of the trial if there is not adequate training and time spent to establish and conduct the trial appropriately. For example, protocols for platform trials are typically complex and may involve concepts and nomenclature that are not familiar to everyone involved in the trial. It is the principal investigator’s (PI) responsibility to ensure that the trial team, the approving human research ethics committee (HREC), the Research Governance Officer (RGO), consumers, and other stakeholders are engaged early and educated in the study design, the rationale for the new concepts within the design, and the nomenclature used (e.g. domain, response adaptive randomisation, decision-making triggers etc.). It can be helpful for researchers to be open to different methods of communication regarding the trial, for example providing references to other platform trials and relevant papers in the literature, and hosting lectures and discussion forums on the main principles of the trial to inform the HREC/RGO, consumers and communities. The trial team may also consider facilitating access to independent expertise for HRECs and RGOs, where required. Finally, the PI or their delegate should work with the HREC to understand limitations of the current systems and processes, and to make the administrative and logistical procedures as simple as possible (e.g. how to add treatment domains to an approved, pre-existing study).

Platform trials are heavily reliant on specialist statistical expertise throughout their lifecycle, the absence of which can undermine the validity of the trial results (43). During the design phase, simulation is typically used to inform sample size requirements, an iterative process between the clinical and statistical members of the team that relies on highly specialised skills and experience. During the conduct of the study, there are regular scheduled analyses as well as multiple final analyses as different domains within the trial reach a conclusion. This ongoing need for scheduled analyses requires clear demarcation between blinded and unblinded statistical teams to maintain the integrity of the blinding, the confidentiality of the scheduled reports, and to ensure that the results are obtained in a timely manner for DSMB meetings and trial adaptations, including those that trigger final reporting of a domain. In addition, it is necessary that trial design and data decisions are made independent of knowledge of the scheduled trial results.

Finally, the expansive and adaptive nature of platform trials means there is a need for strong database management support (39). Real-time data capture and data cleaning is needed for the regular scheduled analyses, and updates to the database structure or logic are also typically required if new interventions or new domains are added to the trial. In proposing a platform trial, investigators should provide a robust Data Management Plan with their HREC submission, which should provide reassurance to HRECs and RGOs that the dynamic management of data flow necessary for ongoing data cleaning, frequent scheduled analyses and database modification can be provided by the research team.


The second value outlined in the National Statement is beneficence: the likely benefit of the research to the participants or the wider community must justify any risks of harm or discomfort to participants (5).

A major benefit of disease-specific APTs for participants is that they may simultaneously receive a standardised high level of care across multiple treatment modalities. Another potential benefit is that pre-specifying rules for early stopping (for futility and/or efficacy) or using response adaptive randomisation can result in fewer participants exposed to inferior or ineffective interventions (3). This can be appealing to clinicians who have an a priori view on the relative benefit of the interventions being compared, and to regulators and future participants if a conclusion is reached sooner.

Platform designs assessing multiple interventions offer potential participants, treating clinicians and sites the ability to choose which treatment options (usually domains) are available. This may also be affected by the availability of investigational product at the site. The option to opt-in or opt-out of the interventions and domains being evaluated can make the study more appealing to participants, clinicians and sites, and better preserves the clinician-patient relationship. It also recognises the differing interpretations of the existing literature, as well as different values and preferences among clinicians and patients.

A potential harm to participants is that platform trials typically collect large amounts of data to address the multiple research questions within the study. Although this can be burdensome for trial participants, at a population level the burden is typically reduced compared to answering each of the questions in separate trials. High-quality APTs restrict the data collected to internationally agreed, standardised minimum outcome datasets, and avoid ad-hoc or hypothesis generating analyses. Another consideration is that when randomising a participant to several different interventions across different domains, although there may be equipoise regarding each intervention, there may not be equipoise for combinations of interventions. To mitigate this, randomisation algorithms should avoid known inferior treatment combinations.


The third value in the National Statement is respect, which refers to having due regard for the welfare, beliefs, perceptions, customs and cultural heritage, both individual and collective, of those involved in research (5). This involves respecting the privacy, confidentiality and cultural sensitivities of the participants, and allowing participants to make their own decisions where possible.

It is critical that people being asked to consent to participate in a trial are fully aware of the extent of the research they are joining, including both the risks and the benefits. The complex nature of APTs means the study design needs to be translated so that it is appropriate for a lay audience. Potential participants, and sometimes family members or carers, need the time to understand the trial before deciding whether to participate. It can help to include flowcharts or infographics of the trial design in the participant information, for example detailing the potential participant pathways through the trial, and to tailor the information to the domains and interventions relevant to the participant to reduce the complexity. The informed consent process can also be made simpler through the use of a layered consent process and multimedia (44). As with any study, it is imperative to have consumer and community involvement in developing the participant information.

In the case of APTs, achieving true informed consent may require a dynamic consent model (45). Dynamic consent involves obtaining consent from participants at entry to the platform, and again when participants are eligible for a new intervention or domain, which provides an option for participants to withdraw from the trial at any stage. Such consent models are more labour-intensive than standard models of consent, and can be managed through online portals or through repeated approaches by research staff. A more complex consideration is when a participant consents to randomisation in some domains but not others, which has implications for the use of outcome data for some domains and not others. This needs to be made explicit in the participant information. While the ethical default is to obtain prospective (‘opt-in’) consent, there are some research questions for which alternative modes of obtaining consent may be ethically acceptable. Deferred consent (also known as opt-out consent or research without prior consent) involves approaching the participant after randomisation and administration of the intervention has occurred, to seek consent for collection of outcome data or ongoing study interventions (46, 47). A full waiver of the requirement for informed consent may also be granted in limited circumstances where both prospective and deferred consent are not feasible. A unique facet of platform trials is that different interventions within the platform may be conducted using different modes of consent. Research staff should be aware of the potential for participants to find this confusing (e.g. approaching for deferred consent while also seeking prospective consent for a different domain), and be trained to explain the reasons why different research questions may require different modes of consent.


The fourth value in the National Statement is justice, namely that all people are treated equally and equitably (5). For justice to be upheld, the selection and process of recruiting participants should be fair, there should be no unfair burden of participation on particular target populations or subgroups, there should be no exploitation of participants, there should be fair access to the benefits of research, and the research outcomes should be made accessible to research participants in a way that is timely and clear.

There are several factors to consider when assessing the justice of an APT. In a fixed design, the sample size is determined in advance based on pre-specified assumptions about the efficacy of the intervention(s), variability in the outcome and heterogeneity across subgroups of participants. These assumptions can be inaccurate, which can lead to indeterminate (wasted) trials or trials that are unnecessarily large (i.e. could have reached the same conclusion with fewer participants). A major advantage of adaptive designs is that accumulating outcome data collected within the trial is used to guide the ongoing trial design (i.e. which arms/domains to continue in the trial). This can reduce the risk of indeterminate results and of further exposure to inferior interventions, and prevent delay in the availability of information on efficacy, thereby informing treatment decisions for future trial participants and patients being treated outside the study (via treatment policy changes).

Platform trials have the potential for broader eligibility criteria than conventional designs as they are typically disease focussed and seek to answer multiple questions simultaneously. This can provide an increased opportunity for participation and can lead to greater equity across subgroups, particularly given the ability for different sites to offer different domains/arms within the trial. Conversely, broad eligibility criteria can mean different ethical considerations for different subgroups. For example, the complexity of the design can make it difficult for some participants to make an informed decision to participate, such as participants who are cognitively impaired or paediatric participants. The increased complexity of APTs should not be an excuse for setting a higher bar for participation. It is incumbent on researchers to ensure diverse populations are considered in the informed consent processes and that efforts are made to facilitate involvement of all potential participants. It is also important that all sites start to recruit all eligible individuals as soon as possible after platform commencement (once governance and ethics approvals are in place), rather than delaying recruitment in some subgroups (such as more severe disease) until several adaptations have occurred, in order to ensure equitable access; this should be monitored over time as the trial progresses.

It has been suggested that using interim data to guide response adaptive randomisation is at odds with the requirement for clinical equipoise (48-50). Conversely, response adaptive randomisation has been argued to reflect what clinicians would do if they had access to the accruing data: as confidence in one treatment grows, the proportion allocated to it increases, until confidence crosses a threshold at which equipoise no longer holds, at which point the trial stops (51).

Finally, establishing a platform with multiple domains increases a participant’s chance of receiving an active treatment, and gives participants options to participate in a range of different treatment evaluations. Furthermore, assessment of multiple interventions within a domain compared to a common control means that fewer participants are exposed to the control whilst maintaining statistical power for comparisons of interventions.


APTs are increasingly being considered to improve the efficiency and effectiveness of clinical trials. However, their complex and flexible nature raises unique ethical issues that must be carefully considered and addressed during trial planning and implementation. In particular, the appropriateness of using an adaptive platform design should be rigorously assessed from scientific and participant perspectives, with careful consideration given to the potential advantages and disadvantages of such a design in terms of cost and resource utilisation and the risk/benefit to participants. If an adaptive platform design is to be used, it is critical that the adaptations are clearly outlined in the protocol, including how and when the study will adapt. It is crucial that all members of the trial team and relevant stakeholders understand the key concepts of the design and that this is explained in a simple and understandable format for participants.

In light of the ethical considerations outlined above, it is incumbent upon trial designers and implementers to closely examine the potential implications of any proposed APT, and to consider how best to address and mitigate any ethical concerns that may arise. By taking a careful and thoughtful approach to the ethical dimensions of APTs, researchers and clinicians can help to promote the highest levels of quality, transparency, and participant safety in clinical research.

Figure 1: Summary of ethical considerations in adaptive platform trials

Merit and integrity
– Can offer benefits over a conventional fixed design in terms of time, cost and/or the number of participants
– Enables research to be concentrated for a disease
– Can threaten the integrity and validity of trial results if adaptive properties are not implemented appropriately and/or not clearly specified a priori
– Requires an appropriate governance structure
– Trial team needs to have appropriate knowledge and expertise
– Requires adequate training of the trial staff and the relevant stakeholders
– Requires a dedicated and experienced Data and Safety Monitoring Board
– Heavily reliant on specialist statistical expertise, the absence of which can undermine the validity of the trial results
– Needs a detailed and robust Data Management Plan

Beneficence
– Can mean fewer participants are exposed to inferior interventions
– Participants can opt in/out of different interventions
– Can mean large amounts of data are collected per participant
– There may not be equipoise for combinations of interventions

Respect
– Study design needs to be translated so that potential participants can understand the trial before deciding whether to participate
– Critical to have consumer and community involvement in developing the participant information
– May have a complex consent process that may vary by domain

Justice
– Adaptations reduce the risk of indeterminate results and reduce the risk of continued exposure to harm or delayed availability of information regarding intervention efficacy
– Typically have broader eligibility criteria than conventional designs, which can increase the opportunity for participation but may mean different ethical considerations are required for different subgroups
– Including multiple domains increases a participant’s chance of receiving an active treatment, and gives participants options to participate in a range of different treatment evaluations


  1. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER). Adaptive Designs for Clinical Trials of Drugs and Biologics: Guidance for Industry. 2019. https://www.fda.gov/media/78495/download
  2. Bhatt DL, Mehta C. Adaptive Designs for Clinical Trials. New England Journal of Medicine. 2016;375(1):65-74.
  3. Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Medicine. 2018;16(1):29.
  4. Parmar MK, Sydes MR, Cafferty FH, Choodari-Oskooei B, Langley RE, Brown L, et al. Testing many treatments within a single protocol over 10 years at MRC Clinical Trials Unit at UCL: Multi-arm, multi-stage platform, umbrella and basket protocols. Clinical Trials. 2017;14(5):451-61.
  5. Australian Government: National Health and Medical Research Council, the Australian Research Council and Universities Australia. National Statement on Ethical Conduct in Human Research 2007 (Updated 2018). 2018. https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018
  6. van der Graaf R, Roes KCB, van Delden JJM. Adaptive Trials in Clinical Research: Scientific and Ethical Issues to Consider. JAMA. 2012;307(22):2379-80.
  7. Wason JMS, Brocklehurst P, Yap C. When to keep it simple – adaptive designs are not always useful. BMC Medicine. 2019;17(1):152.
  8. Shan G. Response adaptive randomization design for a two-stage study with binary response. Journal of Biopharmaceutical Statistics. 2023:1-11.
  9. Hu F, Rosenberger WF. The Theory of Response-Adaptive Randomization in Clinical Trials. John Wiley & Sons; 2006.
  10. Hey SP, Kimmelman J. Are outcome-adaptive allocation trials ethical? Clinical Trials. 2015;12(2):102-6.
  11. Sirkis T, Jones B, Bowden J. Should RECOVERY have used response adaptive randomisation? Evidence from a simulation study. BMC Med Res Methodol. 2022;22(1):216.
  12. Bartlett RH, Roloff DW, Cornell RG, Andrews AF, Dillon PW, Zwischenberger JB. Extracorporeal circulation in neonatal respiratory failure: a prospective randomized study. Pediatrics. 1985;76(4):479-87.
  13. Proschan M, Evans S. Resist the Temptation of Response-Adaptive Randomization. Clin Infect Dis. 2020;71(11):3002-4.
  14. Korn EL, Freidlin B. Time trends with response-adaptive randomization: The inevitability of inefficiency. Clin Trials. 2022;19(2):158-61.
  15. Jiang Y, Zhao W, Durkalski-Mauldin V. Time-trend impact on treatment estimation in two-arm clinical trials with a binary outcome and Bayesian response adaptive randomization. J Biopharm Stat. 2020;30(1):69-88.
  16. Wathen JK, Thall PF. A simulation study of outcome adaptive randomization in multi-arm clinical trials. Clin Trials. 2017;14(5):432-40.
  17. Walter SD, Guyatt GH, Bassler D, Briel M, Ramsay T, Han HD. Randomised trials with provision for early stopping for benefit (or harm): The impact on the estimated treatment effect. Stat Med. 2019;38(14):2524-43.
  18. Lee KM, Wason J. Including non-concurrent control patients in the analysis of platform trials: is it worth it? BMC Medical Research Methodology. 2020;20(1):165.
  19. Saville BR, Berry DA, Berry NS, Viele K, Berry SM. The Bayesian Time Machine: Accounting for temporal drift in multi-arm platform trials. Clinical Trials. 2022;19(5):490-501.
  20. James ND, de Bono JS, Spears MR, Clarke NW, Mason MD, Dearnaley DP, et al. Abiraterone for Prostate Cancer Not Previously Treated with Hormone Therapy. New England Journal of Medicine. 2017;377(4):338-51.
  21. Ritchie G, Gasper H, Man J, Lord S, Marschner I, Friedlander M, et al. Defining the Most Appropriate Primary End Point in Phase 2 Trials of Immune Checkpoint Inhibitors for Advanced Solid Cancers: A Systematic Review and Meta-analysis. JAMA Oncol. 2018;4(4):522-8.
  22. Kok PS, Yoon WH, Lord S, Marschner I, Friedlander M, Lee CK. Tumor Response End Points as Surrogates for Overall Survival in Immune Checkpoint Inhibitor Trials: A Systematic Review and Meta-Analysis. JCO Precis Oncol. 2021;5:1151-9.
  23. Renfro LA, Carlin BP, Sargent DJ. Bayesian adaptive trial design for a newly validated surrogate endpoint. Biometrics. 2012;68(1):258-67.
  24. Park JJH, Harari O, Dron L, Lester RT, Thorlund K, Mills EJ. An overview of platform trials with a checklist for clinical readers. J Clin Epidemiol. 2020;125:1-8.
  25. Saville BR, Berry SM. Efficiencies of platform clinical trials: A vision of the future. Clin Trials. 2016;13(3):358-66.
  26. Gold SM, Bofill Roig M, Miranda JJ, Pariante C, Posch M, Otte C. Platform trials and the future of evaluating therapeutic behavioural interventions. Nature Reviews Psychology. 2022;1(1):7-8.
  27. Park JJH, Sharif B, Harari O, Dron L, Heath A, Meade M, et al. Economic Evaluation of Cost and Time Required for a Platform Trial vs Conventional Trials. JAMA Network Open. 2022;5(7):e2221140.
  28. Berry SM, Broglio KR, Groshen S, Berry DA. Bayesian hierarchical modeling of patient subpopulations: efficient designs of Phase II oncology clinical trials. Clin Trials. 2013;10(5):720-34.
  29. Turner RM, Turkova A, Moore CL, Bamford A, Archary M, Barlow-Mosha LN, et al. Borrowing information across patient subgroups in clinical trials, with application to a paediatric trial. BMC Medical Research Methodology. 2022;22(1):49.
  30. Angus DC, Berry S, Lewis RJ, Al-Beidh F, Arabi Y, van Bentum-Puijk W, et al. The REMAP-CAP (Randomized Embedded Multifactorial Adaptive Platform for Community-acquired Pneumonia) Study. Rationale and Design. Ann Am Thorac Soc. 2020;17(7):879-91.
  31. Love SB, Cafferty F, Snowdon C, Carty K, Savage J, Pallmann P, et al. Practical guidance for running late-phase platform protocols for clinical trials: lessons from experienced UK clinical trials units. Trials. 2022;23(1):757.
  32. Wilson DT, Hooper R, Brown J, Farrin AJ, Walwyn RE. Efficient and flexible simulation-based sample size determination for clinical trials with multiple design parameters. Stat Methods Med Res. 2021;30(3):799-815.
  33. Dmitrienko A, D’Agostino RB, Sr. Multiplicity Considerations in Clinical Trials. N Engl J Med. 2018;378(22):2115-22.
  34. Park JJH, Detry MA, Murthy S, Guyatt G, Mills EJ. How to Use and Interpret the Results of a Platform Trial: Users’ Guide to the Medical Literature. JAMA. 2022;327(1):67-74.
  35. Thall PF. Adaptive Enrichment Designs in Clinical Trials. Annu Rev Stat Appl. 2021;8(1):393-411.
  36. Lee KM, Brown LC, Jaki T, Stallard N, Wason J. Statistical consideration when adding new arms to ongoing clinical trials: the potentials and the caveats. Trials. 2021;22(1):203.
  37. Bofill Roig M, König F, Meyer E, Posch M. Commentary: Two approaches to analyze platform trials incorporating non-concurrent controls with a common assumption. Clin Trials. 2022;19(5):502-3.
  38. Schiavone F, Bathia R, Letchemanan K, Masters L, Amos C, Bara A, et al. This is a platform alteration: a trial management perspective on the operational aspects of adaptive and platform and umbrella protocols. Trials. 2019;20(1):264.
  39. Hague D, Townsend S, Masters L, Rauchenberger M, Van Looy N, Diaz-Montana C, et al. Changing platforms without stopping the train: experiences of data management and data management systems when adapting platform protocols by adding and closing comparisons. Trials. 2019;20(1):294.
  40. Morrell L, Hordern J, Brown L, Sydes MR, Amos CL, Kaplan RS, et al. Mind the gap? The platform trial as a working environment. Trials. 2019;20(1):297.
  41. Sandercock PAG, Darbyshire J, DeMets D, Fowler R, Lalloo DG, Munavvar M, et al. Experiences of the Data Monitoring Committee for the RECOVERY trial, a large-scale adaptive platform randomised trial of treatments for patients hospitalised with COVID-19. Trials. 2022;23(1):881.
  42. Turnbull BW. Adaptive designs from a Data Safety Monitoring Board perspective: Some controversies and some case studies. Clinical Trials. 2017;14(5):462-9.
  43. Ciolino JD, Spino C, Ambrosius WT, Khalatbari S, Cayetano SM, Lapidus JA, et al. Guidance for biostatisticians on their essential contributions to clinical and translational research protocol review. Journal of Clinical and Translational Science. 2021.
  44. Symons TJ, Straiton N, Gagnon R, Littleford R, Campbell AJ, Bowen AC, et al. Consumer perspectives on simplified, layered consent for a low risk, but complex pragmatic trial. Trials. 2022;23(1):1055.
  45. Budin-Ljøsne I, Teare HJA, Kaye J, Beck S, Bentzen HB, Caenazzo L, et al. Dynamic Consent: a potential solution to some of the challenges of modern biomedical research. BMC Medical Ethics. 2017;18(1):4.
  46. van der Graaf R, Hoogerwerf M-A, de Vries MC. The ethics of deferred consent in times of pandemics. Nature Medicine. 2020;26(9):1328-30.
  47. Woolfall K, Frith L, Dawson A, et al. Fifteen-minute consultation: an evidence-based approach to research without prior consent (deferred consent) in neonatal and paediatric critical care trials. Arch Dis Child Educ Pract Ed. 2016;101:49-53.
  48. Sim J. Outcome-adaptive randomization in clinical trials: issues of participant welfare and autonomy. Theor Med Bioeth. 2019;40(2):83-101.
  49. Saxman SB. Ethical considerations for outcome-adaptive trial designs: a clinical researcher’s perspective. Bioethics. 2015;29(2):59-65.
  50. Begg CB. Ethical concerns about adaptive randomization. Clinical Trials. 2015;12(2):101.
  51. London AJ. Learning health systems, clinical equipoise and the ethics of response adaptive randomisation. J Med Ethics. 2018;44(6):409-15.