Immune checkpoint inhibitors (ICIs), with or without chemotherapy, are now standard first-line therapy for advanced non-small cell lung cancer (NSCLC) without a driver mutation.1 A phenomenon of rapid tumour progression following ICIs has been reported in several clinical studies with differing criteria and no agreed definition.2 Gandara et al proposed clinically applicable criteria for fast progression (FP) to define this phenomenon.3 The incidence of FP was approximately 10% in the OAK randomised trial and a French retrospective cohort study.3 4
A model to predict FP after ICIs
In this issue, Zhou and colleagues evaluated 21 pretreatment blood-based variables in seven different machine learning models as predictors of FP among participants treated with atezolizumab in four trials for advanced NSCLC.5 Their proposed final model included four readily available variables: C reactive protein, neutrophil count, lactate dehydrogenase and alanine transaminase.
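To make the idea concrete, a classifier of this kind can be sketched as follows. This is a minimal illustration only: the data are synthetic, and the coefficients, model choice (logistic regression rather than the authors' seven candidate models) and preprocessing are assumptions, not the pipeline used by Zhou et al.

```python
# Hypothetical sketch of a four-variable FP classifier. The variable
# names match those in the paper; everything else (synthetic data,
# risk-generating coefficients, model choice) is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Synthetic pretreatment bloods: CRP (mg/L), neutrophils (10^9/L),
# LDH (U/L), ALT (U/L)
X = np.column_stack([
    rng.gamma(2.0, 10.0, n),   # C reactive protein
    rng.normal(5.0, 1.5, n),   # neutrophil count
    rng.normal(220, 60, n),    # lactate dehydrogenase
    rng.normal(25, 8, n),      # alanine transaminase
])
# Assumed relationship for illustration: higher inflammatory markers
# raise the probability of FP
logit = 0.03 * X[:, 0] + 0.4 * X[:, 1] + 0.005 * X[:, 2] - 4.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
risk = model.predict_proba(X)[:, 1]  # per-patient predicted FP risk
```

The appeal of such a model in practice is that all four inputs come from routine pretreatment blood tests, so no additional assays are required to generate a risk estimate.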
Strengths and limitations
The key strengths of this study were the use of multiple state-of-the-art machine learning approaches to develop a simple, parsimonious model based on a few routinely collected variables from the OAK trial6 and the validation of the model using data from other trials.7–9 Five issues that affect the clinical applicability of this model are discussed below.
First, the population used to train the model is now uncommon in routine clinical practice. The model was trained using participants progressing after treatment with cytotoxic chemotherapy.6 The model was validated using data from the BIRCH7 and FIR9 trials, in which approximately 20% of participants were treatment naïve. However, separate results for this subgroup were not reported, so the applicability of the results to treatment-naïve patients remains unclear.
Second, FP is not a phenomenon that occurs only after treatment with atezolizumab. Nine per cent of participants randomly assigned docetaxel in the OAK trial subsequently met criteria for FP.3 It would be helpful to see how the model proposed by Zhou et al performs in patients treated with chemotherapy alone or with other ICIs.5
Third, as expected, the model’s observed performance in the development cohort was better than it was in the validation cohorts. For example, the AUC (area under the receiver operating characteristic curve) statistics of the proposed model in the development cohort versus the validation cohorts were 0.908 versus 0.666 and 0.776. Further evaluation of the proposed model in other data sets is needed to determine its true utility.
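The AUC quoted above is the probability that a randomly chosen FP case receives a higher model score than a randomly chosen non-case, which is why the drop from 0.908 to 0.666 matters: in the validation cohort the model ranks cases above non-cases only about two-thirds of the time. A small sketch of the statistic, with made-up scores chosen purely to illustrate the contrast:

```python
# Mann-Whitney estimate of the AUC: the fraction of (case, non-case)
# pairs in which the case scores higher (ties count half).
# All scores below are invented for illustration.
def auc(scores_pos, scores_neg):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Development-like scores: cases always outrank non-cases -> AUC 1.0
dev_auc = auc([0.9, 0.8, 0.7], [0.2, 0.3, 0.1])
# Validation-like scores: heavy overlap -> AUC 2/3, near-useless ranking
val_auc = auc([0.6, 0.4, 0.5], [0.3, 0.55, 0.45])
```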
Fourth, selecting variables for inclusion in the proposed model was data driven rather than based on specific biological or mechanistic principles. For example, in the OAK and POPLAR trials, elevated pretreatment lactate dehydrogenase was associated with worse overall survival regardless of whether treatment was with atezolizumab or docetaxel.10 Elucidation of the pathophysiology of this proposed phenomenon would help identify candidate variables that are more specific to the proposed mechanisms of rapid tumour progression.
Fifth, for the model to be clinically useful, we need effective, alternative management strategies for those predicted to have FP. It remains unclear whether those who developed FP after starting single-agent ICI in these trials would have done better if they had been treated with chemotherapy or chemotherapy plus ICI.
This work by Zhou and colleagues highlights three important considerations for investigators hoping to develop a machine learning model for clinical implementation. First, the problem that the model addresses must be clinically relevant. Second, the model needs to accurately identify those with the problem within the population of interest. Finally, viable interventions are needed to mitigate the problem. Below, we highlight an excellent example illustrating these considerations in developing a machine learning model for routine clinical practice.
Investigators from The University of California, San Francisco and Duke University noted that 10%–20% of outpatients undergoing radiation therapy or chemoradiation required acute care (an admission or unscheduled visit). They developed a machine learning model using gradient-boosted trees to predict a patient’s likelihood of requiring acute care based on pretreatment variables captured in the electronic health record.11 They then demonstrated in a randomised trial that those identified by the model to be at 10% or greater risk of requiring acute care had significantly lower rates of acute care when randomly assigned to twice-weekly rather than once-weekly evaluations (12% vs 22%, respectively).12
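The triage workflow described above, a risk model plus an actionable threshold, can be sketched as follows. This is not the published model: the features, data and settings are assumptions, and only the gradient-boosted model class and the 10% risk threshold come from the studies cited.

```python
# Hypothetical sketch of the acute-care triage workflow: a
# gradient-boosted model scores patients from pretreatment features,
# and those at >=10% predicted risk are flagged for twice-weekly
# evaluation. Features and data are illustrative stand-ins for the
# EHR-derived variables used in the actual study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 500
# Synthetic pretreatment variables (e.g. age, comorbidity score,
# baseline symptom burden)
X = rng.normal(size=(n, 3))
y = (rng.random(n) < 0.15).astype(int)  # ~15% baseline acute-care rate

model = GradientBoostingClassifier(random_state=0).fit(X, y)
risk = model.predict_proba(X)[:, 1]
flag_twice_weekly = risk >= 0.10  # threshold reported in the trial
```

The key design point is that the model output feeds directly into an intervention (more frequent evaluation) that was itself tested in a randomised trial, which is exactly what the FP model currently lacks.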
Hence, for the proposed model to address the problem of FP, we first need to validate the model in newly diagnosed advanced NSCLC treated with ICIs, with or without chemotherapy. Second, we need to identify potential treatments that can delay FP. Finally, we can perform a biomarker enrichment design trial in which those predicted by the model to be at high risk of FP are randomised to receive either standard treatment plus a candidate FP-delaying treatment or standard treatment alone. The proposed model and treatment strategy will be deemed clinically useful if the incidence of FP in the intervention group is lower than in the control group.
Data availability statement
No data are available.
Patient consent for publication
Contributors This was an invited editorial. All authors contributed to the conception and design of the structure of the manuscript. YYS drafted the paper. THT, CKL and MS revised the draft paper.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests Dr Lee reports grants from Amgen, AstraZeneca, Roche and Merck KGaA, all outside the submitted work. Dr Stockler reports grants from Astellas, Amgen, AstraZeneca, Bayer, Bionomics, Bristol Myers Squibb, Celgene, Medivation, MSD, Pfizer, Roche, Sanofi and Tilray, all outside the submitted work. All other authors declare no competing interests.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Commissioned; internally peer reviewed.