Article Text


Does the evidence exist for the deployment of AI in cancer therapies?
Keith Langmack
Radiotherapy Physics, Nottingham University Hospitals NHS Trust - City Campus, Nottingham, UK
Correspondence to Dr Keith Langmack; Keith.Langmack{at}nuh.nhs.uk


Artificial intelligence (AI) is an umbrella term that encompasses a wide range of non-procedural computational techniques for problem solving. It has been postulated to improve the standardisation and quality of care, make the care process more efficient and ameliorate staffing issues.1 Macheka et al have systematically reviewed the literature of prospective studies using AI in the postdiagnostic cancer pathway to ascertain the quality of the clinical evidence and the real-world feasibility of AI in this setting.2

Of the 12 457 studies that their PubMed and Embase database search found, only 15 met their inclusion criteria (7 randomised controlled trials, 8 observational studies).

Almost half of these studies were at high risk of bias when assessed with the RoB 2 or ROBINS-I quality assessment tools. Most of the studies reviewed addressed questions around influencing clinician or patient behaviour, while one assessed a model to estimate surgery duration.

Four encompassed radiation oncology, specifically brachytherapy planning, autosegmentation of organs, automated treatment planning and disease response. Their conclusion was that most oncology AI research remains at an experimental stage.

Given the ubiquity of literature reporting the use of AI, 15 seems a remarkably low number of studies. However, a recent National Institute for Health and Care Excellence (NICE) health technology assessment of AI autocontouring similarly found a dearth of prospective evidence, with eight full papers across nine technologies meeting their search criteria.3 This assessment also found that studies were poorly reported and had methodological limitations. NICE searched several additional databases (including INAHTA, the CEA Registry and ScHARRHUD) for evidence on the clinical and cost-effectiveness of the technology, which may explain the difference in the number of studies retrieved.

Macheka et al have recommended several themes to address the areas of limitation their research highlighted: lack of interoperability between hospitals, lack of validation of AI quality and efficacy, lack of standardisation in the evaluation and validation of AI, lack of integration with an implementation science framework and lack of workforce training.2 The first of these, interoperability between hospitals, may prove the most intractable, as accessing data from multiple systems without manual intervention requires a large change in information system design.4 Other issues may prove more solvable. For example, NICE is aiming to develop an application-specific evaluation framework for AI as a medical device (AIaMD) to demonstrate clinical and economic value.5 This requires the engagement of all stakeholders in developing the evaluation tools to ensure their applicability and fitness for purpose. The pitfall of not doing so is that evaluations will be inconsistent.6

The need for an implementation science for AI in healthcare has been well described.4 The heart of any development is the holistic design of the best possible care delivery system. This requires multistakeholder involvement, so that any AI components work within the care environment to augment care, rather than the design concentrating on the AI itself. At the design stage it may be found that AI is not the only solution to a particular problem. For example, a priori multicriteria optimisation, driven by a clinically defined wish list incorporating patient-specific requirements, could be an appropriate automation method for radiotherapy treatment planning until AI models can access all the clinical data required to drive appropriate decision-making.7 Any implementation framework for the AIaMD must therefore be specific to the application.8
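To illustrate the shape of such a wish-list-driven approach, the following minimal Python sketch implements prioritised (lexicographic) optimisation: each goal is optimised in priority order and then constrained near its achieved value, so that lower-priority goals cannot degrade it. The toy objectives, variable names and slack values are illustrative assumptions, not the clinical method of reference 7.

# Minimal sketch of wish-list-driven prioritised (lexicographic) optimisation.
# The toy objectives and slack values are illustrative assumptions, not the
# clinical method described in reference 7.
import numpy as np
from scipy.optimize import minimize

# Goals in wish-list priority order: (objective, slack allowed once met).
# The objectives stand in for clinical goals such as target coverage and
# organ-at-risk dose; x is a toy stand-in for plan parameters.
wish_list = [
    (lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2, 0.5),  # priority 1
    (lambda x: x[0] ** 2 + x[1] ** 2, 0.0),                  # priority 2
]

def prioritised_optimise(goals, x0):
    """Optimise goals in priority order; after each solve, constrain that
    goal to (achieved value + slack) so later goals cannot degrade it."""
    constraints, x = [], np.asarray(x0, dtype=float)
    for objective, slack in goals:
        result = minimize(objective, x, method="SLSQP", constraints=constraints)
        x, bound = result.x, result.fun + slack
        constraints.append(
            {"type": "ineq", "fun": lambda x, f=objective, b=bound: b - f(x)}
        )
    return x

print(prioritised_optimise(wish_list, x0=[0.0, 0.0]))  # approximately [0.5, 0.5]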

Macheka et al2 remind us that a great deal of validation work is still required before AI can be deployed at scale within oncology. The development of an appropriate AI application poses a number of challenges that should be addressed at the design stage.4 During development, to avoid systematic bias, AI models require large quantities of well-curated data from the population being served. They also require direct access to similar data on a per-patient basis. Accessing these data is challenging, especially where they are not already in a computer-readable format or are spread across diverse systems. The community is starting to develop a hierarchy of evaluation frameworks (from general principles5 to specific examples8 9) for assessing the performance and cost-effectiveness of AI. Currently, some specific AI technologies, such as AI autocontouring, are being adopted with the evidence being generated at a local level.9 A recent informal survey of heads of radiotherapy physics in England by the Institute of Physics and Engineering in Medicine (IPEM) showed that two-thirds of radiotherapy departments have AI autocontouring in clinical use (personal communication).

Yet AI must not be regarded as a panacea for productivity gaps; it is likely to be a necessary, but not sufficient, part of the solution. Within the radiotherapy pathway, the introduction of a treatment planning system that reduced calculation times from 2–3 hours to 2–3 min did not, by itself, significantly decrease the time between referral and the start of treatment. Rather, additional workflow and system interventions were required to achieve the 13-day reduction in time to treatment.10 Simply adding technology is not the solution when the entire process requires redesign to achieve improvements.
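As an illustration of the kind of local evidence generation mentioned above, the sketch below scores the overlap between an AI contour and a clinician's reference contour using the Dice similarity coefficient, a standard overlap metric in autocontouring evaluation. The toy masks and the 0.8 action threshold are assumptions made for illustration, not a published acceptance criterion.

# Minimal sketch of a local autocontouring check: overlap between an AI
# contour and a clinician's reference contour, scored with the Dice
# similarity coefficient. The toy masks and the 0.8 threshold are
# illustrative assumptions, not a published acceptance criterion.
import numpy as np

def dice_coefficient(ai_mask, reference_mask):
    """Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|), for binary masks."""
    ai, ref = ai_mask.astype(bool), reference_mask.astype(bool)
    denom = ai.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both structures empty: treat as perfect agreement
    return 2.0 * np.logical_and(ai, ref).sum() / denom

# Toy 2D slices standing in for binary structure masks extracted from
# DICOM-RT structure sets.
ai = np.zeros((64, 64), dtype=bool)
ai[20:40, 20:40] = True
ref = np.zeros((64, 64), dtype=bool)
ref[22:42, 22:42] = True

dsc = dice_coefficient(ai, ref)
print(f"DSC = {dsc:.3f}:", "accept for editing" if dsc >= 0.8 else "flag for review")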

Data availability statement

No data are available.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • Contributors This work was solely produced by the author.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests I have a research agreement with MVision (Finland).

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting or dissemination plans of this research.

  • Provenance and peer review Commissioned; internally peer reviewed.
