Box 2. Summary of AI considerations

  • It is important to be vigilant for algorithmic bias against certain patient populations (eg individuals with darker skin tones). This can be mitigated, for example, through diverse research panels, clear ground truths that challenge clinical presuppositions and biases, and public and patient involvement and engagement (PPIE). The underlying datasets must also be carefully curated to ensure the generalisability of the model's performance, with that performance reported separately for relevant subgroups (a stratified audit of this kind is sketched after this box).

  • The reason(s) for using AI, the expected indicators of success and any potential unintended consequences of change should be considered at an early stage. This includes conducting a clinical risk assessment and a cost–benefit analysis that accounts for both the initial outlay and the ongoing costs of maintenance, training and service redesign.

  • The algorithm must be continually re-examined for evidence of ‘data drift’ and for changes in environmental conditions, such as the evolving prevalence of skin cancer and other dermatological conditions over time (a simple approach to monitoring for drift is sketched after this box).

  • There must be clear guidance and training for staff on the circumstances in which to use the model, and a clear route for escalating any concerns over the algorithm's performance to the model developers. Box 1 presents an example of a ‘false positive’ result, which is perhaps less worrisome than a ‘false negative’ result; the latter presents greater challenges for post-market surveillance and regulation, as well as for liability and optionality. If the algorithm had failed to identify a patient with skin cancer, where would the burden of accountability lie? There is little consensus in this area, although it is our belief that AI is, for now, an augmentative tool, designed to supplement and not supersede clinician expertise. Clinical staff must maintain oversight and remain accountable for actioning (or not) the recommendations made by an AI model, until such time as there is clearer medico-legal guidance around indemnity and liability. Can we cautiously envisage an AI ‘Bolam test’ in years to come, whereby an algorithm cannot be deemed negligent if other similar algorithms would arrive at the same conclusion?

  • Policymakers must consider the broader health system dependencies and bottlenecks that need to be addressed alongside the introduction of AI in a given health and care context (eg improved access to treatment, the availability of specialist staff, pathway optimisation/redesign, and greater engagement with marginalised communities).
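
The first point in the box refers to auditing a model's performance across patient subgroups. The following is a minimal Python sketch of one way this could be done, reporting sensitivity and specificity per Fitzpatrick skin type from a table of validation predictions; it is illustrative only, and the file name and column names (validation_predictions.csv, fitzpatrick_type, label, prediction) are hypothetical rather than taken from any specific study.

```python
# Sketch: stratified performance audit by skin tone (hypothetical column names).
# Assumes a CSV of per-lesion records containing a ground-truth label (1 = cancer),
# the model's binary prediction and a Fitzpatrick skin-type annotation.
import pandas as pd


def subgroup_metrics(df: pd.DataFrame, group_col: str = "fitzpatrick_type") -> pd.DataFrame:
    """Report sensitivity and specificity for each subgroup in `group_col`."""
    rows = []
    for group, sub in df.groupby(group_col):
        tp = ((sub["label"] == 1) & (sub["prediction"] == 1)).sum()
        fn = ((sub["label"] == 1) & (sub["prediction"] == 0)).sum()
        tn = ((sub["label"] == 0) & (sub["prediction"] == 0)).sum()
        fp = ((sub["label"] == 0) & (sub["prediction"] == 1)).sum()
        rows.append({
            "group": group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)


if __name__ == "__main__":
    results = pd.read_csv("validation_predictions.csv")  # hypothetical file
    print(subgroup_metrics(results))
```

Reporting the subgroup sample size alongside each metric also helps to flag groups that are simply under-represented in the validation data, which is itself a curation issue rather than a modelling one.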
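On the ‘data drift’ point, one simple and widely used monitoring approach (offered here as an illustrative sketch, not as a prescribed method) is to compare the distribution of a model input or output score in recent deployment data against a fixed reference sample using a two-sample Kolmogorov–Smirnov test; the feature, sample sizes and alert threshold below are assumptions for illustration.

```python
# Sketch: flag possible data drift by comparing recent inputs with a reference
# sample using a two-sample Kolmogorov-Smirnov test (scipy). The feature,
# window size and alert threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def drift_alert(reference: np.ndarray, recent: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Return True if the recent sample's distribution differs significantly
    from the reference distribution (a prompt for human review, not a verdict)."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < p_threshold


# Example usage with synthetic data standing in for a model input feature:
rng = np.random.default_rng(0)
reference_scores = rng.normal(loc=0.30, scale=0.10, size=5000)  # development-time data
recent_scores = rng.normal(loc=0.38, scale=0.10, size=500)      # shifted deployment data
print(drift_alert(reference_scores, recent_scores))             # True -> trigger a review
```

A significant result does not imply that the model is wrong; it is simply a trigger for the human oversight, re-validation and possible recalibration described above.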