
Analysis

Clinical applications of machine learning algorithms: beyond the black box

BMJ 2019; 364 doi: https://doi.org/10.1136/bmj.l886 (Published 12 March 2019) Cite this as: BMJ 2019;364:l886
  1. David S Watson, doctoral student123,
  2. Jenny Krutzinna, postdoctoral researcher1,
  3. Ian N Bruce, professor of rheumatology and director45,
  4. Christopher EM Griffiths, foundation professor of dermatology56,
  5. Iain B McInnes, Muirhead professor of medicine7,
  6. Michael R Barnes, reader of bioinformatics23,
  7. Luciano Floridi, professor of philosophy and ethics of information and director of the digital ethics lab13
  1. 1Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford OX1 3JS, UK
  2. 2Centre for Translational Bioinformatics, William Harvey Research Institute, Queen Mary University of London, London, UK
  3. 3The Alan Turing Institute, London, UK
  4. 4Arthritis Research UK Centre for Epidemiology, Centre for Musculoskeletal Research, Faculty of Biology Medicine and Health, The University of Manchester, Manchester, UK
  5. 5NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester M13 9WL, UK
  6. 6The Dermatology Centre, Salford Royal NHS Foundation Trust, The University of Manchester, Salford, UK
  7. 7Institute of Infection, Immunity and Inflammation, University of Glasgow, Glasgow, UK
  1. Correspondence to: D Watson david.watson{at}oii.ox.ac.uk

To maximise the clinical benefits of machine learning algorithms, we need to rethink our approach to explanation, argue David Watson and colleagues

Key messages

  • Machine learning algorithms may radically improve our ability to diagnose and treat disease

  • For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models

  • Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers

Machine learning algorithms are an application of artificial intelligence designed to automatically detect patterns in data without being explicitly programmed. They promise to change the way we detect and treat disease and will likely have a major impact on clinical decision making. The long term success of these powerful new methods hinges on the ability of both patients and doctors to understand and explain their predictions, especially in complicated cases with major healthcare consequences. This will promote greater trust in computational techniques and ensure informed consent to algorithmically designed treatment plans.

Unfortunately, many popular machine learning algorithms are essentially black boxes—oracular inference engines that render verdicts without any accompanying justification. This problem has become especially pressing with the passage of the European Union’s new General Data Protection Regulation (GDPR), which some scholars argue provides citizens with a “right to explanation.” Any institution engaged in algorithmic decision making is now legally required, on request, to justify those decisions to any person whose data it holds, a challenge that most are ill equipped to meet. We urge clinicians to work with patients, data scientists, and policy makers to ensure the successful clinical implementation of machine learning (fig 1), and we outline important goals and limitations that we hope will inform future research.

Fig 1 Overview of the opportunities and challenges associated with black box models in clinical decision making

Predictions versus explanations

Predictions tell us that x is true; explanations tell us why x is true. The past decade has seen enormous advances in our ability to predict complex phenomena using computational techniques. Explanatory breakthroughs, on the other hand, have been few and far between.

Machine learning algorithms have already shown expert diagnostic performance based on imaging data for conditions including diabetic retinopathy,1 skin cancer,2 and pneumonia.3 Precision medicine seeks to go further, modelling molecular data to classify patients according to endotype,4 defining disease mechanisms and ontologies.5 With the integration of electronic health records and wearable medical sensors, machine learning may usher in a new era of real time diagnostic updates, enabling earlier, more targeted interventions.6

Machine learning techniques are already emerging in clinical practice.7 Microsoft’s InnerEye offers a graphical user interface to algorithms that help radiologists diagnose cancerous tumours and plan precise surgical interventions.8 DeepMind Health recently partnered with Moorfields Eye Hospital to develop models for diagnosing common retinal pathologies based on optical coherence tomography scans.9 IBM’s Watson for Oncology seeks to provide personalised cancer care based on health records,10 although the project has run into numerous procurement problems, cost overruns, and delays.11

One frequently cited obstacle to machine learning’s wider clinical adoption is a lack of understanding among patients and doctors about how predictions are made.12 This is especially true of some top performing algorithms, like the deep neural networks used in image recognition software. These models may reliably discriminate between malignant and benign tumours, but they offer no explanation for their judgments. Of course, clinicians are not always able to account perfectly for their own inferences, which may be based more on experience and intuition than on explicit medical criteria.13 Moreover, doctors do not optimally integrate all available evidence, and cognitive biases can be deeply entrenched.14 Still, many think that machine learning, as the newer technology, bears the burden of proof to account for its predictions. If doctors do not understand why the algorithm made a diagnosis, then why should patients trust the recommended course of treatment? Is informed consent even possible without some grasp of how the model reached its conclusion?

Not all algorithms are black boxes. Some sophisticated models, such as those based on regularised linear regression, provide a modest number of informative parameters.15 Yet, although restricting the use of clinical machine learning to more intelligible algorithms is tempting, it would be a mistake. No single technique is optimal for all cases—a result known as the “no free lunch theorem” in computer science16—which means that any attempt to shoehorn all datasets into a particular family of statistical models is guaranteed to fail.
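
To illustrate what such informative parameters look like in practice, consider the minimal sketch below. It is illustrative only, written in Python with the open source scikit-learn library, using synthetic data and hypothetical biomarker names; the L1 (“lasso”) penalty shrinks most coefficients to exactly zero, leaving a handful of parameters that a clinician can read directly.

```python
# Illustrative sketch only: an L1 penalised ("lasso") logistic regression fitted to
# synthetic data, showing how regularisation yields a small set of non-zero,
# human-readable coefficients. Biomarker names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_features = 500, 50
feature_names = [f"biomarker_{i}" for i in range(n_features)]

X = rng.normal(size=(n_patients, n_features))
# The simulated outcome depends on only three of the fifty biomarkers.
logits = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 1.0 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

# Only the informative biomarkers survive; the rest are shrunk to exactly zero.
for name, coef in zip(feature_names, model.coef_[0]):
    if coef != 0:
        print(f"{name}: {coef:+.2f}")
```

A model of this kind trades some predictive flexibility for a parameter set small enough to inspect, which is precisely what more complex algorithms cannot offer.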

The opportunity costs of not using our best available tools for disease detection and treatment are substantial—12 million people a year receive misdiagnoses in the United States, with about six million facing potential harm as a result.17 Nearly one third of all preventable deaths in the United Kingdom are attributable to misdiagnosis.18 The benefits of early disease detection are well known.

Yet clinicians are right to be sceptical of inscrutable models. Especially worrisome is the risk of overfitting to an unrepresentative sample. In one famous example, an algorithm designed to predict probability of death among hospital patients with pneumonia systematically classified asthmatics as low risk.19 The correlation was spurious—patients with pneumonia who also had asthma were sent directly to the intensive care unit, where they received continuous treatment that improved their prognosis so substantially that they seemed to have better than average chances of survival. Mistakes like this show the potential dangers of naively accepting the outputs of a black box model. They also raise important questions about liability in cases of algorithmic error. Who is ultimately responsible for a computational misdiagnosis? Clinicians? Data scientists? Policy makers have tackled similar questions in other contexts and come to no clear consensus.20

Right to explanation?

The GDPR has emphasised “explainability” as a top priority in machine learning research, provoking a global debate over the right to explanation in cases where individuals are subject to automated decisions. Whether or not this purported right is enshrined in the GDPR—a point of contention among legal scholars21—there are compelling reasons to endorse it in medical contexts. This will require a total reorientation of priorities for data scientists, who are more used to optimising for accuracy than for intelligibility.

Before we can design new methods to tackle this challenge, we must agree on what constitutes a satisfactory explanation. Do we want to understand all the patterns the machine has learnt (model-centric explanations)? Or just those that are relevant to the patient (subject-centric explanations)?22 The former aims to provide global understanding about the relative importance of all variables and how they interact to make predictions, which may shed new light on disease mechanisms; the latter provides local understanding about why this particular input led to that particular output, which could be relevant for individual patient prognosis.
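
The distinction can be made concrete with a small example. The following sketch is a toy illustration in Python using scikit-learn and simulated data, not a validated clinical method: it computes a model-centric explanation as permutation importance averaged over a whole cohort, and a subject-centric explanation by perturbing a single patient’s record and observing how the model’s predicted risk shifts.

```python
# Minimal sketch (simulated data, hypothetical features): contrasting a
# model-centric (global) explanation with a subject-centric (local) one.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-centric: how much each variable matters on average across the cohort.
global_importance = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("global importances:", np.round(global_importance.importances_mean, 3))

# Subject-centric: how the prediction for one patient changes when each of that
# patient's values is replaced by the cohort average (a crude local attribution).
patient = X[0:1].copy()
baseline_risk = model.predict_proba(patient)[0, 1]
for j in range(X.shape[1]):
    perturbed = patient.copy()
    perturbed[0, j] = X[:, j].mean()
    contribution = baseline_risk - model.predict_proba(perturbed)[0, 1]
    print(f"feature {j}: local contribution {contribution:+.3f}")
```

The global scores summarise what the model has learnt overall; the local contributions speak only to this patient’s prediction, and the two can tell quite different stories.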

Clinicians sceptical of machine learning tend to focus on the lack of clear model-centric explanations.19 Deep neural networks, for example, routinely contain millions of parameters, assigning weights and biases to thousands of nodes in an architecture so complex that no human could plausibly be expected to grasp the whole model’s internal mechanics. But if a computer truly outperforms doctors in making diagnoses, then we would like to know why. Understanding the biological patterns or processes it has uncovered could advance our knowledge and help build the medical community’s trust in such systems.

Of course, patients are the most critical stakeholders in clinical machine learning. Enabling them to appreciate their algorithmically determined diagnosis and treatment options is crucial—but also complicated, especially when inputs include high dimensional genomics or imaging data. Researchers in the nascent field of interpretable machine learning have implemented methods for generating model-agnostic local explanations.2324 These approaches are promising, but more work is needed to extend them to clinical settings and support them with the appropriate medical ethics framework.
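
As a hedged illustration of what such implemented methods look like in code, the sketch below uses the open source shap package (assuming it is installed; any classifier exposing predicted probabilities would do) to attribute one patient’s predicted risk to individual input features. It is a sketch under those assumptions, not an endorsement of any particular tool.

```python
# Illustrative sketch: a model-agnostic local explanation for one patient's
# prediction using the open source "shap" package. Data and model are synthetic;
# the approach assumes only that the model exposes predicted probabilities.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def predicted_risk(data):
    """Predicted probability of the positive class for each row of data."""
    return model.predict_proba(data)[:, 1]

# A small background sample defines "typical" input values for the explainer.
explainer = shap.KernelExplainer(predicted_risk, X[:50])
local_attributions = explainer.shap_values(X[0:1])

print("per-feature contributions for this patient:",
      np.round(local_attributions[0], 3))
```

Because the explainer interrogates the model only through its predictions, the same procedure applies whether the underlying classifier is a regression, a random forest, or a deep neural network.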

The path forward

Current proposals struggle to meet two important criteria for the clinical application of machine learning: scalability and customisability. With biological datasets often containing millions of variables per sample, the computational complexity of explanatory methods for molecular models must be constrained. This entails an inherent trade-off between completeness and simplicity. Ideally, users could specify a level of explanatory granularity that best suits their needs. Some may prefer diagnoses to be explained in terms of basic, familiar biological concepts; others may opt for a more detailed account in terms of molecular mechanisms.

Important unanswered questions remain about how best to measure the utility of a given explanation. Some authors have attempted to formalise the problem in a computable fashion,25 whereas others advocate a more empirical approach driven by experimental psychology.26 Both methods have their merits and drawbacks, but building a research programme around either will be difficult without first establishing a broad consensus.

Some caution with regard to transparency is advisable. A fully open source approach may enable misuse of the algorithm for harmful purposes outside the clinical context. This is particularly problematic when a diagnosis is based on easily accessible data, such as facial images or movement patterns.27 Scrutiny of machine learning is important but should not expose people to disproportionate risks or privacy violations, especially when there is no immediate benefit to diagnosis, as is the case with currently untreatable conditions.

We are only just beginning to realise machine learning’s potential for medicine, and although the technology remains exploratory, its benefits should not be ignored. Patients, clinicians, and data scientists must collaborate to develop new methods for extracting model-centric and patient-centric explanations that provide global and local understanding. Bringing algorithms into the clinic can advance knowledge and improve care, but only if we are prepared to devote sufficient resources to illuminating the black box for doctors and patients alike.

Footnotes

  • Contributors and sources: This article was originally conceived and drafted by DW, a doctoral student in epistemology and machine learning at the University of Oxford. He is the corresponding author and guarantor of the article. The work was completed with the guidance of LF, director of the Oxford Internet Institute’s Digital Ethics Laboratory, University of Oxford, and MRB, director of bioinformatics at Queen Mary University of London. Substantial contributions were subsequently provided by JK, a bioethics expert, and valuable clinical perspectives were added by IB, CG, and IM. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. MRB and LF contributed equally.

  • Competing interests: All authors have completed the Unified Competing Interest form (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; and no other relationships or activities that could appear to have influenced the submitted work. We have read and understood the BMJ policy on declaration of interests and declare that we have no competing interests.

References