AI in the NHS: a framework for adoption

Introduction
While we are encouraged by the promise shown by AI in healthcare, and more broadly welcome the use of digital technologies in improving clinical outcomes and health system productivity, we also recognise that caution must be exercised when introducing any new healthcare technology. Working with colleagues across the NHS Transformation Directorate, as well as the wider AI community, we have been developing a framework to evaluate AI-enabled solutions in the health and care policy context. The framework serves several aims but is, at its core, a tool with which to highlight to healthcare commissioners, end users, patients and members of the public the considerations to be mindful of when introducing AI to healthcare settings. In summary, the framework encompasses eight key considerations that policymakers are encouraged to discuss (Table 1).
Summary of the AI considerations framework
Building on existing work
The past 5 years have seen a proliferation of academic publications and policy initiatives designed to support the deployment of AI in health and care settings, many of which have informed the development of our framework. Lovejoy et al outline a number of considerations for the use of AI in healthcare, with a particular focus on context and model design.1 Similarly, Reddy et al present the ‘Translational evaluation of healthcare AI’ framework, centred around three main components (capability, utility and adoption) and associated subcomponents.2 Meanwhile, other publications have focused specifically on ethical considerations surrounding the use of AI, such as digital exclusion and worsening clinical outcomes among minority populations.3,4 These publications have been developed against a backdrop of significant policy activity, notably the Central Digital and Data Office (CDDO) Data Ethics Framework, the Department of Health and Social Care (DHSC) guide to good practice and the National Institute for Health and Care Excellence (NICE) evidence standards framework (ESF) for digital health technologies. The lattermost is a particularly welcome addition to the policy landscape, with its focus on the economic impact of using healthcare AI, a traditionally overlooked field of study.
NHS policy initiatives in this context include the Artificial Intelligence Laboratory's (AI Lab's) A buyer's guide to AI in health and care (and associated template), which serves to support commissioners in the procurement of AI technologies; the NHS Digital Technology Assessment Criteria for health and social care (DTAC), which is a tool for healthcare organisations to evaluate suppliers through the lenses of user needs and security, as well as regulatory and technical compliance; and the AI in healthcare: creating an international approach together report, published jointly by the Global Digital Health Partnership (GDHP) and NHS AI Lab, with the aim of providing AI policy guidance to the international health community.5,6
A framework for adoption
The framework presented here builds on, and is in many ways an amalgamation of, much of this work. Notably, it aims to reconcile both the ‘ethical’ considerations (such as algorithmic bias and transparency) and the more ‘operational’ considerations (such as real-world implementation, post-market surveillance and, importantly, change management), as few publications appear to address both sets of considerations together. The framework may temper some of the hype surrounding healthcare AI, and encourages users to be more holistic in their evaluation of AI technologies. The framework was developed following consultation with colleagues within the NHS Transformation Directorate, as well as the broader AI and life sciences community, across a wide range of domain expertise. This breadth of expertise allowed for the identification of several considerations; for example, clinical staff pointed to the importance of a ‘lead responsible clinician’ overseeing the rollout of a new technology, technical colleagues highlighted the need to monitor the change in an algorithm's performance over time, and ethics experts shed light on issues pertaining to bias and diversity.
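The monitoring point raised by technical colleagues can be made concrete. As a minimal, purely illustrative sketch (the class name, window size and sensitivity floor below are hypothetical choices, not NHS guidance or part of the framework), an algorithm's rolling sensitivity on recently confirmed cases might be tracked against an agreed floor, with a breach triggering review by the lead responsible clinician:

```python
from collections import deque


class PerformanceMonitor:
    """Illustrative rolling monitor for a deployed classifier.

    Tracks sensitivity (recall) over the most recent confirmed cases
    and flags when it dips below an agreed floor. All thresholds are
    hypothetical and would need clinical sign-off in practice.
    """

    def __init__(self, window: int = 200, sensitivity_floor: float = 0.90):
        # Each entry is a (predicted_positive, truly_positive) pair.
        self.window = deque(maxlen=window)
        self.sensitivity_floor = sensitivity_floor

    def record(self, predicted_positive: bool, truly_positive: bool) -> None:
        """Log one case once its ground truth is confirmed."""
        self.window.append((predicted_positive, truly_positive))

    def sensitivity(self) -> float:
        """Fraction of confirmed positives the model detected."""
        positives = [(p, t) for p, t in self.window if t]
        if not positives:
            return 1.0  # no confirmed positives yet in the window
        detected = sum(1 for p, _ in positives if p)
        return detected / len(positives)

    def needs_review(self) -> bool:
        # Trigger review by the lead responsible clinician when the
        # rolling sensitivity falls below the agreed floor.
        return self.sensitivity() < self.sensitivity_floor
```

The design choice of a fixed-size window means the alert reflects recent performance rather than the lifetime average, which is the behaviour one wants when performance degrades gradually after deployment.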
While we recognise that the framework is far from exhaustive, we hope that it can, in time, be developed into a more robust assessment tool for healthcare commissioners to oversee the introduction of new technologies. It is also important to note the overlap that exists between each of these eight considerations; for example, ‘ethics and governance’ should necessarily underpin the entire AI design and deployment life cycle. Likewise, the financial implications of using AI in healthcare (presented here within ‘success metrics’) will be intertwined with other considerations, such as ‘context’, ‘implementation’ and ‘managing change’. Nevertheless, the framework may serve as a useful guide in navigating the adoption of AI in a healthcare system.
We have developed a vignette (Box 1) to showcase a hypothetical use case for AI in the NHS. The framework has then been ‘applied’ to the vignette (Fig 1) to shed light on issues pertaining to the design and deployment of the algorithm that may otherwise have been overlooked; for example, the importance of algorithmic validation that is reflective of ‘data drift’ and changing environmental conditions (such as evolving disease prevalence), the scope and limitations of using the model in real-world clinical settings, and the policy measures that must occur in tandem with AI solutions to achieve meaningful system-wide improvement.
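To illustrate the ‘data drift’ point, one widely used heuristic is the population stability index (PSI), which compares the distribution of a model input at deployment against its distribution at validation time. The sketch below is illustrative only: the function is a hypothetical helper written for this article, and the PSI > 0.2 cut-off is a common rule of thumb rather than a validated clinical standard.

```python
import math


def population_stability_index(expected, actual, bins=10):
    """Compare a baseline ('expected') sample of a model input with a
    recent ('actual') sample. Larger values indicate greater drift;
    PSI > 0.2 is a common rule-of-thumb signal of meaningful change."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # A small epsilon avoids log(0) when a bin is empty.
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]

    p = proportions(expected)
    q = proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

In the vignette's terms, `expected` would be the input distribution seen during algorithmic validation and `actual` the distribution seen in live use, so a rising PSI could prompt revalidation as disease prevalence or referral patterns evolve.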
AI considerations framework applied to the vignette in Box 1. GP = general practitioner; R&D = research and development.
Scaling AI safely: a hypothetical vignette
Summary and conclusion
This framework may aid policymakers in better understanding the AI landscape in a given health and care context, and highlight the ancillary factors that must be addressed if AI is to be used as meaningfully as possible. In the case of the earlier vignette (Box 1), in which AI is being used to detect skin cancer, these factors may be summarised in Box 2.
Summary of AI considerations
We commend the excellent work that is currently taking place in using AI to address the highest priority areas of clinical need, and look forward to the more routine and widespread adoption of these tools in healthcare. The technologies must, however, be introduced carefully, using holistic evaluation criteria, multistakeholder engagement and ongoing performance monitoring.
Funding
This article and the framework presented here have been developed as part of the Faculty of Medical Leadership and Management (FMLM) National Medical Director's Clinical Fellow Scheme. We welcome feedback and comment in developing this framework further.
© Royal College of Physicians 2022. All rights reserved.
References
1. Lovejoy C, et al.
2. Reddy S, et al.
3.
4. Adamson A.
5. Joshi I.
6. Global Digital Health Partnership.