Artificial intelligence and the NHS: a qualitative exploration of the factors influencing adoption

Abstract
Background Artificial intelligence (AI) has the potential to improve healthcare. However, there is limited research investigating the factors that influence the adoption of AI within a healthcare system.
Research aims This study aimed to use innovation theory to understand the barriers and facilitators that influence AI adoption in the NHS; to explore solutions to overcome these barriers; and to examine these factors within the context of radiology, pathology and general practice.
Methodology Twelve semi-structured, one-to-one interviews were conducted with key informants. Interview data were analysed using thematic analysis.
Findings A range of barriers and facilitators to the adoption of AI within the NHS were identified, including IT infrastructure and language clarity. Several solutions to overcome the barriers were proposed by participants, including education strategies and innovation champions.
Conclusion Future research should explore the importance of IT infrastructure in supporting AI adoption, examine the terminology around AI and explore specialty-specific barriers to AI adoption in greater depth.
Introduction
Across clinicians, data scientists, managers and governments, there is an aspiration that AI will transform healthcare delivery over the coming years. The enthusiasm for such a transformation is growing rapidly; a Google search for ‘AI in health and social care’ produces 360 million search results, an increase from 109 million just 12 months ago.1 However, while technical breakthroughs in AI and healthcare are shared widely, there is often less interest in solutions to promote the adoption of such technologies across complex, and perhaps siloed, health systems.2,3 This is a critical gap within the literature. If doctors are unwilling to work with AI, if concerns about data bias and legal liability are unaddressed, or if the data needed to validate such technologies cannot be shared with industry, then these breakthroughs will remain hypothetical.
AI can be defined as ‘the science of making machines do things that would require intelligence if done by people’.4 Despite some controversy, there is a growing consensus that the term ‘AI’ encompasses the subsets of machine learning and deep learning algorithms.5,6 These are the two computer science disciplines most commonly applied in healthcare.6,7 Therefore, the broad term of AI will be used throughout this paper.
There is a consensus that the specialties that stand to realise the potential of AI soonest are those which are data-driven, such as radiology and pathology.3,8,9 For instance, machine learning algorithms can be as accurate as radiologists in interpreting four important chest X-ray findings.10 Meanwhile, other radiology applications include screening for breast and lung cancers, as AI could support the interpretation of mammography and computed tomography of the chest.9,11,12 Within pathology, AI could be applied to improve the efficiency of metastases detection in lymph nodes and the accuracy of prostate cancer grading.13,14 These applications are especially valuable because radiology and pathology are currently facing significant pressures across the NHS.15,16 However, while data-driven specialties may harness the benefits of AI earlier, its impact is likely to be felt more widely. Within primary care, the Royal College of General Practitioners believes that AI will have the most valuable near-term impact when used for administrative tasks, releasing time for healthcare professionals.17 Future AI applications in general practice include identifying the relevant guidelines for each consultation, and potentially even recognising, and challenging, cognitive biases.18,19
There is a paucity of qualitative research evaluating the barriers to AI adoption within a healthcare system. This is significant because qualitative research is ideally suited to examining the attitudes, behaviours and decisions that determine whether an innovation is ultimately adopted. Of the two qualitative studies that explored barriers to adoption, one was conducted with French stakeholders, whose perspectives may not translate to the NHS, while the other focused solely on autonomous robots, which are unlikely to be the first AI products adopted at scale.20,21 Moreover, there is limited literature exploring the facilitators of AI adoption. This paper therefore moves beyond the current evidence base by examining both barriers and facilitators to AI adoption within the context of NHS clinical practice.
Diffusion of innovations (DOI) theory has been used as a theoretical framework to interpret the findings of this qualitative study, because it has been applied successfully to understand the adoption of other medical technologies.22–27 Rogers defined diffusion as ‘the process by which an innovation is communicated through certain channels over time, among the members of a social system.’22 DOI theory describes four elements which influence the diffusion of a given innovation: characteristics of the innovation (Table 1), communication channels, time and the social system.
Table 1. Further information on the diffusion of innovations framework.22
The primary objective of this research was to apply innovation theory to understand the barriers and facilitators to AI adoption in the NHS, as perceived by key informants. Where barriers were identified, potential solutions to overcome these were explored. The secondary objective was to examine these factors within the context of radiology, pathology and general practice.
Methods
This qualitative, interview-based study examined the opinions of thought leaders in the UK healthcare AI landscape.
Ethics and permissions
The study received ethical approval from the University of Birmingham Internal Research Ethics Committee. Participants provided written informed consent to participate.
Sampling and recruitment
Purposive sampling was used to select key informants who were thought leaders from diverse backgrounds. Methods used to identify participants included contacting relevant royal colleges, regulatory bodies and research organisations; hand searching lists of contributors to major reports in the field; and searching LinkedIn. Four groups of key informants were sought (Table 2).
Table 2. Key informants sought and rationale.
Twelve participants were recruited. Initially, organisations were emailed using publicly accessible email addresses or existing contacts, leading to the recruitment of five participants. The remaining seven participants were identified by the researcher and contacted by email or LinkedIn. Thirteen prospective participants and four organisations that were contacted did not respond or were unable to participate.
Recruitment was planned to continue until data saturation was reached. However, COVID-19 placed considerable stress on the healthcare sector, so recruitment ended after 12 interviews. The researcher was satisfied that this number of interviews would generate useful data: the aim of qualitative research is not to generalise findings, and key informant sampling is recognised to generate rich data even from a small number of interviews.28,29
Data collection
Twelve interviews were conducted in March and April 2020. The average interview length was 37 minutes (range 23–52 minutes). The researcher conducted all interviews one-to-one with the participants; one took place face-to-face and the remainder were conducted via Zoom. The researcher used a reflexive approach to consider how her personal characteristics shaped the research (supplementary material S1).
Interviews were semi-structured, guided by a topic guide (supplementary material S2) which was pilot tested. Interviews were audio recorded and transcribed verbatim by the researcher. Field notes were made following each interview.
To improve validity, data collection and analysis were conducted iteratively. The topic guide was modified following the initial coding of interview transcripts so that unforeseen issues raised by participants could be explored further in subsequent interviews. A summary of the initial coding of each transcript was returned to participants for member checking. Saturation of codes was reached within 11 interviews.
Data analysis
Transcribed interviews were uploaded to NVivo 12 software, and Braun and Clarke's six-step guide to thematic analysis was then followed (Box 1).30
Box 1. Description of data analysis.
Findings
Twelve interviews were conducted with participants who worked in the UK healthcare AI ecosystem. Participants included NHS doctors, managers, researchers and personnel at regulatory bodies (supplementary material S3).
Three themes and nine sub-themes were identified (Table 3).
Table 3. Themes and sub-themes.
Theme 1: system
Socio-political context of the NHS in 2020
A lack of funding was cited by six participants (across all four key informant groups) as a barrier to AI adoption in the NHS. A typical sentiment was that ‘the NHS doesn't have any money.’ (P3).
Additionally, it was suggested that NHS organisations often cannot look past the initial start-up costs to future benefits; P7: ‘I think trust finances often focus on such a short-term basis at the moment that, if [AI] improves patient outcomes and efficiencies over 5 years, great. But what's it going to do to the trust finances for the next 12 months?’
The quality of IT infrastructure across the NHS was mentioned by nine participants as a barrier to AI adoption; P8: ‘If you can't invest in the basic digital infrastructure, then AI is out of your reach.’
However, two participants (P5 and P9) disagreed, and expressed the view that the quality of IT infrastructure was not a significant barrier to AI adoption.
Many participants also discussed the importance of funding the change, and not the technology in isolation; P3: ‘In terms of practical facilitators ... IT and tech project managers and dedicated funding to support them as well.’ and P12: ‘Champions can be really helpful to allay fears.’
Regulatory landscape
Eight participants highlighted the current regulatory landscape as a barrier; P5: ‘[Regulation] is an absolute mess, right? It's an absolute mess. ... If you've got a piece of kit, which is AI, where does that sit? The MHRA? The GMC? ... CQC?’
Those who felt regulation acted as a barrier highlighted that it was confusing for developers to navigate and that the roles and remits of regulators were unclear.
Fit within the puzzle
Nearly all participants discussed how some specialties would be more amenable to AI than others; P8: ‘I think obviously the low hanging fruit would be ... doctors with patterns. So, image recognition ... dermatology, radiology, pathology and ophthalmology.’
Most participants felt that there would be specialty-specific barriers to adoption, and some gave examples of these. However, four participants did not feel qualified to comment on specialty-specific barriers. Among those who did discuss the issue, many emphasised that adoption in primary care faces different challenges to secondary care; P9: ‘The general practitioners (GPs) and the tech suppliers aren't able to work together because there are just so many GP practices.’ P3: ‘GPs deal a lot with mental health, chronic illnesses, disabilities and things which are very non-digitisable healthcare problems.’
It was suggested that NHSX, the body responsible for the digital transformation of the NHS, should identify where the NHS could benefit most from AI and share those needs with developers so that they can create useful AI products; P9: ‘I think there's a role for NHSX ... and people like that to really identify. Well, what can this technology do? ... And then where does that best plug into clinical pathways to release value or improve care? And then to signal demand to the tech developers ... and then they can develop against it.’
Theme 2: people
What actually is AI?
The need for clarity of language around AI was raised by many participants; P10: ‘We all need to use the same language ... you know you can be 10 minutes into a meeting and no one's got a clue what anyone's talking about because there's no baseline terms.’
Several participants expressed dislike for the term ‘AI’, with some going so far as to say ‘I hate the word AI.’ (P2).
Additionally, hype regarding AI was emphasised by four participants as a potential barrier; P2: ‘People talk about AI like ... it's a utopian dream that's going to solve all our needs financially and clinically and everything else.’
Finally, misunderstanding and fear around AI were highlighted by a few participants. It was suggested that fears around AI often stemmed from misunderstandings around ‘what AI might be, rather than what it actually is.’ (P1).
People-powered transformation
Seven participants outlined the need for the education of healthcare professionals, particularly ‘explaining what the benefits will be and how it will help them in their work.’ (P12).
Three participants proposed that real-world examples of AI being used in the NHS would support better understanding and dispel some fears; P1: ‘I think once we have one or two actual examples of AI being deployed into the NHS and people can see ... the benefits ... then people will think of it just as any other type of software that we would now struggle to do without.’
The need to educate, and communicate with, the public was raised by a few participants; P10: ‘Then someone needs to work on the comms plan, you know, with the public. Can you imagine ... the public with some of this stuff?’
It's not going to be a robo-doc
Five participants stated that they did not believe AI would replace doctors, with one clarifying ‘that is nobody's intention.’ (P1). Instead, several participants suggested that AI had the potential to improve the working life of clinicians; P12: ‘So, [doctors] might be thinking that they're going to lose their job and this AI system is going to do everything. But the reality is, it's just going to do a discrete set of tasks. And that might free them up to do other things ... have more patient-facing time, do more research.’
However, three participants, all classified as ‘individuals with influence’, suggested that fewer healthcare professionals may be required in the future.
Several participants noted a lack of resistance among healthcare professionals towards AI. Most (including a doctor and royal college representative) felt that patient-facing staff were keen to try using AI; P6: ‘We're not seeing it as replacing us, we're seeing it as being very much a tool to assist us.’
However, one participant believed that some staff would not be keen to engage; P5: ‘There's a bottom 25% [of doctors] where it's going to be really difficult to get them to adopt or engage in it.’
Theme 3: technology
Data-driven nature
Several participants spoke about the ‘fragmented data pool’ (P4) within the NHS as a barrier to the development of AI products; P2: ‘I think people make an assumption about data usage in the NHS that there's just a single point of contact and you can get access to every patient record – doesn't work like that. We don't have that.’
Furthermore, five participants felt that information governance, especially at a local level, hampered data sharing; P10: ‘How do you deal with all of the things the public would be really worried about? Which is all the information governance.’
One interviewee proposed a solution of information governance templates, perhaps to be developed by NHSX; P9: ‘So, there's something for NHSX to do ... provide national data protection impact assessment templates for trusts to adapt ... that would speed things up massively.’
Challenges ahead
Many participants discussed the concept of ‘black box’ AI as raising new challenges, including the right to an explanation; P10: ‘To what extent should people have a right under the [General Data Protection Regulation] legislation to an answer about why their treatment is taking a particular course? If a computer has decided that.’
However, there was disagreement. Several participants believed that black box AI was theoretical, and that AI would never be truly unexplainable. Others thought that there was a degree of over-questioning; P2: ‘People talking about ... the black box algorithm, how do we know what decision is made? And my answer is ... how do you know the clinician ... who's sat in front of you is making the right decision?’
Several participants raised concerns regarding legal liability. One participant highlighted that there is not yet any case law regarding AI.
Another issue highlighted by several participants was the transferability of an AI tool from one setting to another; P9: ‘We had a radiology machine learning [AI] provider in north London. Again, really high sensitivity and specificity. Applied it in south London. It's terrible. Different ethnic group, different scanner, different radiology positioning.’
Finally, the risk of biased algorithms was raised by a few participants; P7: ‘How do we know an algorithm isn't biased?’
Evaluation
Most participants spoke about the regulatory and evaluation challenges introduced by self-learning AI; P11: ‘If you have a machine learning [AI] device that you want to learn while it's going along, each learning procedure creates effectively a new device. So, you would in fact have to then go around the regulatory circle again just for that one learning event.’
Additionally, three participants discussed the lack of an agreed gold standard for how accurate an AI tool needs to be before it can be used in clinical practice; P5: ‘There's no real understanding as to where the benchmark is for being able to use a piece of AI software. So, does it have to be better than the percentage of, say, misdiagnoses the consultants make? As good as? Or what? Nobody really knows.’
Discussion
DOI theory
The key aspects of Rogers’ DOI theory that were reflected in the findings were the perceived relative advantage, compatibility, complexity, trialability and observability of AI. These represent the five characteristics of an innovation explained in Table 1. Additionally, some elements of both the ‘time’ and the ‘social system’ aspects of Rogers’ model were reflected in the results (Table 4).
Table 4. Key findings mapped to the diffusion of innovations framework.
DOI theory suggests that of the five characteristics, relative advantage and compatibility are especially significant in explaining the rate of adoption.22 Indeed, these were the two characteristics that applied most often to the findings.
Implications for practice
Many of the findings corresponded to themes that are well explored in the existing literature; for example, the confusing regulatory landscape for AI, issues with data access to develop AI products, the view that some specialties are more amenable to AI and the need for education around AI.2,3,31–42
However, some findings were unexpected, moving beyond the existing literature. Many participants highlighted the need for improved language clarity around AI, and the term ‘AI’ itself was disliked by several interviewees. This issue of language clarity has not been discussed in depth in the literature, with only one qualitative study indicating a poor understanding of the term ‘AI’.20 Moreover, most participants spoke of the financial pressures facing the NHS and highlighted how these may negatively impact the adoption of AI. These financial pressures are well documented generally; however, they have not featured in the literature on AI adoption.43,44 Another unanticipated finding was ambiguity around the gold standard that AI will need to reach before it can be deployed in a healthcare system. Champions (as facilitators of AI adoption) were also endorsed by several participants, even though they have not been explored in the evidence base on AI adoption. Lastly, several participants highlighted that the transferability of an AI product from one healthcare setting to another is likely to be poor, an issue that was not well explored in the literature and was discussed in only one narrative review.45
Some of the barriers to AI adoption highlighted by this study represent practical concerns. On the issue of language clarity, there was no consensus among participants on which terms should be used instead of ‘AI’. This raises questions about how AI adoption can be meaningfully discussed among healthcare professionals while standardised terminology is yet to be agreed. Similarly, uncertainty regarding the gold standard for AI underlines the need for a national conversation about the threshold of performance that an AI tool must meet before it can be deployed across the NHS. Ongoing uncertainty in this area has the potential to undermine confidence in AI among both healthcare professionals and the public. Lastly, the transferability of AI is significant: if new AI tools need to be developed, or existing ones significantly redesigned, to be used across different NHS sites, this will create an additional financial burden.
Four solutions to encourage AI adoption were proposed by participants. One was the use of champions, which are recognised to support the adoption of innovations in other sectors.46,47 Another was education for healthcare professionals and the public, emphasising the benefit AI can offer to clinicians’ working lives and sharing real-world case studies of AI use in the NHS to improve understanding. The final two solutions involved NHSX and similar organisations. Firstly, that these organisations should identify the areas in the NHS where AI can best address existing challenges and ask developers to focus on these. Secondly, that they could provide clarity on information governance, in the form of template data protection impact assessments, to support data sharing to develop AI products. The latter has also been suggested by others.31,33
Limitations
Although data saturation was reached, the sample size was smaller than planned. Additionally, all participants were male. This was unintentional: of the 20 potential participants contacted directly during recruitment, seven were women, but none participated. Women are underrepresented in leadership positions in both healthcare and health technology, so recruiting a gender-diverse sample was anticipated to be challenging.48–50 While the findings are internally coherent, they would likely be enhanced by a more gender-diverse sample, and this represents an area for future research.
Furthermore, complete triangulation was not achieved, as data collection and analysis were conducted by a single researcher. Additionally, the research objective of exploring barriers and facilitators within the context of radiology, pathology and general practice was not fully addressed, as some participants were unwilling to comment.
Conclusion
DOI theory was an applicable and relevant theoretical lens. The issue of language clarity around AI was a notable, and unexpected, finding which will have implications for practice. Additionally, the study identified some specialty-specific factors that could influence the adoption of AI, particularly affecting general practice, although these were not explored in as much depth as intended. Finally, there was disagreement among participants regarding whether the quality of IT infrastructure in some areas of the NHS acted as a barrier to AI adoption.
A significant implication for practice, which echoes the existing literature, is the need for education around AI for healthcare professionals, the media and the general public. This can promote understanding, while dispelling fears and myths. Unfortunately, this may be difficult as the media tends to seek a silver bullet for the challenges facing the NHS. Additionally, champions and real-world case studies should be considered as practical facilitators to support AI adoption within an NHS setting, while further clarity on information governance would also be welcomed.
Future research is needed to ascertain whether the quality of IT infrastructure could impact the ability of certain NHS organisations to adopt AI. Other areas of further research include the language around AI and specialty-specific barriers to AI adoption. Finally, future studies in this area should attempt to capture a more diverse sample of participants.
Supplementary material
Additional supplementary material may be found in the online version of this article at www.rcpjournals.org/fhj:
S1 – Reflexivity statement.
S2 – Interview topic guide.
S3 – Participant characteristics.
© Royal College of Physicians 2021. All rights reserved.
References
1. Fenech M.
2. –
3. Loh E.
4. Minsky ML.
5. Ongsulee P.
6. –
7. Bini SA.
8. Kulkarni S.
9. –
10. Singh R.
11. Lehman CD.
12. Sathyakumar K.
13. –
14. Nagpal K.
15. Martin J.
16. Care Quality Commission.
17. Royal College of General Practitioners.
18. Manning CL.
19. Summerton N.
20. –
21. Cresswell K.
22. Rogers EM.
23. Zhang X.
24. –
25. –
26. –
27. Ash JS.
28. Creswell J.
29. Payne G.
30. –
31. Reform.
32. The AHSN Network.
33. House of Lords Select Committee on Artificial Intelligence.
34. Harwich E.
35. Academy of Medical Royal Colleges.
36. Ahmad OF.
37. Joshi I.
38. Royal College of Radiologists.
39. Royal College of Physicians.
40. Royal College of General Practitioners.
41. –
42. –
43. Robertson R.
44. Kraindler J.
45. Carter SM.
46. Chesbrough H.
47. Sergeeva N.
48. Denend L.
49. Kvaerner KJ.
50. –