The implementation of artificial intelligence in radiology: a narrative review of patient perspectives

ABSTRACT
Aim To synthesise research on the views of the public and patients on the use of artificial intelligence (AI) in radiology investigations.
Methods A literature review with narrative synthesis of qualitative and quantitative studies that reported the views of the public and patients on the use of AI in radiology.
Results Only seven studies related to patient and public views were retrieved, suggesting that this is an underexplored area of research. Two broad themes were identified: confidence in the capabilities of AI, and the accountability and transparency of AI.
Conclusions Both optimism and concerns were expressed by participants. Transparency in the implementation of AI, scientific validation, clear regulation and accountability were expected. Combined human and AI interpretation of imaging was strongly favoured over AI acting autonomously. The review highlights the limited engagement of the public in the adoption of AI in a radiology setting. Successful implementation of AI in this field will require demonstrating not only adequate accuracy of the technology, but also its acceptance by patients.
Background
In recent years, there has been increasing interest in the application of artificial intelligence (AI) within radiology. Advances in such technology come at a time when the volume of radiological investigations is increasing, and AI could offer the potential to improve both the speed and accuracy of radiological reporting.1 A search for the terms ‘artificial intelligence’ and ‘radiology’ within PubMed yielded 26 citations from 2016, increasing to 1,539 citations in 2021, an almost 6,000% increase. Well over 100 CE-marked products are available within Europe, albeit with significant variation in the levels of peer-reviewed evidence to demonstrate their efficacy.2
A key consideration in the implementation of AI is the trust and acceptance of the technology among stakeholders. Clinical professionals and trainees remain uncertain about the effect of the introduction of AI into the field of radiology.1,3–5 The stakeholders who bear the greatest risks from new technologies are patients. Achieving adequate understanding of the attitudes and concerns of patients is crucial to ensure their interests are represented in determining how the technology is used to deliver clinical care. Accordingly, we undertook a narrative review to gather and explore current evidence regarding patient perspectives on utilising AI within medical imaging.
Methods
Search strategy
On 9 June 2022, we conducted an electronic database search for published and unpublished studies of patient perspectives on AI in radiology. The search strategy is outlined in Table 1 and documented in full in supplementary material S1 along with the study protocol in supplementary material S2.
Table 1. Search terms formulated for qualitative studies detailing patient views on artificial intelligence in radiology
Data collection, narrative synthesis and quality assessment
All titles and abstracts were reviewed by one researcher (SH), with a random selection of 20% independently screened by a second researcher (SB). The full text of any potentially eligible article was then obtained and assessed for eligibility. Appraisal of the literature for themes, using an inductive process as described by Arai et al,6 and the highlighting of data around these themes were conducted separately by SH and BB, then summarised in consensus with SB. A quality assessment of the studies was undertaken by SB and BB using a mixed-methods appraisal tool.7 Narrative synthesis was guided by methods described by Arai et al, and findings were reported in accordance with the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) criteria.6,8
Results
The search yielded a total of 2,832 studies, of which 116 were selected for full-text review, resulting in seven studies in the final review (Fig 1). These were five cross-sectional survey studies and two semi-structured interview studies. The studies were conducted in the Netherlands (n=3), Germany (n=2), Sweden (n=1) and the UK (n=1) and were all published between 2019 and 2021 (Table 2). Two studies focused on breast cancer screening and included only female participants. None of the studies reported restrictions by ethnicity. Critical appraisal of the studies was performed using the Mixed Methods Appraisal Tool7 (supplementary material S3).
Fig 1. Study selection.
Table 2. Description of studies that satisfied the study eligibility criteria
Strengths and limitations
This is the first review of published studies on patient perspectives on the introduction of AI in radiology. We identified only seven studies that met the eligibility criteria. Despite the studies being conducted in different countries and in heterogeneous health systems, the themes explored were reasonably consistent.
All of the studies were conducted in Western Europe, and all but one were performed within a medical setting, which could limit the generalisability of the opinions expressed. Patients' prior knowledge of AI would plausibly affect their perceptions and preparedness to accept the technology in image interpretation, yet only two studies11,14 recorded participants' prior understanding of AI. Although the influence of prior knowledge on participants' attitudes was modest in those studies, the failure of the remaining studies to assess it is a limitation. Analysis of non-responders was not reported in any of the studies, which is a potential source of bias in the reported results.
Discussion of the key themes
Our results demonstrate the relative lack of evidence regarding patient views: only seven eligible studies were identified, against 43 studies that explored the views of clinical professionals (34 studies) and medical students (nine studies) (supplementary material S4).
Two broad recurring themes were identified: confidence in the capabilities of AI for diagnostic imaging, and views regarding the accountability and transparency of a diagnostic process that might include AI. The demographic features associated with views on the adoption of these technologies are presented in Table 3.
Table 3. Summary of patient data addressing the themes of confidence in the capabilities of AI, its accountability and transparency, and demographic features associated with these views
Confidence in the capabilities of AI
Patients recognised the future implementation of AI in the diagnostic process as inevitable.9,10 The potential to add safety, apply state-of-the-art analysis and facilitate faster results (and, thus, reduce anxiety) were identified by patients as desirable benefits.9–11 The highest levels of confidence were consistently expressed for systems utilising a combination of AI and clinical interpretation, sometimes coined ‘AI-supported decision making’.9,10,12 In only one study was there greater confidence in a clinician's interpretation without the use of AI.13 In general, a greater level of trust was expressed in the ability of the clinician to correctly interpret imaging compared with AI, with a consistent negative attitude toward AI being a sole replacement for human diagnostic decision making. However, AI was seen as potentially slightly superior to clinicians in incorporating state-of-the-art information to make decisions based on the most current evidence.10,14
Digital literacy, prior experience of AI and educational attainment were factors associated with greater acceptance of the use of AI.9,11–15 Age was evaluated in two studies and found to be not, or only weakly, associated with reduced acceptability of AI.9,13 Acceptability was reduced if there was a preference for human interaction, a history of chronic disease or an inability to work. Patient acceptance of AI was reduced as the severity of the disease being assessed increased.14 When discussing test results, most patients felt that human interaction and empathy were vital parts of this process, particularly in understanding the reliability of results and their clinical impact.10
Accountability and transparency
Clinician opinion was strongly favoured over AI in circumstances in which the clinician and AI did not agree,13,14 with patients viewing the role of AI as a decision aid for clinicians.
Accountability for errors if AI were operating autonomously was not explored in depth. Uncertainty was expressed regarding whether the responsibility would belong to the radiologist, the wider clinical team or the AI provider.10,12 This reflects similar uncertainty expressed within the radiology profession and highlights the need for an ethical and legal framework within which AI systems are expected to operate.4,15 Patients expressed a wish for transparency regarding the role of AI in the decision-making process and an expectation that the reasoning behind AI-based decisions would be explained.10,11,14
There was uncertainty regarding the responsibility for regulating the efficacy of AI systems.10,12 Generally, patients expected that AI systems would be subject to independent scientific validation and operate within a regulatory or legal framework,10,11,15 with the duty for developing this validation considered to belong to the ‘scientific community’.10 These expectations are at odds with the current state of implementation of AI. A recent review of over 100 radiology CE-marked AI products available for clinical use found that 64 lacked peer-reviewed evidence and that approximately half of the evidence produced was co-funded or co-authored by the vendor.2
Although patients placed great faith in human interpretation, the inherent risk of errors in current practice was recognised, as were the potential benefits of AI in assisting clinicians to reach a correct diagnosis. This is reflected in the preference for a model of human interpretation with AI as a second opinion, with the human as the final arbiter of the outcome.
Conclusions
There is limited published literature regarding the views of patients on the use of AI. This review indicates that patients see the potential value of this technology, but expect transparency around its role, with high expectations of validation and regulation. Many of the uncertainties around issues of responsibility for decision making and governance are shared by clinicians.
Key uncertainties surround attitudes to the topic of diagnostic error. The self-learning algorithms utilised by AI mean that the mechanisms of an error might remain unclear even to the manufacturer. Although the limitations of human perception and decision making that result in error are difficult to accept, they are at least comprehensible. Arguments that the likelihood of an error might have been greater without AI are nuanced and prone to the bias of hindsight. Public opinion regarding the relative responsibilities of the radiologist, manufacturer and wider regulatory framework remains unclear.
There appears to have been only limited exploration of the public's attitude toward important governance issues, such as expectations around transparency and consent, ensuring personal privacy and use of personal data for the development of commercial products.
With increasing demands and limited resources, the justification for developing AI as a replacement for human interpretation will grow. Yet a clear majority of patients identified in this review currently favour the implementation of AI as an aid to, rather than a replacement for, human decision making, especially when the potential stakes are high. This highlights a separation between the possible roles of AI and what is currently considered acceptable. Developing an understanding of the concerns that underlie these views is a key aspect of addressing them and realising the many potential roles for AI.
Summary points
Public and patient views are substantially under-represented in current literature regarding opinions on implementing AI into radiology.
A combination of human and AI image interpretation is strongly favoured. AI working independently of human oversight is not widely supported.
Patients expect transparency in the implementation of AI, together with independent scientific validation, clear regulation and accountability.
Supplementary material
Additional supplementary material may be found in the online version of this article at www.rcpjournals.org/content/futurehosp
S1 – Search strategy
S2 – Study protocol
S3 – Quality assessment
S4 – References for studies containing the views of other stakeholders
© Royal College of Physicians 2023. All rights reserved.