Implicit bias in healthcare: clinical practice, research and decision making

ABSTRACT
Bias is a positive or negative evaluation of something or someone; implicit or unconscious bias occurs when the person is unaware that they are making such an evaluation. Bias is particularly relevant to policymaking during the coronavirus pandemic and to the racial inequality highlighted by the Black Lives Matter movement. A literature review was performed to define bias and to identify the impact of bias on clinical practice, research and clinical decision making (cognitive bias). Bias training could bridge the gap between a lack of awareness of bias and the ability to recognise bias in others and in ourselves. However, no debiasing strategy has yet been shown to be effective. Awareness of implicit bias must not deflect from wider socio-economic, political and structural barriers, nor ignore explicit bias such as prejudice.
Introduction
Bias is a positive or negative evaluation of something or someone; implicit or unconscious bias occurs when the person is unaware of their evaluation.1,2 It is negative implicit bias that is of particular concern within healthcare. Explicit bias, on the other hand, implies awareness that an evaluation is taking place. Bias can have a major impact on the way that clinicians conduct consultations and make decisions for patients, yet it receives little attention in medicine outside the teaching of clinical reasoning. Conversely, it is commonly highlighted in the world of business.3,4 A lack of awareness of implicit bias may perpetuate systemic inequalities, resulting, for example, in lower pay for clinicians from ethnic minorities and a lack of female surgeons in senior positions.5,6
Cognitive bias may explain political decisions during the coronavirus pandemic, such as framing ventilators as ‘lifesaving’ and investing in them over public health non-pharmaceutical measures: framing bias.7 Clinicians during the pandemic may have been tempted to prescribe medication despite a lack of clear evidence because inaction felt worse than action: action bias.8 Action bias may also have been exhibited by stressed members of the public panic buying groceries despite reassurance of stable supply.9 Cognitive bias may affect the way clinicians make decisions about healthcare given the novelty of the disease and the evolving evidence base. Politicians may prioritise resources towards goals with short-term benefit over long-term benefit; this might include increasing critical care capacity over public health investment: present bias.7 Given the amount of poorly reported and implemented non-peer-reviewed pre-print research during the pandemic, many clinicians may implement easily available research amplified by the media rather than taking a critical look at the data: availability bias.8,10 This may be compounded by physical and emotional stress. Media reporting of the coronavirus in the USA as the ‘Chinese virus’ was linked with increasing bias against east Asian Americans.11
This article aims to identify the potential impact of bias on clinical practice and research as well as clinical decision making (cognitive bias) and how biases may be mitigated overall.
Methods
A non-systematic literature review approach was used given the heterogeneous and mixed-method study of bias in healthcare; such a topic would be unamenable to systematic review methodology. Inclusion criteria were English-language articles identified by searching PubMed and the Cochrane database from January 1957 to December 2020 using the following search terms: ‘implicit bias’, ‘unconscious bias’, ‘cognitive bias’, and ‘diagnostic error and bias’. The highest level of evidence was prioritised for inclusion (such as recent systematic reviews, meta-analyses and literature reviews). Opinion articles were included to set context in the introduction and the discussion sections to identify possible future direction. Articles mentioning bias modification in clinical psychiatry were excluded as these focused on specific examples of clinical care rather than contributing to a broad overview of the potential impact of bias in medicine.
How does bias work and where does it come from?
Decision making can be understood to involve type 1 and type 2 processes (see Fig 1).12,13 Type 1 processes are fast, unconscious, ‘intuitive’ and require limited cognitive resources.13,14 They are often known as mental shortcuts or heuristics, which allow rapid decision making. In contrast, type 2 processes are slower, conscious, ‘analytic’ and require more cognitive resources.13 Together, these are known as dual process theory (DPT). Type 1 processing makes up the majority of decision making and is vulnerable to error. If errors occur across consecutive decisions, they can accumulate into systematic failures, such as when a car crash follows errors in some of the hundreds of tiny decisions made while driving.13 Despite the critique of implicit bias, such automatic decisions are necessary for human function, and this kind of pattern recognition may have developed in early humans to identify threats (such as predators) to secure survival.3 It is thought that our biases are formed in early life from reinforcement of social stereotypes, from our own learned experience and from the experience of those around us.15
Fig 1. Decision-making processes. a) The interaction between type 1 and type 2 processes allows diagnoses to be made from patient presentations. T = ‘toggle function’; the ability to switch between type 1 and type 2 processes. b) The type 1 processes that control calibration of decision making to make a diagnosis. Adapted with permission from Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf 2013;22(Suppl 2):ii58–64.
The Implicit Association Test (IAT) is the commonest measure of bias within the research literature. It was developed from review work which identified that much of social behaviour was unconscious or implicit and may contribute to unintended discrimination.16,17 The test involves users sorting words into groups as quickly and accurately as possible and comes in different categories, from disability to age, and even presidential popularity. For the gender-career IAT, one vignette might include sorting gender, or names (eg Ben or Julia), into the family or career categories. The ability of the IAT to predict social behaviour has been summarised in meta-analyses.18,19 Furthermore, Oswald and colleagues found that the IAT was not a predictor of markers of discrimination when looking at race and ethnicity.19
While the IAT is used widely in the research literature, opponents of the IAT highlight that it is unclear what the test actually measures, and comment that the test cannot differentiate between association and automatically activated responses.20 Furthermore, it is difficult to identify what those associations are, adding to the confusion about how to measure the activity of the unconscious mind. Given these conflicting views, while IAT testing is commonly used, it cannot be universally recommended.21 There are ethical concerns that the IAT could be used as a ‘predictive’ tool for crimes that have not yet occurred, or a ‘diagnostic’ tool for prejudice such as racism.22 The IAT should be used as a tool for self-reflection and learning, rather than a punitive measure of one's biases or stereotypes.23 The test also highlights individual deficiencies rather than system faults.
A systematic review focusing on the medical profession showed that most studies found healthcare professionals have negative bias towards non-White people, graded by the IAT, and that this bias was significantly associated with treatment decisions, treatment adherence and poorer patient outcomes (n=4,179; 15 studies).24 A further systematic review showed that healthcare professionals have negative bias in multiple categories, from race to disability, as graded by the IAT (n=17,185; 42 studies), but it did not link this to outcomes.25 These reviews bring into question healthcare provider impartiality, which may conflict with their ethical and moral obligations.25,26
Bias in clinical medicine
Using the IAT, US medical students (n=4,732) and doctors (n=2,284) were demonstrated to have weight bias (ie prejudice against those who are overweight or obese), which may stem from a lack of undergraduate education in the causes of obesity and how to consult sensitively.27–29 Many healthcare professionals believe that obesity is due to a lack of willpower and personal responsibility, but it may be due to other factors such as poverty and worsening generational insomnia.30–32 Similarly, the obesity IAT, evaluated across 71 countries (n=338,121) between 2006 and 2010, identified that overweight individuals had lower bias towards overweight people, while countries with high levels of obesity had greater bias towards obese people.33
There is evidence to corroborate anecdotal reports of female doctors being mistaken for nurses while at work, and of male members of staff and male students being mistaken for doctors despite the presence of a clear female leader.34,35 Boge and colleagues found that patients (n=150) were significantly less likely (by 17.1%) to recognise female consultants as leaders compared with their male counterparts, and significantly more likely (by 14%) to recognise female nurses as nurses compared with male nurses.34 In addition, female residents (registrars) receive significantly more negative evaluations from nursing staff than their male colleagues, despite similar objective clinical evaluations of male and female doctors.36,37
One alarming disparity that deserves mention is gender-specific differences in myocardial infarction presentation and survival. While members of both genders present with chest pain, women often present with what are described as ‘atypical’ symptoms, such as nausea, vomiting and palpitations.38,39 The label ‘atypical’ in the literature is misleading given that women make up half of an average population. Large cohort studies (n=23,809; n=82,196) have found in-hospital mortality to be 15–20% higher (adjusted odds ratios) for female patients compared with male patients, which contrasts with smaller cohorts (n=4,918; n=17,021) that have found no differences.40–43 Interviews with patients under the age of 55 (n=2,985) who had suffered myocardial infarctions revealed that women were 7.4% (absolute risk) more likely to seek medical attention, and were 16.7% less likely to be told their symptoms were cardiac in origin.44 These data indicate a need for education of the public and healthcare professionals alike about the symptoms of myocardial infarction in women.
In 2019, the MBRRACE-UK report revealed that maternal mortality in pregnancy was five times higher in Black women compared with White women, a disparity of a similar order of magnitude (three to four times) to that reported in US data.45,46 While official reports have not offered clear explanations as to the causes of such differences, it has been suggested that a combination of stigma, systemic racism and socio-economic inequality is causative, rather than biological factors alone.47,48 Lokugamage calls for healthcare professionals to challenge their own biases and assumptions when providing care using a ‘cultural safety’ model.49,50 Such a model could help identify power imbalances in the healthcare provider–patient relationship and the resultant inequalities. Cultural competence training has been evaluated in a Cochrane systematic review; the randomised controlled trials (RCTs) included showed that training courses (of varying lengths) provided some improvement in cultural competency and perceived care quality at 6–12 months’ follow-up (five studies; 337 professionals; 8,400 patients).51 However, there was limited effect on objective clinical markers, such as blood pressure control in ethnic minority patients.
Bias in research, evidence synthesis and policy
While scientific and medical research is thought to be free from outside influence, ‘science is always shaped by the time and the place in which it is carried out’.52 The research questions that are developed and answered depend on the culture and institutions in our societies, including public–private industry partnerships. During research conduct, minimisation of bias (specifically selection and measurement bias) is an important factor when attempting to produce generalisable and robust data. Canadian life science researchers note a consistent trend of small research institutions having a 42% lower chance of grant application success compared with large research institutions.53,54 Similarly, gender bias within the wider realm of research may discriminate against women in the selection of grant funding as well as in the hierarchical structure of promotion in academic institutions.55,56 At academic conferences and grand rounds, men were 21–46% more likely to be introduced by their professional titles by women compared with when women were introduced by men.57–59 Women were 8–25% more likely to introduce a fellow woman by her title compared with men introducing men. However, these differences were not always observed.60
Taking an international perspective, when an IAT was used to assess healthcare professionals’ and researchers’ (n=321) views on the quality of research emanating from ‘rich’ and ‘poor’ countries (assessed by gross domestic product), the majority associated ‘good’ (eg trustworthy and valuable) research with ‘rich’ countries.61 This alone does not mean much, but a randomised blinded crossover experiment (n=347) found that swapping the stated source of a research abstract from a low-income to a high-income country improved English healthcare professionals’ assessment of the research.62 A systematic review (three randomised controlled trials; n=2,568) found geographic bias favouring research from high-income countries or more prestigious journals over that from low-income countries or less prestigious journals.63 This highlights how publication bias favouring research from high-income countries could neglect a wealth of valid data from low-income countries that goes unpublished or is published only in lower-impact journals. These data highlight the need for more objective assessments of research, including multiple layers of blinding, with journal review boards and peer reviewers drawn from low-income countries.63 Blinding may also be beneficial when recruiting to jobs, given that application photos may influence the selection process at resident or registrar level.64 However, it may be difficult to anonymise citations or publication data during academic selection processes.
Bias comes into play during evidence generation and the application of evidence-based policy (EBP), where science-based, single-faceted solutions can seldom be applied to multi-faceted or ‘wicked’ problems.65 These problems are poorly defined, complex, dynamic issues where solutions may have unpredictable consequences (such as climate change or obesity).66 Parkhurst identifies two forms of evidentiary bias in policymaking that can occur in the creation, selection and interpretation of evidence: technical bias and issue bias.67 Technical bias is where use of the evidence does not follow scientific best practice, such as ‘cherry-picking’ rather than systematically reviewing the evidence to support a certain position. In contrast, issue bias occurs when the use of evidence shifts political debate in a certain direction, such as presenting a policy with evidence reflecting only one side of the debate.
Cognitive biases and diagnostic errors
Errors are inevitable in all forms of healthcare.68 The prevalence of diagnostic errors varies between different healthcare settings and may be partly due to cognitive factors as well as system-related factors.69,70 Systematic reviews (76 studies; 19,123 autopsies) of studies in which autopsies detected clinically important or ‘major’ errors involving the principal underlying disease or primary cause of death found an error rate of 23.5–28% in adult and child inpatient settings.71,72 A systematic review conducted in primary care identified a median error rate of 2.5 per 100 consultations or records reviewed (107 studies (nine systematic reviews and 98 primary studies); 128.8 million consultations/records).73 Existing human factors research, such as the use of checklists to decrease hospital-associated infections and perioperative mortality, supports emerging research linking bias to diagnostic errors.74–76
A systematic review assessing associations between cognitive biases and medical decisions found cognitive biases were associated with diagnostic inaccuracies in 36.5%–77% of case scenarios (7 studies; n=726) from mostly clinician survey-based data.77 There was an association found between cognitive bias and management errors in five studies (n=2,301). There was insufficient data to link physician biases and patient outcomes. The review was limited by a lack of definitions of the different types of cognitive biases in 40% of all studies (n=20) and a lack of systematic assessment of cognitive bias. Cognitive biases are one of several individual-related interweaving factors linked to errors, including inadequate communication, inadequate knowledge–experience skill set and not seeking help.78 There are many different types of cognitive bias which can be illustrated in the healthcare diagnostic context (see Table 1).
Table 1. Selected cognitive biases in a healthcare context, with definitions, illustrated with an example of a patient presenting with chest pain.79–86
Evidence-based bias training
Making diagnoses is thought to depend on the previously mentioned type 1 and type 2 processes that make up DPT.87 Despite this, there is a growing body of evidence suggesting that type 2 processing (‘thinking slow’) is not necessarily better than type 1 processing (‘thinking fast’) in clinicians.12,88–90 Furthermore, it has been suggested that proposed solutions to identify and minimise biases, such as reflection, cognitive forcing (strategies that force reconsideration of diagnoses) and debiasing checklists, have limited effect on bias and error reduction.91–94 Small-scale survey-based data (n=37) suggested the presence of hindsight bias, where clinicians disagreed on the exact cognitive biases involved depending on the outcome of a diagnostic error (see Table 1).95
A systematic review (28 studies; n=2,665) of cognitive interventions targeting DPT for medical students and qualified doctors found that several interventions had mixed or no significant results in decreasing diagnostic error rates.70 The vast majority of studies included small samples (n<200) and effects often did not extend beyond 4 weeks. Interventions included integration into educational curricula, checklists when making diagnoses, cognitive forcing, reflection and direct instruction. These interventions often come under the umbrella term of ‘meta-cognition’. A more recent systematic review and meta-analysis determined that diagnostic reflection improved diagnostic accuracy by 38% in medical students and doctors (n=1,336; 13 studies) with short-term follow-up.96 As reflection takes place after an initial diagnosis, this implies that decreasing bias can only occur after a potential diagnostic error has already taken place. The limited evidence base for decreasing bias may be due to methodological differences or intrinsic differences in study subjects in the clinical studies and reviews. Some clinicians may find a practical checklist helpful when providing healthcare in order to minimise their own biases when making decisions (Box 1).97–99 Decreasing bias through a single-faceted intervention may be very difficult as bias is a ‘wicked’ or multi-faceted problem.65 Unconventional methods of teaching bias may include teaching medical students in a non-clinical setting (such as a museum), a weekly series of case conferences examining health equity and implicit bias, and transformative learning theory.100–102 Transformative learning theory resembles what many consider to be key components of Balint groups and combines multiple single interventions (such as experience, reflection, discussion and simulation).102,103
Box 1. Suggested checklist for making good clinical decisions.97–99
Hagiwara and colleagues outlined three translational gaps from social psychology to medical training which may hinder the effectiveness of bias training in improving health outcomes.104 The first is a lack of evaluation of a person's motivation to change alongside bias awareness. The second is that bias training does not come with clear strategies to mitigate bias and may result in avoidance or overfriendliness, which may come across as contrived in specific situations (such as clinics with marginalised groups). The third is a lack of verbal and non-verbal communication training alongside bias training, given that communication is the mediator between bias and patient outcomes. Verbal communication training may include how to recognise and respond to micro-aggressions.105
Discussion
There are limited data to support reflective practice as a clear evidence-based strategy to decrease our biases at the clinician–patient level, but options such as cultural safety checklists and the previously outlined strategies (Box 1) could provide support to clinicians at the coalface.97–99 A better appreciation of biases in clinical reasoning could help clinicians reduce clinical errors, improve patient safety and provide better care for marginalised communities, who have the worst healthcare outcomes.106,107 It is hoped that training would help bridge the gap between a lack of awareness of bias and the ability to recognise bias in others and in ourselves, in order to mitigate personal biases and identify how discrimination may occur.108 Awareness of implicit bias allows individuals to examine their own reasoning in the workplace and the wider environment. It asks for personal accountability and a single question: ‘If this person were different in terms of race, age, gender, etc, would we treat them the same?’
However, there is a tension between those advocating bias training, which may increase awareness of bias, and the limited evidence for any effective debiasing strategy once biases have been identified.109 Advocates of bias training suggest that it should not be taught as an isolated topic but integrated into clinical specialty training.110 Others argue that bias training would be more effective if combined with measures of personal motivation, communication training and evidence-based strategies to decrease implicit bias.101 Similarly, IAT testing should be administered with appropriate caveats.
To our knowledge, at the time of writing, only the Royal College of Surgeons of England has identified the importance of unconscious bias through an information booklet.111 However, the aspiration of the booklet's title, Avoiding unconscious bias, seems unrealistic because type 1 processing is integral to human thinking. There is a need for better-powered research into the effectiveness of strategies that can decrease implicit and cognitive bias, especially in the long term. Furthermore, organisations should consider whether bias training should be integrated into undergraduate and postgraduate curricula, given that there are currently no proven effective debiasing strategies.
As we move into data-driven societies, the impact of bias becomes ever more important.112 A simple example is a step-counting mobile application that undercounted steps, probably because it was built to count steps for an ‘average person’, ignoring differences in gender, body mass index and ethnic origin.113 Within artificial intelligence, testing algorithms in different groups of people can help make them more applicable to diverse populations; ideally, algorithms created by diverse teams should limit bias and increase applicability.114,115
Since the Black Lives Matter movement, many institutions may consider implementing bias training to mitigate racism. However, awareness of implicit bias or tokenistic bias training must not deflect from the wider socio-economic, political and structural barriers that individuals face.116,117 Similarly, implicit bias should not be used to absolve responsibility, nor to ignore explicit bias that may perpetuate prejudice and stereotypes.117 Action to correct the lack of representation of non-White skin in the research literature and medical textbooks is welcome.118–120 Furthermore, there has been much work to challenge the role of biological race in clinical algorithms and guidance (such as estimated glomerular filtration rate and blood pressure).121,122 Most pertinent to the pandemic, Sjoding and colleagues compared almost 11,000 paired oxygen saturation measurements from pulse oximetry and arterial blood gases among Black and White patients.123 Black patients were 8–11% (in absolute terms; approximately a three-fold relative risk) more likely than White patients to have arterial saturations lower than their pulse oximetry readings suggested. This has implications for the coronavirus pandemic and respiratory conditions more broadly, and is a call to tackle racial bias in medical devices.
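To clarify the distinction between the absolute and relative differences quoted above, the following is an illustrative calculation using hypothetical round figures (not the actual proportions reported by Sjoding and colleagues):

```latex
% Illustrative only: hypothetical proportions, not data from the cited study.
% Suppose occult hypoxaemia occurs in 12% of group A and 4% of group B.
\[
p_A = 0.12, \qquad p_B = 0.04
\]
\[
\text{Absolute difference} = p_A - p_B = 0.12 - 0.04 = 0.08 \quad (8\ \text{percentage points})
\]
\[
\text{Relative risk} = \frac{p_A}{p_B} = \frac{0.12}{0.04} = 3
\]
```

An 8–11 percentage point absolute difference can therefore correspond to a roughly three-fold relative risk when the baseline proportion is small.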
With regard to the structure of our healthcare systems, an understanding of personal bias can help identify judgements made during recruitment processes and help build a leadership and workforce that are representative of the population the healthcare system serves.124 This is likely to help deliver better patient outcomes. Other strategies to decrease the impact of bias include using objective recruitment criteria, blind evaluations and salary disclosures.125 Additional measures include providing a system for reporting discrimination, measuring outcomes such as employee pay and hiring, and routinely measuring employee perceptions of inclusion and fairness. Such measures are fundamental to help mitigate inequality and its associated adversity.
Acknowledgements
We thank Prof Damien Ridge for his suggestions on this manuscript.
Conflicts of interest
Dipesh Gopal is an in-practice fellow supported by the Department of Health and Social Care and the National Institute for Health Research.
Disclaimer
The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
No honoraria were paid to promote the media cited, including books, podcasts and websites.
© Royal College of Physicians 2021. All rights reserved.
References
- Re:Work
- Lovallo D
- Appleby J
- Moberly T
- Halpern SD
- Ramnath VR
- Landucci F
- Glasziou PP
- Darling-Hammond S
- Kahneman D
- Croskerry P
- Project Implicit
- Goldhill O
- Jost JT
- Sukhera J
- Rubin R
- Kim TJ
- Gohil A
- Boge LA
- Cooke M
- Galvin SL
- Brucker K
- Kawamoto KR
- Mehta LS
- Hannan EL
- Hao Y
- Wei J
- Her AY
- Lichtman JH
- Mothers and Babies: Reducing Risk through Audits and Confidential Enquiries across the UK. Saving lives, improving mothers’ care: Lessons learned to inform maternity care from the UK and Ireland Confidential Enquiries into Maternal Deaths and Morbidity 2014–16. Oxford: National Perinatal Epidemiology Unit, University of Oxford, 2018. www.npeu.ox.ac.uk/mbrrace-uk/reports/confidential-enquiry-into-maternal-deaths [Accessed 26 February 2021].
- Review to Action
- Martin N
- Hamilton D
- Lokugamage A
- Horvat L
- Saini A
- Smith J
- Murray DL
- Conrad P
- Files JA
- Paradiso MM
- Davuluri M
- Skopec M
- Kassam A-F
- Parkhurst JO
- Australian Public Service Commission
- Parkhurst J
- Lambe KA
- Winters B
- Panesar SS
- Seshia SS
- Surry LT
- Cohen JM
- Banham-Hall E
- Kruger J
- Schwartz BD
- Sherbino J
- Sibbald M
- Zwaan L
- Prakash S
- Klein JG
- Stiegler M
- Browne AM
- Zeidan A
- Perdomo J
- Roberts M
- Hagiwara N
- Acholonu RG
- Atewologun D
- Schmidt HG
- Royal College of Surgeons of England
- Courtland R
- Why Aren't You A Doctor Yet? Episode 27: The internet is a repository of evil (ft. Alex Fefegha). iTunes, 2019. https://podcasts.apple.com/gb/podcast/episode-27-internet-is-repository-evil-ft-alex-fefegha/id1304737490?i=1000432446587 [Accessed 26 February 2021].
- Ezaydi S
- Pritlove C
- Mukwende M
- Massie JP
- Gopal DP
- McKenna H
- Arvizo C