Clinical Medicine Journal

  • ClinMed Home
  • Content
    • Current
    • Ahead of print
    • Archive
  • Author guidance
    • Instructions for authors
    • Submit online
  • About ClinMed
    • Scope
    • Editorial board
    • Policies
    • Information for reviewers
    • Advertising

User menu

  • Log in

Search

  • Advanced search
RCP Journals
Home
  • Log in
  • Home
  • Our journals
    • Clinical Medicine
    • Future Healthcare Journal
  • Subject collections
  • About the RCP
  • Contact us
Advanced

Clinical Medicine Journal

clinmedicine Logo
  • ClinMed Home
  • Content
    • Current
    • Ahead of print
    • Archive
  • Author guidance
    • Instructions for authors
    • Submit online
  • About ClinMed
    • Scope
    • Editorial board
    • Policies
    • Information for reviewers
    • Advertising

Learning from practice variation to improve the quality of care

Charles RV Tomson and Sabine N van der Veer
DOI: https://doi.org/10.7861/clinmedicine.13-1-19
Clin Med February 2013;13(1):19–23
Charles RV Tomson, consultant nephrologist, Department of Renal Medicine, Southmead Hospital, Bristol. Correspondence: charlie.tomson@nbt.nhs.uk
Sabine N van der Veer, postdoctoral researcher, Department of Medical Informatics, Academic Medical Centre, Amsterdam, Netherlands

Abstract

Modern medicine is complex and delivered by interdependent teams. Conscious redesign of the way in which these teams interact can contribute to improving the quality of care by reducing practice variation. This requires techniques that are different to those used for individual patient care. In this paper, we describe some of these quality improvement (QI) techniques. The first section deals with the identification of practice variation as the starting point of a systematic QI endeavour. This involves collecting data in multiple centres on a set of quality indicators as well as on case-mix variables that are thought to affect those indicators. Reporting the collected indicator data in longitudinal run charts supports teams in monitoring the effect of their QI effort. After identifying the opportunities for improvement, the second section discusses how to reduce practice variation. This includes selecting the ‘package’ of clinical actions to implement, identifying subsidiary actions to achieve the improvement aim, designing the implementation strategy and ways to incentivise QI.

Introduction

Over the past few decades, the practice of physicians has been transformed by a growing evidence base quantifying the benefits and risks of interventions in disease states, and the development of clinical practice guidelines summarising this evidence base. Although this has eroded the traditional autonomy of physicians to choose the treatment that they considered right for ‘their’ patient, considerable variation in the quality of care delivered to patients across the National Health Service (NHS) seems to persist.1,2 This implies that many patients leave healthcare encounters having not been offered treatments for their condition for which there is high-quality (Grading of Recommendations Assessment, Development and Evaluation (GRADE) level 1) evidence. This might partly be explained by the growing demand from patients that they be active partners in their own care, taking decisions on therapeutic options based on their own preferences and using topic-specific patient decision aids.3 Also, the clear tension between standardisation of care and the need to allow innovation4,5 might account for some of the observed variation. However, often the failure to offer treatments for which there is level 1 evidence is caused by systems that are not designed for reliable delivery of care and are susceptible to human factors.6 Therefore, service redesign is an important way to reduce practice variation and improve the quality of care.

Quality has several dimensions: we have adopted the definition of quality suggested by the Royal College of Physicians (RCP) (Box 1).7 Although there might, on occasion, be tensions between one dimension and another (most commonly between efficiency (health gain per pound spent) and the ‘clinical’ dimensions of patient experience, timeliness and safety), it is frequently the case that safer and more reliable systems of care delivery are also more efficient.8,9 Thus, service redesign, using quality improvement (QI) techniques, is also a good way to maintain clinical quality when budgets are under pressure. Here, we describe some of these techniques.

Box 1.

Dimensions of the quality of care, as defined by the Royal College of Physicians.7



Identifying practice variation

Measuring practice variation

The mantra ‘If you cannot measure it, you cannot improve it’ is widely used in QI and reflects the pivotal role of measurement in systematic attempts to decrease practice variation. At the core of any measurement initiative is a quality indicator set10 that ideally combines measures of structure, process and outcomes of care (Box 2).11,12 The choice of measures to monitor the success of QI is complex. Process measures are easier to influence, whereas outcome measures are more ‘meaningful’ clinically. The latter are also more susceptible to case-mix variation, care processes outside the direct control of the QI team and to variation in how the case mix is coded.10,13–16

Box 2.

Three categories of quality indicators.11



Practice variation can only be identified by collecting data from multiple providers or facilities and comparing the results. Therefore, measurement usually occurs at the regional or national level. Examples of such measurement initiatives are NHS programmes such as the Quality and Outcomes Framework (www.qof.ic.nhs.uk); the Patient Reported Outcomes Measures initiative (www.ic.nhs.uk/proms); the Intensive Care National Audit & Research Centre (www.icnarc.org); and the UK Renal Registry (www.renalreg.com).

Interpreting practice variation: the centre effect

If mortality from a particular condition is substantially higher among patients treated at centre A than at centre B, then some observers would conclude that centre B is ‘better’. However, in clinical practice, it can be difficult to be sure that differences in mortality, or in other frequently used clinical outcomes (eg length of stay, readmission rate or patient-reported outcomes), are really the result of differences in the quality of care, rather than of the case mix (‘But my patients are different: we treat sicker patients than other centres’).14

Therefore, valid comparison between centres requires collection of reliable data on variables that might affect outcome (eg age, comorbidity and functional status) as well as reliable coding of the primary diagnosis. ‘Coding depth’ should also be assessed: hospitals that are systematically better at coding for comorbidities will appear ‘better’ in comparisons of mortality after adjustment for comorbidity.17 ‘Gaming’ is also possible: because patients admitted for palliative care are not counted in most measures of adjusted hospital mortality, increasing the use of codes for palliative care will reduce apparent mortality, again without any real impact on clinical outcomes.18

Finally, collection of a reliable data set should be followed by a multilevel modelling statistical approach that analyses centre performance after adjustment for patient-level characteristics.19
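As a simpler illustration of case-mix adjustment (indirect standardisation rather than the full multilevel model recommended above), each patient can be assigned an expected risk from a patient-level risk model, and centres can then be compared on observed-to-expected (O/E) ratios instead of crude mortality. The patient records and risk predictions below are entirely hypothetical:

```python
# Sketch of case-mix adjustment by indirect standardisation: compare
# centres on observed-to-expected (O/E) mortality rather than crude
# mortality. A full analysis would use multilevel modelling instead.
from collections import defaultdict

# Hypothetical patient records: (centre, died, model-predicted risk)
patients = [
    ("A", 1, 0.30), ("A", 0, 0.25), ("A", 1, 0.40), ("A", 0, 0.35),
    ("B", 0, 0.05), ("B", 0, 0.10), ("B", 1, 0.08), ("B", 0, 0.07),
]

observed = defaultdict(int)    # deaths actually seen per centre
expected = defaultdict(float)  # deaths predicted from case mix
for centre, died, risk in patients:
    observed[centre] += died
    expected[centre] += risk

for centre in sorted(observed):
    oe = observed[centre] / expected[centre]
    print(f"Centre {centre}: observed={observed[centre]}, "
          f"expected={expected[centre]:.2f}, O/E={oe:.2f}")
```

In this invented example, centre B has the lower crude mortality (1/4 versus 2/4) but the higher O/E ratio, illustrating why comparisons without case-mix adjustment can mislead.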

Reporting practice variation

Quality improvement techniques are designed to allow teams to find, by repeated small tests of change, the best way of delivering a given package of changes within a given system, such as an outpatient clinic or operating theatre. The effects of these changes are measured and reported in ‘real time’, often using longitudinal ‘run charts’ (ie annotated time series charts) or statistical process control (SPC) charts.20,21 This form of continuous audit is preferable to the traditional audit cycle, in which an audit might be performed once a year or even less frequently, for several reasons. Assume that an audit is performed on a set of cases in September 2009, and then repeated by a different junior doctor, after implementation of changes to a protocol, in August 2010, and that the second audit shows improvement compared with the first. This improvement could be the result of the change in protocol, but could also be due to progressive improvement over time for reasons unrelated to the protocol, to month-to-month variation, or to differences in data collection between the two audits. Collection and graphical display of data each month would enable distinction between these possibilities and also ‘keeps teams to the task’, making it easier to see whether a change in practice is temporally related to a change in outcome. In addition, the use of ‘run chart rules’ in longitudinal SPC charts enables the team to decide whether the change in outcome is a statistically significant departure from the previous baseline (‘special cause’) or just part of the normal fluctuation in outcome measures (‘common cause’). However, to use SPC charts as a reliable reporting tool, they should be constructed with caution: a systematic review of the use of SPC charts in healthcare QI found that the single biggest methodological flaw was the failure to establish an adequate baseline before assessing the effects of a QI intervention.22
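The two kinds of signal described above can be sketched as follows. The monthly figures, sample size and baseline period are invented for illustration; as noted, a real SPC chart needs an adequate baseline:

```python
# A minimal sketch of (1) SPC limits (mean +/- 3 SD of a proportion)
# computed from a baseline period, and (2) the run-chart 'shift' rule
# (six or more consecutive points on one side of the baseline median).
from statistics import median

baseline = [0.20, 0.22, 0.18, 0.21, 0.19, 0.20, 0.23, 0.17]  # months 1-8
after    = [0.16, 0.15, 0.14, 0.15, 0.13, 0.14]              # post-change

n = 50  # hypothetical monthly sample size for the p-chart
p_bar = sum(baseline) / len(baseline)
sigma = (p_bar * (1 - p_bar) / n) ** 0.5
lcl, ucl = p_bar - 3 * sigma, p_bar + 3 * sigma

# 'Special cause' by the 3-sigma rule: any point outside the limits
special_cause = [p for p in after if p < lcl or p > ucl]

# Run-chart shift rule against the baseline median
base_median = median(baseline)
shift = all(p < base_median for p in after) and len(after) >= 6
```

In this hypothetical series no single point breaches the 3-sigma limits, but the shift rule does flag the change, showing how the two kinds of rule complement each other.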

Provision of ‘achievable benchmarks’, based on high performance, is more likely to incentivise QI teams than are conventional performance measures, which sometimes encourage the mind-set ‘We're not an outlier, so that's OK’.23,24
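The ‘achievable benchmark of care’ approach of refs 23 and 24 can be sketched roughly as follows: rank providers on the indicator, then pool the best performers until they cover at least 10% of all patients, and take the pooled rate in that subset as the benchmark. This simplified sketch, with invented audit data, omits the adjustment for small denominators used in the published method:

```python
# Rough sketch of an achievable benchmark: pooled rate among the
# top-ranked providers covering at least 10% of all patients.

# Hypothetical audit data: (provider, numerator, denominator)
providers = [("P1", 45, 50), ("P2", 30, 40), ("P3", 70, 100),
             ("P4", 20, 40), ("P5", 55, 60)]

total_patients = sum(d for _, _, d in providers)
ranked = sorted(providers, key=lambda p: p[1] / p[2], reverse=True)

num = den = 0
for _, successes, patients in ranked:
    num += successes
    den += patients
    if den >= 0.10 * total_patients:  # top performers covering >= 10%
        break

benchmark = num / den  # achievable benchmark rate
```

The resulting benchmark sits above the overall mean rate, giving teams a demonstrably achievable target rather than the reassurance that they are ‘not an outlier’.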

Reducing practice variation

Selecting a ‘package’ of changes to implement

It is axiomatic that ‘just trying harder’ within the current system will seldom if ever generate sustained improvement; rather, a conscious programme to change some aspect of the system of care is required. Clinicians will usually choose to work on improving outcomes in an area in which existing audits have shown performance to be poor, but might not know how to achieve improvement.

Most QI programmes focus on the implementation of a ‘change package’ (a set of clinical actions predicted to improve the outcome in question). This ‘change package’ is often derived from clinical practice guidelines, based in turn on the highest quality evidence available. For instance, the ‘change package’ used in the ‘100,000 lives campaign’ of the Institute for Healthcare Improvement, designed to reduce mortality after myocardial infarction, comprised practices derived from the American College of Cardiology/American Heart Association guidelines: prescription of aspirin early and at discharge; prescription of a beta-blocker early after admission and on discharge; prescription of angiotensin-converting enzyme inhibitor or angiotensin II receptor blockers (ARB) for patients with systolic dysfunction; timely reperfusion; and smoking cessation counselling.25

A change package might also comprise a set of practices underpinning the delivery of a given clinical intervention. For instance, Bradley et al. undertook a qualitative study to document the range of practices associated with low door-to-balloon time for primary angioplasty,26 and then performed a quantitative study to find out which candidate practices were independently associated with a fast door-to-balloon time after adjustment for potential confounders.27 Working on incorporating these practices (eg empowering ambulance staff to ‘phone ahead’ to prepare the angioplasty suite) would be a good example of a QI intervention.

Incentivising and accelerating improvement

Relying on each individual centre to develop its own QI strategy wastes the opportunity for centres to learn from each other. The QI literature contains many examples of programmes designed to accelerate QI by encouraging collaboration and mutual support.

A ‘collaborative’ is a programme in which teams from different centres work together on a specific topic (eg the prevention of wound infection after surgery) in a structured way. Typically, the teams meet for a 2-day period to be taught about the change package and the principles of QI methodology (often using the ‘Model for Improvement’28) that they will use to implement the package. The teams are encouraged to start testing changes immediately, using small, incremental ‘Plan, Do, Study, Act’ cycles, and to plot data in as close to real time as possible. Momentum is maintained by frequent exchange of progress reports between the participating teams, typically with a monthly teleconference and the use of an ‘extranet’ to post protocols, algorithms, checklists and results, with further physical meetings at 6 and 12 months. The collaborative model has been widely used in the NHS, for instance in the Safer Patients Initiative funded by the Health Foundation.29 Although evaluating the impact of such nationwide programmes is methodologically challenging, the overall conclusion was that any improvement seen in participating hospitals was similar to that seen in non-participating hospitals. A recent systematic review concluded that there was limited but positive evidence that collaboratives generate improvement.30

Alternative models to promote improvement include top-down regulation (eg performance management or closure of ‘outlier’ centres); financial incentives (eg the Quality and Outcomes framework in UK primary care, Commissioning for Quality and Innovation (CQUIN) payments, or non-payment of the costs of admissions complicated by a ‘never event’); in a market system, competition to attract referrals based on quality; or a ‘Call to Action’, based on the theory of social movements, generating ‘bottom-up’ improvement driven by a set of common values.

Achieving change in practice: ‘diagnosis’

The challenge for most QI efforts is that they occur in parallel with routine clinical work, which can seldom be suspended or even slowed down while changes are being implemented. The only exception is when a new system or care pathway is being designed ‘from scratch’, for instance when moving into a new hospital building. Success is most likely if the task is broken down into small segments, with different teams working on different parts of the care pathway. Often, a ‘driver diagram’ can help in deciding where to focus efforts to improve an important clinical outcome. Suppose, for example, that a dialysis unit has decided to reduce the incidence of sudden death among a population of patients undergoing haemodialysis. Fig 1 lists a few of the potential contributors to the high incidence of sudden death in this group, with some ‘secondary drivers’ that underlie each of the ‘primary drivers’. For each secondary driver, there will be a series of subsidiary actions that will help to achieve the aim. For instance, achieving ‘dietary restriction of high potassium foods’ could involve individual or group dietetic education sessions; provision of written information, for example diet sheets and recipes; providing patients with paper or web-based access to their test results and other strategies aimed at increasing patient ‘empowerment’; one-to-one psychologist input for ‘problem’ patients; and encouraging competition among patients.

Fig 1.

An example of a driver diagram to identify ways of reducing the incidence of sudden death in a dialysis centre. Below the top level, showing this overall improvement goal, the middle level lists primary ‘drivers’ that potentially contribute to the high incidence of sudden death. The bottom level contains secondary drivers underlying these primary drivers. Next, the dialysis centre can focus on selecting subsidiary actions that are expected to have an impact on the secondary drivers.
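A driver diagram of this kind can also be captured as a simple nested structure, so that a team can track work items against each secondary driver. The drivers below are illustrative only: the dietary example comes from the text, while the others are plausible placeholders rather than the contents of Fig 1:

```python
# Illustrative nested representation of a driver diagram; the drivers
# and actions are examples, not a reproduction of the published figure.
driver_diagram = {
    "aim": "Reduce sudden death in the dialysis population",
    "primary_drivers": {
        "Hyperkalaemia": [
            "Dietary restriction of high-potassium foods",
            "Avoid long interdialytic intervals",
        ],
        "Intradialytic hypotension": [
            "Review ultrafiltration rates",
            "Individualise dialysate temperature",
        ],
    },
}

# Flatten to (primary driver, secondary driver) pairs for tracking work
work_items = [(p, s)
              for p, secs in driver_diagram["primary_drivers"].items()
              for s in secs]
```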

‘Process mapping’ and ‘value stream mapping’ are techniques for breaking down a patient pathway into its component steps, while assessing the value that each step adds.31 They are useful in the ‘diagnostic’ phase of a QI programme, and can open the eyes of staff to opportunities to remove wasteful steps from the system. Having a patient participate (or, failing that, a staff member acting as a ‘mystery shopper’) often casts light on repetition and delays that individual members of staff, operating at different parts of the pathway, might be completely unaware of.

‘Barrier analysis’ is an additional way of designing QI interventions; questionnaires or surveys administered to staff whose practice would have to change to achieve improvement can identify barriers and facilitators of change, which can then be targeted in the QI intervention.32,33

Achieving change in practice: ‘treatment’

Once the diagnostic phase has been completed, and a clearly articulated aim and measurement strategy have been defined, the remaining challenge is to design the implementation strategy. Although this depends on the changes being implemented, there are some generic lessons from the QI literature. In our systematic review of QI interventions in renal replacement therapy, for instance, we classified implementation strategies into educational activities; audit and feedback; treatment protocols and algorithms; reminders and prompts; structural changes; changes in staff roles or responsibilities; financial strategies; and patient-oriented strategies. We found weak evidence, confirming other studies,34 that multifaceted interventions are more likely to generate improvement than is a single intervention.35

The concept of a ‘care bundle’ of measures, all of which should be reliably applied in a given clinical situation, is an extension of the use of treatment protocols, in which adherence is measured as ‘all or none’ rather than measuring adherence to each facet of the intervention independently.36
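The difference between per-item and ‘all or none’ measurement can be made concrete with a small sketch; the bundle elements and patient records below are hypothetical:

```python
# Contrast per-item adherence with the 'all or none' measure used for
# care bundles: a patient counts as compliant only if every element of
# the bundle was delivered. All data here are hypothetical.
bundle = ["aspirin", "beta_blocker", "ace_or_arb", "smoking_advice"]

# Each record maps bundle elements to whether they were delivered
patients = [
    {"aspirin": True, "beta_blocker": True,  "ace_or_arb": True,  "smoking_advice": True},
    {"aspirin": True, "beta_blocker": True,  "ace_or_arb": False, "smoking_advice": True},
    {"aspirin": True, "beta_blocker": False, "ace_or_arb": True,  "smoking_advice": True},
    {"aspirin": True, "beta_blocker": True,  "ace_or_arb": True,  "smoking_advice": False},
]

# Adherence to each element measured independently
per_item = {e: sum(p[e] for p in patients) / len(patients) for e in bundle}

# 'All or none': fraction of patients who received the complete bundle
all_or_none = sum(all(p[e] for e in bundle) for p in patients) / len(patients)
```

Every element here is delivered at least 75% of the time, yet only a quarter of patients receive the complete bundle, which is why ‘all or none’ measurement sets a much more demanding standard.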

Evaluating QI

It is more difficult to evaluate QI interventions than conventional medical treatments, for instance drug treatment, and some have argued that different standards of proof should be applied. There are undoubted methodological problems in evaluating QI interventions, because the ‘unit of randomisation’ is often at organisational level (eg a hospital or outpatient clinic) rather than at the level of the individual patient. However, randomised controlled trials are possible,24,37 and other methodologies (eg rigorous time series analyses or stepped wedge design) can also provide convincing evidence. In addition, given that potential confounders are more difficult to predict than in traditional observational epidemiology, and that QI interventions cost money and might well have unintended consequences, rigorous evaluation should be considered mandatory.38

Summary and conclusions

The evolving science of QI recognises explicitly that modern medicine is delivered by complex, interdependent teams and that conscious redesign of the way in which teams interact is at least as important as the competencies of any single individual within the team. This requires different language and techniques to those used for individual patient care. Those who resist ‘cookbook medicine’ (an ill-chosen term for the delivery of reliable, high-efficiency, protocol-driven care) and imply that it is preferable for each individual doctor to decide the best treatment for ‘their’ patient in each and every clinical situation, based on their own critical analysis of the literature, are yesterday's men. Our energies should rather be spent on learning when a given situation justifies departing from ‘the way we do it round here’.

  • © 2013 Royal College of Physicians

References

1. Steel N, Bachmann M, Maisey S, et al. Self reported receipt of care consistent with 32 quality indicators: national population survey of adults aged 50 or more in England. Br Med J 2008;337:a957. doi:10.1136/bmj.a957
2. NHS Right Care. NHS atlas of variation (ver 2.0). London: NHS, 2011.
3. Mathews SC, Pronovost PJ. Physician autonomy and informed decision making: finding the balance for patient safety and quality. JAMA 2008;300:2913–5. doi:10.1001/jama.2008.846
4. Richards S. Should the NHS strive to eradicate all unexplained variation? Yes. Br Med J 2009;339:b4811. doi:10.1136/bmj.b4811
5. Lilford RJ. Should the NHS strive to eradicate all unexplained variation? No. Br Med J 2009;339:b4809. doi:10.1136/bmj.b4809
6. Klein JG. Five pitfalls in decisions about diagnosis and prescribing. Br Med J 2005;330:781–3. doi:10.1136/bmj.330.7494.781
7. Atkinson S, Ingham J, Cheshire M, Went S. Defining quality and quality improvement. Clin Med 2010;10:537–9.
8. Ovretveit J. Does improving quality save money? A review of evidence of which improvements to quality reduce costs to health service providers. London: The Health Foundation, 2009.
9. Tomson CR. What would it take to improve the quality of healthcare: more money, or more data? Clin Med 2009;9:140–4.
10. Freeman T. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Serv Manage Res 2002;15:126–37. doi:10.1258/0951484021912897
11. Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q 2005;83:691–729. doi:10.1111/j.1468-0009.2005.00397.x
12. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet 2004;363:1147–54. doi:10.1016/S0140-6736(04)15901-1
13. Smith KA, Hayward RA. Performance measurement in chronic kidney disease. J Am Soc Nephrol 2011;22:225–34. doi:10.1681/ASN.2010111152
14. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won't go away. Br Med J 2010;340:c2016. doi:10.1136/bmj.c2016
15. Porter ME. What is value in health care? N Engl J Med 2010;363:2477–81. doi:10.1056/NEJMp1011024
16. Campbell MJ, Jacques RM, Fotheringham J, et al. Developing a summary hospital mortality index: retrospective analysis in English hospitals over five years. Br Med J 2012;344:e1001. doi:10.1136/bmj.e1001
17. Song Y, Skinner J, Bynum J, et al. Regional variations in diagnostic practices. N Engl J Med 2010;363:45–53. doi:10.1056/NEJMsa0910881
18. Hawkes N. Patient coding and the ratings game. Br Med J 2010;340:c2153. doi:10.1136/bmj.c2153
19. Hodsman A, Ben-Shlomo Y, Roderick P, Tomson CR. The ‘centre effect’ in nephrology: what do differences between nephrology centres tell us about clinical performance in patient management? Nephron Clin Pract 2011;119:c10–7. doi:10.1159/000321378
20. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care 2003;12:458–64. doi:10.1136/qhc.12.6.458
21. Thor J, Lundberg J, Ask J, et al. Application of statistical process control in healthcare improvement: systematic review. Qual Saf Health Care 2007;16:387–99. doi:10.1136/qshc.2006.022194
22. Koetsier A, van der Veer SN, Jager KJ, et al. Control charts in healthcare quality improvement. A systematic review on adherence to methodological criteria. Methods Inf Med 2012;51:189–98. doi:10.3414/ME11-01-0055
23. Kiefe CI, Weissman NW, Allison JJ, et al. Identifying achievable benchmarks of care: concepts and methodology. Int J Qual Health Care 1998;10:443–7. doi:10.1093/intqhc/10.5.443
24. Kiefe CI, Allison JJ, Williams OD, et al. Improving quality improvement using achievable benchmarks for physician feedback: a randomized controlled trial. JAMA 2001;285:2871–9. doi:10.1001/jama.285.22.2871
25. Berwick DM, Calkins DR, McCannon CJ, Hackbarth AD. The 100,000 lives campaign: setting a goal and a deadline for improving health care quality. JAMA 2006;295:324–7. doi:10.1001/jama.295.3.324
26. Bradley EH, Roumanis SA, Radford MJ, et al. Achieving door-to-balloon times that meet quality guidelines: how do successful hospitals do it? J Am Coll Cardiol 2005;46:1236–41. doi:10.1016/j.jacc.2005.07.009
27. Bradley EH, Herrin J, Wang Y, et al. Strategies for reducing the door-to-balloon time in acute myocardial infarction. N Engl J Med 2006;355:2308–20. doi:10.1056/NEJMsa063117
28. Berwick DM, Nolan TW. Physicians as leaders in improving health care: a new series in Annals of Internal Medicine. Ann Intern Med 1998;128:289–92.
29. Benning A, Dixon-Woods M, Nwulu U, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. Br Med J 2011;342:d199. doi:10.1136/bmj.d199
30. Schouten LM, Hulscher ME, van Everdingen JJ, et al. Evidence for the impact of quality improvement collaboratives: systematic review. Br Med J 2008;336:1491–4. doi:10.1136/bmj.39570.749884.BE
31. Trebble TM, Hansi N, Hydes T, et al. Process mapping the patient journey: an introduction. Br Med J 2010;341:c4078. doi:10.1136/bmj.c4078
32. van der Veer SN, de Vos ML, Jager KJ, et al. Evaluating the effectiveness of a tailored multifaceted performance feedback intervention to improve the quality of care: protocol for a cluster randomized trial in intensive care. Implement Sci 2011;6:119. doi:10.1186/1748-5908-6-119
33. Bosch M, van der Weijden T, Wensing M, Grol R. Tailoring quality improvement interventions to identified barriers: a multiple case analysis. J Eval Clin Pract 2007;13:161–8. doi:10.1111/j.1365-2753.2006.00660.x
34. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care 2001;39(Suppl 2):II2–45. doi:10.1097/00005650-200108002-00002
35. van der Veer SN, Jager KJ, Nache AM, et al. Translating knowledge on best practice into improving quality of RRT care: a systematic review of implementation strategies. Kidney Int 2011;80:1021–34. doi:10.1038/ki.2011.222
36. Robb E, Jarman B, Suntharalingam G, et al. Using care bundles to reduce in-hospital mortality: quantitative survey. Br Med J 2010;340:c1234. doi:10.1136/bmj.c1234
37. McClellan WM, Hodgin E, Pastan S, et al. A randomized evaluation of two health care quality improvement program (HCQIP) interventions to improve the adequacy of hemodialysis care of ESRD patients: feedback alone versus intensive intervention. J Am Soc Nephrol 2004;15:754–60. doi:10.1097/01.ASN.0000115701.51613.D7
38. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med 2007;357:608–13. doi:10.1056/NEJMsb070738