What would it take to improve the quality of healthcare: more money, or more data?

Abstract
Despite a sustained and massive increase in spending on the NHS, the evidence that care has improved, other than in areas of performance that have been intensively managed or rewarded by additional cash bonuses, is poor to non-existent. This failure to achieve across-the-board improvement is attributable to the fact that the outcomes of healthcare are ‘system properties’ and are unlikely to improve as a result of more work being put through the same system; they will improve only if healthcare providers at all levels are actively encouraged to redesign the system to improve on current performance. The most important way to achieve the ‘will’ to make such changes is to use data, preferably collected with minimal additional work, to generate clinically convincing case-mix-adjusted analyses of quality of care. Examples are given from the centre-specific analyses published by the UK Renal Registry, a fully electronic registry that analyses data extracted directly from the renal information technology systems used in each renal centre that provides renal replacement therapy, and from other national and regional quality improvement programmes. The NHS has unrivalled opportunities to learn from high performance and to use this learning to narrow the gap between best and worst.
Introduction
The past five years have seen a sustained increase in government spending on the NHS. It is important to know whether this has been associated with an improvement in the quality of care.
Definitions
For the purposes of this review, I have adopted the dimensions of quality used by the Institute of Medicine – easily remembered using the acronym STEEEP:
safe
timely
efficient
effective
equitable
patient centred.1
Some of these dimensions are easier to measure than others. ‘Safety’ relates largely to the avoidance of harm caused by healthcare rather than by disease, and includes avoidance of healthcare-related infections, harm caused by drug treatment (eg interactions, allergic reactions, nephrotoxicity), avoidance of wrong-site surgery, and of thromboembolism, among many other avoidable harms. ‘Timely’ relates to avoidance of unwanted waits and delays for an initial consultation or for treatment. ‘Efficient’ means cost-effective – health gained per pound spent. ‘Effective’ is shorthand for evidence-based medicine. ‘Equitable’ conveys the concept that the quality of healthcare should be the same regardless of social or educational status, income, race, gender or sexual orientation. ‘Patient centred’ is a measure of the patient's experience of healthcare, including control, privacy, dignity and lack of fear – dimensions on which the NHS scores poorly in international comparisons (eg those conducted by the Commonwealth Fund) – best measured by patient surveys.
Has increased spending resulted in improved quality?
Total spending on the NHS increased from £44.9 billion to £86.4 billion between 2001/2 and 2005/6.2 Several reports have subsequently appeared on how this extra money was spent. The Wanless Report concluded that there had been major reductions in waiting times for treatment – for instance, the number of outpatients waiting more than 13 weeks to be seen fell from just over 500,000 in 1999 to fewer than 50,000 in 2005.3 A report from the Health Foundation found similar reductions in waiting times.4 A highly critical report from an independent thinktank conceded that the high-profile ‘targets’ (eg for staffing, facilities, waiting times, cancer treatment, treatment for coronary heart disease) had all been met, but at the expense of ‘abysmal’ performance on public health measures (eg rates of obesity), serious shortcomings in specialty areas not covered by targets (eg stroke, mental health), falling ‘productivity’ despite the new consultant contract, insignificant improvement in patients' involvement in their own care, and poor performance on many international comparisons, including cause-specific mortality.2 The National Audit Office measured the quality of out-of-hours services and found no correlation between the actual cost per head and the quality score.5 A Department of Health report on health inequalities demonstrated that there was an increase in the number of general practitioners (GPs) per head of population between 2002 and 2005. However, both in 2002 and in 2005, there was a linear gradient between GP provision and social deprivation, with fewer GPs per 100,000 population in the most deprived fifth of primary care trusts than in the least deprived – and with no reduction in this inequality between 2002 and 2005.6 The evidence that the increased expenditure on consultants' salaries incurred under the new consultant contract has resulted in improved efficiency or ‘productivity’ is nearly non-existent.3 In contrast, there is at least limited evidence that spending on the Quality and Outcomes Framework in the GP contract has resulted in improvements in those aspects of the quality of care specified within the framework.
It is reasonable to conclude from these observations that increased spending has improved performance only in areas that have been the subject of stringent targets and performance management, such as waiting times, but without overall improvement, and to the possible detriment of areas in which no targets were set. Targets and performance management have been associated with ‘gaming’.7,8 This is, to say the least, a disappointing result from a massive increase in investment.
Why might increased spending not have improved the overall quality of care?
Spending more money on the same healthcare system and expecting higher quality, rather than simply higher volume, is reminiscent of an aphorism attributed to Albert Einstein: ‘the definition of insanity is doing the same thing over and over again and expecting different results’. Or, in the phrase coined by Don Berwick and Paul Batalden – the ‘first law of improvement’ – ‘Every system is perfectly designed to achieve exactly the results it achieves’.9 This phrase captures the concept that quality of care is a system property, in exactly the same way that the quality of a manufacturing process or service industry is much more dependent on the systems and processes in place to achieve the desired outcome than on the individual human beings working within the system. (The phrase was itself ‘borrowed’ from W Edwards Deming, one of the gurus of quality improvement in manufacturing, who observed ‘If you have a stable system, then there is no use to specify a goal. You will get whatever the system will deliver. A goal beyond the capability of the system will not be reached’.10)
The ‘first law’ is perhaps slightly unfair in that healthcare systems are not ‘designed’ to achieve significant rates of wound infection, thromboembolism, catheter-related infection, ventilator-associated pneumonia, or over-anticoagulation, for example; but the point is that these and other events are system properties, and cannot be eradicated by exhorting people to work harder, to be more careful, to go on more refresher courses, or even by employing more people to work within the same system; the only way to achieve significant improvement in the quality of care is to redesign the system.
Most clinicians (particularly doctors) are initially reluctant to endorse this analysis, trained as they are to think about the care of individual patients. Resistance to ‘cookbook medicine’ – a term sometimes applied to system redesign that ‘makes the desired action the default’,11 and embeds evidence-based protocols into ‘the way we do things here’ – misses the point: a better analogy is that physicians should spend their time deciding which recipe to apply, and when to adjust it for guests with particular dietary requirements, rather than pretending to be able to memorise entire cookbooks while cooking for 25 different five-course dinner parties each day. Many of us have also come through a time in the NHS when there was significant underfunding compared to other health systems, which made it easy to argue that poor quality of care in the NHS was a direct result. This argument is more difficult to sustain today, when a massive increase in spending has increased the volume of care without a parallel increase in quality – analogous to a factory turning out more cars with the same defect rate. Increasingly, attention is being paid to the enormous waste incurred by poor systems, and to the opportunities for improving efficiency, as well as all other aspects of quality, by applying ‘lean thinking’ to healthcare.12
Observations from the UK Renal Registry
The concept of quality of care as a system property can be illustrated by examples from the UK Renal Registry (UKRR). The UKRR is a fully electronic disease registry that receives quarterly data extracts from information systems in routine clinical use in dialysis and transplant centres for the care of patients receiving renal replacement therapy (RRT) for established renal failure (ERF). These data extracts include demographic information, cause of ERF, co-morbidity at the start of RRT, and laboratory data. Reports are issued annually that include not just demographic data (eg incidence, prevalence, numbers of patients on each modality of RRT) but also non-anonymised reports, in which each centre is identified by name, on performance against a number of audit standards set by the Renal Association, including measures of correction of anaemia, control of serum calcium and phosphate, and haemodialysis dose. The UKRR is thus able to report on important measures of the quality of care.
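To make the mechanics concrete, the sketch below shows, in Python, the shape of a patient-quarter record such a registry might receive, together with a check against the haemodialysis-dose standard discussed below. The field names and record layout are illustrative assumptions for this article, not the UKRR's actual dataset specification.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PatientQuarter:
    """One patient-quarter in a registry extract (fields are illustrative)."""
    patient_id: str                      # pseudonymised identifier
    centre: str                          # treating renal centre
    quarter_end: date                    # end of the reporting quarter
    modality: str                        # e.g. "HD", "PD", "Tx"
    cause_of_erf: str                    # primary renal diagnosis
    comorbidity_at_start: list[str]      # co-morbidity recorded at start of RRT
    haemoglobin_g_dl: Optional[float]    # anaemia management
    phosphate_mmol_l: Optional[float]    # bone-mineral control
    urr_percent: Optional[float]         # haemodialysis dose (URR)

def meets_urr_standard(p: PatientQuarter, threshold: float = 65.0) -> Optional[bool]:
    """Test against the URR audit standard; None where URR was not measured."""
    return None if p.urr_percent is None else p.urr_percent >= threshold
```

Because the extract is drawn directly from systems already in clinical use, such analyses require minimal additional data collection by the centres themselves.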
Figure 1 shows the ‘caterpillar plots’ – league tables – summarising each centre's performance for a measure of haemodialysis dose (urea reduction ratio, URR), an important standard because low dialysis dose is associated with poor outcomes, including poor survival. Although the variation in performance (measured by the percentage of patients whose URR was ≥65%) lessened over time, the relative positions of the three highlighted centres remained largely constant. These differences are likely to reflect differing clinical processes relating to dialysis prescription rather than case mix or measurement artefacts. The stability of these processes can also be demonstrated using statistical process control charts, as in Figure 2.
Figure 1. Caterpillar plots illustrating the proportion of haemodialysis patients with a urea reduction ratio greater than 65% in the last quarter of the year in each centre reporting data to the UK Renal Registry in 2002 and in 2005. Three centres are highlighted to illustrate the general point that position in the ‘league table’ is stable from year to year, with occasional exceptions. These three centres held the same relative positions in 2003 and 2004. CI = confidence interval; URR = urea reduction ratio.
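The quantities behind each point in such a plot are simple to derive: for each centre, the percentage of measured patients achieving the standard, with a confidence interval that reflects how many patients were measured. Below is a minimal sketch using hypothetical counts; the Wilson score interval is assumed here for illustration, and may differ from the UKRR's exact method.

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion of k successes in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def league_table(counts: dict[str, tuple[int, int]]) -> list[tuple[str, float, float, float]]:
    """Rank centres by % of measured patients achieving URR >= 65%.

    counts maps centre name -> (patients achieving the standard, patients measured).
    Returns (centre, percent, ci_lower, ci_upper) sorted worst to best,
    ie the ordering plotted in a caterpillar chart.
    """
    rows = []
    for centre, (k, n) in counts.items():
        if n == 0:
            continue                      # no measured patients: nothing to plot
        lo, hi = wilson_ci(k, n)
        rows.append((centre, 100 * k / n, 100 * lo, 100 * hi))
    return sorted(rows, key=lambda row: row[1])

# Hypothetical counts for three centres
for row in league_table({"Centre A": (152, 200), "Centre B": (170, 200), "Centre C": (188, 200)}):
    print("%s: %.1f%% (95%% CI %.1f-%.1f)" % row)
```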
Figure 2. A statistical process control time series chart showing the quarterly proportion of haemodialysis patients achieving a urea reduction ratio greater than 65% in Bristol, the middle of the three centres highlighted in Figure 1. These charts are widely used in industrial quality control and increasingly in healthcare. The chart illustrates just how stable the process has been in Bristol over the last few years, and supports the use of data from the last quarter of the year as an overall measure of performance against a clinical standard. URR = urea reduction ratio.
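For readers unfamiliar with statistical process control, the control limits on such a chart can be derived from the binomial variation expected at each quarter's patient count; points falling within the limits suggest a stable process. A minimal p-chart sketch with hypothetical data:

```python
import math

def p_chart_limits(proportions: list[float], sizes: list[int]):
    """Centre line and 3-sigma control limits for a proportion over time.

    proportions[i]: fraction of patients with URR >= 65% in quarter i.
    sizes[i]:       number of patients measured in quarter i.
    """
    p_bar = sum(p * n for p, n in zip(proportions, sizes)) / sum(sizes)
    limits = []
    for n in sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)   # binomial standard error
        limits.append((max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

# Hypothetical stable process: every quarterly point falls within its limits
quarters = [0.82, 0.85, 0.84, 0.83]
p_bar, limits = p_chart_limits(quarters, [210, 205, 198, 201])
for p, (lcl, ucl) in zip(quarters, limits):
    print(f"{p:.2f}  limits {lcl:.3f}-{ucl:.3f}  in control: {lcl <= p <= ucl}")
```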
These examples are clearly of most interest to renal physicians, who need to learn from the high-performing centres. However, the lessons are also generic: outcomes of care are stable system properties, and understanding how to change the system is therefore critically important for improving the quality of care.
Redesigning complex systems – complexity theory and small tests of change
The Institute for Healthcare Improvement (IHI) teaches that three things are required to make change happen – will, ideas, and execution. In the words of Goethe, ‘It is not enough to have knowledge, one must also apply it. It is not enough to have wishes, one must also accomplish’.
Generating ‘will’ using data
The will to achieve improvement in the quality of healthcare might appear to be a ‘given’ – why would any clinician, or manager, not want to improve the quality of care? However, achieving any degree of system redesign is difficult. The barriers include fear, mistrust between clinicians and managers, and the conviction among clinicians – which students appear to learn at medical school – that all the problems of the NHS are caused by underfunding and/or managers. The most important key to breaking down these barriers is reliable, high-quality, clinically endorsed information on the quality of care and, in particular, on variations in quality that cannot be attributed to case mix, or that persist after statistical adjustment for case mix. Sooner or later, clinicians presented with information that the quality of care in their centre compares poorly with that in another centre with similar case mix, facilities, and funding will want to do something about it. The higher the quality of the analysis and presentation of this information, the more rapidly clinicians are likely to move through the ‘stages of facing reality’ – from denial, to blaming someone or something else, to accepting the need to learn how to change their own system of care.
Information on variations in the quality of care can be drawn from a variety of data sources. These include specialty-specific audits requiring additional data collection (eg the Myocardial Ischaemia National Audit Project (MINAP), the National Stroke Audit); data derived from administrative databases and hospital discharge codes (such as those reported by Dr Foster using the Hospital Episode Statistics (HES) database); and data drawn directly from clinical information systems used in routine clinical care (such as those used by the UKRR and by some district and regional diabetes databases). MINAP, to which nearly all acute hospitals have been contributing since 2003, has already reported major improvements in door-to-needle time and in 30-day mortality, and it is hard to escape the conclusion that these improvements are due at least in part to a ‘ratcheting up’ of standards of care caused by the display of each hospital's performance against the national aggregate.13,14 However, these and other similar audits require considerable investment in data collection systems and will never be applicable across the range of NHS practice.
Many clinicians distrust data derived from HES, having seen at first hand that discharge codes can be inaccurate and misleading about individual clinicians' outcomes, but these inaccuracies do not appear to introduce bias.15 Aylin et al compared the use of HES and of clinical databases to perform risk adjustment for survival after four major surgical procedures, and found that the model using HES performed at least as well as that using the clinical database.16 HES data are also used to generate the hospital standardised mortality ratio (HSMR), a measure of risk-adjusted mortality in hospitals: in Walsall, the information that the trust's HSMR was the highest of all acute trusts in England prompted a quality improvement and clinical governance programme that was followed by major and sustained improvements in HSMR over the subsequent four years.17 Another persuasive example of the power of routine data comes from South Manchester, which was named by Dr Foster as having the highest risk-adjusted mortality after coronary artery bypass grafting in 2001. Despite initial reluctance to accept these findings, a regional quality improvement programme was started, with open publication of each surgeon's results (www.nwheartaudit.nhs.uk), and five years later the same centre's risk-adjusted results are among the lowest in the country.18,19
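The HSMR calculation itself is arithmetically simple once a case-mix model has assigned each admission a predicted risk of death: it is the ratio of observed to expected deaths, scaled so that 100 represents mortality exactly in line with the reference model. A sketch, assuming the predicted risks are supplied by such a model (the figures below are hypothetical):

```python
def hsmr(observed_deaths: int, predicted_risks: list[float]) -> float:
    """Hospital standardised mortality ratio by indirect standardisation.

    predicted_risks holds one case-mix-adjusted probability of death per
    admission, from a reference model fitted to national data (assumed
    given here). HSMR = 100 x observed deaths / expected deaths.
    """
    expected = sum(predicted_risks)      # expected deaths under the model
    return 100 * observed_deaths / expected

# Hypothetical trust: 2,000 admissions with a mean predicted risk of 2%
# gives 40 expected deaths; 48 observed deaths yield an HSMR of 120.
print(hsmr(48, [0.02] * 2000))  # 120.0
```

The value of the measure therefore rests entirely on the credibility of the case-mix adjustment, which is why studies such as that of Aylin et al matter.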
Finding ideas for improvement
Once the will to improve has been generated, clinicians and managers need to know what to do, other than ‘try harder’, to improve outcomes. Often this is a question of the more reliable implementation of existing knowledge.20 The IHI's 100,000 lives campaign, for instance, taught US hospitals how to implement packages of evidence-based measures for central line care, ventilator care, medicines management, prevention of wound infection, and management of heart attacks.21–23 It might seem bizarre that so much effort is required to encourage better implementation of what we already know, but this is the crucial point: care is delivered in highly complex systems, and some systems work better than others. Learning from high-performing systems is key. Here the NHS still has major opportunities for improvement. Other than small-scale reports (eg the Delivering quality and value series from the NHS Institute for Innovation and Improvement), there is no systematic way of learning from high performance, or even of publicly identifying the high performers in a given area. The Healthcare Commission, for instance, identifies and helps poor performers, but has no power to seek out and describe best practice.
Describing best practice is methodologically complex: a study that identified six strategies that were associated with faster door-to-balloon time in acute myocardial infarction took several years, a preliminary qualitative study, and extensive field testing of a questionnaire.24 There are other barriers to understanding how to spread best practice, too: a recent quality improvement report that described successful reduction of central line-associated bacteraemia using a bundle of existing evidence-based interventions resulted in extensive investigation by the Office for Human Research Protections on the basis that the investigators had failed to obtain consent from all patients and providers – allowing the conclusion that ‘hospitals can freely implement practices they think will improve care so long as they don't investigate whether improvement actually occurs’.25,26 Similar barriers to the study of clinical processes – involving no threat at all to patients' autonomy or safety – occur in the UK, and require reform. There is hope that the Health Innovation Council will work to remove these barriers.
Making change happen in complex systems
Once an organisation has accepted that its performance could improve, and has garnered ideas to generate that improvement, it still has to work out how to implement those ideas. The existing, bureaucratic model, in which a detailed new policy is written, passed through a series of committees, and then announced, has been a comprehensive failure. The alternative is to empower those who work within the system to make very small, incremental changes to it while providing real-time feedback on performance, using the ‘model for improvement’ taught by the IHI and the NHS Institute.27–29 This methodology was used in several large and successful improvement collaboratives in the NHS. However, although these collaboratives exposed some individuals to improvement methodology, it remains a major challenge for the NHS to spread these skills widely enough to promote real and sustained improvement in all dimensions of the quality of healthcare. Without that change, no amount of investment will secure the quality of care we aspire to.
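One common way of providing that real-time feedback is a run chart of a process measure, with simple rules to distinguish genuine change from random variation. The sketch below implements one widely taught rule (six or more consecutive points on one side of the median) as an illustration; the rule choice and the data are assumptions for this article, not the specific prescription of the model for improvement.

```python
from statistics import median

def shift_signal(values: list[float], run_length: int = 6) -> bool:
    """Flag a run-chart 'shift': run_length consecutive points on one side
    of the series median (points exactly on the median are skipped)."""
    centre = median(values)
    side, run = 0, 0
    for v in values:
        if v == centre:
            continue                     # points on the median break no rule
        s = 1 if v > centre else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical weekly compliance with a care bundle, before and after a
# small test of change introduced in week 6.
weekly = [0.60, 0.62, 0.58, 0.61, 0.59, 0.74, 0.75, 0.73, 0.76, 0.77, 0.75, 0.72]
print(shift_signal(weekly))  # True: six consecutive post-change weeks above the median
```

Feedback of this kind lets a team see within weeks whether a small change has moved the system, rather than waiting for an annual report.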
Summary
Spending more on the current system will generate more healthcare of the same quality. Meaningful analyses of reliable, clinically relevant measures of the quality of healthcare can generate the will to make change happen. There are plenty of ideas for how to improve, though many more could be generated if we were systematically to study and learn from high performance rather than concentrating on performance management of underperformers. Making changes in complex systems is difficult, and can best be achieved by empowering those who understand and work within the system to make incremental changes in the right direction.
Footnotes
- This article is based on a regional lecture given in Birmingham on 4 October 2007
- © 2009 Royal College of Physicians
References
- Institute of Medicine. Crossing the quality chasm: a new health care system for the 21st century. Washington, DC: National Academy Press, 2001.
- Gubb J. The NHS and the NHS plan: is the extra money working? Civitas: Institute for the Study of Civil Society, 2006.
- Wanless D, Appleby J, Harrison A, Patel D. Our future health secured? A review of NHS funding and performance. London: King's Fund, 2007.
- Leatherman S, Sutherland K. A quality chartbook: patient and public experience in the NHS. Quest for Quality and Improved Performance. London: The Health Foundation, 2007.
- National Audit Office. The provision of out-of-hours care in England. London: National Audit Office, 2006.
- Department of Health. Tackling health inequalities: status report on the programme for action. London: DH, 2005.
- Bevan G, Hood C. Have targets improved performance in the English NHS? BMJ 2006;332:419–22. doi:10.1136/bmj.332.7538.419
- Pitches D, Burls A, Fry-Smith A. How to make a silk purse from a sow's ear – a comprehensive review of strategies to optimise data for corrupt managers and incompetent clinicians. BMJ 2003;327:1436–9.
- Berwick DM. A primer on leading the improvement of systems. BMJ 1996;312:619–22. doi:10.1136/bmj.312.7031.619
- Deming WE. Out of the crisis. Cambridge, MA: MIT Press, 2000.
-
- Jones D, Mitchell A. Lean thinking for the NHS. London: The NHS Confederation, 2006.
- Birkhead JS, Walker L, Pearson M et al. Improving care for patients with acute coronary syndromes: initial results from the National Audit of Myocardial Infarction Project (MINAP). Heart 2004;90:1004–9. doi:10.1136/hrt.2004.034470
- Walker L, Birkhead J, Weston C, Pearson J, Quinn T. Myocardial Infarction National Audit Project (MINAP): how the NHS manages heart attacks. Sixth public report 2007. London: Royal College of Physicians, 2007.
- Croft GP, Williams JG, Mann RY, Cohen D, Phillips CJ. Can hospital episode statistics support appraisal and revalidation? Randomised study of physician attitudes. Clin Med 2007;7:332–8.
- Aylin P, Bottle A, Majeed A. Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ 2007;334:1044. doi:10.1136/bmj.39168.496366.55
- Jarman B, Bottle A, Aylin P, Browne M. Monitoring changes in hospital standardised mortality ratios. BMJ 2005;330:329. doi:10.1136/bmj.330.7487.329
-
- Bridgewater B, Grayson AD, Brooks N et al. Has the publication of cardiac surgery outcome data been associated with changes in practice in northwest England: an analysis of 25,730 patients undergoing CABG surgery under 30 surgeons over eight years. Heart 2007;93:744–8. doi:10.1136/hrt.2006.106393
-
- McCannon CJ, Schall MW, Calkins DR, Nazem AG. Saving 100,000 lives in US hospitals. BMJ 2006;332:1328–30. doi:10.1136/bmj.332.7553.1328
- Tanne JH. US campaign to save 100,000 lives exceeds its target. BMJ 2006;332:1468. doi:10.1136/bmj.332.7556.1468-b
-
-
-
- Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance (1st edn). San Francisco: Jossey-Bass, 1996.
- NHS Institute for Innovation and Improvement. Improvement leaders' guide: improvement knowledge and skills. General improvement skills. London: NHS Institute for Innovation and Improvement, 2005.