The Lancet

Volume 363, Issue 9415, 3 April 2004, Pages 1147-1154

Series
Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma

https://doi.org/10.1016/S0140-6736(04)15901-1

Summary

The history of monitoring the outcomes of health care by external agencies can be traced to ancient times. However, the danger, now as then, is that in the search for improvement, comparative measures of mortality and morbidity are often overinterpreted, resulting in judgments about the underlying quality of care. Such judgments can translate into performance management strategies in the form of capricious sanctions (such as star ratings) and unjustified rewards (such as special freedoms or financial allocations). The resulting risk of stigmatising an entire institution injects huge tensions into health-care organisations and can divert attention from genuine improvement towards superficial improvement or even gaming behaviour (ie, manipulating the system). These dangers apply particularly to measures of outcome and throughput. We argue that comparative outcome data (league tables) should not be used by external agents to make judgments about quality of hospital care. Although they might provide a reasonable measure of quality in some high-risk surgical situations, they have little validity in acute medical settings. Their use to support a system of reward and punishment is unfair and, unsurprisingly, often resisted by clinicians and managers. We argue further that although outcome data are useful for research and monitoring trends within an organisation, those who wish to improve care for patients, and not to penalise doctors and managers, should concentrate on direct measurement of adherence to clinical and managerial standards.

Section snippets

Outcome data

During the 1980s and 1990s, the concept of outcome measurement became popular,14, 15, 16 based on the premise that outcome is the ultimate measure of quality of care.17 This notion can be traced back to Ernest Codman's end-results idea.12 Outcome data can be patient-rated (satisfaction and quality of life) or recorded by an external party (mortality and morbidity). For now, we use outcome as shorthand for observed mortality and morbidity. Use of outcomes to compare quality of care…

Outcomes—what do they tell us?

Outcomes are influenced by definitions, data quality, patient case-mix, clinical quality of care and chance.
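
To show how large the contribution of chance alone can be, the minimal simulation below (a sketch with hypothetical numbers, not data from any cited study) gives 50 hospitals an identical true mortality risk and equal numbers of admissions; the observed rates still spread widely, so a naive league table would label the extremes as best and worst performers.

import numpy as np

rng = np.random.default_rng(0)

n_hospitals = 50     # hypothetical number of units being compared
admissions = 500     # hypothetical admissions per unit
true_risk = 0.08     # identical underlying mortality risk for every unit

# Observed deaths differ only through sampling variation (chance).
deaths = rng.binomial(admissions, true_risk, size=n_hospitals)
observed_rates = deaths / admissions

order = np.argsort(observed_rates)
print(f"lowest observed rate:  {observed_rates[order[0]]:.3f}")
print(f"highest observed rate: {observed_rates[order[-1]]:.3f}")

With 500 admissions per unit and a true risk of 8%, observed rates of roughly 5-11% arise by chance alone, before definitions, data quality, or case-mix are even considered.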

Performance measures and indicators

The distinction between a measure of quality and an indicator of quality is important. Generally speaking, we have very few real measures of quality. For example, postoperative length of stay is a measure of the patient's hospital stay, but only an indicator of quality—eg, a patient's long stay might represent postoperative complications or poor…

Correlating quality of clinical care with outcomes

In several studies, researchers have combined case-mix adjusted outcomes with measures of clinical quality of care, obtained at the same time from the same institutions. Many of these investigators found no association between quality and outcome. For example, Park and colleagues37 recorded no correlation between outcome and quality of care for congestive heart failure or pneumonia. Best and Cowper38 noted no significant associations between outcome and quality of acute medical care in…

If not outcomes then what?

We need performance measures that better reflect the quality of care. We will return later to the distinction between performance management (systems of punishment or reward) and continual, non-judgmental improvement.

Figure 1 shows a conceptual map of factors that could affect the final quality and quantity of care. The map depicts a causal chain starting from the structures in which management processes are nested. Structural factors are those that cannot easily be affected at the organisational…

Structural factors and institutional processes

Several measurable structural and institutional factors are associated with clinical outcomes. Although few, if any, have been tested in prospective trials, the association is quite strong in some cases, and cause-and-effect conclusions are plausible, since one structural or institutional process can affect many clinical processes. For example, the quality of care at top-ranking and bottom-ranking Veterans Affairs hospitals in the USA was assessed with case-note review48 and structured site visits.49

Clinical process measures

Measurement of clinical processes offers advantages over outcome-based monitoring63, 64, 65 as a practical instrument to stimulate change. Clinical process measures should be based on agreed criteria, supported by evidence or logic, and include actions such as appropriate use of β blockers after acute myocardial infarction,66 the use of lower tidal volume in acute respiratory distress syndrome,67 and avoiding delay in the use of antibiotics in pneumonia.68
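
As an illustration of how adherence to such criteria might be summarised (a sketch only: the indicator names echo the examples above, but the counts and the wilson_ci helper are hypothetical), each process measure can be reported as the proportion of eligible patients who received the intervention, with a confidence interval that keeps small-sample uncertainty visible:

from math import sqrt

def wilson_ci(successes, n, z=1.96):
    # 95% Wilson score interval for a proportion (z = 1.96).
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical adherence counts (received, eligible) for three process measures.
measures = {
    "beta blocker after acute myocardial infarction": (178, 200),
    "lower tidal volume in ARDS": (41, 60),
    "early antibiotics in pneumonia": (88, 120),
}

for name, (received, eligible) in measures.items():
    lo, hi = wilson_ci(received, eligible)
    print(f"{name}: {received}/{eligible} = {received / eligible:.0%} "
          f"(95% CI {lo:.0%} to {hi:.0%})")

Reporting the numerator, denominator, and interval avoids the false precision of a single adherence percentage.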

Clinical process measures guide…

Throughput

Some process measures are based on management data rather than adherence to clinical standards. These measures include waiting lists, ambulance response times, and delays in accident and emergency departments. Such performance data are potentially useful for quality improvement, but when used for performance management they often lead to a focus on changing the numbers rather than genuinely improving the systems, just as quotas led Soviet farmers to play the system (eg, by deflating declared…

Patient-rated outcome

We believe that organisations should measure and respond to the opinions of their service users. We also believe that it is reasonable for external organisations (eg, head office, service commissioners) to ensure that service providers do that. However, it is wrong to compare organisations (for performance management purposes) on the basis of differences in patients' satisfaction or quality of life. Patient-rated outcomes vary by many features such as age, wealth, and ethnic background.75 So…

The role of performance monitoring—to judge or not to judge?

Imagine that you are managing a hospital ranked at the 98th centile for case-mix adjusted hospital mortality, and that this placement has contributed to your achieving only one star in a three-star grading. You would not know whether your poor showing was due to: differences in definitions and data quality; chance, although the effects of this are quantifiable statistically; case-mix differences for which risk adjustment was inadequate; structural factors affecting clinical processes;…
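
The claim that the effects of chance are quantifiable statistically can be made concrete with control limits of the kind used in funnel plots. The sketch below uses a hypothetical benchmark rate and denominator and a normal approximation to the binomial; it shows how wide the band of rates compatible with chance is for a unit of a given size.

from math import sqrt

def chance_band(p0, n, z):
    # Normal-approximation binomial control limits around a benchmark rate p0
    # for a unit with n cases; rates inside the band are consistent with chance.
    half = z * sqrt(p0 * (1 - p0) / n)
    return max(0.0, p0 - half), min(1.0, p0 + half)

p0 = 0.08   # hypothetical case-mix-adjusted expected mortality rate
n = 400     # hypothetical number of admissions for the unit under scrutiny

for label, z in (("95%", 1.96), ("99.8%", 3.09)):
    lo, hi = chance_band(p0, n, z)
    print(f"{label} chance-only band for n={n}: {lo:.3f} to {hi:.3f}")

With 400 admissions and an expected rate of 8%, observed rates between roughly 5% and 11% are compatible with chance at the 95% level; even a rate outside the outer band might reflect data quality, case mix, or structural factors rather than poor clinical care.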

A framework for improvement

Consider the cases of Dr Harold Shipman,87 Bristol,5 and a high-mortality cardiac surgeon.88 Although several outcome analyses have shown they could have been spotted earlier,88, 89, 90, 91 the implications for management were unclear until the reason for the unusual results had been investigated. Subsequent investigations uncovered a murderer in one case, poor systems of care in the second case,92 and a cardiac surgeon operating with an undiagnosed brain tumour in the third case. Even in these…

Search strategy and selection criteria

We started our search for papers on the correlation between quality of clinical care and outcomes with the classic 1997 contribution from Iezzoni.26 We identified cited papers dealing with the relation between quality and outcome, and obtained relevant MeSH headings, which we then used as the basis for a systematic literature search in MEDLINE. From the initial yield of more than 5000 papers, we identified those that attempted to quantify the association between quality of care and…

References (94)

  • Baker R, et al. Monitoring mortality rates in general practice after Shipman. BMJ (2003)
  • BRI Inquiry Panel. Learning from Bristol: the report of the public inquiry into children's heart surgery at the Bristol Royal Infirmary 1984–1995 (2001)
  • Good Hospital Guide for Britain and Ireland. London (2001)
  • Pronovost PJ, et al. Developing and implementing measures of quality of care in the intensive care unit. Curr Opin Crit Care (2001)
  • Gunning K, et al. ABC of intensive care: outcome data and scoring systems. BMJ (1999)
  • Knaus WA, et al. Variations in mortality and length of stay in intensive care units. Ann Intern Med (1993)
  • Spiegelhalter D. Surgical audit: statistical lessons from Nightingale and Codman. J R Stat Soc (1999)
  • Chambler AF, et al. Lord Moynihan cuts Codman into audit. Ann R Coll Surg Engl (1997)
  • Kaska SC, et al. Historical perspective. Ernest Amory Codman, 1869–1940. A pioneer of evidence-based medicine: the end result idea. Spine (1998)
  • Thomson RG, et al. Performance management at the crossroads in the NHS: don't go into the red. Qual Health Care (2000)
  • Ellwood PM. Shattuck lecture: outcomes management—a technology of patient experience. N Engl J Med (1988)
  • Epstein AM. The outcomes movement: will it get us where we want to go? N Engl J Med (1990)
  • Mulley A. Outcomes research: implications for policy and practice
  • Daley J, et al. Risk-adjusted surgical outcomes. Ann Rev Med (2001)
  • Glance LG, et al. Rating the quality of intensive care units: is it a function of the intensive care unit scoring system? Crit Care Med (2002)
  • Henderson J, et al. Recording of deaths in hospital information systems: implications for audit and outcome studies. J Epidemiol Community Health (1992)
  • Hartz AJ, et al. Comparing hospitals that perform coronary artery bypass surgery: the effect of outcome measures and data sources. Am J Public Health (1994)
  • Julious SA, et al. Crude rates of outcome. Br J Surg (2000)
  • McKee M, et al. Mortality league tables: do they inform or mislead? Qual Health Care (1995)
  • Iezzoni LI. Risk adjustment for measuring health care outcomes (1994)
  • Iezzoni LI. The risks of risk adjustment. JAMA (1997)
  • Normand SL, et al. Statistical methods for profiling providers of medical care: issues and applications. JAMA (1997)
  • Daley J. Validity of risk adjustment methods
  • Ash A, et al. Evaluating the performance of risk adjustment methods: dichotomous measures
  • Blumberg MS. Risk adjusting health care outcomes: a methodologic review. Med Care Rev (1986)
  • Goldstein H, et al. Statistical aspects of institutional performance: league tables and their limitations (with discussion). J R Stat Soc (1996)
  • Landrum M, et al. Analytical methods for constructing cross-sectional profiles of health care providers. Health Ser Outcomes Res Methodol (2000)
  • Gupta N, et al. Considerations in the development of Intensive Care Unit Report Cards. J Intensive Care Med (2002)
  • McQuillan P, et al. Confidential inquiry into quality of care before admission to intensive care. BMJ (1998)
  • Ebrahim S. Do not resuscitate decisions: flogging dead horses or a dignified death?—resuscitation should not be withheld from elderly people without discussion. BMJ (2000)
  • Park RE, et al. Explaining variations in hospital death rates: randomness, severity of illness, quality of care. JAMA (1990)
  • Best WR, et al. The ratio of observed-to-expected mortality as a quality of care indicator in non-surgical VA patients. Med Care (1994)
  • Jencks SF, et al. Interpreting hospital mortality data: the role of clinical risk adjustment. JAMA (1988)
  • Krumholz HM, et al. Evaluation of a consumer-oriented internet health care report card: the risk of quality ratings based on mortality data. JAMA (2002)
  • Thomas JW, et al. Validating risk-adjusted mortality as an indicator for quality of care. Inquiry (1993)
  • Kahn KL, et al. Measuring quality of care with explicit process criteria before and after implementation of the DRG-based prospective payment system. JAMA (1990)
  • Dubois RW, et al. Hospital inpatient mortality: is it a predictor of quality? N Engl J Med (1987)