Clinical Medicine Journal

The white papers, quality indicators and clinical responsibility

Andrew Spencer
DOI: https://doi.org/10.7861/clinmedicine.12-1-19
Clin Med February 2012;12(1):19–22
University Hospital, North Staffordshire
NHS Information Centre
Roles: Consultant paediatrician, National clinical lead for hospital specialties
  • For correspondence: andy.spencer@doctors.org.uk

Abstract

The coalition government has set out its stall in a cluster of white paper consultation documents. One theme to emerge is a commitment to monitoring outcomes. It is outcomes, underpinned by National Institute for Health and Clinical Excellence quality standards, that are to be used to regulate the NHS, and these will be made available to the public. This paper sets out the importance of measuring quality in the NHS and some of the principles involved in the analysis, presentation and interpretation of results. Clinicians have a duty to improve patient care, and measurement and comparison are among the tools at their disposal. Clinical involvement in the development of metrics and quality indicators is essential for meaningful results and it is vital that clinicians now take ownership of the quality of the clinical data captured on their patients.

Key Words
  • hospital episode statistics (HES)
  • informatics
  • metrics
  • quality indicators

Introduction

The coalition government has set out its stall in a cluster of white paper consultation documents.1–3 A number of themes emerge but one that will be of direct interest to frontline clinicians is the commitment to monitoring outcomes.1 It is outcomes that are to be used to regulate the NHS and these will be made available to the public on government websites,4 through NHS Choices5 and by a number of independent information specialists.

It is important for frontline clinicians to understand the reasons for this approach. The World Health Organization's (WHO's) World health report 2000 was devoted to improving performance.6 The introductory paragraph states ‘The difference between a well-performing health system and one that is failing can be measured in death, disability, impoverishment, humiliation and despair’. Strong words perhaps, but the clear implication is that any government of a civilised society requires information to reassure itself, its subjects and the international community of the quality and equity of its healthcare system. Internationally it is not just WHO that takes an interest in such matters; the Organisation for Economic Cooperation and Development (OECD) has a Health Care Quality Indicator project7 which aims to develop a set of indicators to compare health services across member countries (including the UK).

In order to achieve optimal outcomes, the government has recognised the importance of adherence to quality standards and the National Institute for Health and Clinical Excellence (NICE) has been given the task of working with professionals to develop these standards for priority areas. Assessment of service quality requires the development of metrics and quality indicators.

Metrics and quality indicators

A metric refers to multiple or sequential numerical measurements of an attribute of a patient or service. For example, the monthly measurement of methicillin-resistant Staphylococcus aureus (MRSA) incidence is an MRSA metric.

A quality indicator (QI) is the use of one or more measures or metrics to provide information about change in the context of an objective, target or goal. For example, the reduction in overall incidence of MRSA in an institution over time is an indicator of the level of achievement of infection control objectives. QIs do not provide a direct measure of service quality, but they do indicate which services are likely to benefit from further investigation. Metrics and QIs are first and foremost tools that healthcare professionals can use to improve the quality of the services they provide. It was Lord Kelvin (1824–1907) who said ‘if you cannot measure it, you cannot improve it’. Although this statement may not be true in its entirety, measurement is still a good starting point for those services which are amenable to this approach. The measurement of healthcare can be divided into the following three broad categories.8

Adequacy of service provision

The structure of the service may be examined to determine whether published standards are met such as those in the national service frameworks. This could include the availability of facilities as well as having the appropriate number of trained staff and the required protocols and pathways in place.

Process of care

This relates to the provision of optimal clinical practice and may be used to monitor whether guidelines or care pathways are followed. Examples might include:

  • the time to brain scan in patients developing acute stroke as this is critical for effective thrombolysis treatment

  • the percentage of mothers in preterm labour receiving timely antenatal steroids as this has been proven to reduce the severity of lung disease in infants.

The value of these measurements is determined by the underlying evidence base and its importance to clinical outcomes. Process measures are the easiest to interpret and the most likely to result in rapid improvement. On the other hand, surfacing the information can be expensive, especially if case note review is required.
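A process measure such as the time-to-scan example above reduces to a simple calculation over admission records. The sketch below is illustrative only: the patient data, field names and the 60-minute target are all invented, and a real trust would draw the timestamps from its PAS rather than hard-code them.

```python
from datetime import datetime

# Hypothetical admission records for suspected stroke patients:
# arrival time and time of brain scan (all values invented).
patients = [
    {"arrival": datetime(2011, 5, 1, 9, 0),  "scan": datetime(2011, 5, 1, 9, 40)},
    {"arrival": datetime(2011, 5, 1, 11, 0), "scan": datetime(2011, 5, 1, 13, 30)},
    {"arrival": datetime(2011, 5, 2, 8, 15), "scan": datetime(2011, 5, 2, 8, 55)},
]

TARGET_MINUTES = 60  # assumed local door-to-scan target, not a published standard

def door_to_scan_compliance(records, target_minutes):
    """Proportion of patients scanned within the target interval."""
    within = sum(
        1 for r in records
        if (r["scan"] - r["arrival"]).total_seconds() / 60 <= target_minutes
    )
    return within / len(records)

print(f"{door_to_scan_compliance(patients, TARGET_MINUTES):.0%}")  # two of three within target
```

Reported monthly, this single proportion becomes a metric; set against a pathway target, it becomes a process QI.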

Outcome of care

This relates to the benefits which patients experience as a result of care and is the domain which has received the greatest emphasis in the white paper documents. Although judged to be most important, indicators in this category are the most difficult to interpret when used to assess service quality, because the outcome of any treatment will depend on a multitude of factors. For example, mortality will depend upon the illness severity, age of the patient and co-morbidities present, to name a few. Also, mortality may be affected by decisions taken by patients (how soon they seek medical advice), general practitioners (when and to whom they refer), hospital clinicians (accuracy of diagnosis and effective and timely treatment) and community services (quality of aftercare, social and family support). For these reasons, outcome measures are often subject to statistical adjustment to make allowance for some of the confounding factors.

Importance of good quality data

Measuring the quality of care, whether it is by process or outcome, requires good quality data. The most comprehensive dataset available for England is the hospital episode statistics (HES).9 Collected on all hospital admissions since 1989, this national resource contains coded information about diagnoses and procedures on every inpatient. These data are collected using a trust's patient administration systems (PAS). HES data which are held by the NHS Information Centre (IC) are linked10 to the national mortality statistics available through the Office for National Statistics (ONS) and can also be linked by the IC to other databases.

One of the perceived problems with HES is the fact that clinicians are often divorced from the process of coding the data, leading to inaccuracy.11 The Royal College of Physicians (RCP) has done a great deal of work through the iLab project to attempt to improve clinical engagement.12 They concluded that HES was not suitable for monitoring the performance of individual consultant physicians because it was originally designed for administrative purposes and does not relate well to current working practices. Furthermore, longstanding clinical disengagement from the validation and use of HES data was cited as one of the reasons for poor data quality. In a previous paper13 there was a call for a change in culture and process along with much greater clinician engagement in data collection and validation. In recent years some clinicians have become more aware of the importance of coding because of Payment by Results (PbR),14 although this may lead to coding to maximise income rather than for clinical accuracy. However, it is essential that clinicians become much more involved in the future as there is an intent to use this information to judge the quality of care they provide.3 Indeed, HES data are already used to develop quality indicators which are published by NHS Choices.5 A discussion document outlining seven key issues that need to be improved to make HES more usable and clinically relevant has been published by the Academy of Royal Medical Colleges.15

The white paper documents3 suggest that QIs will be published at increasing levels of disaggregation from trust level, down to specialty and even consultant-led teams. As data become more disaggregated, casemix adjustment becomes more difficult and data quality more critical. Other data sources, such as national audits and specialist databases, will also be used, but linkage to HES often enhances the scope of the information available. At a regional level, HES data are used by the quality observatories16 to support quality monitoring and improvement.

Mortality ratios

Mortality is one of the easiest outcomes to measure as it is unequivocal and always accurately recorded by ONS. In order to compare death rates between organisations or geographical locations the standardised mortality ratio (SMR) has been developed.17 In this method all deaths following a procedure are used to determine the relative contribution of independent variables such as age, sex, co-morbidities and illness severity. Each patient is allocated a risk of death. If the organisation achieves a death rate that is matched by the risk profile of its patients then the SMR will be 100%. A low SMR indicates that the service is doing better than expected and a high value the converse. In the hospital standardised mortality ratio (HSMR) this technique is applied to a basket of 56 common diagnostic groups, in fact the ones that are associated with 80% of hospital deaths.18 HES data are used as the data source for the risk adjustment calculations. Recently a new mortality indicator has been developed for the NHS, called the summary hospital level mortality indicator (SHMI).19 This is not an indicator of quality; its value lies in the opportunity to flag up hospitals with excessively high mortality so that hospital management boards can investigate and determine whether there is a problem that needs to be addressed. Ranking hospitals by SHMI will not be useful, but the values are likely to be mandated as part of a trust's ‘quality accounts’.
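The arithmetic behind an SMR can be shown with a toy cohort. The per-patient risks below merely stand in for the output of a casemix model; in the HSMR and SHMI those risks are fitted from HES fields, and none of the figures here are real.

```python
def smr(patients):
    """Standardised mortality ratio as a percentage: observed deaths
    divided by the deaths expected from each patient's modelled risk."""
    observed = sum(p["died"] for p in patients)
    expected = sum(p["risk"] for p in patients)  # sum of modelled death probabilities
    return 100 * observed / expected

# Hypothetical cohort: each patient carries a modelled risk of death
# (derived from age, sex, co-morbidities, severity) and an actual outcome.
cohort = [
    {"risk": 0.05, "died": False},
    {"risk": 0.20, "died": True},
    {"risk": 0.50, "died": True},
    {"risk": 0.25, "died": False},
]

print(f"SMR = {smr(cohort):.0f}%")  # 2 observed deaths vs 1.0 expected -> 200%
```

An SMR of 100% means deaths match the casemix-adjusted expectation; the toy cohort above has twice the expected deaths, hence 200%.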

Patient-reported outcome measures

Another approach to obtaining outcome data is to determine how the patients evaluate the results of treatment using patient-reported outcome measures (PROMs).20 This is not easy because the questions used have to be validated for each condition. Standardised quality of life questionnaires may be used to make comparisons across conditions, for example to determine whether patients appear to gain more benefit from hernia or knee surgery. However, if a small hernia does not impact on quality of life scores, then the repair will not show improvement. This does not mean that hernia repair is not worthwhile. Hernia repair might rate as highly beneficial on a questionnaire validated for evaluating this procedure. For this reason, both types of assessments are necessary. PROMs probably gain the greatest traction when used to measure the outcome of a single procedure, such as a hip replacement. Even in relatively well-controlled environments, external factors such as casemix and patient expectations will affect the results. Also long-term outcomes, such as the longevity of a replacement hip, will not be evaluated by this means. Currently the Department of Health is piloting the use of PROMs in four surgical areas (hip and knee replacement, hernia repair and varicose veins surgery).21 The results are linked to HES and available for different providers on HES Online.22 The new outcome framework suggests that many more PROMs will be developed.1
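As a sketch of how paired PROM returns are turned into summary figures, consider pre- and post-operative scores for one procedure at one provider. The 0–1 quality-of-life scale and every score below are invented for illustration; real PROM instruments have their own validated scales and scoring rules.

```python
# Hypothetical paired PROM scores (pre-operative, post-operative) on a
# 0-1 quality-of-life scale, one pair per patient (all values invented).
pre_post = [(0.60, 0.85), (0.70, 0.72), (0.55, 0.80), (0.90, 0.88)]

# Mean health gain: average post-minus-pre change across the cohort.
mean_gain = sum(post - pre for pre, post in pre_post) / len(pre_post)

# Proportion of patients whose score improved at all.
improved = sum(1 for pre, post in pre_post if post > pre) / len(pre_post)

print(f"mean health gain: {mean_gain:+.3f}")
print(f"patients improved: {improved:.0%}")  # 3 of 4 patients
```

Both figures depend on casemix and expectations as the text notes, which is why published PROM comparisons are usually casemix-adjusted before providers are compared.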

Attributes of a good quality indicator

A good QI should aim to measure something that is:

  • unequivocal

  • practical to measure

  • important to clinical practice

  • underpinned by good evidence

  • amenable to change.

In this respect it is better to have a few good indicators than a plethora which do not meet these criteria.23 One approach that is being adopted by many specialty groups is to develop QIs and metrics around a pathway of care such as the stroke pathway.24 In this way, QIs can be used to ensure that the important clinical decisions in the pathway are achieved for the majority of patients.

Presentation of quality indicators

QIs may be collected for a variety of purposes.23 Service improvement is one of the most important and these QIs must be fed back to those delivering the service. QIs may also be collected for commissioners to ensure that the service meets their specification, for patients/the public to gain reassurance about the safety and effectiveness of care and by the government to demonstrate good governance to the taxpayer and to the international community. These requirements are different and not only affect which indicators are collected, but also the way the data are presented. It is inevitable that the majority of QI data will be open to public scrutiny because of the white paper imperative around transparency and public accountability.2 Thus it is important that the data presented are intuitive and do not readily lead to false conclusions. For example, ranking data is often misleading because it is natural to assume that a service ranked 1 is better than a service ranked 21. In fact, the difference between these two may simply be a matter of chance. One alternative approach is to compare performance using a funnel plot.25 In this approach the numerator (proportion of patients with adverse outcome) is shown on the vertical axis and the denominator (population studied) is shown on the horizontal axis. Confidence intervals (CI) are drawn on the graph, normally at the 95% and 99.8% levels, to identify two levels of outlier. The plotted limits resemble a funnel because the CIs are wider for organisations with a smaller number of eligible patients. This is one reason why small hospitals are more likely to be at the top and the bottom of any ranking system. The advantage of this approach is that it easily identifies the statistical outliers, either positive or negative. It is important to appreciate that a statistical outlier is not necessarily a clinical outlier, because it is impossible to account for all possible variability in the data. QIs should be used to trigger an internal investigation when outliers are detected, but the conclusion might be that the service quality is not to blame for an apparent adverse outcome.
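The funnel limits described above can be computed directly. This sketch uses the normal approximation to the binomial around an overall benchmark rate; the unit names and counts are invented, and real implementations often use exact binomial or overdispersion-adjusted limits instead.

```python
import math

def funnel_limits(n, p, z):
    """Control limits for a proportion at denominator n, around an
    overall rate p, using the normal approximation to the binomial."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical units: (name, adverse outcomes, eligible patients).
units = [("A", 30, 1000), ("B", 9, 60), ("C", 4, 500)]
overall = sum(k for _, k, _ in units) / sum(n for _, _, n in units)

for name, k, n in units:
    rate = k / n
    lo95, hi95 = funnel_limits(n, overall, 1.96)    # ~95% limits
    lo998, hi998 = funnel_limits(n, overall, 3.09)  # ~99.8% limits
    if not lo998 <= rate <= hi998:
        flag = "outside 99.8% limits"
    elif not lo95 <= rate <= hi95:
        flag = "outside 95% limits"
    else:
        flag = "within limits"
    print(f"{name}: rate {rate:.3f} ({flag})")
```

Because the half-width shrinks as the denominator grows, small units sit in the wide mouth of the funnel, which is exactly why they dominate the extremes of naive rankings.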

Conclusion

Analysis of the white paper documents makes it abundantly clear that the direction of travel for the NHS is quality improvement through measurement and reporting. But why should clinicians choose to get involved? Firstly, they have a duty to strive to improve quality of care and patient outcomes. QIs and metrics are important tools along with research and development, audit, clinical guidelines and care pathways to achieve this aim. Secondly, QIs can be used to promote equity across the service and across clinical networks. Thirdly, patients have a right to expect that evidence-based practice will be implemented in a timely and comprehensive way across the NHS. Finally, QIs will only improve patient care if clinicians assist in their development, own the indicators and act on the results.26

Clinicians also need to take ownership of the HES data collected on their patients to assure their accuracy. This will normally involve meeting regularly with clinical coders to review the data submitted by the trust. It also involves training junior doctors to appreciate the importance of the data. Only in this way will data quality improve. Although the connection might not be obvious, high quality data will improve patient care through the mechanisms described above.

Competing interests

The author is a national clinical lead for hospital specialties at the NHS Information Centre.

Acknowledgements

The author would like to thank Brian Derry for his expert review and comments, which have significantly improved the clarity of the text.

© 2012 Royal College of Physicians

References

  1. Department of Health. Liberating the NHS: transparency in outcomes – a framework for the NHS. London: DH, 2010. www.dh.gov.uk/en/Consultations/Liveconsultations/DH_117583
  2. Department of Health. Equity and excellence: liberating the NHS. London: DH, 2010. www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/@ps/documents/digitalasset/dh_117794.pdf
  3. Department of Health. Liberating the NHS: an information revolution. London: DH, 2010. www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_120598.pdf
  4. data.gov.uk
  5. NHS Choices. Your health, your choices. www.nhs.uk/Pages/HomePage.aspx
  6. World Health Organization. The world health report 2000. Health systems: improving performance. Geneva: WHO, 2000. www.who.int/whr/2000/en/
  7. Arah OA, Westert GP, Hurst J, Klazinga NS. A conceptual framework for the OECD Health Care Quality Indicators Project. Int J Qual Health Care 2006;18(Suppl 1):5–13. doi:10.1093/intqhc/mzl024
  8. Donabedian A. Quality assurance. Structure, process and outcome. Nurs Stand 1992;7(11 Suppl QA):4–5.
  9. HES Online. www.hesonline.nhs.uk/Ease/servlet/ContentServer?siteID=1937
  10. HES Online. A guide to linked ONS-HES mortality data. www.hesonline.nhs.uk/Ease/servlet/ContentServer?siteID=1937&categoryID=1299
  11. Audit Commission. Information and data quality in the NHS. www.audit-commission.gov.uk/nationalstudies/health/other/Pages/informationanddataqualityinthenhs.aspx
  12. Croft GP. Engaging clinicians in improving data quality in the NHS. 2006 [cited June 2010]. www.rcplondon.ac.uk/sites/default/files/ilab-summary-report.pdf
  13. Williams JG, Mann RY. Hospital episode statistics: time for clinicians to get involved? Clin Med 2002;2:34–7.
  14. Audit Commission. PbR data assurance framework 2008/09. www.audit-commission.gov.uk/nationalstudies/health/pbr/pbrdataassuranceframework200809/Pages/default.aspx
  15. Spencer SA. Hospital episode statistics (HES): improving the quality and value of hospital data: a discussion document. 2011. http://aomrc.org.uk/publications/reports-guidance.html
  16. West Midlands Quality Institute. Patient reported outcome measures (PROMs), pre and post operative data, April 2009 to October 2009. www.wmqi.westmidlands.nhs.uk/downloads/file/PROMS%20to%20Oct%202010%2030-3-11.pdf
  17. London Health Observatory. Standardised mortality ratios. 2010 [cited May 2010]. www.lho.org.uk/LHO_Topics/Data/Methodology_and_sources/agestandardisedrates.aspx
  18. Jarman B. In defence of the hospital standardized mortality ratio. Healthc Pap 2008;8:37–42.
  19. National Quality Board. Report from the Steering Group for the National Review of the Hospital Standardised Mortality Ratio. www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_121328.pdf
  20. Black N, Jenkinson C. Measuring patients' experiences and outcomes. BMJ 2009;339:b2495.
  21. Department of Health. Guidance on the routine collection of patient reported outcome measures (PROMs). www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_092625.pdf
  22. Information Centre for Health and Social Care. HES online: patient reported outcome measures (PROMs) monthly report. 2010. www.hesonline.nhs.uk/Ease/servlet/ContentServer?siteID=1937&categoryID=1295
  23. Raleigh VS, Foot C. Getting the measure of quality: opportunities and challenges. The King's Fund; 2010. http://kingsfund.koha-ptfs.eu/cgi-bin/koha/opac-detail.pl?biblionumber=92792
  24. Acute stroke and TIA algorithm 2: stroke pathway. www.nice.org.uk/nicemedia/live/11646/38892/38892.pdf
  25. Kunadian B, Dunning J, Roberts AP, Morley R, de Belder MA. Funnel plots for comparing performance of PCI performing hospitals and cardiologists: demonstration of utility using the New York hospital mortality data. Catheter Cardiovasc Interv 2009;73:589–94. doi:10.1002/ccd.21893
  26. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care 2003;41(1 Suppl):I30–8. doi:10.1097/00005650-200301001-00004