Editorial comment: Measuring performance in unselected emergency admissions
============================================================================

* Oliver J Warren

Emergency healthcare is complex, chaotic and challenging. We know from previously collected healthcare data that we could do better. As a result of service configuration, working patterns and medical traditions, some of the sickest patients still do not see the right doctor quickly enough when they present to the emergency department, delaying vital treatment. At the other end of the severity scale, some patients attend and are admitted unnecessarily, are often over-investigated and use up finite resources. We cannot improve what we cannot measure, so the creation of appropriate quality metrics for unfiltered emergency services is essential and time-critical.

In this month's journal, Christopher Price and colleagues propose one such metric, namely ‘time to specialist’ (TTS), and explore its potential with thirteen clinical leads from a large NHS Foundation Trust. Their paper reveals the interesting tensions and challenges that arise when the introduction of a new performance measure is proposed. It is a useful addition to this emerging field and a starting point for further discussion.

There is no perfect single metric in this setting (or indeed any other), and TTS quite clearly has limitations. First, it is not clear who or what ‘a specialist’ is (the responses in the paper reveal some intriguingly diverse opinions on this), and if we cannot define a ‘specialist’ we certainly cannot measure how quickly a patient is seen by one. Second, as some respondents in the study point out, many patients do not present with a single, immediately apparent, well-defined condition that allows prompt referral to the *appropriate* specialist. Third, TTS is not an outcome measure but a process measure: a proxy based upon the tenuous proposition that if a patient sees a ‘specialist’ quickly, it follows that their subsequent management will be correct and timely. Sadly, those of us who work in this area know this is not always the case.

However, all metrics have flaws, and we should not dismiss TTS, or something similar, too quickly. While TTS is not an outcome measure, it is, like most process measures, relatively simple and easily understood by patients and the public. Doctors may prefer outcome measures, but these are often harder to measure, particularly in the short term, and less easily ‘translated’ to the layperson. Defining ‘specialist’ is problematic, but we do know that junior doctors admit patients to hospital more readily, are less likely to recognise a sick patient quickly, and order more (unnecessary) tests than senior colleagues. The authors consider in their discussion whether it would be more helpful to focus on time to ‘senior’ (consultant) rather than ‘specialist’, and they are right to do so. How promptly patients see a consultant during their admission is a fairly good marker of the quality of any acute service.

Price *et al* describe some respondents having concerns over ‘gaming’ or adverse unintended consequences if a service strove to achieve a good TTS at the expense of other clinical needs. Such concerns are not new, and they are valid, but they tend to arise when a single metric is given priority over all else.
To avoid this, we must measure more, not less, along the patient pathway, creating a panel of composite indicators that gives an overall picture of what is occurring and where improvements need to be made. Any panel must include some process measures, such as TTS or ‘time to treatment’, for conditions such as stroke or myocardial infarction where this is known to be the key determinant of patient outcome. The panel should also include outcome measures for acute conditions, such as survival from perforated viscus, and incorporate patient experience data. With a number of different metrics in place, clinical teams can be reassured that decisions which may result in a ‘failure’ on one metric can still be taken, because they are likely to improve patient outcomes measured elsewhere.

Finally, the word ‘targets’ has negative connotations and is binary, suggesting you either hit them or you don't. Moving to ‘minimum standards’ upon which the majority can agree, such as ‘all patients admitted as an emergency should see a consultant within 24 hours of admission to hospital’, would be a helpful starting point; when a standard is not met, the reasons why can then be sought.

None of this is easy. Measuring how we are performing is uncomfortable, but not measuring and not knowing is worse. We need to learn as we go along, something many doctors feel uncertain about doing. Clinical leadership will be essential, providing encouragement and ensuring that the right things are measured and then translated into information that non-clinicians can understand. One respondent in Price *et al*'s paper expresses concern that there would be ‘tension if TTS was used to improve service efficiency as well as patient care’. I cannot fathom this concern. If we can develop a metric, or a group of them, that improves both the efficiency of a service and patient care, I see nothing but an excellent result for patients, clinicians and our tax-funded service.

## Footnotes

* Editorial comment on ‘Senior clinician views regarding introduction of a “time to specialist” quality measure for unselected emergency admissions’ by Christopher I Price, Sara McCafferty, Harry Hill and Peter McMeekin
* © 2015 Royal College of Physicians