Future Healthcare Journal

The GMC national training survey: Does it have an impact?

Mathavi Uthayanan, Joanna Szram, Anand Mehta, Geeta Menon and Jonathan Round
DOI: https://doi.org/10.7861/fhj.2020-0031
Future Healthc J October 2020, 7 (3): 205–207
Mathavi Uthayanan, clinical education and leadership fellow, Health Education England, London, UK and St George's University Hospitals NHS Foundation Trust, London, UK. Correspondence: m.uthayanan@hotmail.co.uk
Joanna Szram, deputy postgraduate dean, Health Education England, London, UK
Anand Mehta, deputy postgraduate dean, Health Education England, London, UK
Geeta Menon, postgraduate dean, Health Education England, London, UK
Jonathan Round, director of medical education, St George's University Hospitals NHS Foundation Trust, London, UK

ABSTRACT

The General Medical Council (GMC) national training survey (NTS) monitors junior doctors' training experience annually; its results are used by organisations such as Health Education England to inform quality management. Its validity as an assessment of the learning environment to drive improvement is frequently questioned, and there are currently no published evidence-based studies demonstrating its impact. To explore the effects of the GMC survey, we carried out a retrospective cohort study using publicly available GMC NTS data. We compared 2018 and 2019 scores in paediatrics in London across all 18 survey indicators to identify any relationship between these 2 consecutive years of data. Our findings demonstrate that results of the GMC NTS in 1 year are associated with a change in the NTS the following year, with both an improvement in below-average departments and a deterioration in above-average units. These findings suggest that annual GMC NTS results may have an impact on the quality of learning environments as measured in subsequent surveys; the survey therefore acts as both a measure and a potential modifier of outcome.

KEYWORDS
  • GMC
  • NTS
  • HEE
  • medical education
  • feedback

Background

The General Medical Council (GMC) national training survey (NTS) monitors training experience annually.1–3 Using an online platform, all trainees are asked to rate the training conditions at the site at which they are working using rating scales. The survey is sent to all 75,000 doctors in training in the UK, and has a response rate of 94.8%.5 The responses are grouped into 18 indicators, including supervision, rota design, and curriculum coverage. The results are rated, using standard deviation calculations, as red, pink, white, moss or green (Fig 1).6
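As a rough sketch of this kind of standard-deviation-based banding, a score's distance from the national mean can be mapped to a flag colour. The cut-offs below are invented for illustration only; the GMC's actual banding rules are set out in reference 6 and are not reproduced here.

```python
def flag_colour(z):
    """Map an indicator's Z score to a survey flag colour.

    The thresholds used here are purely illustrative and are NOT
    the GMC's published cut-offs (see reference 6 for those).
    """
    if z <= -2:
        return "red"    # well below the national mean
    if z <= -1:
        return "pink"   # somewhat below the national mean
    if z < 1:
        return "white"  # within the 'normal range'
    if z < 2:
        return "moss"   # somewhat above the national mean
    return "green"      # well above the national mean
```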

Fig 1. Explanation of the General Medical Council national training survey colour scheme.1

Survey results are used, in conjunction with other tools, by Health Education England (HEE) and the education and training governing bodies in the devolved nations to explore the quality of postgraduate training, sometimes leading to changes in where trainees work and are trained.7 Nevertheless, the validity of the data is questioned, with issues raised variously around 'forcing' respondents to answer; the survey not being taken seriously by trainees; maintaining confidentiality; sample size; and question wording.2,4

There is anecdotal evidence that positive scores may encourage trainers in the department to maintain and improve standards of training; there is also a perception that ‘white’ scores, in particular, may lead to complacency.8 Similarly, poor results may demoralise departments under significant pressure, particularly with staffing issues, but can also be a stimulus for change.9 The survey results may ‘fall on deaf ears’; amid a busy, fraught department with competing priorities and an already stretched workforce, junior doctors and their training will often not be prioritised.10 We wanted to explore how NTS results in 1 year might relate to changes in scores the following year.

Objectives

We aimed to use GMC survey data from 2 consecutive years to examine whether GMC results are associated with a change in performance within a training department.

Our null hypothesis was that GMC NTS results are not associated with bringing about change in the educational and training performance of a clinical department.

Methodology

We downloaded raw scores for all 18 indicators in the NTS in 2018 and 2019 for paediatrics from the GMC website.1 We chose to look at trainee feedback at all levels (specialty trainee years 1–7 (ST1–7), as paediatrics is a run-through programme), received from the paediatric specialty at all hospital sites within London. The score for each site was compared with the specialty's national mean. We calculated the standard deviation (SD) for each score across all 33 hospital sites in London and then the Z score for each indicator at each site, as a measure of that indicator's deviation from the national mean (eg Z=0 signified that a value was equal to the national mean, Z=1 indicated that a value was one SD above the national mean, and Z=−1 was one SD below). Using the Z scores in 2018 for each indicator as a baseline, we calculated the change in Z scores in 2019 by indicator by site (Z score in 2019 minus Z score in 2018).
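The Z-score step above can be sketched as follows. The numbers are illustrative rather than the study data, and the function name is our own; the sketch assumes the national mean for each year is known.

```python
import statistics

def z_scores(site_scores, national_mean):
    """Z score for each site's raw indicator score, relative to the
    national mean, using the SD across the listed sites."""
    sd = statistics.stdev(site_scores)
    return [(score - national_mean) / sd for score in site_scores]

# Illustrative raw scores for one indicator at four sites
# (the study used 33 London sites and 18 indicators).
scores_2018 = [78.0, 82.5, 90.1, 85.3]
scores_2019 = [81.0, 83.0, 88.0, 86.0]

z18 = z_scores(scores_2018, national_mean=84.0)
z19 = z_scores(scores_2019, national_mean=84.5)

# Change in Z score by site: Z score in 2019 minus Z score in 2018
delta = [b - a for a, b in zip(z18, z19)]
```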

Where an indicator had fewer than three respondents, no score was available, so all raw scores for that indicator in both years for that respondent group were excluded from analysis. A total of 606 Z scores could have been calculated for each year (1,212 scores in total), of which 11 per year had fewer than three respondents. This resulted in a total of 595 Z scores per year; therefore 595 data points were used to calculate the Pearson's r value shown in Fig 2.
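The exclusion rule can be sketched as a simple filter over the raw records. The records here are invented for illustration; the field layout and site names are our own, not the GMC dataset's.

```python
# Each record: (site, indicator, respondents, score_2018, score_2019).
# Values are illustrative only, not taken from the GMC dataset.
records = [
    ("site A", "overall satisfaction", 5, 78.0, 81.0),
    ("site A", "rota design", 2, 70.0, 74.0),  # <3 respondents: no score published
    ("site B", "overall satisfaction", 8, 86.0, 84.0),
]

# Where fewer than three trainees responded, drop the indicator's
# raw scores for BOTH years, as described in the Methodology.
kept = [r for r in records if r[2] >= 3]
z_scores_per_year = len(kept)
```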

Fig 2. Relationship between Z score in 2018 (x-axis) and change in Z score between 2018 and 2019 (y-axis). Pearson's r = −0.60.

Results

The data showed that departments with survey indicators below the national mean were more likely to improve in the following year than other departments; there was a significant association between the distance of an indicator's score from the mean and the observed improvement (Fig 2). Conversely, departments with indicators above the mean tended to show deterioration in their survey results the following year. Pearson's r was −0.60, indicating a moderately strong inverse correlation between the baseline Z score in 1 year and the change in Z score the following year. The relationship between higher baseline scores and subsequent deterioration was weaker.

The improvement in poorly performing departments was more marked than the deterioration of units that performed well, so the data were re-plotted, separating indicators that were above and below the mean in 2018 (Figs 3 and 4, respectively). This demonstrated that indicators with positive scores in the 2018 GMC NTS were only weakly associated with a worsening of performance the following year, whereas indicators with scores below the mean in 2018 showed a more substantial improvement (Pearson's r of −0.53, compared with −0.35 for above-average indicators).
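The re-plotting step can be sketched as follows, splitting (baseline, change) pairs by the sign of the 2018 Z score and correlating each subgroup separately. The pairs and the `pearson_r` helper are illustrative, not the study's 595-point dataset.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# (Z score in 2018, change in Z score by 2019) - illustrative values
pairs = [(-1.4, 0.9), (-0.6, 0.4), (-0.2, 0.3),
         (0.3, -0.1), (0.7, -0.5), (1.1, -0.2)]

# Split by 2018 baseline, as in Figs 3 and 4
below = [(z, d) for z, d in pairs if z < 0]
above = [(z, d) for z, d in pairs if z >= 0]

r_below = pearson_r([z for z, _ in below], [d for _, d in below])
r_above = pearson_r([z for z, _ in above], [d for _, d in above])
```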

Fig 3. Relationship between national training survey scores in 2018 and scores in 2019 for trusts with above-average national training survey scores. Pearson's r = −0.35.

Fig 4. Relationship between national training survey scores in 2018 and scores in 2019 for trusts with below-average national training survey scores. Pearson's r = −0.53.

Discussion

The findings of this study demonstrate that the results of the GMC NTS in 1 year were associated with a change in the survey results the following year in a single consecutive-year analysis, with evidence of improvement in below-average departments and deterioration in above-average units. This suggests that GMC NTS results may influence the performance of a department as measured in the subsequent survey year. There are, however, a number of possible explanations for these findings. One is that the results reflect the response of an organisation to the survey: the dataset may act as a stimulus for change in a department that is struggling to maintain an effective learning environment, while in others it may breed relative complacency in departments that perform well or within the 'normal range'. Departments that performed well in the first year showed a weaker correlation with a change in results the following year; this may be because more sustainable mechanisms were in place to correct issues and ensure continued good performance. Another cause of change in results is a cohort effect: every year the group of trainees changes as they rotate placements, or enter or leave the specialty.

There are a number of limitations to this study. Firstly, there may be errors in survey completion, leading to erroneous values in one group and then 'regression to the mean' or normalisation of scores the following year, although the different results seen in the analysis of below- and above-average indicators make this unlikely. Secondly, the study looked at only one year-to-year comparison, in one specialty, in a single region; analysing several years of survey data may generate different results. Thirdly, this finding may not be applicable to other training programmes, particularly internal medicine and foundation training, in which trainees move through a number of different departments, sites and organisations each year, with varying levels of specific interest in the post they hold at the time of the survey. The NTS asks trainees to respond based on the department in which they currently work, and in many cases (particularly in paediatrics) trainees will have rotated into new posts only a few weeks before the survey opens in March; for this reason, many schools of paediatrics have run their own surveys at a different time of year. Further analysis of these school surveys, and comparison with the GMC data, would allow the findings of this study to be interrogated in more detail.

Other future work could focus on organisation-level data, as the approach to below-average survey results is likely to vary with factors within trusts and hospitals, such as staffing levels and skill mix, educational leadership, funding and service pressures. The statutory education bodies (Health Education England, Health Education and Improvement Wales, NHS Education for Scotland, and the Northern Ireland Medical and Dental Training Agency) each review GMC survey data annually and implement quality review and improvement programmes based on the findings. Each of the four nations has its own approach, and there are also regional variations within each of these arm's-length bodies. Further analysis of potential geographical differences would therefore be of value when considering the variation in interventions.

Conclusion

Results in the GMC NTS in 1 year may influence the likelihood, direction and degree of change in the following year, particularly in departments with comparatively poor performance, which were significantly more likely to show improvement in subsequent years. It may also be that errors within the survey lead to aberrant values, which then normalise naturally the following year, although this would not explain the opposite directions of effect, nor the difference in its magnitude (Pearson's r of −0.35 and −0.53), for above- and below-average indicators. An effective survey is defined not just by the quality of its data and analysis, but also by how it can drive improvement for the population being surveyed. This study provides evidence for such an effect in the GMC NTS in London paediatric departments, and it is worthy of investigation in other training programmes and regions.

  • © Royal College of Physicians 2020. All rights reserved.

References

  1. General Medical Council. National training survey reports. GMC, 2020. www.gmc-uk.org/about/what-we-do-and-why/data-and-research/national-training-surveys-reports [Accessed 11 April 2020].
  2. Manton RN. Why the General Medical Council's national training survey is so important. British Journal of Hospital Medicine 2019;80:492–3.
  3. NHS Employers. GMC National training survey – initial findings. NHS, 2019. www.nhsemployers.org/news/2019/07/gmc-national-training-survey [Accessed 11 April 2020].
  4. Joint Royal Colleges Physicians Training Board. The state of physicianly training in the UK: Report 2 2019. JRCPTB, 2019. www.jrcptb.org.uk/sites/default/files/JRCPTB_SoPT_report_2019.pdf [Accessed 2 April 2020].
  5. Health Education England – North West: Postgraduate Medicine and Dentistry. GMC Surveys. HEE, 2020. www.nwpgmd.nhs.uk/quality/gmc-surveys [Accessed 11 April 2020].
  6. General Medical Council. What do the results mean? GMC, 2020. www.gmc-uk.org/help/education-data-reporting-tool-help/what-do-the-results-mean#what-are-indicators-and-how-are-indicator-scores-calculated [Accessed 11 April 2020].
  7. BBC News. Canterbury hospital junior doctors moved over lack of training. BBC News, 2017. www.bbc.co.uk/news/uk-england-kent-39336675 [Accessed 11 April 2020].
  8. Insights Health. Bring on the reds – an alternative guide to the GMC trainee survey. Insights Health, 2019. https://insightshealth.wordpress.com/bring-on-the-reds-an-alternative-guide-to-the-gmc-trainee-survey [Accessed 11 April 2020].
  9. Gregory S, Demartini C. Satisfaction of doctors with their training: evidence from UK. BMC Health Serv Res 2017;17:851.
  10. Doctors' Association UK. 1 in 4 doctors burnt out according to GMC survey. DAUK, 2018. www.dauk.org/news/2018/7/9/1-in-4-doctors-burnt-out-according-to-gmc-survey [Accessed 11 April 2020].