Content validity of a clinical problem solving test for use in recruitment to the acute specialties

Gemma Crossingham, Thomas Gale, Martin Roberts, Alison Carr, Jeremy Langton and Ian Anderson
DOI: https://doi.org/10.7861/clinmedicine.11-1-23
Clin Med February 2011, 11(1):22–25
Gemma Crossingham, ST5 in anaesthesia, Plymouth Hospitals NHS Trust, Plymouth (correspondence: gemma.crossingham@phnt.swest.nhs.uk)
Thomas Gale, consultant anaesthetist, Plymouth Hospitals NHS Trust, Plymouth
Martin Roberts, research fellow, Institute of Clinical Education, Peninsula College of Medicine and Dentistry, Plymouth
Alison Carr, consultant anaesthetist, Plymouth Hospitals NHS Trust, Plymouth
Jeremy Langton, consultant anaesthetist, Plymouth Hospitals NHS Trust, Plymouth
Ian Anderson, consultant anaesthetist, Plymouth Hospitals NHS Trust, Plymouth

Abstract

Clinical problem solving tests (CPSTs) have been shown to be reliable and valid for recruitment to general practice (GP) training programmes. This article presents the results from a Department of Health-funded pilot into the use of a CPST designed for recruitment to the acute specialties (AS). The pilot paper consisted of 99 items from the validated GP question bank and 40 new items aimed specifically at topics of relevance to AS training. The CPST successfully differentiated between applicants. The overall test and the GP section showed high internal reliability, whereas the AS pilot section performed less well. A detailed item analysis revealed that the AS pilot items were, on average, more difficult and of poorer quality than the GP items. Important issues that need to be addressed in the early development phase of a test used for high stakes selection to specialty training programmes are discussed.

Key Words
  • acute specialties
  • clinical problem solving test
  • content validity
  • machine marked test
  • postgraduate
  • specialty selection

The 2008 Tooke Report, Aspiring to excellence,1 made the following comment regarding the shortlisting process for recruitment to specialty training posts:

There would be considerable attractions in having a scheme which was both more accurate and less labour intensive. Successful models for shortlisting include the UK GP [general practice] selection system and the US system, both of which are based on scores in applied knowledge tests. These have advantages in being able to identify weak applicants… Such an approach deserves further evaluation.

Subsequent research has shown that the machine marked tests (MMTs) used in GP selection are as valid and reliable as the current standard shortlisting method of scoring application form questions.2 In light of this, the present article reports the preliminary results of an ongoing Department of Health-funded pilot study on the development of a clinical problem solving test (CPST) tailored to the acute medical specialties, looking specifically at its reliability, content validity and face validity.

The initial pilot test

The CPST paper consisted of 139 items mapped onto the foundation programme curriculum: 99 taken from the GP bank of validated items and 40 newly written items aimed specifically at topics of relevance to acute specialty (AS) training. The tried and tested GP items provided a useful benchmark against which to assess the performance of the newly developed AS items. The new items were written by a group of subject matter experts, including consultants in acute medicine, anaesthesia, emergency medicine and intensive care medicine.

Applicants attending selection centres in 2008 at the South West Peninsula Deanery for CT1 training posts in anaesthesia, core medical training and the acute care common stem (ACCS) were invited to sit the first pilot CPST. Participation (or not) did not influence selection outcome in any way and participants consented to the linking of test results with other personal data. The paper was administered on the day of interview and an overall response rate of 74% (125 of 169 applicants) was achieved.

Reliability of the test

The distribution of test scores was approximately normal in each section (GP and AS), indicating an absence of ceiling or floor effects and showing that the test has the potential to differentiate between candidates. Both the overall test and the GP item section showed very high internal reliability (Cronbach's α = 0.90 and 0.92 respectively). The AS pilot section performed less well (α = 0.43), though this is partly attributable to the smaller number of items used. When corrected for test length (Spearman–Brown), the expected reliability of a 99-item test of equivalent AS items would be 0.65, still below the 0.9 threshold recommended for high stakes assessment.3
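
For reference, the correction quoted above follows the standard Spearman–Brown prophecy formula, under which lengthening a test by a factor k changes an observed reliability ρ to an expected

$$\rho^{*} = \frac{k\rho}{1 + (k-1)\rho}, \qquad k = \frac{99}{40} = 2.475, \qquad \rho^{*} = \frac{2.475 \times 0.43}{1 + 1.475 \times 0.43} \approx 0.65,$$

in agreement with the 0.65 reported.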

A detailed item analysis revealed that the AS pilot items were on the whole more difficult (mean item facility = 0.50 vs 0.61) and of poorer quality (mean item partial = 0.09 vs 0.29) than the GP items. This is not an entirely unexpected finding; the GP items had been refined over many years of development, whereas the AS-specific items were all being administered for the first time, having been written on a tight timescale before the 2008 recruitment round.
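
For readers wishing to replicate this kind of item analysis, the sketch below computes item facility (the proportion of candidates answering an item correctly) and a corrected item-total correlation from a matrix of dichotomous scores. It is a minimal illustration in Python, assuming the 'item partial' reported above is an item-rest correlation; the function name and the simulated data are hypothetical, not taken from the study.

```python
import numpy as np

def item_statistics(responses: np.ndarray):
    """Per-item facility and corrected item-total correlation.

    responses: candidates x items matrix of 0/1 scores.
    Facility is the proportion of candidates answering correctly;
    the corrected item-total ('item-rest') correlation is computed
    against the total score with the item itself removed.
    """
    n_items = responses.shape[1]
    total = responses.sum(axis=1)
    facility = responses.mean(axis=0)
    partial = np.empty(n_items)
    for i in range(n_items):
        rest = total - responses[:, i]  # exclude the item from its own total
        partial[i] = np.corrcoef(responses[:, i], rest)[0, 1]
    return facility, partial

# Illustrative use with simulated data (125 candidates, 139 items):
rng = np.random.default_rng(0)
sim = (rng.random((125, 139)) < 0.55).astype(int)
fac, part = item_statistics(sim)
print(fac.mean(), part.mean())
```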

Applicant feedback

To assess the face validity of the MMT, applicants completed a feedback questionnaire evaluating it alongside the six other selection centre stations used on the day of interview. Applicants were asked to rate the selection tools for relevance to selection, fairness and opportunity to demonstrate ability on a five-point rating scale (1 = poor; 2 = borderline; 3 = satisfactory; 4 = good; 5 = excellent). Although the MMT was less well regarded than the other selection stations in all three aspects, only its ratings for relevance were significantly lower than those of the other stations (Table 1).

Table 1.

Applicant ratings of selection centre stations and machine marked tests (MMTs) for ‘fairness’, ‘opportunity to demonstrate ability’ and ‘relevance’.

Relevance of the items

It was postulated that the significantly lower score for ‘relevance to selection’ might be due to the use of primary care-related questions in an AS population. To investigate this further, 59 trainees and consultants in anaesthesia, core medical training and AS were asked to rate a random sample of items taken from the pilot CPST for ‘appropriateness’ to selection for specialty training on a scale of 1 (totally inappropriate) to 4 (entirely appropriate). The random sample was stratified by question source to contain 15 items from the GP section of the test and 15 from the AS section. Respondents were blinded to the source of the items, which were presented in random order. As shown in Fig 1, the AS items were largely regarded as appropriate, whereas the GP items were mainly felt to be inappropriate for selection to specialty training. Average ratings for the AS items (mean = 3.43, SD = 0.25) were higher than those for the GP items (mean = 2.09, SD = 0.41) and the difference was statistically significant (t = 10.68, df = 23, p < 0.001).

Fig 1.

Histograms of mean scores for appropriateness to selection awarded to acute specialty (AS) and general practice (GP) questions split by question source (n = 59 × 30 = 1,770).
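
The reported df = 23 (rather than the pooled 15 + 15 − 2 = 28) is consistent with an unequal-variances (Welch) t-test on the per-item mean ratings. A minimal sketch of such a comparison in Python, using simulated ratings matched to the summary statistics above (the actual item-level data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Simulated per-item mean appropriateness ratings (15 AS and 15 GP items),
# drawn to match the reported means and SDs; illustrative only.
rng = np.random.default_rng(1)
as_means = rng.normal(3.43, 0.25, 15)
gp_means = rng.normal(2.09, 0.41, 15)

# Welch's t-test (no equal-variance assumption); with these SDs the
# Welch-Satterthwaite df comes out near the reported 23.
t, p = stats.ttest_ind(as_means, gp_means, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2g}")
```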

Discussion

The use of an MMT potentially enables standardisation of shortlisting processes, is both time- and cost-effective, and can predict subsequent success at postgraduate examinations.4,5 A nationally administered MMT is established for recruitment to UK general practice training posts and has been used for core medical training, with evidence of good reliability and validity when compared to standard shortlisting and interview scores.2,6

Similarly, validated CPST questions from the GP question bank performed well in terms of reliability and item quality in an AS population. However, the content validity of the GP items was rated significantly lower than that of items written specifically for the acute specialties within this population. Both the GP question bank and the AS items used in this pilot are mapped onto the foundation programme curriculum7 and the National Person Specifications for specialty training posts.8 However, it is evident that different specialty recruitment processes may focus on different aspects of foundation experience. Content validity is an important consideration when introducing a new method of recruitment and justifies robust test specification based on nationally agreed blueprints.

Face validity refers not to what the test actually measures but to what it appears to measure; it pertains to whether the test ‘looks valid’ to the applicants who take it. Applicants rated the MMT lower than the other selection centre stations for relevance at the point of interview. This may reflect applicants' dislike of sitting a paper-based test on the day of interview, but it may also improve with further refinement of the test specification and the items used. The study aims to evaluate the MMT as it would be used in practice: at the point of application rather than at a subsequent interview.

Recent evidence suggests that the combination of a situational judgement test (SJT) with a CPST can maximise validity and efficiency.2 SJTs target non-clinical domains and assess professional behaviours and attitudes that have been found to correlate with subsequent workplace performance.9 An SJT based on a complex job analysis, tailored to the assessment of professionalism in an acute specialty population, is currently under development.

This article has described important issues that need to be addressed in the early development phase of a test used for high stakes selection to specialty training programmes. Having tested the reliability of a generic CPST in the AS population in this pilot, the next phase will involve refining the test for content and item quality and expanding the specialty-specific item pool. Subsequent studies will be extended to include units of application throughout the UK, to further test the refined items on a wider demographic sample and to investigate the construct validity of the MMT in relation to current shortlisting and interview processes.

Acknowledgements

We would like to thank Dr Simon Plint and the GP National Recruitment Office for contributing items for the test, and Professor Fiona Patterson and Dr Victoria Carr (Work Psychology Group) for their assistance with data analysis. Dr Peter Riou (emergency medicine consultant) and Dr Nick Withers (consultant physician) made significant input to the acute specialties question bank. Mr Neil Squires (specialty recruitment manager), Dr Peter Davies (training programme director, anaesthesia) and members of the South West Peninsula Deanery team were invaluable in implementing the test. This project was run by the Anaesthesia Recruitment Validation Group from the South West Peninsula Deanery with significant support from the Directorate of Anaesthesia, Derriford Hospital. The study was supported by Department of Health funding.

© Royal College of Physicians

References

1. Tooke J. Aspiring to excellence: findings and recommendations of the Independent Inquiry into Modernising Medical Careers. London: MMC Inquiry, 2008.
2. Patterson F, Baron H, Carr V, Plint S, Lane P. Evaluation of three short-listing methodologies for selection into postgraduate training in general practice. Med Educ 2009;43:50–7. doi:10.1111/j.1365-2923.2008.03238.x
3. Nunnally JC, Bernstein IH. Psychometric theory, 3rd edn. New York: McGraw-Hill, 1994.
4. Carmichael KD, Westmoreland JB, Thomas JA, Patterson RM. Relation of residency selection factors to subsequent orthopaedic in-training examination performance. South Med J 2005;98:528–32. doi:10.1097/01.SMJ.0000157560.75496.CB
5. Matveevskii AS, Gravenstein N. Role of simulators, educational programs, and nontechnical skills in anesthesia resident selection, education, and competency assessment. J Crit Care 2008;23:167–72. doi:10.1016/j.jcrc.2007.11.009
6. Patterson F, Carr V, Zibarras L, et al. New machine-marked tests for selection into core medical training: evidence from two validation studies. Clin Med 2009;9:417–20.
7. Department of Health. The Foundation Programme Curriculum. London: Department of Health, 2007.
8. Medical Specialty Training (England) Person Specifications. www.mmc.nhs.uk/specialty_training_2010/specialty_training_2009/person_specifications.aspx#themed
9. McDaniel MA, Morgeson FP, Finnegan EB, Campion MA, Braverman EP. Use of situational judgement tests to predict job performance: a clarification of the literature. J Appl Psychol 2001;86:730–40.