The Lancet

Volume 357, Issue 9260, 24 March 2001, Pages 945-949

Series
Assessment of clinical competence

https://doi.org/10.1016/S0140-6736(00)04221-5

Summary

Tests of clinical competence, which allow decisions to be made about medical qualification and fitness to practise, must be designed with attention to key issues including blueprinting, validity, reliability, and standard setting, as well as clarity about their formative or summative function. Multiple choice questions, essays, and oral examinations can be used to test factual recall and applied knowledge, but more sophisticated methods are needed to assess clinical performance, including directly observed long and short cases, objective structured clinical examinations, and the use of standardised patients. The goal of assessment in medical education remains the development of reliable measurements of student performance which, as well as having predictive value for subsequent clinical competence, also have a formative, educational role.

Section snippets

Blueprinting

Because students tend to focus on learning what is assessed, assessment in medical education must validate the objectives set by the curriculum. Test content should be carefully planned against learning objectives, a process known as blueprinting.2 For undergraduate curricula, for which the definition of core content is now becoming a requirement,3 this process may be easier than for postgraduate examinations, where curriculum content remains more broadly defined. However, conceptual frameworks…
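
As a rough illustration of what blueprinting involves (not drawn from the article; the objectives, formats, and item counts below are invented), a blueprint can be thought of as a grid mapping each learning objective to the number of planned items per test format, with a check that no objective is left untested:

```python
# Illustrative sketch only: a toy assessment blueprint. All objective
# names, test formats, and counts are hypothetical examples.

blueprint = {
    "take a focused history":  {"OSCE station": 2, "MCQ": 0},
    "interpret an ECG":        {"OSCE station": 1, "MCQ": 4},
    "explain a diagnosis":     {"OSCE station": 0, "MCQ": 0},
    "prescribe safely":        {"OSCE station": 1, "MCQ": 6},
}

def uncovered_objectives(blueprint):
    """Return objectives with no planned items in any test format."""
    return [obj for obj, formats in blueprint.items()
            if sum(formats.values()) == 0]

print("Objectives lacking test items:", uncovered_objectives(blueprint))
# -> ['explain a diagnosis']: this objective would go unassessed, and,
#    if assessment drives learning, unlearnt.
```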

Standard setting

Inferences about students' performance in tests are essential to any assessment of competence. When assessment is used for summative purposes, the score at which a student passes or fails must also be defined. Norm referencing, in which one student is compared with others, is frequently used when a specified number of candidates are required to pass, as in some college membership examinations; performance is described relative to the positions of the other candidates. As such,…
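
To make the contrast concrete, here is a minimal sketch, with invented scores and an assumed 60% pass quota, of how a norm-referenced cutoff (a fixed proportion of candidates passes) differs from a criterion-referenced one (a pre-set score must be reached). Neither function comes from the article:

```python
# Illustrative sketch only: norm- versus criterion-referenced standards.

def norm_referenced_cutoff(scores, pass_fraction=0.6):
    """Pass mark chosen so that roughly the top pass_fraction of
    candidates pass, regardless of the absolute level of performance."""
    ranked = sorted(scores, reverse=True)
    n_pass = int(len(ranked) * pass_fraction)
    return ranked[n_pass - 1] if n_pass else float("inf")

def criterion_referenced_passes(scores, cutoff=65):
    """Pass every candidate reaching a pre-set standard,
    however many (or few) that turns out to be."""
    return [s for s in scores if s >= cutoff]

scores = [48, 52, 58, 61, 63, 66, 70, 74, 79, 85]
print("Norm-referenced cutoff:", norm_referenced_cutoff(scores))   # 63
print("Criterion-referenced passes:", criterion_referenced_passes(scores))
```

With these invented scores the norm-referenced cutoff floats with the cohort (a weak cohort lowers it; a strong one raises it), whereas the criterion-referenced standard stays fixed at 65 whatever the cohort does.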

Validity versus reliability

Just as summative and formative elements of assessment need careful attention when planning clinical competence testing, so do the issues of reliability and validity.

Reliability is a measure of the reproducibility or consistency of a test, and is affected by many factors, such as examiner judgements, the cases used, candidate nervousness, and test conditions. Two aspects of reliability have been well researched: inter-rater and inter-case (candidate) reliability. Inter-rater reliability measures the…
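
As one concrete illustration of inter-rater reliability (the ratings and the choice of statistic are ours, not the article's), Cohen's kappa compares two examiners' observed agreement on the same candidates with the agreement expected by chance:

```python
# Illustrative sketch only: Cohen's kappa for two examiners giving
# pass/fail judgements on the same six candidates. Ratings are invented.

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of candidates rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

examiner_1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
examiner_2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(f"kappa = {cohens_kappa(examiner_1, examiner_2):.2f}")
```

A kappa of 1 indicates perfect agreement and 0 no agreement beyond chance; here the two examiners agree on five of six candidates, giving a kappa of about 0.67.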

Assessment of “knows” and “knows how”

The assessment of medical undergraduates has tended to focus on the base of Miller's pyramid: “knows”, the straightforward factual recall of knowledge, and “knows how”, the application of knowledge to problem solving and decision making. This focus might be appropriate in the early stages of the medical curriculum but, as skills teaching becomes more vertically integrated, careful planning of assessment formats becomes crucial. Various test formats for factual recall are available, which are easy to devise and…
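
One written format from the literature cited below, the extended matching item (Case et al), can be sketched as a data structure: several clinical vignettes share one long option list, and each vignette scores a mark when the keyed option is chosen. The theme, options, vignettes, and answer key here are invented for illustration:

```python
# Illustrative sketch only: a minimal extended matching item.
from dataclasses import dataclass

@dataclass
class ExtendedMatchingItem:
    theme: str
    options: list[str]      # one long option list shared by all stems
    vignettes: list[str]    # several clinical stems
    answer_key: list[int]   # keyed option index for each vignette

    def score(self, responses: list[int]) -> int:
        """One mark per vignette answered with the keyed option."""
        return sum(r == k for r, k in zip(responses, self.answer_key))

item = ExtendedMatchingItem(
    theme="Causes of chest pain",
    options=["angina", "pericarditis", "pulmonary embolism", "reflux"],
    vignettes=["A 62-year-old smoker with exertional pain...",
               "A 30-year-old with pleuritic pain after a long flight..."],
    answer_key=[0, 2],
)
print(item.score([0, 1]))  # prints 1: only the first vignette is correct
```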

Traditional long and short cases

Although abandoned many years ago in North America, the use of unstandardised real patients in long and short cases to assess clinical competence remains a feature of both undergraduate and postgraduate assessment in the UK. Such examinations are increasingly challenged on grounds of authenticity and reliability. Long cases are often unobserved, and because the assessment relies on the candidate's presentation, it represents an assessment of “knows how” rather than “shows how”. Generally, only one…

Assessment of “does”

The real challenge lies in the assessment of a student's actual performance on the wards or in the consulting room. Increasing attention is being paid to this type of assessment in postgraduate training, because revalidation of a clinician's fitness to practise and the identification of poorly performing doctors are areas of public concern. Any attempt at assessment of performance has to balance the issues of validity and reliability, and there has been little research into possible approaches…

References (34)

  • VR Neufeld et al. Assessing clinical competence, vol 7 (1985)
  • D Dauphinee. Determining the content of certification examinations
  • Tomorrow's doctors: recommendations on undergraduate medical education (1993)
  • RB Hays et al. Longitudinal reliability of the Royal Australian College of General Practitioners certification examination. Med Educ (1995)
  • MD Cusimano. Standard setting in medical education. Acad Med (1996)
  • DB Swanson. A measurement framework for performance-based tests
  • DB Swanson et al. Performance-based assessment: lessons learnt from the health professions. Educ Res (1995)
  • JJ Norcini et al. Reliability, validity and efficiency of multiple choice questions and patient management problem item formats in the assessment of physician competence. Med Educ (1985)
  • BF Stalenhoef-Halling et al. The feasibility, acceptability and reliability of open-ended questions in a problem-based learning curriculum
  • V Wass, R Jones, CPM van der Vleuten. Standardised or real patients to test clinical competence? The long case...
  • DI Newble et al. Psychometric characteristics of the objective structured clinical examination. Med Educ (1996)
  • GE Miller. The assessment of clinical skills/competence/performance. Acad Med (1990)
  • SM Case et al. Extended matching items: a practical alternative to free response questions. Teach Learn Med (1993)
  • SM Case et al. Constructing written test questions for the basic and clinical sciences (1996)
  • PHAM Frijns et al. The effect of structure in scoring methods on the reproducibility of tests using open-ended questions
  • CPM van der Vleuten. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ (1996)
  • R Wakeford et al. Improving oral examinations: selecting, training and monitoring examiners for the MRCGP. BMJ (1995)