A critique of the specialty certificate examinations of the Federation of Royal Colleges of Physicians of the UK

Editor – We welcome John Cookson's interest in our new specialty certificate examinations (SCEs) (Clin Med April 2010 pp 141–4). However, his critique was based on a limited selection of the available information, so this correspondence provides a fuller update for readers. Four years on from the pilot examinations in 2006, after 11 diets in eight separate specialties and with almost 10,000 questions in the bank, there is much to report.1 The contribution from specialists throughout the UK to this effort has been superb.
Cookson is correct in saying that the pilot examinations were not mapped robustly to the curricula. Furthermore, progressive revision of the specialty curricula during the last two years has presented a moving target for the new examining boards. We have risen to this challenge. From 2009, each SCE blueprint has been mapped to the appropriate curriculum and every usable question related to the relevant curriculum domain. Question-writing groups are giving priority to the remaining gaps.
He criticises the curricula for differentiating between knowledge, skills and attitudes and expresses concern that the SCEs assess only knowledge. Although single best answer questions can evaluate problem-solving skills and clinical judgement, the SCEs were always intended as knowledge-based assessments. They were not designed to test skills or attitudes, which, we agree, are much better evaluated by direct observation and discussion face to face.
Cookson expresses disappointment that the indices of reliability in the pilots were inconsistent. Values of Cronbach's α obtained in examination diets of 200 questions, involving small cohorts with a narrow range of ability, are unlikely to reach 0.9. Indeed, recent research into the use of reliability suggests that the standard error of measurement may be a more appropriate metric.2 Nevertheless, it is reassuring that in nine out of 11 SCE diets to date reliability values have exceeded 0.8.
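For readers less familiar with the relationship between these two indices, the standard error of measurement can be obtained from the reliability coefficient and the standard deviation of candidates' scores; the figures below are purely illustrative and are not drawn from any actual SCE diet:

$$
\mathrm{SEM} = s_x\sqrt{1-\alpha}
$$

For example, with an illustrative score standard deviation of $s_x = 8\%$ and $\alpha = 0.82$, the standard error of measurement would be $8\sqrt{1-0.82} \approx 3.4\%$, which may convey the precision of an individual candidate's result more directly than the reliability coefficient alone.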
We appreciate the challenge of standard setting for new examinations. For information, the SCEs use the same criterion-referencing process (the Angoff method) used for the MRCP(UK) written examinations in recent years. Although many of those involved in the process had no previous experience, their task was made simpler by taking as a consistent yardstick the knowledge expected of a newly appointed specialist.
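In outline (a schematic description rather than the boards' actual working figures), the Angoff procedure asks each judge to estimate the proportion of borderline candidates who would answer each question correctly, and the pass mark is taken as the mean of these estimates across judges and questions:

$$
\text{Pass mark} = \frac{1}{JK}\sum_{j=1}^{J}\sum_{k=1}^{K} p_{jk}
$$

where $p_{jk}$ is judge $j$'s estimated probability that a just-passing candidate answers question $k$ correctly, $J$ is the number of judges and $K$ the number of questions.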
Footnotes
Please submit letters for the editor's consideration within three weeks of receipt of Clinical Medicine. Letters should ideally be limited to 350 words, and sent by email to: clinicalmedicine@rcplondon.ac.uk
- © 2010 Royal College of Physicians
References
1. Joint Committee on Higher Medical Training
2. Tighe J
A critique of the specialty certificate examinations of the Federation of Royal Colleges of Physicians of the UK
It is good to read an update, and to learn that data continue to be gathered on the performance of the examination. The progress in question writing is impressive, but it will be an ongoing task: some questions will perform poorly, some will go out of date and, before long, many will be remembered by candidates after the examination and perhaps passed around.
It is correct that single best answer questions can evaluate problem solving, so it seems a pity that a decision was taken to test only factual recall rather than the utilisation of knowledge. The maintenance of validity is indeed not helped by the learning outcomes in the various curricula. My problem is not just that only knowledge outcomes are being assessed: they are written so that they mostly require knowledge recall rather than higher-order thinking and, further, many outcomes are not listed under the appropriate heading, so an examination testing only those listed under knowledge will miss important topics. This shows the importance of designing assessment systems alongside the outcomes, something previously neglected in the foundation programme. My anxieties about setting a specialist examination at the lowest level of Miller's triangle remain.
I identified educational impact, cost-effectiveness and acceptability as issues requiring more information, and I was hoping this would be forthcoming. It would be a pity, I think, if this examination led candidates to acquire most of their learning from books rather than from patients; a survey of their learning strategy would be of considerable interest. There must also now be some robust data on costs. Even if specialists are giving their time freely there is an opportunity cost: if they are writing questions they cannot be doing something else. The publication of robust costings would be a service to all of us who struggle to provide a good examination product for the resources available.
Establishing that standard setting is at the level of the new consultant will certainly be helpful. There must now be considerable information about the consistency of standard setting between the various diets and specialties. Publication of this would help answer the questions raised by the data in the pilot.