Sorensen, J. L.5; Thellesen, L.4; Strandbygaard, J.4; Svendsen, Kira Dynnes6; Christensen, Karl Bang6; Johansen, M.4; Langhoff-Roos, P.4; Ekelund, K.4; Ottesen, B.5; Van der Vleuten, C.4
1 Section of Gynaecology, Obstetrics and Paediatrics, Department of Clinical Medicine, Faculty of Health and Medical Sciences, Københavns Universitet
2 Department of Clinical Medicine, Faculty of Health and Medical Sciences, Københavns Universitet
3 Section of Biostatistics, Department of Public Health, Faculty of Health and Medical Sciences, Københavns Universitet
4 unknown
5 Department of Clinical Medicine, Faculty of Health and Medical Sciences, Københavns Universitet
6 Section of Biostatistics, Department of Public Health, Faculty of Health and Medical Sciences, Københavns Universitet
a review and an example
Background: The literature is sparse on written test development in a post-graduate multi-disciplinary setting. Developing and evaluating knowledge tests for use in multi-disciplinary post-graduate training is challenging. The objective of this study was to describe the process of developing and evaluating a multiple-choice question (MCQ) test for use in a multi-disciplinary training program in obstetric-anesthesia emergencies.

Methods: A multi-disciplinary working committee with 12 members representing six professional healthcare groups and another 28 participants were involved. Recurrent revisions of the MCQ items were undertaken, followed by a statistical analysis. The MCQ items were developed stepwise, including decisions on aims and content, followed by testing for face and content validity, construct validity, item–total correlation, and reliability.

Results: To obtain acceptable content validity, 40 of the original 50 items were included in the final MCQ test. The MCQ test was able to distinguish between levels of competence, and good construct validity was indicated by a significant difference in the mean score between consultants and first-year trainees, as well as between first-year trainees and medical and midwifery students. Evaluation of the item–total correlation analysis in the 40-item set revealed that 11 items needed re-evaluation, four of which addressed content issues in local clinical guidelines. A Cronbach's alpha of 0.83 was found for reliability, which is acceptable.

Conclusion: Content and construct validity and reliability were acceptable. The presented template for the development of this MCQ test could be useful to others when developing knowledge tests and may enhance the overall quality of test development.
Acta Anaesthesiologica Scandinavica, 2015, Vol 59, Issue 1, p. 123-133
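The reliability statistic reported in the Results (Cronbach's alpha = 0.83) is a standard measure of internal consistency: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. The following is a minimal sketch of that computation, not the authors' actual analysis; the response matrix is entirely hypothetical 0/1 (incorrect/correct) MCQ data invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an examinees-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinee totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 examinees x 4 dichotomous (0/1) items
responses = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])
print(round(cronbach_alpha(responses), 2))  # → 0.8
```

On real test data the same function would be applied to the full examinees-by-items matrix of scored responses; values around 0.8 or above are conventionally taken as acceptable reliability for this kind of knowledge test.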