This resource provides information on a variety of assessment methods that are frequently used in health professional education [1].
Purposes of assessment
The main purpose of assessment is to obtain formative or summative information on students’ progress. Other purposes are to obtain diagnostic information for placement, to monitor throughput rates, to inform external stakeholders and to support certification.
Formative assessments are used to provide students with feedback on their progress and to inform lecturers whether students have mastered the course material. Formative assessments are typically not awarded marks; rather, global ratings of mastery, or oral or written feedback, are provided. Formative assessments are typically situated within a course, not at its end. Formative assessment should not be conflated with continuous assessment, where the marks allocated count towards students’ year marks.
Summative assessments are used to determine students’ progress by the end of a course. Pass/fail decisions are based on marks obtained in summative assessments.
Principles of assessment
- Validity and Reliability
- Fairness
- Feasibility
- Educational Effect
Norm- and criterion-referenced assessment methods [2,5]
In assessment theory, a distinction is made between norm-referenced and criterion-referenced assessments. The difference between the two theoretical perspectives is important: they yield different types of information about students’ progress, they serve different purposes, and they are marked or scored differently. In health professional education, criterion-referenced assessments are frequently used in clinical or work-based settings where the assessment focus is on students’ performance. Norm-referenced assessments are typically used in classroom settings where factual or theoretical knowledge is assessed.
Norm-referenced assessment methods are more traditional and are based on psychometric theory; a student’s result is interpreted relative to the performance of the rest of the cohort. In criterion-referenced assessments, student performance is judged against assessment criteria linked to outcomes or competencies. Because the pre-specified assessment criteria are explicit and given to students up front, criterion-referenced methods are more transparent.
Norm-referenced and criterion-referenced assessments are not mutually exclusive. Both approaches are used in assessment practice and both contribute to the decisions regarding students’ levels of competence.
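To make the scoring difference concrete, here is a minimal sketch in Python; the cohort scores, the student’s score and the 60% cut score are invented for illustration and are not drawn from this resource.

```python
from statistics import mean, pstdev

# Hypothetical raw test scores (percentages) for a small cohort.
cohort_scores = [48, 55, 61, 64, 66, 70, 72, 78, 83, 90]
student_score = 66

# Norm-referenced interpretation: the student's position relative to the cohort,
# e.g. a z-score (standard deviations above or below the cohort mean).
z_score = (student_score - mean(cohort_scores)) / pstdev(cohort_scores)
percentile_rank = sum(s <= student_score for s in cohort_scores) / len(cohort_scores)

# Criterion-referenced interpretation: comparison against a pre-specified
# standard (an assumed 60% cut score), regardless of how peers performed.
CUT_SCORE = 60
decision = "competent" if student_score >= CUT_SCORE else "not yet competent"

print(f"Norm-referenced: z = {z_score:.2f}, scored at or above {percentile_rank:.0%} of the cohort")
print(f"Criterion-referenced (cut score {CUT_SCORE}%): {decision}")
```

The same raw score is thus reported as a relative standing under the norm-referenced view and as a judgement against a fixed standard under the criterion-referenced view.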
References
- Walubo, A., Burch, V., Parmar, P., Raidoo, D., Cassimjee, M., Onia, R. & Ofei, F. (2003). A model for selecting assessment methods for evaluating medical students in African medical schools. Academic Medicine, 78(9), 899-906.
- Luckett, K. & Sutherland, L. (2000). Assessment practices that improve teaching and learning. In S. Makoni (Ed.), Improving Teaching and Learning in Higher Education: A Handbook for Southern Africa. Johannesburg: University of Witwatersrand Press.
- Office of Health Sciences Education, Queen’s University. (2008). Improve learning through formative assessment. The Teaching Doctor: Illuminating Evidence-based Ideas for Effective Teaching.
- Shumway, J.M. & Harden, R.M. (2003). AMEE Guide No. 25: The assessment of learning outcomes for the competent and reflective practitioner. Medical Teacher, 25(6), 569-584.
Validity and Reliability
In medical education, concerns about the validity and reliability of the assessment method are always important. A valid assessment measures what it intends, or was designed, to measure. Validity refers to the degree to which the evidence supports that the interpretations made from test results are correct.
Reliability refers to the consistency of an assessment and the extent to which its results can be generalized to other performances. Reliability implies that the same results will be obtained when the assessment is administered to a student under the same or similar conditions.
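As a rough illustration of how consistency can be quantified, the sketch below computes Cronbach’s alpha, a common internal-consistency estimate, from hypothetical item-level marks; the data, the 0/1 scoring and the use of population variances are assumptions made for this example, not part of this resource.

```python
from statistics import pvariance

# Hypothetical item-level marks: rows are students, columns are test items
# (1 = correct, 0 = incorrect).
marks = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
]

k = len(marks[0])                                    # number of items
item_variances = [pvariance(item) for item in zip(*marks)]
total_scores = [sum(student) for student in marks]   # each student's total mark

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
alpha = (k / (k - 1)) * (1 - sum(item_variances) / pvariance(total_scores))

print(f"Cronbach's alpha = {alpha:.2f}")  # values closer to 1 indicate more consistent results
```

This is only one of several reliability estimates; the references below discuss reliability for performance-based assessments in more detail.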
References
- Downing, S.M. (2003). Validity: on the meaningful interpretation of assessment data. Medical Education, 37(9), 830–837.
- Norcini, J.J. & McKinley, D.W. (2007). Assessment methods in medical education. Teaching and Teacher Education, 23, 239–250.
- Van der Vleuten, C.P.M. & Schuwirth, L.W.T. (2005). Assessing professional competence: from methods to programmes. Medical Education, 39(3), 309–317.
Standard Setting
Standard setting is concerned with setting a cut-off score that separates competent from not-yet-competent students on a test or performance-based assessment. It is a process in which experts in the course or subject agree on what constitutes competence, taking into account the content taught and the year of study [1].
There are various standard-setting approaches for both written and performance-based assessments; these support a fair process for agreeing on what counts as good enough to be considered competent [3].
The references below discuss the different approaches used in standard setting and can help the lecturer make an informed choice of the method that best supports the selected assessment method [2].
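As one concrete example, in the widely used Angoff procedure (discussed in the references below), each judge estimates, for every item, the probability that a borderline candidate would answer it correctly, and the cut score is obtained by averaging and summing these judgements. The sketch below is a minimal illustration with invented ratings.

```python
# Hypothetical Angoff-style ratings: for each test item, each of three judges
# estimates the probability that a borderline (just-competent) candidate
# would answer the item correctly.
ratings_per_item = [
    [0.6, 0.7, 0.6],   # item 1
    [0.4, 0.5, 0.5],   # item 2
    [0.8, 0.7, 0.9],   # item 3
    [0.5, 0.6, 0.5],   # item 4
]

# Average the judges' ratings for each item, then sum across items to get the
# expected score of a borderline candidate, which serves as the cut score.
item_means = [sum(ratings) / len(ratings) for ratings in ratings_per_item]
cut_score = sum(item_means)

print(f"Cut score: {cut_score:.2f} out of {len(ratings_per_item)} marks "
      f"({cut_score / len(ratings_per_item):.0%})")
```

Other approaches, such as borderline-group or contrasting-groups methods, derive the cut score from actual candidate performance rather than item judgements; the references below compare these options.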
References
- Friedman Ben-David, M. (2000). AMEE Guide No. 18: Standard setting in student assessment. Medical Teacher, 22(2), 120-130.
- Norcini, J. (2003). Setting standards on educational tests. Medical Education, 37, 464–469.
- Southgate, L., Hays, R.B., Norcini, J., Mulholland, H., Ayers, B., Woolliscroft, H., Cusimano, M., McAvoy, P., Ainsworth, M., Haist, S. & Campbell, M. (2001). Setting performance standards for medical practice: a theoretical framework. Medical Education, 35, 474-481.
Frequently used assessment methods
Norm-referenced assessment methods
- MCQs and Extended Matching Items
- Computer-based assessment
- Modified essay questions
- Short answer questions
- Progress test
Criterion-referenced assessment methods
- Mini-CEX
- OSCE
- Case Presentations
- Oral exams
- Portfolio exams
- Group assessment
- Peer assessment
- Self-assessments
References
1. Case, S.M. & Swanson, D.B. (2002). Constructing Written Test Questions for the Basic and Clinical Sciences (3rd revision). National Board of Medical Examiners (NBME).
2. Gaytan, J. & McEwen, B. (2003). Effective online instructional and assessment strategies. American Journal of Distance Education, 21(3), 117–132.
3. Shumway, J.M. & Harden, R.M. (2003). AMEE Guide No. 25: The assessment of learning outcomes for the competent and reflective practitioner. Medical Teacher, 25(6), 569-584.
4. Norcini, J. & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29, 855-871.
5. Newble, D. (2004). Techniques for measuring clinical competence: objective structured clinical examinations. Medical Education, 38, 199-203. (Blueprint)
6. Burch, V. & Seggie, J. (2005). Portfolio assessment using a structured interview. Medical Education, 39, 114.
7. Victoria University Teaching Development Centre. (2004). Improving Teaching and Learning: Group Work and Group Assessment. Victoria University of Wellington, New Zealand.
8. Finn, G.B. & Garner, J. (2011). Twelve tips for implementing a successful peer assessment. Medical Teacher, 33, 443–446.
9. Colthart, I., Bagnall, G., Evans, A., Allbutt, H., Haig, A., Illing, J. & McKinstry, B. (2008). The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME Guide No. 10. Medical Teacher, 30, 124–145.
Resources
- Assessment of Group Work: Summary of a Literature Review
This resource reports a summary of literature describing the assessment of group work in higher education. Five peer-reviewed articles were selected; the inclusion criterion was that the article provided a description of group work assessment. In addition, an unpublished resource on group assessment was consulted and the results of a meta-analysis comparing teacher and peer evaluation were included. Abstracts of the articles reviewed are provided in Appendix 1, to guide further reading.
- Multiple Choice Questions - An Introduction
- Guide to designing and writing learning outcomes for health professional education
This document provides guidelines for designing learning outcomes. For those who wish to engage in designing outcomes, the first page provides a brief overview, followed by step-by-step guidelines and examples. The debate on the pros and cons of adopting an outcomes-based approach continues (for example, see Bleakley et al., 2011).
- Vula: Intro to item analysis
- Vula administration guide
- Blueprinting: Curriculum alignment tool: A resource for teachers. Ige & Pienaar, 2020
“A blueprint specifies all the elements of performance relevant to the assessment so that appropriate samples of activity and corresponding methods can be selected according to their relative importance to the overall assessment process. In blueprinting, the essential elements of the assessment are arranged on a multidimensional grid” (Crossley et al., 2002). The purpose of tabulating teaching and learning activities, learning outcomes and assessment is to ensure that the quality of the curriculum is upheld (Biggs, 1999). (A hypothetical sketch of such a grid is given after this resource list.)
- Dept of Health Sciences Education, article: Assessment during the time of emergency remote teaching.
Many institutions that use face-to-face means of delivering their educational program have well-rehearsed educational processes, with a teaching and assessment plan guided by the core learning requirements. The learning material/content would be refined from prior years and, in some ways, the assessment results would have a sense of certainty.
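The sketch below illustrates the idea of a blueprint as a multidimensional grid; the content areas, assessment methods and weightings are invented for illustration and are not taken from any of the resources listed above.

```python
# Hypothetical blueprint grid: rows are content areas, columns are assessment
# methods, and each cell holds the share of the overall assessment allocated
# to that combination.
blueprint = {
    "Cardiovascular system":  {"MCQ": 0.15, "OSCE": 0.10, "Mini-CEX": 0.05},
    "Respiratory system":     {"MCQ": 0.10, "OSCE": 0.10, "Mini-CEX": 0.05},
    "Clinical communication": {"MCQ": 0.05, "OSCE": 0.25, "Mini-CEX": 0.15},
}

methods = ["MCQ", "OSCE", "Mini-CEX"]
print(f"{'Content area':<25}" + "".join(f"{m:>10}" for m in methods))
for area, weights in blueprint.items():
    print(f"{area:<25}" + "".join(f"{weights[m]:>10.0%}" for m in methods))

# Column totals show how the overall assessment weight is distributed per method.
totals = {m: sum(row[m] for row in blueprint.values()) for m in methods}
print(f"{'Total':<25}" + "".join(f"{totals[m]:>10.0%}" for m in methods))
```

Row and column totals make it easy to check that the weight given to each content area and each assessment method matches its intended importance in the overall assessment.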