Abstract
To help determine the role that examination formats play in evaluation, 227 second-year medical students were administered two parallel exams. One required students to generate their own medical problem lists (the generate group). The other required students to select problem lists from a series of alternatives provided in the examination (the select group). It was predicted that the select group would score significantly higher than students generating their own lists. Average overall scores of 42% and 57% correct answers for the generate and the select groups, respectively, indicated that all second-year medical students had difficulty formulating problem lists. Significant quantitative and qualitative differences were noted between the select group, which usually composed properly integrated problems, and the generate group, which constructed partially correct answers consisting of unintegrated cues. The relative utility of generate or select response formats for diagnostic and certifying examinations is discussed further.