Abstract
Reliability and validity of the scorable problem list is demonstrated using responses from 95 medical students, residents, interns, and faculty. This technique determines errors made in synthesizing the abnormalities in a data base into a problem list. These errors are: failure to identify and fully specify complex problems, misuse of cues, and failure to identify single elements which stand alone as management problems. The scorable problem list has remarkable interrater reliability and the capacity to discriminate performance by level of training (allowing the generation of criteria for a group). In addition, some characteristics of the growth of decision-making skill are suggested, including an increasing ability to recognize patterns (diagnostic entities), to be precise in the use of qualifying terminology, to properly place cues into working hypotheses, and to recognize single issues which, themselves, are management problems. In contrast to traditional "patient-management problems," the task is not to arrive at a specific single diagnosis and treatment, but to sift the abnormalities into a working problem list and preserve ambiguity where it exists in an open system with no cueing.