Abstract
This experiment explored how expert credit administrators and trained, but not yet expert, credit administrators (i.e., trained nonexperts, TNEs) differ in their ability to generate and verify inferences. Subjects read a case study describing a bank that appeared to be growing rapidly but, because of quality control problems, was heading for major difficulty. Subsequently, they performed a retelling task, a recognition task, and a problem identification/justification task. For each task, dependent measures were derived to detect the ability to retrieve explicit facts about the case and the ability to infer and reason about the case. Results showed that experts and TNEs did not differ in their ability to encode or retrieve the facts given explicitly in the case. However, for each task, marked differences appeared in inferencing and reasoning ability. Taken together, the data argue that the ability to reason within a complex domain is not simply the result of acquiring the requisite declarative knowledge. Rather, reasoning ability depends on problem-solving practice and real-world experience in the domain. Implications for instruction are discussed.