Abstract
A learning-by-doing training technique was compared with full-information training in both a generic laboratory task and an applied setting. The generic task required trainees to search for targets in a dynamic numerical array, whose cells were defined by rate of change, probability of a target, and cost of errors. Learning-by-doing trainees had to adopt attention-allocation strategies using only running point-total feedback, whereas informed subjects were given the cell parameter values. During training, informed subjects were superior until the sixth training day. At transfer to a new, unknown parameter set, learning-by-doing subjects showed a clear superiority until the third day of transfer; they had apparently learned to infer system dynamics from system feedback. This finding was not replicated in the context of a flight simulator. The failure to validate the laboratory result in the more applied setting may be due to additional dimensions and skills required by the flying task that are not present in the laboratory task.