Abstract
Learning to use (or disregard) alarms was analyzed in an experiment in which participants had to decide whether a simulated system was intact or malfunctioning, based on the reading of a gauge-like display. In two of the three experimental conditions, participants received a warning when the system, which functioned as a signal detector with the same sensitivity as the participant, detected a value beyond a preset response criterion (either ln βw = −1 or ln βw = .5). Results showed that participants' responses were always less cautious than the optimal setting. Clear differences between the warning conditions, as well as between the no-warning condition and a warning condition for which the optimal β was the same, already existed in the first 100-trial block (out of five). These results indicate that the learning process has two components: first a rapid adjustment of the response to the condition, and then a slow, and in some conditions nonexistent, convergence toward the optimum.
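The warning rule described above can be sketched in standard signal-detection terms. This is a minimal illustration, not the experiment's actual implementation: it assumes equal-variance Gaussian distributions (noise ~ N(0, 1), signal ~ N(d′, 1)), under which the log-likelihood ratio of an observation x is d′·x − d′²/2, and a warning is issued when that ratio exceeds the criterion ln β. The sensitivity value and trial distribution are hypothetical.

```python
import random

def warn(x, d_prime, ln_beta):
    """Issue a warning iff the log-likelihood ratio of observation x
    exceeds the criterion ln(beta).
    Equal-variance Gaussian model: noise ~ N(0, 1), signal ~ N(d', 1),
    so ln LR(x) = d' * x - d'**2 / 2."""
    ln_lr = d_prime * x - d_prime ** 2 / 2.0
    return ln_lr > ln_beta

# Hypothetical demonstration: over the same trials, the lenient
# criterion (ln beta = -1) warns more often than the stricter
# one (ln beta = .5), mirroring the two warning conditions.
random.seed(0)
d_prime = 1.0  # assumed detector sensitivity
readings = [random.gauss(0.5, 1.0) for _ in range(1000)]  # mixed trials
lenient_warnings = sum(warn(x, d_prime, -1.0) for x in readings)
strict_warnings = sum(warn(x, d_prime, 0.5) for x in readings)
```

Shifting ln β only moves the cutoff on the gauge reading; the detector's sensitivity d′ is unchanged, which is why the two warning conditions differ in caution rather than in discrimination.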
