Abstract
Observers were required to detect a signal that underwent a decrease in probability of occurrence throughout a session. Contrary to what might be expected with the "ideal observer" hypothesis of signal detection theory, changes in the proportions of hits and false alarms, as well as in their associated response times and confidence ratings, indicated that observers reacted to the decrease in signal probability by adopting a less stringent criterion for making signal responses. The empirical pattern of changes in response proportions is compared with that predicted by a response stabilization process. An adaptation-level hypothesis is then proposed, in which the cutoff adopted by the observer is equal to the cumulatively based mean of all the sensory intensities experienced by the observer up to that trial. Such a mechanism for criterion control offers a simple but very general explanation for criterion changes in signal detection.
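A minimal formal sketch of the proposed adaptation-level mechanism (the trial index n and the notation x_i for the sensory magnitude on trial i are introduced here for illustration, not taken from the article): the criterion in force after n trials would simply be the running mean of all sensory magnitudes experienced so far,

c_n = \frac{1}{n} \sum_{i=1}^{n} x_i.

Under such a rule, a decrease in signal probability lowers the average intensity experienced across trials, which gradually lowers the cutoff and would produce the less stringent criterion reported above.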
