Abstract
An important application of cognitive architectures is to provide human performance models that capture psychological mechanisms in a form that can be “programmed” to predict task performance of human-machine system designs. Earlier models accounted for some key aspects of performance in a two-talker task, but spatial separation of the speech sources produces complex effects not yet represented. Adding some first-principle mechanisms to the earlier models suggests that this fundamental aspect of multi-talker speech perception can be accounted for as well.