Abstract
This study empirically examines parameter-mapping sonifications that represent, with non-speech audio, the error between the desired state and the current state of a controlled system. Eight data-to-sound mappings were prepared, in which the error was mapped onto one or both of two acoustic parameters: intensity and frequency. The mappings were tested in an experiment using a one-dimensional tracking task. The experimental results show the following: 1) for the tracking task, mappings in which the sound intensity falls as the error decreases are preferable; 2) silence when no error remains is effective, because listeners can easily identify from the sound itself that the goal has been reached; and 3) mapping the error redundantly to both intensity and frequency appears more effective than mapping it to either parameter alone, provided that perceptual interaction between changes in loudness and pitch is sufficiently considered.
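The preferred design described above can be sketched as a simple parameter mapping. The function below is a hypothetical illustration (the function name, parameter ranges, and normalization are assumptions, not the paper's actual implementation): amplitude falls to zero as the error vanishes, so silence itself signals goal attainment, and frequency is mapped redundantly so that both cues carry the same information.

```python
def error_to_sound_params(error, max_error=1.0,
                          amp_range=(0.0, 1.0),
                          freq_range=(220.0, 880.0)):
    """Map a tracking error onto sound intensity (amplitude) and frequency.

    Hypothetical mapping following the abstract's preferred design:
    - amplitude decreases with the error and reaches 0 (silence) at zero error;
    - frequency is mapped redundantly over freq_range (values are assumed).
    """
    # Normalize the absolute error to [0, 1].
    e = min(abs(error) / max_error, 1.0)
    # Linear interpolation within each acoustic parameter's range.
    amp = amp_range[0] + e * (amp_range[1] - amp_range[0])
    freq = freq_range[0] + e * (freq_range[1] - freq_range[0])
    return amp, freq

# Zero error yields zero amplitude, i.e. no audible sound.
print(error_to_sound_params(0.0))  # → (0.0, 220.0)
print(error_to_sound_params(1.0))  # → (1.0, 880.0)
```

In a real tracking display these parameters would drive a continuous tone generator, updated each control cycle as the error changes.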
