Abstract
This paper focuses on the development of an automatic sound classifier for digital hearing aids that aims to enhance listening comprehension when the user moves from one sound environment to a different one. The approach divides the classifying algorithm into two layers composed of two-class algorithms that work more efficiently: the input signal, discriminated by the first layer into either speech or non-speech, is subsequently classified more specifically depending on whether the user is in a conversation (either in quiet or in the presence of background noise) or in a noisy environment in the absence of speech. The system thus distinguishes four classes, labeled speech in quiet, speech in noise, stationary noisy environments (for instance, an aircraft cabin), and non-stationary noisy environments. The combination of classifiers found to be most successful in terms of probability of correct classification uses Multilayer Perceptrons for those classification tasks in which speech is involved, and a Fisher Linear Discriminant for distinguishing stationary noisy environments from non-stationary ones. The system's performance has been found to be higher than that of other, more classical approaches, and even superior to that of our preliminary work.
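The two-layer decision structure described above can be sketched as follows. This is only an illustration of the routing logic, assuming each stage is a trained binary classifier; the stub classifiers and the two-element feature vector below are hypothetical placeholders, not the Multilayer Perceptrons or Fisher Linear Discriminant trained in the paper.

```python
# Sketch of a two-layer sound classifier for the four classes in the
# abstract. Each stage argument is a binary classifier (a callable
# returning True/False); in the paper the speech-related stages are
# Multilayer Perceptrons and the noise stage is a Fisher Linear
# Discriminant. The stubs here are purely illustrative.

def classify(features, is_speech, speech_in_quiet, noise_is_stationary):
    """Route a feature vector through the two-layer decision tree."""
    # Layer 1: speech vs. non-speech
    if is_speech(features):
        # Layer 2a: conversation in quiet vs. with background noise
        if speech_in_quiet(features):
            return "speech in quiet"
        return "speech in noise"
    # Layer 2b: stationary vs. non-stationary noisy environment
    if noise_is_stationary(features):
        return "stationary noise"
    return "non-stationary noise"

# Hypothetical stub classifiers over a toy 2-element feature vector
# [speech_likelihood, stationarity]; real systems would use features
# extracted from the audio signal.
is_speech = lambda f: f[0] > 0.5
speech_in_quiet = lambda f: f[1] > 0.5
noise_is_stationary = lambda f: f[1] > 0.5

print(classify([0.9, 0.8], is_speech, speech_in_quiet, noise_is_stationary))
# -> speech in quiet
print(classify([0.2, 0.9], is_speech, speech_in_quiet, noise_is_stationary))
# -> stationary noise
```

Splitting the four-class problem into binary stages lets each stage use the classifier best suited to its task, which is the design choice the abstract reports.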
