Abstract
Frequency-following responses (FFRs) are neural signals that reflect the brain’s encoding of acoustic characteristics, such as speech intonation. While traditional machine learning models have been used to classify FFRs elicited under various conditions, the potential of deep learning models in FFR research remains underexplored. This study investigated the efficacy of a three-layer artificial neural network (ANN) in detecting the presence or absence of FFRs elicited by a rising intonation of the English vowel /i/. The ANN was trained and tested on FFR recordings, using F0 estimates derived from the spectral domain as input data. Model performance was evaluated by systematically varying the number of inputs, the number of hidden neurons, and the number of sweeps included in the recordings; all three factors significantly influenced prediction accuracy. Optimal configurations included 6–8 inputs and 4–6 hidden neurons, achieving a prediction accuracy of approximately 84% when the signal-to-noise ratio was enhanced by including 100 or more sweeps. These results provide a foundation for future applications in auditory processing assessments and clinical diagnostics.
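The architecture described above (a three-layer feedforward network mapping a small vector of F0 estimates to a binary present/absent decision) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, network dimensions (6 inputs, 4 hidden neurons, matching one configuration from the study), learning rate, and epoch count are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stand-in for spectral-domain F0 estimates:
# "FFR present" trials are simulated with higher feature values than
# "FFR absent" (noise-only) trials. Real inputs would come from recordings.
n = 200
X_pos = rng.normal(1.0, 0.5, size=(n // 2, 6))
X_neg = rng.normal(-1.0, 0.5, size=(n // 2, 6))
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)]).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three-layer network: 6 inputs -> 4 hidden neurons -> 1 output
W1 = rng.normal(0.0, 0.5, size=(6, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, size=(4, 1))
b2 = np.zeros(1)

lr = 0.5
for epoch in range(500):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of "FFR present"
    # Backward pass (gradient of squared error through the sigmoids)
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / n
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_hid) / n
    b1 -= lr * d_hid.mean(axis=0)

# Threshold the output to get a binary present/absent prediction
preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
accuracy = (preds == y.astype(int)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the synthetic classes here are trivially separable, the sketch reaches high accuracy quickly; on real FFR recordings, accuracy would depend on the signal-to-noise ratio (hence the study's finding that 100+ sweeps were needed to reach ~84%).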
