Automatic User Preferences Selection of Smart Hearing Aid Using BioAid
Journal article
Siddiqui, H., Saleem, A., Raza, M., Zafar, K., Russo, R. and Dudley-McEvoy, S. (2022). Automatic User Preferences Selection of Smart Hearing Aid Using BioAid. Sensors. 22 (20), p. 8031. https://doi.org/10.3390/s22208031
Authors | Siddiqui, H., Saleem, A., Raza, M., Zafar, K., Russo, R. and Dudley-McEvoy, S. |
---|---|
Abstract | Noisy environments, variations in speech volume, and non-face-to-face conversations impair the user experience with hearing aids. Generally, a hearing aid amplifies sounds so that a hearing-impaired person can listen, converse, and actively engage in daily activities. Sophisticated hearing aid algorithms are now available that operate on numerous frequency bands, not only amplifying sound but also providing tuning and noise filtering to minimize background distractions. One of these is the BioAid assistive hearing system, an open-source, freely downloadable app with twenty-four tuning settings. Critically, with this device, a person with hearing loss must manually alter the settings of the hearing device whenever the surroundings change in order to attain a comfortable level of hearing. This manual switching among multiple tuning settings is inconvenient and cumbersome, since the user is forced to switch to the setting that best matches the scene every time the auditory environment changes. The goal of this study is to eliminate this manual switching and automate BioAid with a scene classification algorithm, so that the system automatically applies the user-selected preferences after adequate training. The aim of acoustic scene classification is to recognize the audio signature of one of the predefined scene classes that best represents the environment in which a recording was made. BioAid, an open-source biologically inspired hearing aid algorithm, is used after conversion to Python. The proposed method consists of two main parts: classification of auditory scenes and selection of hearing aid tuning settings based on user experience. The DCASE2017 dataset is used for scene classification; among the many classifiers trained and tested, random forest achieved the highest accuracy of 99.7%. In the second part, clean speech audio from the LJ Speech dataset is convolved with the scenes, and the user is asked to listen to the resulting audio and adjust the presets and subsets. A CSV file stores the preset and subset selections at which the user hears clearly for each scene, and various classifiers are trained on this dataset of user preferences. After training, clean speech audio convolved with a scene is fed to the scene classifier, which predicts the scene; the predicted scene is then fed to the preset classifier, which predicts the user's choice of preset and subset, and BioAid is automatically tuned to that selection. Random forest predicted presets and subsets with 100% accuracy. This approach has great potential to eliminate the tedious manual switching of hearing assistive device parameters, allowing hearing-impaired individuals to participate actively in daily life while hearing aid settings are adjusted automatically based on the acoustic scene. |
Year | 2022 |
Journal | Sensors |
Journal citation | 22 (20), p. 8031 |
Publisher | MDPI |
ISSN | 1424-8220 |
Digital Object Identifier (DOI) | https://doi.org/10.3390/s22208031 |
Web address (URL) | https://www.mdpi.com/1424-8220/22/20/8031 |
Publication dates | Online: 20 Oct 2022 |
Publication process dates | Accepted: 19 Oct 2022; Deposited: 27 Oct 2022 |
Publisher's version | File access level: open |
Repository record | https://openresearch.lsbu.ac.uk/item/92552 |
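The abstract describes a two-stage pipeline: a scene classifier maps audio features to an acoustic scene, and a second classifier maps the predicted scene to the user's stored preset/subset choice, to which BioAid is then tuned. The following is a minimal sketch of that idea, not the authors' implementation: the scene names, preference table, and per-clip feature vectors are synthetic stand-ins (the paper uses DCASE2017 recordings, LJ Speech convolutions, and a CSV of user selections).

```python
# Hypothetical sketch of the two-stage pipeline from the abstract.
# Stage 1: audio features -> acoustic scene.  Stage 2: scene -> the
# user's preferred BioAid preset/subset.  All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

scenes = ["bus", "cafe", "park"]                 # stand-ins for DCASE2017 classes
presets = {"bus": (3, 1), "cafe": (5, 2), "park": (1, 0)}  # invented user choices

# Stage 1: scene classifier on synthetic 13-dim per-clip feature vectors,
# one Gaussian cluster per scene so the classes are separable.
X = np.vstack([rng.normal(loc=i, size=(30, 13)) for i in range(len(scenes))])
y = np.repeat(scenes, 30)
scene_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Stage 2: preset classifier trained on the scene -> preference table
# (in the paper this table comes from a CSV of user selections).
scene_ids = np.arange(len(scenes)).reshape(-1, 1)
preset_labels = [f"{p}-{s}" for p, s in (presets[sc] for sc in scenes)]
preset_clf = RandomForestClassifier(
    n_estimators=50, bootstrap=False, random_state=0
).fit(scene_ids, preset_labels)

def auto_tune(features):
    """Predict the scene, then the preset/subset BioAid should switch to."""
    scene = scene_clf.predict(features.reshape(1, -1))[0]
    preset = preset_clf.predict([[scenes.index(scene)]])[0]
    return scene, preset

scene, preset = auto_tune(rng.normal(loc=1, size=13))
```

In this sketch the preset stage is a trivial lookup learned by a classifier; the paper's version is trained on many user-labelled scene/preset pairs, which is what lets it generalize across listening sessions.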