An Effective Conversion of Visemes to Words for High-Performance Automatic Lipreading.
Journal article
Fenghour, S., Chen, D., Guo, K., Li, B. and Xiao, P. (2021). An Effective Conversion of Visemes to Words for High-Performance Automatic Lipreading. Sensors. 21 (23). https://doi.org/10.3390/s21237890
Authors | Fenghour, S., Chen, D., Guo, K., Li, B. and Xiao, P. |
---|---|
Abstract | As an alternative approach, viseme-based lipreading systems have demonstrated promising performance in decoding videos of people uttering entire sentences. However, the overall performance of such systems is significantly affected by the efficiency of the viseme-to-word conversion step of the lipreading process. As shown in the literature, this step has become a bottleneck: performance can drop dramatically from a high viseme classification accuracy (e.g., over 90%) to a comparatively low word classification accuracy (e.g., just over 60%). The underlying cause is that roughly half of the words in the English language are homophemes, i.e., words such as "time" and "some" that map to the same set of visemes. In this paper, aiming to tackle this issue, a deep learning network model with an Attention-based Gated Recurrent Unit is proposed for efficient viseme-to-word conversion and compared against three other approaches. The proposed approach features strong robustness, high efficiency, and short execution time. It has been verified with analysis and practical experiments predicting sentences from the benchmark LRS2 and LRS3 datasets. The main contributions of the paper are as follows: (1) a model is developed that is effective in converting visemes to words and discriminating between homopheme words, and that is robust to incorrectly classified visemes; (2) the proposed model uses few parameters and therefore requires little overhead and time to train and execute; and (3) improved performance is achieved in predicting spoken sentences from the LRS2 dataset, with an attained word accuracy rate of 79.6%, an improvement of 15.0% over state-of-the-art approaches. |
Keywords | Gated Recurrent Unit; recurrent neural networks; visemes; Humans; Language; Lipreading; deep learning; neural networks; lip reading; robustness; augmentation; speech recognition |
Year | 2021 |
Journal | Sensors |
Journal citation | 21 (23) |
Publisher | MDPI |
ISSN | 1424-8220 |
Digital Object Identifier (DOI) | https://doi.org/10.3390/s21237890 |
Web address (URL) | https://www.mdpi.com/1424-8220/21/23/7890 |
Publication dates | |
Online | 26 Nov 2021 |
Publication process dates | |
Deposited | 22 Nov 2021 |
Accepted | 20 Nov 2021 |
Publisher's version | File access level: Open |
Accepted author manuscript | File access level: Controlled |
Permalink | https://openresearch.lsbu.ac.uk/item/8yvq5 |
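
Illustrative note: the viseme-to-word conversion described in the abstract can be sketched as a small sequence-to-sequence model. The following is a minimal PyTorch sketch, not the authors' published architecture; the vocabulary sizes, embedding and hidden dimensions, the additive attention scoring, and the teacher-forced decoding loop are all assumptions chosen for illustration.

```python
# Minimal sketch (assumed, not the authors' code): an attention-based GRU
# decoder that maps a sequence of viseme IDs to a sequence of word IDs.
import torch
import torch.nn as nn

class VisemeToWordDecoder(nn.Module):
    def __init__(self, n_visemes=20, n_words=1000, emb=64, hidden=128):
        super().__init__()
        self.encoder_emb = nn.Embedding(n_visemes, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.decoder_emb = nn.Embedding(n_words, emb)
        self.decoder = nn.GRU(emb + 2 * hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden + 2 * hidden, 1)  # additive-style score
        self.out = nn.Linear(hidden, n_words)

    def forward(self, visemes, words_in):
        # visemes: (B, Tv) viseme IDs; words_in: (B, Tw) teacher-forced word IDs
        enc, _ = self.encoder(self.encoder_emb(visemes))  # (B, Tv, 2H)
        B, Tw = words_in.shape
        h = torch.zeros(1, B, self.decoder.hidden_size, device=visemes.device)
        logits = []
        for t in range(Tw):
            # Attend over all encoder states, conditioned on the decoder state,
            # so surrounding viseme context informs each word prediction.
            q = h[-1].unsqueeze(1).expand(-1, enc.size(1), -1)      # (B, Tv, H)
            score = self.attn(torch.cat([q, enc], dim=-1)).squeeze(-1)
            ctx = (torch.softmax(score, dim=1).unsqueeze(-1) * enc).sum(1)
            step = torch.cat([self.decoder_emb(words_in[:, t]), ctx], dim=-1)
            out, h = self.decoder(step.unsqueeze(1), h)
            logits.append(self.out(out.squeeze(1)))
        return torch.stack(logits, dim=1)  # (B, Tw, n_words)

# Toy usage: 2 viseme sequences of length 12, target word sequences of length 5.
model = VisemeToWordDecoder()
v = torch.randint(0, 20, (2, 12))
w = torch.randint(0, 1000, (2, 5))
print(model(v, w).shape)  # torch.Size([2, 5, 1000])
```

Attending over the full encoder output at each decoding step lets the decoder weigh the surrounding viseme context, which is the kind of information needed to discriminate homophemes such as "time" and "some" that share a single viseme sequence.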