Decoder-Encoder LSTM for Lip Reading
Journal article
Fenghour, S., Chen, D. and Xiao, P. (2019). Decoder-Encoder LSTM for Lip Reading. Proceedings of the 2019 8th International Conference on Software and Information Engineering. https://doi.org/10.1145/3328833.3328845
Authors | Fenghour, S., Chen, D. and Xiao, P. |
---|---|
Abstract | The success of automated lip reading has been constrained by the inability to distinguish between homopheme words, which are words that have different characters but produce the same lip movements (e.g. "time" and "some"), despite being intrinsically different. Different phonemes (units of sound) can often produce exactly the same viseme, the visual equivalent of a phoneme. Through the use of a Long Short-Term Memory network with word embeddings, we can distinguish between homopheme words, i.e. words that produce identical lip movements. The neural network architecture achieved a character accuracy rate of 77.1% and a word accuracy rate of 72.2%. |
Year | 2019 |
Journal | Proceedings of the 2019 8th International Conference on Software and Information Engineering |
Publisher | ACM |
Digital Object Identifier (DOI) | https://doi.org/10.1145/3328833.3328845 |
Publication dates | |
Online | 09 Apr 2019 |
Publication process dates | |
Deposited | 25 Mar 2019 |
Accepted | 23 Mar 2019 |
Accepted author manuscript | File access level: Open |
License | http://www.acm.org/publications/policies/copyright_policy#Background |
Permalink | https://openresearch.lsbu.ac.uk/item/866z6 |
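
The abstract describes a decoder-encoder LSTM that uses word embeddings to tell homopheme words apart. The sketch below is a rough, illustrative PyTorch encoder-decoder LSTM with a word-embedding layer; it is not the authors' implementation, and the class name `Seq2SeqLipReader`, the layer sizes, the vocabulary size and the feature dimension are all assumptions made for the example.

```python
# Illustrative sketch only: a generic encoder-decoder LSTM with word
# embeddings, in the spirit of the architecture the abstract describes.
# All dimensions, the vocabulary size and the training setup are
# assumptions, not the paper's settings.
import torch
import torch.nn as nn

class Seq2SeqLipReader(nn.Module):
    def __init__(self, feat_dim=256, hidden=512, vocab=500, emb=128):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, emb)        # word embeddings
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)          # word logits

    def forward(self, visual_feats, word_tokens):
        # visual_feats: (B, T, feat_dim) frame/viseme features
        # word_tokens:  (B, L) target word indices (teacher forcing)
        _, (h, c) = self.encoder(visual_feats)       # summarise the lip sequence
        dec_in = self.embed(word_tokens)
        dec_out, _ = self.decoder(dec_in, (h, c))    # condition decoder on encoder state
        return self.out(dec_out)                     # (B, L, vocab)

# Usage example with random tensors (shapes only):
model = Seq2SeqLipReader()
feats = torch.randn(2, 75, 256)        # 75 video frames of 256-d features
words = torch.randint(0, 500, (2, 6))  # 6-word target sentence per sample
logits = model(feats, words)           # -> (2, 6, 500)
```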