We focus on word-level visual lipreading, which requires decoding a word from the speaker's video. Recently, many state-of-the-art visual lipreading methods have explored end-to-end trainable deep models, using a 2D convolutional network (e.g., ResNet) as the front-end visual feature extractor and a sequential model (e.g., Bi-LSTM or Bi-GRU) as the back-end. Although a deep 2D convolutional neural network can provide informative image-based features, it ignores the temporal motion between adjacent frames. In this work, we investigate the spatial-temporal modeling capacity of I3D (Inflated 3D ConvNet) for visual lipreading. We demonstrate that, after being pre-trained on a large-scale video action recognition dataset (e.g., Kinetics), our models show a considerable performance improvement on the lipreading task. We also report a comparison across a set of video model architectures and input data representations. Our extensive experiments on LRW show that a two-stream I3D model with RGB video and optical flow as inputs achieves state-of-the-art performance.
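The key mechanism behind I3D is "inflating" the filters of an image-pre-trained 2D ConvNet along the time axis, so the resulting 3D network can be bootstrapped from 2D weights before video pre-training. Below is a minimal sketch of this inflation step, assuming a PyTorch setting; `inflate_conv2d` and `time_dim` are illustrative names, not identifiers from the paper.

```python
# A minimal sketch of the I3D "inflation" idea: 2D filters pre-trained on
# images are tiled along the time axis and rescaled, so that a video made of
# repeated identical frames yields the same activations as the 2D network.
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, time_dim: int = 3) -> nn.Conv3d:
    """Inflate a 2D convolution into a 3D one by tiling its kernel in time."""
    conv3d = nn.Conv3d(
        conv2d.in_channels,
        conv2d.out_channels,
        kernel_size=(time_dim, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(time_dim // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        # (out, in, k, k) -> (out, in, t, k, k); dividing by t preserves the
        # magnitude of responses to temporally constant inputs.
        weight3d = conv2d.weight.unsqueeze(2).repeat(1, 1, time_dim, 1, 1)
        conv3d.weight.copy_(weight3d / time_dim)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```

The division by `time_dim` is what makes the bootstrapping sound: a "boring" video of N copies of the same frame then produces exactly the activations the 2D network would produce on that single frame.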
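For the two-stream variant, a common late-fusion scheme is to run the RGB and optical-flow streams through separate backbones and average their class scores at test time. The sketch below assumes that scheme; `rgb_net`, `flow_net`, and the equal-weight average are illustrative assumptions, not confirmed details of the paper's fusion.

```python
# A minimal sketch of two-stream late fusion: each stream is an independently
# trained video backbone (e.g., an inflated 3D ConvNet), and their per-class
# scores are averaged to produce the final prediction.
import torch
import torch.nn as nn

@torch.no_grad()
def two_stream_logits(rgb_net: nn.Module, flow_net: nn.Module,
                      rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Average per-class scores from the RGB and flow streams.

    rgb:  (batch, 3, T, H, W) video clip
    flow: (batch, 2, T, H, W) stacked horizontal/vertical flow fields
    """
    return (rgb_net(rgb) + flow_net(flow)) / 2
```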