We present Bifocal RNN-T, a new variant of the Recurrent Neural Network Transducer (RNN-T) architecture designed to improve inference-time latency on speech recognition tasks. The architecture enables a dynamic pivot in its runtime compute pathway, using keyword spotting to select which component of the network to execute for a given audio frame. To accomplish this, we leverage a recurrent cell we call the Bifocal LSTM (BFLSTM), which we detail in the paper. The architecture is compatible with other optimization strategies such as quantization, sparsification, and time-reduction layers, making it especially applicable to deployed, real-time speech recognition settings. We present the architecture and report comparative experimental results on voice-assistant speech recognition tasks. Specifically, we show our proposed Bifocal RNN-T can reduce inference cost by 29.1% with matching word error rates and only a minor increase in memory footprint.
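The per-frame pivot described above can be sketched as a single recurrent cell with two compute paths over a shared state: a full-size path and a cheaper path, selected frame by frame by a keyword-spotting signal. The sketch below is illustrative only, assuming a low-rank factorization as the cheap path; the class name, the `use_full` flag, and the factorization scheme are our assumptions, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BifocalLSTMCell:
    """Illustrative sketch of a bifocal LSTM cell (assumed design, not
    the paper's exact BFLSTM): two compute paths share one hidden/cell
    state, and a per-frame flag picks the full-size path or a cheaper
    low-rank path."""

    def __init__(self, input_dim, hidden_dim, rank=8, seed=0):
        rng = np.random.default_rng(seed)
        d = input_dim + hidden_dim
        # Full path: standard LSTM weights for the four gates (i, f, g, o).
        self.W_full = rng.standard_normal((4 * hidden_dim, d)) * 0.1
        # Cheap path: low-rank factorization W ~= U @ V, so the per-frame
        # matmul cost drops from O(4*H*d) to O(rank*(4*H + d)).
        self.U = rng.standard_normal((4 * hidden_dim, rank)) * 0.1
        self.V = rng.standard_normal((rank, d)) * 0.1
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c, use_full):
        """One recurrence step; `use_full` would come from a keyword
        spotter deciding whether this frame needs the expensive path."""
        z = np.concatenate([x, h])
        if use_full:
            gates = self.W_full @ z + self.b
        else:
            gates = self.U @ (self.V @ z) + self.b  # cheaper two-stage matmul
        H = self.hidden_dim
        i = sigmoid(gates[:H])
        f = sigmoid(gates[H:2 * H])
        g = np.tanh(gates[2 * H:3 * H])
        o = sigmoid(gates[3 * H:])
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new
```

Because both paths read and write the same hidden and cell state, the network can switch paths mid-utterance without resetting its recurrent context, which is what makes the per-frame pivot cheap at runtime.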