Although large foundation models pre-trained by self-supervised learning have achieved state-of-the-art performance in many tasks including automatic speech recognition (ASR), knowledge distillation (KD) is often required in practice to transfer the knowledge learned by large teacher models into much smaller student models with affordable computation and memory costs. This paper proposes a novel two-stage KD framework to distil the knowledge from multiple speech foundation models, acting as teachers, into a single student neural transducer model for ASR. In the first stage, the student model encoder is pre-trained using the embeddings extracted from the multiple teacher models. In the second stage, the student encoder is fine-tuned on audio-text pairs for the ASR task. Experiments on the LibriSpeech 100-hour subset show that the proposed KD framework improves the performance of both streaming and non-streaming student models when only one teacher is used. The performance of the student model can be further enhanced when multiple teachers are used jointly, achieving word error rate reductions (WERRs) of 17.5% and 10.6%. Our proposed framework can be combined with other existing KD methods to achieve further improvements. Further WERRs were obtained by incorporating extra unlabelled data during encoder pre-training, leading to a total relative WERR of 55.0% on the non-streaming student model.
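To make the first stage concrete, the sketch below shows one way the encoder pre-training step could look in PyTorch: the student encoder output is regressed, through per-teacher projection heads, onto the embeddings produced by several frozen teachers. The class name, the equally weighted MSE loss, the projection heads, and all dimensions are illustrative assumptions rather than the paper's exact configuration; the second stage would then plug this encoder into a neural transducer and fine-tune it with the standard transducer loss on audio-text pairs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTeacherDistiller(nn.Module):
    """Stage-1 distillation sketch (hypothetical setup): regress student
    encoder outputs onto the embeddings of several frozen teacher models."""

    def __init__(self, student_encoder: nn.Module, student_dim: int, teacher_dims):
        super().__init__()
        self.student_encoder = student_encoder
        # One linear head per teacher maps student features into that
        # teacher's embedding space.
        self.heads = nn.ModuleList(nn.Linear(student_dim, d) for d in teacher_dims)

    def forward(self, feats, teacher_embs):
        # feats: (batch, frames, feat_dim)
        # teacher_embs[i]: (batch, frames, teacher_dims[i]), assumed to be
        # time-aligned with the student encoder output.
        out = self.student_encoder(feats)
        # Sum of per-teacher regression losses, weighted equally here.
        return sum(F.mse_loss(h(out), e) for h, e in zip(self.heads, teacher_embs))

# Toy usage with a stand-in encoder and two fake "teachers" (illustrative shapes).
encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))
model = MultiTeacherDistiller(encoder, student_dim=256, teacher_dims=[768, 1024])
feats = torch.randn(4, 200, 80)                        # 4 utterances, 200 frames
teachers = [torch.randn(4, 200, 768), torch.randn(4, 200, 1024)]
loss = model(feats, teachers)
loss.backward()
```

In this sketch the teacher embeddings are assumed to be pre-extracted and frame-aligned with the student encoder output; in practice the teachers' frame rates and layer choices would need to be matched to the student, and the per-teacher losses could be weighted or combined differently.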