Speech utterances recorded under differing conditions exhibit varying degrees of confidence in their embedding estimates, i.e., uncertainty, even when the embeddings are extracted with the same neural network. This paper aims to incorporate the uncertainty estimate produced by the xi-vector network front-end into probabilistic linear discriminant analysis (PLDA) back-end scoring for speaker verification. To achieve this, we derive a posterior covariance matrix, which measures the uncertainty, by propagating the frame-wise precisions to the embedding space. We propose a log-likelihood ratio function for PLDA scoring with uncertainty propagation. We also propose replacing the length-normalization pre-processing technique with a length-scaling technique so that uncertainty propagation can be applied in the back-end. Experimental results on the VoxCeleb-1 and SITW test sets, as well as the domain-mismatched CNCeleb1-E set, show the effectiveness of the proposed techniques, with EER reductions of 14.5%-41.3% and minDCF reductions of 4.6%-25.3%.
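The idea of propagating an utterance-level posterior covariance into PLDA scoring can be illustrated with a small sketch. The code below is not the paper's implementation; it is a minimal example under the standard two-covariance Gaussian PLDA model, assuming `B` (between-speaker) and `W` (within-speaker) covariances, where each embedding additionally carries its own posterior covariance (`C_e`, `C_t`) that is added to the within-speaker term:

```python
import numpy as np
from scipy.stats import multivariate_normal

def plda_llr_with_uncertainty(x_e, x_t, B, W, C_e, C_t):
    """Log-likelihood ratio under a two-covariance PLDA model in which
    each embedding carries its own posterior covariance (C_e, C_t).

    Same-speaker hypothesis: both embeddings share one latent identity,
    giving an off-diagonal between-speaker block B. Different-speaker
    hypothesis: the identities are independent (zero off-diagonal block).
    """
    d = x_e.shape[0]
    x = np.concatenate([x_e, x_t])  # stacked enrollment/test pair
    # Per-embedding total covariance: between + within + posterior uncertainty
    S_same = np.block([[B + W + C_e, B],
                       [B,           B + W + C_t]])
    S_diff = np.block([[B + W + C_e, np.zeros((d, d))],
                       [np.zeros((d, d)), B + W + C_t]])
    return (multivariate_normal.logpdf(x, mean=np.zeros(2 * d), cov=S_same)
            - multivariate_normal.logpdf(x, mean=np.zeros(2 * d), cov=S_diff))
```

One effect of this construction is that a larger posterior covariance (a less reliable embedding) pulls the score toward zero, i.e., the verdict becomes less confident in either direction.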