In this paper, a novel cross-device text-independent speaker verification architecture is proposed. The majority of state-of-the-art deep architectures used for speaker verification rely on Mel-frequency cepstral coefficients. In contrast, our proposed Siamese convolutional neural network architecture operates on Mel-frequency spectrogram coefficients to benefit from the dependency between adjacent spectro-temporal features. Moreover, although spectro-temporal features have proven highly reliable in speaker verification models, they capture only some aspects of the short-term, acoustic-level traits of the speaker's voice. The human voice, however, carries information at several linguistic levels, such as acoustics, lexicon, prosody, and phonetics, all of which can be exploited in speaker verification models. To compensate for these inherent shortcomings of spectro-temporal features, we enhance the proposed Siamese convolutional neural network architecture with a multilayer perceptron network that incorporates prosodic, jitter, and shimmer features. The resulting end-to-end verification architecture performs feature extraction and verification simultaneously, and shows significant improvement over classical signal processing approaches and deep learning algorithms for forensic cross-device speaker verification.
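The Siamese verification principle described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a single shared linear projection with temporal pooling stands in for each convolutional branch, and the function and variable names (`embed`, `verify`, `W`) are hypothetical. The prosodic/jitter/shimmer MLP branch is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared weights for the two "twin" branches. In a Siamese
# network both inputs pass through the SAME parameters, so the learned
# embedding space is common to enrollment and test utterances.
W = rng.standard_normal((64, 16)) * 0.1

def embed(mel_spectrogram: np.ndarray) -> np.ndarray:
    """Map a (frames x 64) Mel-spectrogram patch to a 16-dim embedding.
    A linear projection plus mean pooling stands in for the CNN branch."""
    h = np.tanh(mel_spectrogram @ W)   # frame-wise nonlinear projection
    return h.mean(axis=0)              # temporal average pooling

def verify(mel_a: np.ndarray, mel_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Accept the trial (same speaker) if the cosine similarity of the
    two shared-weight embeddings exceeds the decision threshold."""
    ea, eb = embed(mel_a), embed(mel_b)
    cos = ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb) + 1e-9)
    return bool(cos > threshold)

# Toy trial: the same utterance with a small perturbation should verify.
utt = rng.standard_normal((100, 64))
same_speaker = verify(utt, utt + 0.01 * rng.standard_normal((100, 64)))
```

Because both branches share `W`, training (e.g. with a contrastive loss) pulls embeddings of same-speaker pairs together and pushes different-speaker pairs apart; at test time only the distance and a threshold are needed, which is what makes the architecture end-to-end.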