We propose DiffSep, a new single-channel source separation method based on score matching of a stochastic differential equation (SDE). We craft a tailored continuous-time diffusion-mixing process that starts from the separated sources and converges to a Gaussian distribution centered on their mixture. This formulation lets us apply the machinery of score-based generative modelling. First, we train a neural network to approximate the score function of the marginal probabilities of the diffusion-mixing process. Then, we use it to solve the reverse-time SDE that progressively separates the sources starting from their mixture. We propose a modified training strategy to handle model mismatch and source permutation ambiguity. Experiments on the WSJ0-2mix dataset demonstrate the potential of the method. Furthermore, the method is also suitable for speech enhancement and shows performance competitive with prior work on the VoiceBank-DEMAND dataset.
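For context, the generic score-based SDE machinery the abstract invokes can be sketched as follows; the specific choice of drift toward the mixture is only indicated schematically here and is not the paper's exact parameterization. The forward diffusion-mixing process is an Itô SDE
\[
\mathrm{d}\mathbf{x}_t = f(\mathbf{x}_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}_t,
\]
where the drift \(f\) is chosen so that the mean of \(\mathbf{x}_t\) moves from the separated sources toward their mixture, and \(g(t)\) controls the injected Gaussian noise. The corresponding reverse-time SDE, following standard score-based generative modelling, is
\[
\mathrm{d}\mathbf{x}_t = \left[ f(\mathbf{x}_t, t) - g(t)^2 \nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t) \right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}_t,
\]
where the score \(\nabla_{\mathbf{x}_t} \log p_t\) is approximated by the trained neural network; integrating this SDE backward from a sample centered on the mixture progressively recovers the separated sources.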