Achieving robust generalization in speech deepfake detection (SDD) remains a primary challenge, as models often fail to detect unseen forgery methods. While research has focused on model-centric and algorithm-centric solutions, the impact of data composition remains underexplored. This paper proposes a data-centric approach, analyzing the SDD data landscape from two practical perspectives: constructing a single dataset and aggregating multiple datasets. To address the first perspective, we conduct a large-scale empirical study to characterize the data scaling laws for SDD, quantifying the impact of source and generator diversity. To address the second, we propose the Diversity-Optimized Sampling Strategy (DOSS), a principled framework for mixing heterogeneous data with two implementations: DOSS-Select (pruning) and DOSS-Weight (re-weighting). Our experiments show that DOSS-Select outperforms the naive aggregation baseline while using only 3% of the total available data. Furthermore, our final model, trained on a 12k-hour curated data pool with the optimal DOSS-Weight strategy, achieves state-of-the-art performance, outperforming large-scale baselines with greater data and model efficiency on both public benchmarks and a new challenge set built from various commercial APIs.
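The abstract does not specify the DOSS objective, so purely as an illustration, the following is a minimal Python sketch of what "pruning" (DOSS-Select) and "re-weighting" (DOSS-Weight) over a heterogeneous pool could look like, using Shannon entropy over (source, generator) tags as a stand-in diversity measure. The function names, the toy record schema, the entropy proxy, and the greedy selection loop are all assumptions for exposition, not the paper's actual method.

```python
"""Hypothetical sketch of diversity-optimized data mixing.

The DOSS objective is not given in the abstract; the entropy proxy,
greedy selection, and inverse-frequency weights below are illustrative
assumptions, not the published algorithm.
"""
import math
from collections import Counter

def diversity(counts: Counter) -> float:
    """Shannon entropy over (source, generator) tags: a simple diversity proxy."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def doss_select(pool, hours_budget):
    """Greedy pruning: keep the records whose tags raise diversity the most.

    `pool` is a list of (hours, (source, generator)) records -- a made-up schema.
    """
    selected, counts, used = [], Counter(), 0.0
    remaining = list(pool)
    while remaining and used < hours_budget:
        best = max(remaining, key=lambda r: diversity(counts + Counter([r[1]])))
        remaining.remove(best)
        if used + best[0] > hours_budget:
            break
        selected.append(best)
        counts[best[1]] += 1
        used += best[0]
    return selected

def doss_weight(pool):
    """Re-weighting: weight each record inversely to its tag frequency,
    so rare sources/generators contribute more during training."""
    tag_counts = Counter(tag for _, tag in pool)
    return [1.0 / tag_counts[tag] for _, tag in pool]

if __name__ == "__main__":
    # Toy pool: (duration in hours, (source corpus, generator family)).
    pool = [(1.0, ("LJSpeech", "HiFi-GAN")), (1.0, ("LJSpeech", "HiFi-GAN")),
            (1.0, ("VCTK", "Tacotron2")), (1.0, ("AISHELL", "VALL-E")),
            (1.0, ("LibriTTS", "VALL-E"))]
    print(doss_select(pool, hours_budget=3))  # prefers distinct (source, generator) pairs
    print(doss_weight(pool))                  # duplicated LJSpeech/HiFi-GAN rows get weight 0.5
```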