Unsupervised domain adaptation (UDA) methods for learning domain-invariant representations have achieved remarkable progress. However, few studies have addressed the case of a large domain discrepancy between a source and a target domain. In this paper, we propose a UDA method that effectively handles such large domain discrepancies. We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain. From the augmented domains, we train a source-dominant model and a target-dominant model that have complementary characteristics. Using our confidence-based learning methodologies, i.e., bidirectional matching with high-confidence predictions and self-penalization using low-confidence predictions, the models can learn from each other or from their own results. Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain. Extensive experiments demonstrate the superiority of our proposed method on three public benchmarks: Office-31, Office-Home, and VisDA-2017.
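The fixed ratio-based mixup described above can be illustrated with a minimal sketch: source and target samples are linearly interpolated at a fixed ratio, so a ratio near 1 yields source-dominant inputs and a ratio near 0 yields target-dominant inputs. The function and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fixed_ratio_mixup(x_source, y_source, x_target, y_target_pseudo, lam):
    """Mix a source and a target sample at a fixed ratio `lam`.

    lam close to 1 produces a source-dominant input; lam close to 0
    produces a target-dominant input. Target labels are pseudo-labels,
    since UDA assumes no target-domain annotations.
    (Illustrative sketch only; names are not from the paper.)
    """
    x_mix = lam * x_source + (1.0 - lam) * x_target
    y_mix = lam * y_source + (1.0 - lam) * y_target_pseudo  # mixed soft label
    return x_mix, y_mix

# Toy example: 4-dim "images", 2-class one-hot labels.
x_s = np.ones(4)             # source sample
x_t = np.zeros(4)            # target sample
y_s = np.array([1.0, 0.0])   # source ground-truth label
y_t = np.array([0.0, 1.0])   # target pseudo-label

# Two fixed ratios give the two complementary views.
x_sd, y_sd = fixed_ratio_mixup(x_s, y_s, x_t, y_t, lam=0.7)  # source-dominant
x_td, y_td = fixed_ratio_mixup(x_s, y_s, x_t, y_t, lam=0.3)  # target-dominant
```

Training one model on the source-dominant mixups and another on the target-dominant mixups is what gives the two models the complementary characteristics that the confidence-based learning then exploits.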