It is desirable to transfer the knowledge stored in a well-trained source model to a non-annotated target domain when source data are unavailable. However, state-of-the-art methods for source-free domain adaptation (SFDA) suffer from strict limitations: 1) access to the internal specifications of the source model is required; and 2) pseudo labels must be clean during self-training, rendering critical tasks that rely on semantic segmentation unreliable. To address these pitfalls, this study develops a domain-adaptive solution to semantic segmentation with pseudo-label rectification (namely \textit{PR-SFDA}), which operates in two phases: 1) \textit{Confidence-regularized unsupervised learning}: a maximum squares loss regularizes the target model to ensure confident predictions; and 2) \textit{Noise-aware pseudo-label learning}: negative learning tolerates noisy pseudo labels during training, while positive learning achieves fast convergence. Extensive experiments have been performed on the domain-adaptive semantic segmentation benchmark \textit{GTA5 $\to$ Cityscapes}. Overall, \textit{PR-SFDA} achieves a performance of 49.0 mIoU, which is very close to that of state-of-the-art counterparts. Notably, the latter require access to the source model's internal specifications, whereas \textit{PR-SFDA} needs no such access.
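The two losses named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names and the `(N, C)` softmax-probability layout are assumptions for illustration, and the negative-learning term follows the standard complementary-label formulation ("this sample does *not* belong to class $k$").

```python
import numpy as np

def maximum_squares_loss(probs):
    """Confidence regularization (phase 1).

    probs: (N, C) array of softmax outputs. Minimizing the negative
    mean of squared probabilities pushes predictions toward confident
    (near one-hot) distributions, with milder gradients on easy
    samples than entropy minimization.
    """
    n = probs.shape[0]
    return -np.sum(probs ** 2) / (2.0 * n)

def negative_learning_loss(probs, comp_labels, eps=1e-7):
    """Noise-tolerant pseudo-label learning (phase 2).

    comp_labels: (N,) complementary labels, i.e. classes each sample
    does NOT belong to. Penalizing confidence in the complementary
    class is robust when positive pseudo labels are noisy.
    """
    n = probs.shape[0]
    p_neg = probs[np.arange(n), comp_labels]
    return -np.mean(np.log(1.0 - p_neg + eps))

# Confident predictions yield a lower (more negative) max-squares loss.
uncertain = np.array([[0.5, 0.5]])
confident = np.array([[0.99, 0.01]])
assert maximum_squares_loss(confident) < maximum_squares_loss(uncertain)
```

In the paper's scheme, negative learning would be paired with standard positive (cross-entropy) learning on the same pseudo labels, trading off robustness against convergence speed.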