Deep neural networks are susceptible to learning biased models with entangled feature representations, which may lead to subpar performance on various downstream tasks. This is particularly true for under-represented classes, where a lack of diversity in the data exacerbates this tendency. The limitation has been addressed mostly in classification tasks, and there is little study of the additional challenges that arise in more complex dense prediction problems such as semantic segmentation. To this end, we propose a model-agnostic and stochastic training scheme for semantic segmentation that facilitates the learning of debiased and disentangled representations. For each class, we first extract class-specific information from the highly entangled feature map. Then, information related to a randomly sampled class is suppressed by a feature-selection process in the feature space. By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes, and the model learns more debiased and disentangled feature representations. Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks, with especially notable performance gains on under-represented classes.
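The abstract describes the random class-suppression step only at a high level. As a rough illustration, below is a minimal PyTorch sketch of that step. The function name `suppress_random_class`, and the use of ground-truth masks as a stand-in for the paper's class-specific extraction and feature-selection process, are illustrative assumptions, not the authors' exact procedure.

```python
import torch

def suppress_random_class(feat, labels, num_classes, ignore_index=255):
    """Zero out feature-map locations belonging to one randomly sampled class.

    feat:   (B, C, H, W) backbone feature map
    labels: (B, H, W) ground-truth segmentation map, resized to match (H, W)

    NOTE: a hypothetical sketch; GT masks stand in for the learned
    class-specific feature extraction described in the abstract.
    """
    # Classes actually present in this batch (excluding ignored pixels).
    present = torch.unique(labels)
    present = present[(present != ignore_index) & (present < num_classes)]
    if present.numel() == 0:
        return feat, None
    # Sample one class uniformly at random for this training iteration.
    drop_cls = present[torch.randint(present.numel(), (1,))].item()
    # Suppress that class: keep features everywhere except its pixels.
    keep = (labels != drop_cls).unsqueeze(1).to(feat.dtype)  # (B, 1, H, W)
    return feat * keep, drop_cls
```

In a training loop, such a step would sit between the backbone and the segmentation head, with a fresh class sampled each iteration; presumably the suppressed class would also be excluded from that iteration's loss so the model is not penalized for the information it was denied.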