Uncertainty estimation is an essential step in evaluating the robustness of deep learning models in computer vision, especially when they are applied in risk-sensitive areas. However, most state-of-the-art deep learning models either fail to provide uncertainty estimates or require significant modification (e.g., formulating a proper Bayesian treatment) to obtain them. Most previous methods cannot take an arbitrary off-the-shelf model and generate uncertainty estimates without retraining or redesigning it. To address this gap, we perform a systematic exploration of training-free uncertainty estimation for dense regression, an unrecognized yet important problem, and provide a theoretical construction justifying such estimates. We propose three simple and scalable methods that analyze the variance of the outputs of a trained network under tolerable perturbations: infer-transformation, infer-noise, and infer-dropout. They operate solely during inference, without the need to re-train, re-design, or fine-tune the model, as typically required by state-of-the-art uncertainty estimation methods. Surprisingly, even without involving such perturbations in training, our methods produce uncertainty estimates that are comparable to, or even better than, those of training-required state-of-the-art methods.
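To make the general recipe concrete, the following is a minimal sketch of inference-time uncertainty estimation for a dense-regression model: run several forward passes of an already-trained network under small ("tolerable") perturbations and take the per-pixel variance of the outputs as the uncertainty map. Here the perturbation is Gaussian noise added to the input; the noise level, number of samples, and injection point are illustrative assumptions, not the exact configuration used by the three proposed methods.

```python
import torch

def inference_time_uncertainty(model, x, n_samples=10, noise_std=0.01):
    """Estimate per-pixel uncertainty for a trained dense-regression model
    without any retraining, redesign, or fine-tuning.

    Multiple stochastic forward passes are run under small input
    perturbations, and the variance across passes serves as the
    uncertainty estimate. The Gaussian input noise used here is an
    illustrative stand-in for the paper's perturbation schemes.
    """
    model.eval()
    outputs = []
    with torch.no_grad():
        for _ in range(n_samples):
            x_perturbed = x + noise_std * torch.randn_like(x)  # tolerable perturbation
            outputs.append(model(x_perturbed))
    outputs = torch.stack(outputs)            # (n_samples, B, C, H, W)
    prediction = outputs.mean(dim=0)          # aggregated prediction
    uncertainty = outputs.var(dim=0)          # per-pixel variance as uncertainty
    return prediction, uncertainty
```

Analogous sketches apply to the other two perturbation types: for infer-transformation, each pass applies an invertible input transformation (e.g., a flip) and inverts it on the output before computing the variance; for infer-dropout, dropout is kept active at inference time so that each pass samples a different sub-network.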