This paper tackles a new photometric stereo task, named universal photometric stereo. Unlike existing tasks, which assume specific physical lighting models and hence drastically limit their applicability, a solution to this task is expected to work for objects with diverse shapes and materials under arbitrary lighting variations, without assuming any specific model. To solve this extremely challenging task, we present a purely data-driven method that eliminates the prior assumption on lighting by replacing the recovery of physical lighting parameters with the extraction of a generic lighting representation, named global lighting contexts. We use these contexts like lighting parameters in a calibrated photometric stereo network to recover surface normal vectors pixelwise. To adapt our network to a wide variety of shapes, materials, and lightings, it is trained on a new synthetic dataset that simulates the appearance of objects in the wild. We compare our method with state-of-the-art uncalibrated photometric stereo methods on our test data to demonstrate its significance.
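The pipeline summarized above can be caricatured in a few lines: each input image yields a "global lighting context" vector in place of explicit physical lighting parameters, and a pixelwise network consumes each pixel's observations together with those contexts to predict a unit surface normal. The sketch below is a minimal illustration of that data flow only; the shapes, the mean-pooling fusion over images, and the random linear map standing in for the trained network are all assumptions for demonstration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K images of an H x W object, each paired with one learned
# "global lighting context" vector of dimension C (values are random here).
K, H, W, C = 4, 8, 8, 16
obs = rng.standard_normal((K, H, W, 3))   # per-pixel observations (e.g. RGB)
contexts = rng.standard_normal((K, C))    # one context vector per image

# Hypothetical pixelwise regressor: a fixed random linear map stands in
# for the trained calibrated photometric stereo network.
W_fuse = rng.standard_normal((3 + C, 3)) * 0.1

def predict_normals(obs, contexts):
    """Fuse each pixel's (observation, context) pairs across the K images,
    then map the fused feature to a unit-length surface normal."""
    K, H, W, _ = obs.shape
    ctx = np.broadcast_to(contexts[:, None, None, :], (K, H, W, contexts.shape[1]))
    feats = np.concatenate([obs, ctx], axis=-1)   # (K, H, W, 3 + C)
    fused = feats.mean(axis=0)                    # order-invariant over images
    n = fused @ W_fuse                            # (H, W, 3) raw normals
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

normals = predict_normals(obs, contexts)          # (H, W, 3) unit normals
```

Mean pooling is used here only because it makes the prediction invariant to the order of the input images, a property any method handling an unordered set of photographs under varying lighting would want.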