Photorealistic style transfer aims to transfer the style of one image onto another while preserving the original structure and detail of the content image, so that the result still looks like a real photograph after stylization. Although several photorealistic stylization methods have been proposed, they tend to lose details of the content image and to introduce irregular structural distortions. In this paper, we use a high-resolution network as the image generation network. Unlike other methods, which first reduce the resolution and then restore it, our generation network maintains a high-resolution representation throughout the process. By connecting high-resolution subnetworks to low-resolution subnetworks in parallel and performing repeated multi-scale fusion, the high-resolution subnetworks continuously receive information from the low-resolution ones. As a result, our network discards less of the information contained in the image, so the generated images have finer structure and less distortion, which is crucial to visual quality. We conducted extensive experiments and compared the results with existing methods. The experimental results show that our model is effective and produces better results than existing methods for photorealistic image stylization. Our source code, implemented in the PyTorch framework, will be publicly available at https://github.com/limingcv/Photorealistic-Style-Transfer
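The repeated multi-scale fusion described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function `fuse_branches`, the nearest-neighbour upsampling, and the additive merge are illustrative assumptions standing in for the network's learned fusion layers; they show only how a low-resolution branch's features can be upsampled and injected into the high-resolution branch.

```python
import numpy as np

def fuse_branches(high, low):
    """Hypothetical sketch of one multi-scale fusion step: the
    low-resolution feature map is upsampled (nearest neighbour) to
    the high-resolution branch's spatial size and added to it, so
    the high-resolution branch keeps receiving information from
    the low-resolution branch instead of losing it."""
    fh, fw = high.shape
    lh, lw = low.shape
    sy, sx = fh // lh, fw // lw  # integer upsampling factors
    # Nearest-neighbour upsampling by repeating rows and columns.
    up = np.repeat(np.repeat(low, sy, axis=0), sx, axis=1)
    return high + up

# Toy feature maps: a 4x4 high-resolution branch and a 2x2 low-resolution branch.
high = np.ones((4, 4))
low = np.full((2, 2), 2.0)
fused = fuse_branches(high, low)
print(fused.shape)   # (4, 4) -- high resolution is preserved
print(fused[0, 0])   # 3.0   -- high-res value plus upsampled low-res value
```

In the actual network this fusion happens repeatedly and in both directions across several parallel branches, with learned convolutions in place of the plain sum used here.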