Image representation is a fundamental task in computer vision. Recently, Gaussian Splatting has emerged as an efficient representation framework, and its extension to 2D image representation enables lightweight yet expressive modeling of visual content. While recent 2D Gaussian Splatting (2DGS) approaches provide compact storage and real-time decoding, their lack of contour awareness often leads to blurry or indistinct boundaries when the number of Gaussians is small. In this work, we propose a Contour Information-Aware 2D Gaussian Splatting framework that incorporates object segmentation priors into Gaussian-based image representation. By constraining each Gaussian to a specific segmentation region during rasterization, our method prevents cross-boundary blending and preserves edge structures under high compression. We also introduce a warm-up scheme that stabilizes training and improves convergence. Experiments on synthetic color charts and the DAVIS dataset demonstrate that our approach achieves higher reconstruction quality around object edges than existing 2DGS methods. The improvement is particularly evident when very few Gaussians are used, while rendering remains fast and memory usage stays low.
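To make the region constraint concrete, the sketch below shows one way a contour-aware 2DGS rasterizer could be organized: each Gaussian carries a segment ID, and its per-pixel weight is zeroed wherever the segmentation map disagrees, so no color is blended across object boundaries. This is a minimal NumPy illustration under assumed names (`render_contour_aware_2dgs`, `gauss_seg_ids`, `seg_map`) and a simplified weight-normalized accumulation; it is not the paper's actual implementation.

```python
import numpy as np

def render_contour_aware_2dgs(means, covs, colors, opacities,
                              gauss_seg_ids, seg_map, height, width):
    """Accumulate 2D Gaussians into an image, masking each Gaussian's
    contribution outside its assigned segmentation region
    (hypothetical interface, not the authors' code)."""
    ys, xs = np.mgrid[0:height, 0:width]
    pix = np.stack([xs, ys], axis=-1).astype(np.float32)        # (H, W, 2)
    image = np.zeros((height, width, 3), dtype=np.float32)
    accum = np.zeros((height, width, 1), dtype=np.float32)

    for mu, cov, rgb, alpha, seg_id in zip(means, covs, colors,
                                           opacities, gauss_seg_ids):
        inv_cov = np.linalg.inv(cov)
        d = pix - mu                                             # (H, W, 2)
        # Per-pixel Mahalanobis distance -> unnormalized Gaussian weight.
        m = np.einsum('hwi,ij,hwj->hw', d, inv_cov, d)
        w = alpha * np.exp(-0.5 * m)
        # Contour-aware constraint: zero out contributions falling outside
        # this Gaussian's segmentation region to prevent cross-boundary
        # blending at object edges.
        w = w * (seg_map == seg_id)
        w = w[..., None]
        image += w * rgb
        accum += w

    return image / np.clip(accum, 1e-6, None)                   # normalize

# Example call with two Gaussians assigned to two segments of a 64x64 image.
if __name__ == "__main__":
    seg = np.zeros((64, 64), dtype=np.int32)
    seg[:, 32:] = 1                                              # two regions
    img = render_contour_aware_2dgs(
        means=np.array([[16.0, 32.0], [48.0, 32.0]], dtype=np.float32),
        covs=np.array([np.eye(2) * 40.0] * 2, dtype=np.float32),
        colors=np.array([[1.0, 0.2, 0.2], [0.2, 0.2, 1.0]], dtype=np.float32),
        opacities=np.array([1.0, 1.0], dtype=np.float32),
        gauss_seg_ids=np.array([0, 1]),
        seg_map=seg, height=64, width=64)
```

The key design choice in this sketch is that the segmentation mask is applied to the Gaussian weights before normalization, so a Gaussian whose footprint straddles a boundary contributes only on its own side; everything else follows standard 2D Gaussian rasterization.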