Vision-based 3D Semantic Scene Completion (SSC) has received growing attention due to its potential in autonomous driving. While most existing approaches follow an ego-centric paradigm that aggregates and diffuses features over the entire scene, they often overlook fine-grained object-level details, leading to semantic and geometric ambiguities, especially in complex environments. To address this limitation, we propose Ocean, an object-centric prediction framework that decomposes the scene into individual object instances to enable more accurate semantic occupancy prediction. Specifically, we first employ a lightweight segmentation model, MobileSAM, to extract instance masks from the input image. We then introduce a 3D Semantic Group Attention module that leverages linear attention to aggregate object-centric features in 3D space. To handle segmentation errors and missing instances, we further design a Global Similarity-Guided Attention module that exploits segmentation features for global interaction. Finally, we propose an Instance-aware Local Diffusion module that improves instance features through a generative process and subsequently refines the scene representation in the BEV space. Extensive experiments on the SemanticKITTI and SSCBench-KITTI360 benchmarks demonstrate that Ocean achieves state-of-the-art performance, with mIoU scores of 17.40 and 20.28, respectively.