Can our video understanding systems perceive objects when heavy occlusion exists in a scene? To answer this question, we collect a large-scale dataset called OVIS for occluded video instance segmentation, that is, to simultaneously detect, segment, and track instances in occluded scenes. OVIS consists of 296k high-quality instance masks from 25 semantic categories in which object occlusions commonly occur. While the human vision system can understand occluded instances through contextual reasoning and association, our experiments suggest that current video understanding systems fall short. On the OVIS dataset, the highest AP achieved by state-of-the-art algorithms is only 14.4, which reveals that we are still at a nascent stage for understanding objects, instances, and videos in real-world scenarios. Moreover, to complement object cues missing due to occlusion, we propose a plug-and-play module called temporal feature calibration. Built on top of MaskTrack R-CNN and SipMask, it achieves an AP of 15.2 and 15.0, respectively. The OVIS dataset is released at http://songbai.site/ovis , and the project code will be available soon.
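To make the idea of temporal feature calibration concrete, the sketch below shows one plausible form of such a module: features from a reference frame are warped toward the current frame using sampling offsets predicted from both frames, then fused with the current frame's features to compensate for occluded cues. This is a minimal illustrative sketch, not the paper's exact architecture; the class name, layer choices, and fusion scheme are our assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class TemporalFeatureCalibration(nn.Module):
    """Hypothetical sketch of a plug-and-play calibration module:
    align reference-frame features to the current frame, then fuse
    them to supply object cues hidden by occlusion."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        pad = kernel_size // 2
        # Predict per-location sampling offsets (2 per kernel tap)
        # from the concatenated current and reference features.
        self.offset_pred = nn.Conv2d(
            2 * channels, 2 * kernel_size * kernel_size,
            kernel_size, padding=pad)
        # Deformable-convolution weights that warp reference features.
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.01)
        # 1x1 convolution to fuse current and aligned reference features.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_cur: torch.Tensor, feat_ref: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(torch.cat([feat_cur, feat_ref], dim=1))
        aligned = deform_conv2d(
            feat_ref, offsets, self.weight, padding=self.kernel_size // 2)
        return self.fuse(torch.cat([feat_cur, aligned], dim=1))


# Usage: calibrate one frame's backbone features with a reference frame.
tfc = TemporalFeatureCalibration(channels=256)
feat_cur = torch.randn(1, 256, 48, 64)  # current-frame feature map
feat_ref = torch.randn(1, 256, 48, 64)  # reference-frame feature map
out = tfc(feat_cur, feat_ref)           # calibrated features, (1, 256, 48, 64)
```

Because the module takes and returns feature maps of the same shape, it could in principle be dropped between the backbone and the heads of trackers such as MaskTrack R-CNN or SipMask, which is consistent with the "plug-and-play" framing above.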