We present a fully supervised method for learning to segment data structured by an adjacency graph. We introduce the graph-structured contrastive loss, a loss function defined with respect to a ground-truth segmentation. It promotes learning vertex embeddings that are homogeneous within the desired segments and contrast sharply at their interfaces. Consequently, computing a piecewise-constant approximation of such embeddings produces a graph partition close to the target segmentation. This loss is fully backpropagatable, which allows us to learn vertex embeddings with deep learning algorithms. We evaluate our method on a 3D point cloud oversegmentation task, setting a new state of the art by a large margin. These results are based on the published work of Landrieu and Boussaha (2019).
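To make the idea concrete, the following is a minimal sketch of a graph-structured contrastive loss in the spirit described above, not the paper's exact formulation: for each graph edge, embedding distance is penalized when both endpoints lie in the same ground-truth segment (promoting homogeneity) and rewarded up to a margin when they lie in different segments (promoting contrast at interfaces). The function name, the Euclidean distance, and the hinge-with-margin form are illustrative assumptions.

```python
import numpy as np

def graph_contrastive_loss(emb, edges, labels, margin=1.0):
    """Illustrative sketch (assumed form, not the published loss).

    emb:    (n, d) array of vertex embeddings
    edges:  (m, 2) array of vertex index pairs from the adjacency graph
    labels: (n,) ground-truth segment label per vertex
    """
    src, dst = edges[:, 0], edges[:, 1]
    # Embedding distance across each edge.
    dist = np.linalg.norm(emb[src] - emb[dst], axis=1)
    # Edges whose endpoints share a ground-truth segment.
    intra = labels[src] == labels[dst]
    # Pull same-segment endpoints together...
    loss_intra = dist[intra].sum()
    # ...and push different-segment endpoints at least `margin` apart.
    loss_inter = np.maximum(0.0, margin - dist[~intra]).sum()
    return (loss_intra + loss_inter) / len(edges)
```

With embeddings that are identical within a segment and at least `margin` apart across segments, the loss vanishes, which is exactly the configuration a piecewise-constant approximation recovers as a partition.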