In computer- and robot-assisted orthopedic surgery (CAOS), patient-specific surgical plans derived from preoperative imaging define target locations and implant trajectories. During surgery, these plans must be transferred to the patient accurately, which relies on precise registration between preoperative and intraoperative data. However, substantial heterogeneity across imaging modalities makes this registration challenging and error-prone. Robust, automatic, and modality-agnostic bone surface registration is therefore clinically important. We propose NeuralBoneReg, a self-supervised, surface-based framework that registers bone surfaces using 3D point clouds as a modality-agnostic representation. NeuralBoneReg comprises two modules: an implicit neural unsigned distance field (UDF) that learns the preoperative bone model, and an MLP-based registration module that performs global initialization and local refinement by generating transformation hypotheses to align the intraoperative point cloud with the neural UDF. Unlike state-of-the-art supervised methods, NeuralBoneReg operates in a self-supervised manner and requires no inter-subject training data. We evaluated NeuralBoneReg against baseline methods on two publicly available multi-modal datasets: a CT-ultrasound dataset of the fibula and tibia (UltraBones100k) and a CT-RGB-D dataset of spinal vertebrae (SpineDepth). The evaluation also includes a newly introduced CT-ultrasound dataset of the femur and pelvis from cadaveric specimens (UltraBones-Hip), which will be made publicly available. NeuralBoneReg matches or surpasses existing methods across all datasets, achieving mean relative rotation/translation errors (RRE/RTE) of 1.68°/1.86 mm on UltraBones100k, 1.88°/1.89 mm on UltraBones-Hip, and 3.79°/2.45 mm on SpineDepth. These results demonstrate strong generalizability across anatomies and modalities, providing robust and accurate cross-modal alignment for CAOS.
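To make the two-module idea concrete, the sketch below illustrates, under stated assumptions, how transformation hypotheses can be scored against a learned neural UDF: candidate rigid transforms are sampled for global initialization, ranked by the mean unsigned distance of the transformed intraoperative points, and the best one is refined by gradient descent. This is not the authors' implementation; the network `UDFNet`, the function names, the random hypothesis sampling, and the axis-angle refinement are illustrative assumptions.

```python
# Hypothetical sketch of UDF-based hypothesis scoring and refinement (PyTorch).
# All names and hyperparameters are assumptions, not NeuralBoneReg's actual API.
import torch
import torch.nn as nn


class UDFNet(nn.Module):
    """Tiny MLP mapping a 3D point to its unsigned distance to the bone surface."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # unsigned distances are non-negative
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:  # (N, 3) -> (N,)
        return self.net(pts).squeeze(-1)


def axis_angle_to_matrix(w: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: 3-vector (axis * angle) -> rotation matrix, differentiable."""
    theta = w.norm() + 1e-8
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)


def score(udf: UDFNet, pts: torch.Tensor, R: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Mean UDF value of the points after applying (R, t); lower = closer to the surface."""
    return udf(pts @ R.T + t).mean()


def random_rotation() -> torch.Tensor:
    """Random rotation from the QR decomposition of a Gaussian matrix."""
    q, r = torch.linalg.qr(torch.randn(3, 3))
    q = q * torch.sign(torch.diagonal(r))   # fix column signs
    if torch.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]                  # ensure a proper rotation, det(R) = +1
    return q


if __name__ == "__main__":
    udf = UDFNet()                           # in practice: trained on the preop bone model
    intraop_pts = torch.randn(2048, 3)       # placeholder intraoperative point cloud

    # Global initialization: sample transformation hypotheses, keep the best-scoring one.
    best = None
    for _ in range(256):
        R0, t0 = random_rotation(), 0.1 * torch.randn(3)
        s = score(udf, intraop_pts, R0, t0).item()
        if best is None or s < best[0]:
            best = (s, R0, t0)
    _, R0, t0 = best

    # Local refinement: optimize a small axis-angle / translation correction.
    w = (1e-3 * torch.randn(3)).requires_grad_(True)
    dt = torch.zeros(3, requires_grad=True)
    opt = torch.optim.Adam([w, dt], lr=1e-2)
    for _ in range(100):
        opt.zero_grad()
        loss = score(udf, intraop_pts, axis_angle_to_matrix(w) @ R0, t0 + dt)
        loss.backward()
        opt.step()
    print(f"refined alignment score: {loss.item():.4f}")
```

The key design point this sketch reflects is that the preoperative model only needs to be queried as a distance field, so the intraoperative point cloud can come from any modality (ultrasound, RGB-D) once it is converted to 3D points.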