Traditional object detection models in medical imaging operate within a closed-set paradigm, limiting their ability to detect objects from novel categories. Open-vocabulary object detection (OVOD) addresses this limitation but remains underexplored in medical imaging due to dataset scarcity and weak text-image alignment. To bridge this gap, we introduce MedROV, the first real-time open-vocabulary detection model for medical imaging. To enable open-vocabulary learning, we curate a large-scale dataset, Omnis, with 600K detection samples across nine imaging modalities, and introduce a pseudo-labeling strategy to handle missing annotations in multi-source datasets. Additionally, we enhance generalization by distilling knowledge from a large pre-trained foundation model. By leveraging contrastive learning and cross-modal representations, MedROV effectively detects both known and novel structures. Experimental results demonstrate that MedROV outperforms the previous state-of-the-art foundation model for medical image detection by an average absolute improvement of 40 mAP50, and surpasses closed-set detectors by more than 3 mAP50, while running at 70 FPS, setting a new benchmark in medical detection. Our source code, dataset, and trained model are available at https://github.com/toobatehreem/MedROV.