Single-molecule localization microscopy (SMLM) reconstructs biologically relevant structures beyond the diffraction limit by detecting and localizing individual fluorophores -- fluorescent molecules used to label the observed specimen -- over time to build super-resolved images. Currently, efficient SMLM requires non-overlapping emitting fluorophores, leading to long acquisition times that hinder live-cell imaging. Recent deep-learning approaches can handle denser emissions, but they rely on variants of non-maximum suppression (NMS) layers, which are non-differentiable and may discard true positives due to their local fusion strategy. In this presentation, we reformulate the SMLM training objective as a set-matching problem and derive an optimal-transport loss that eliminates the need for NMS during inference and enables end-to-end training. Additionally, we propose an iterative neural network architecture that integrates knowledge of the microscope's optical system into the model. Experiments on synthetic benchmarks and real biological data show that both our new loss function and our architecture surpass the state of the art at moderate and high emitter densities. Code is available at https://github.com/RSLLES/SHOT.
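To make the set-matching idea concrete, the sketch below shows one generic way such an optimal-transport loss can be written in PyTorch: an entropic transport plan between predicted and ground-truth emitter coordinates is computed with Sinkhorn iterations, and the transported cost is backpropagated directly, with no NMS step. This is an illustrative assumption, not the authors' SHOT implementation; the function name `sinkhorn_set_loss` and the hyper-parameters `eps` and `n_iters` are hypothetical.

```python
# Minimal sketch (not the authors' code) of a differentiable set-matching loss
# between predicted and ground-truth emitter positions via entropic optimal
# transport solved with log-domain Sinkhorn iterations.
import torch

def sinkhorn_set_loss(pred_xy, gt_xy, eps=0.1, n_iters=50):
    """pred_xy: (N, 2) predicted emitter coordinates (requires grad).
       gt_xy:   (M, 2) ground-truth emitter coordinates."""
    # Pairwise squared-distance cost between the two point sets.
    cost = torch.cdist(pred_xy, gt_xy, p=2) ** 2              # (N, M)

    # Uniform marginals: each predicted / true emitter carries equal mass.
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n, device=cost.device)
    b = torch.full((m,), 1.0 / m, device=cost.device)

    # Log-domain Sinkhorn updates for numerical stability.
    log_K = -cost / eps                                        # Gibbs kernel (log)
    log_u = torch.zeros(n, device=cost.device)
    log_v = torch.zeros(m, device=cost.device)
    for _ in range(n_iters):
        log_u = torch.log(a) - torch.logsumexp(log_K + log_v[None, :], dim=1)
        log_v = torch.log(b) - torch.logsumexp(log_K + log_u[:, None], dim=0)

    # Transport plan and its cost; differentiable w.r.t. pred_xy.
    plan = torch.exp(log_u[:, None] + log_K + log_v[None, :])
    return (plan * cost).sum()

# Usage: backpropagate through the matching instead of applying NMS.
pred = torch.rand(12, 2, requires_grad=True)   # e.g. network output (pixel units)
gt = torch.rand(10, 2)
loss = sinkhorn_set_loss(pred, gt)
loss.backward()
```

Because the matching is solved globally over the whole set of candidates rather than by local fusion, nearby true positives are not suppressed, and gradients flow to every predicted position.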