Low-dose computed tomography (LDCT) has become mainstream in clinical applications. However, compared with normal-dose CT, LDCT images suffer from stronger noise and more artifacts, which are obstacles to practical use. In the last few years, convolution-based end-to-end deep learning methods have been widely used for LDCT image denoising. Recently, the transformer has shown superior performance over convolution by modeling richer feature interactions, yet its application to LDCT denoising has not been fully explored. Here, we propose a convolution-free T2T vision transformer-based Encoder-decoder Dilation network (TED-net) to enrich the family of LDCT denoising algorithms. The model contains no convolution blocks and consists of a symmetric encoder-decoder built solely from transformer blocks. Our model is evaluated on the AAPM-Mayo Clinic LDCT Grand Challenge dataset, and the results show that it outperforms state-of-the-art denoising methods.
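To make the overall idea concrete, below is a minimal sketch of a convolution-free, token-based transformer encoder-decoder denoiser in PyTorch. This is not the authors' TED-net: the block counts, embedding size, patch size, and the omission of the paper's dilation and token-rearrangement steps are illustrative assumptions. It only shows the general recipe implied by the abstract: unfold an LDCT image into tokens (T2T-style, without convolution), process them with a symmetric transformer encoder-decoder, fold back to image space, and predict a noise residual.

```python
# Minimal sketch (assumptions noted above), NOT the authors' exact TED-net.
import torch
import torch.nn as nn


class TransformerStage(nn.Module):
    """A stack of standard transformer layers acting on token sequences."""

    def __init__(self, dim, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim,
            batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens):                 # tokens: (B, N, dim)
        return self.blocks(tokens)


class TokenDenoiser(nn.Module):
    """Convolution-free denoiser: unfold -> encoder -> decoder -> fold."""

    def __init__(self, patch=8, dim=256, img_size=64):
        super().__init__()
        # Tokenization via unfold (no convolution). The paper's dilation
        # and cyclic-shift token blocks are omitted here for brevity.
        self.unfold = nn.Unfold(kernel_size=patch, stride=patch)
        self.embed = nn.Linear(patch * patch, dim)
        self.encoder = TransformerStage(dim)
        self.decoder = TransformerStage(dim)   # symmetric counterpart
        self.proj = nn.Linear(dim, patch * patch)
        self.fold = nn.Fold(output_size=(img_size, img_size),
                            kernel_size=patch, stride=patch)

    def forward(self, x):                      # x: (B, 1, H, W) LDCT patch
        tokens = self.unfold(x).transpose(1, 2)   # (B, N, patch*patch)
        tokens = self.embed(tokens)
        tokens = self.decoder(self.encoder(tokens))
        patches = self.proj(tokens).transpose(1, 2)
        residual = self.fold(patches)
        return x - residual                    # subtract predicted noise


if __name__ == "__main__":
    model = TokenDenoiser()
    ldct = torch.randn(2, 1, 64, 64)
    print(model(ldct).shape)                   # torch.Size([2, 1, 64, 64])
```

The residual formulation (predicting the noise and subtracting it from the input) is a common choice in LDCT denoising networks; whether TED-net uses it is not stated in the abstract and is assumed here for the sketch.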