Motion blur caused by camera or object movement severely degrades image quality and poses challenges for real-time applications such as autonomous driving, UAV perception, and medical imaging. In this paper, we present RT-Focuser, a lightweight U-shaped network tailored for real-time deblurring. To balance speed and accuracy, we design three key components: a Lightweight Deblurring Block (LD) for edge-aware feature extraction, a Multi-Level Integrated Aggregation module (MLIA) for encoder integration, and a Cross-source Fusion Block (X-Fuse) for progressive decoder refinement. Taking only a single blurred image as input, RT-Focuser achieves 30.67 dB PSNR with just 5.85M parameters and 15.76 GMACs. It runs at 6 ms per frame, exceeding 140 FPS on both GPU and mobile devices, and thus shows strong potential for edge deployment. The official code and usage instructions are available at: https://github.com/ReaganWu/RT-Focuser.