RetinaNet is the Facebook AI team's contribution to object detection (the focal loss paper, published at ICCV 2017), and its author list prominently features Ross Girshick and Kaiming He. Sun Jian's team from Microsoft and the Ross/Kaiming team now at Facebook occupy a uniquely dominant position in visual classification and detection, much like the "Qiao Feng of the North, Murong of the South" rivalry of wuxia fiction; papers from these two labs largely serve as signposts for where the field is heading.

As a detection framework, RetinaNet itself is not especially novel: it is essentially a combination of an FPN backbone with FCN-style prediction heads (in practice, ResNet + FPN + FCN subnetworks). The paper's main contribution is the focal loss and its successful application in this one-stage detector. Focal loss is a modified cross-entropy (CE) loss: it multiplies the ordinary CE term by an exponential modulating factor that down-weights easy-to-classify examples, so that training is no longer dominated by the overwhelming number of easy negative (background) samples produced by the extreme foreground/background imbalance. This imbalance is the main cause of the accuracy gap between one-stage detectors (such as the SSD/YOLO family) and two-stage detectors (such as Faster R-CNN and R-FCN). Before focal loss, detection networks typically tackled the problem with techniques such as bootstrapping or hard example mining. The authors' experiments show that focal loss works well in a one-stage detector, ultimately letting RetinaNet match or exceed the accuracy of two-stage detectors while running faster.
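To make the mechanism concrete, below is a minimal PyTorch-style sketch of the focal loss described above. The function name `focal_loss` and the plain sum reduction at the end are illustrative assumptions; the `alpha=0.25`, `gamma=2.0` defaults follow the paper's recommended settings, and RetinaNet itself normalizes the summed loss by the number of assigned foreground anchors.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Sketch of the focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits and targets are tensors of the same shape; targets hold 0/1
    labels per anchor and class. alpha balances positive vs. negative
    examples, gamma is the focusing exponent that suppresses easy ones.
    """
    # Standard per-element binary cross-entropy, kept unreduced.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")

    # p_t is the predicted probability of the ground-truth class.
    p = torch.sigmoid(logits)
    p_t = p * targets + (1.0 - p) * (1.0 - targets)

    # alpha_t applies the class-balancing weight to positives/negatives.
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)

    # The (1 - p_t)^gamma factor shrinks the loss of well-classified
    # (easy, mostly background) examples, so the huge number of easy
    # negatives no longer dominates the gradient.
    loss = alpha_t * (1.0 - p_t) ** gamma * ce

    # Placeholder reduction; the paper divides by the foreground-anchor count.
    return loss.sum()
```

With gamma = 0 and alpha = 0.5 this reduces to ordinary (balanced) cross-entropy; increasing gamma progressively shifts the training signal toward the hard, misclassified examples.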
