The goal of multi-modal learning is to use the complementary information on the relevant task provided by multiple modalities to achieve reliable and robust performance. Recently, deep learning has led to significant improvements in multi-modal learning by allowing for information fusion at intermediate feature levels. This paper addresses the problem of designing a robust deep multi-modal learning architecture in the presence of imperfect modalities. We introduce a deep fusion architecture for object detection that processes each modality with a separate convolutional neural network (CNN) and constructs a joint feature map by combining the intermediate features from the CNNs. To make the fusion robust to degraded modalities, we employ a gated information fusion (GIF) network, which weights the contribution of each modality according to the input feature maps to be fused. The weights are determined by convolutional layers followed by a sigmoid function and are trained along with the information fusion network in an end-to-end fashion. Our experiments show that the proposed GIF network offers additional architectural flexibility for achieving robust performance when some modalities are degraded, and that the proposed fusion network and data augmentation schemes yield a significant performance improvement over the Single Shot Detector (SSD) baseline on the KITTI dataset.
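The gating mechanism described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the weight matrices `w_a` and `w_b` are hypothetical stand-ins for the learned convolutional gating layers, and a single 1x1 convolution per modality is assumed, whereas the actual GIF network is trained end-to-end inside an SSD-style detector.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_a, feat_b, w_a, w_b):
    """Gated information fusion of two modality feature maps (sketch).

    feat_a, feat_b: (C, H, W) intermediate feature maps, one per modality CNN.
    w_a, w_b: (1, 2*C) weights of a 1x1 convolution over the concatenated
        features (hypothetical stand-ins for the learned gating layers).
    Returns the joint feature map of shape (2*C, H, W).
    """
    C, H, W = feat_a.shape
    # Concatenate both modalities along the channel axis; the gate for each
    # modality is computed from this joint input, so each modality's weight
    # depends on what the other modality provides.
    stacked = np.concatenate([feat_a, feat_b], axis=0)   # (2C, H, W)
    flat = stacked.reshape(2 * C, H * W)                 # (2C, H*W)
    # 1x1 convolution followed by a sigmoid yields one gate value in (0, 1)
    # per modality at every spatial location.
    gate_a = sigmoid(w_a @ flat).reshape(1, H, W)
    gate_b = sigmoid(w_b @ flat).reshape(1, H, W)
    # Scale each modality by its gate (broadcast over channels), then
    # concatenate to form the joint feature map passed to the detector head.
    return np.concatenate([gate_a * feat_a, gate_b * feat_b], axis=0)
```

With zero gating weights, the sigmoid outputs 0.5 everywhere, so both modalities contribute equally; during training the gates learn to suppress a degraded modality by driving its weight toward zero.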