In this paper, we propose a novel Deep Micro-Dictionary Learning and Coding Network (DDLCN). DDLCN retains most of the standard deep learning layers (pooling, fully-connected, input/output, etc.), but the fundamental convolutional layers are replaced by novel compound dictionary learning and coding layers. The dictionary learning layer learns an over-complete dictionary for the input training data. At the deep coding layer, a locality constraint is added to guarantee that the activated dictionary bases are close to one another. Next, the activated dictionary atoms are assembled and passed to the next compound dictionary learning and coding layer. In this way, the atoms activated in the first layer can be represented by the deeper atoms in the second dictionary. Intuitively, the second dictionary is designed to learn the fine-grained components that are shared among the input dictionary atoms, yielding a more informative and discriminative low-level representation of the dictionary atoms. We empirically compare the proposed DDLCN with several dictionary learning methods and deep learning architectures. Experimental results on four popular benchmark datasets demonstrate that the proposed DDLCN achieves competitive results compared with state-of-the-art approaches.
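The locality-constrained coding step described above can be illustrated with a minimal NumPy sketch in the style of locality-constrained linear coding (LLC): an input is reconstructed using only its k nearest dictionary atoms, so the activated bases are guaranteed to be close to the input. The function name `llc_code`, the regularizer `beta`, and the closed-form solution below are illustrative assumptions, not the paper's exact layer implementation.

```python
import numpy as np

def llc_code(x, D, k=5, beta=1e-4):
    """Sketch of locality-constrained coding (LLC-style); the
    paper's actual deep coding layer may differ in details.
    x: input vector of shape (d,); D: dictionary, one atom per row, shape (K, d).
    Returns a sparse code of shape (K,) supported on the k nearest atoms."""
    # Locality constraint: keep only the k atoms closest to the input.
    dists = np.linalg.norm(D - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = D[idx]                            # selected local atoms, shape (k, d)
    # Closed-form least squares on the shifted basis with a sum-to-one code.
    z = B - x                             # shift atoms to be centered at x
    C = z @ z.T + beta * np.eye(k)        # regularized local covariance
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                          # enforce the sum-to-one constraint
    code = np.zeros(D.shape[0])
    code[idx] = c                         # activated atoms; all others stay zero
    return code
```

The resulting code is zero everywhere except on the k activated atoms, which is what allows the next compound layer to re-represent those activated atoms with a deeper dictionary.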