CCF Rank C | IJCNN 2019 Special Section: Information Theory and Deep Learning

December 7, 2018 · Call4Papers

Special Section

Information Theory and Deep Learning


When submitting your paper, please select “S01: Information Theory and Deep Learning”.

All special session papers go through the same rigorous review process. Special sessions with only a small number of accepted papers will be cancelled, and the accepted papers moved to the regular oral or poster sessions.


Deep learning has led to significant breakthroughs in many applications in machine learning and signal processing. However, little is known about the theory behind this successful paradigm. Recently, different information-theoretic concepts (e.g., the information bottleneck principle, Rényi's entropy functional and its multivariate extension) have begun to shed light on the analysis of multilayer perceptrons (MLPs), stacked autoencoders (SAEs) and baseline convolutional neural networks (CNNs).

Despite the great potential of past work, several open questions remain when it comes to applying information-theoretic concepts to design and interpret more complex learning architectures, such as recurrent neural networks (RNNs) and generative adversarial networks (GANs).

On the other hand, recent work has also demonstrated the success of deep learning architectures on traditional signal processing, communication and information theory problems, such as channel estimation, source and channel coding, and inference in the Internet of Things (IoT).

This special session seeks to improve the current understanding of deep learning architectures (e.g., the training phase, generalization capability, and layer representations) through information-theoretic concepts and, at the same time, to explore deep learning techniques (e.g., RNNs, CNNs and SAEs) as possible solutions for next-generation communication system design. It also aims to bring the deep learning and information theory communities together around shared problems, such as data compression and message transmission over noisy channels.


Paper submission: Dec. 15, 2018 (there may be a two-week extension)

Acceptance notification: Jan. 30, 2019


Topics of interest for this special session include but are not limited to:

- Design and interpretation of deep learning architectures with information-theoretic concepts

- Design, implementation and optimization of deep learning architectures for communications

- Applications of information theory in signal processing, computer vision, natural language processing, etc.

- Estimation of information-theoretic quantities (e.g., entropy, mutual information, divergence)
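The last topic above, estimating information-theoretic quantities from samples, is the workhorse behind information-plane analyses of neural networks. A minimal sketch of one common approach, a histogram (binning) estimator of mutual information between two scalar variables, might look as follows; the bin count and sample sizes here are illustrative choices, not prescribed values:

```python
import numpy as np

def mutual_information(x, y, bins=30):
    """Histogram (binning) estimate of I(X; Y) in nats for 1-d samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
independent = rng.normal(size=10_000)         # I(X; Y) should be near 0
dependent = x + 0.1 * rng.normal(size=10_000) # I(X; Y) should be large

print(mutual_information(x, independent))
print(mutual_information(x, dependent))
```

Note that binning estimators are biased upward for finite samples and sensitive to the bin count, which is one reason estimation of these quantities is itself an open research topic.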


Jose C. Principe, University of Florida, IEEE Life Fellow

Robert Jenssen, UiT - The Arctic University of Norway

Shujian Yu, University of Florida




The International Joint Conference on Neural Networks (IJCNN), organized jointly by the International Neural Network Society (INNS) and the IEEE Computational Intelligence Society, is the premier international conference for researchers and other professionals in neural networks and related areas. The conference invites world-renowned speakers to present on neural network theory and applications, computational neuroscience, robotics, and distributed intelligence. In addition to regular technical sessions with oral and poster presentations, the conference program includes special sessions, competitions, tutorials and workshops on topics of current interest.

Detection and recognition of text in natural images are two main problems in computer vision, with a wide variety of applications in sports video analysis, autonomous driving and industrial automation, to name a few. The two tasks share challenging difficulties arising from how text is represented and from environmental conditions. Current state-of-the-art scene text detection and/or recognition methods have exploited recent advances in deep learning architectures and report superior accuracy on benchmark datasets when tackling multi-resolution and multi-oriented text. However, several challenges affecting text in the wild remain, causing existing methods to underperform because their models cannot generalize to unseen data and labeled data is insufficient. Thus, unlike previous surveys in this field, the objectives of this survey are as follows: first, to offer the reader not only a review of recent advances in scene text detection and recognition, but also the results of extensive experiments in a unified evaluation framework that assesses pre-trained models of the selected methods on challenging cases and applies the same evaluation criteria to all of them. Second, to identify several existing challenges for detecting and recognizing text in the wild, namely in-plane rotation, multi-oriented and multi-resolution text, perspective distortion, illumination reflection, partial occlusion, complex fonts and special characters. Finally, the paper presents insight into potential research directions that address some of the challenges still facing scene text detection and recognition techniques.


Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. The paper then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. It next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for user-designed modules. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
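The simplest baseline among the search algorithms such a review covers is random search over a defined value range for each hyper-parameter. The sketch below illustrates the idea; `train_and_score` is a hypothetical stand-in for an expensive training run, and the search space (log-uniform learning rate, categorical hidden size, uniform dropout) is an illustrative assumption:

```python
import math
import random

def train_and_score(lr, hidden, dropout):
    # Hypothetical objective standing in for a real training run;
    # peaks near lr=1e-2, hidden=128, dropout=0.2.
    return (1.0 / (1 + abs(math.log10(lr) + 2))
            - 0.001 * abs(hidden - 128)
            - abs(dropout - 0.2))

# Each hyper-parameter gets a sampler that encodes its value range.
search_space = {
    "lr":      lambda: 10 ** random.uniform(-5, -1),   # log-uniform
    "hidden":  lambda: random.choice([32, 64, 128, 256]),
    "dropout": lambda: random.uniform(0.0, 0.5),
}

def random_search(n_trials=50, seed=0):
    random.seed(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: sample() for name, sample in search_space.items()}
        score = train_and_score(**cfg)
        if score > best_score:
            best, best_score = cfg, score
    return best, best_score

best_cfg, best_score = random_search()
print(best_cfg, best_score)
```

Sampling the learning rate log-uniformly rather than uniformly is the kind of value-range decision the review's first section discusses: a uniform draw over [1e-5, 1e-1] would almost never propose small learning rates.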


Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural thought, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. Tremendous progress has been made in this area over the past few years. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques fall roughly into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding performance, related applications, advantages and drawbacks. We then go through a few very recent successful methods, for example dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions on this topic.
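Of the four schemes, parameter pruning is the most direct to illustrate. A minimal sketch of magnitude-based pruning, one common variant of the pruning-and-sharing family (the sparsity level and layer shape here are illustrative assumptions), zeroes out the smallest-magnitude weights of a layer:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries, keeping a (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)         # number of entries to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))           # a dense layer's weight matrix
pruned = magnitude_prune(w, sparsity=0.9)
print(np.mean(pruned == 0))               # fraction of zeroed weights
```

In practice pruning is typically followed by fine-tuning to recover accuracy, and the resulting sparse matrices only yield real speedups with sparse storage formats or structured sparsity patterns.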


The latest deep learning methods for object detection provide remarkable performance, but have limits when used in robotic applications. One of the most relevant issues is the long training time, which is due to the large size and imbalance of the associated training sets, characterized by a few positive and a large number of negative examples (i.e. background). Existing approaches are based on end-to-end learning by back-propagation [22] or kernel methods trained with Hard Negatives Mining on top of deep features [8]. These solutions are effective, but prohibitively slow for on-line applications. In this paper we propose a novel pipeline for object detection that overcomes this problem and provides comparable performance, with a 60x training speedup. Our pipeline combines (i) the Region Proposal Network and the deep feature extractor from [22], to efficiently select candidate RoIs and encode them into powerful representations, with (ii) the FALKON [23] algorithm, a novel kernel-based method that allows fast training on large-scale problems (millions of points). We address the size and imbalance of the training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast bootstrapping approach. We assess the effectiveness of the approach on a standard Computer Vision dataset (PASCAL VOC 2007 [5]) and demonstrate its applicability to a real robotic scenario with the iCubWorld Transformations [18] dataset.
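The subsampling idea that makes kernel methods like FALKON scale can be sketched with plain Nyström kernel ridge regression: rather than solving over all n training points, the solution is restricted to m randomly chosen centers. This is a simplified sketch of the general Nyström technique, not FALKON itself (which adds preconditioned conjugate-gradient iterations); the kernel width, regularization and synthetic data are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_krr(X, y, m=100, lam=1e-3, sigma=1.0, seed=0):
    """Nystrom kernel ridge regression: solve only over m random centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    Knm = gaussian_kernel(X, centers, sigma)        # (n, m) cross-kernel
    Kmm = gaussian_kernel(centers, centers, sigma)  # (m, m) center kernel
    # Normal equations of min ||Knm a - y||^2 + n*lam * a^T Kmm a
    alpha = np.linalg.solve(Knm.T @ Knm + len(X) * lam * Kmm, Knm.T @ y)
    return centers, alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)  # noisy 1-d target

centers, alpha = nystrom_krr(X, y, m=50)
pred = gaussian_kernel(X, centers) @ alpha
print(np.mean((pred - y) ** 2))                    # near the 0.01 noise floor
```

The linear system is m x m instead of n x n, which is what turns an O(n^3) solve into something tractable for millions of points when m is kept small.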


Why deep neural networks (DNNs) capable of overfitting often generalize well in practice is a mystery in deep learning. Existing works indicate that this observation holds for both complicated real datasets and simple datasets of one-dimensional (1-d) functions. In this work, for natural images and low-frequency-dominant 1-d functions, we empirically find that a DNN with common settings first quickly captures the dominant low-frequency components, and then relatively slowly captures the high-frequency ones. We call this phenomenon the Frequency Principle (F-Principle). In our experiments, the F-Principle can be observed over various DNN setups with different activation functions, layer structures and training algorithms. The F-Principle can be used to understand (i) the behavior of DNN training in the information plane and (ii) why DNNs often generalize well despite their ability to overfit. The F-Principle can potentially provide insight into the general principle underlying DNN optimization and generalization for real datasets.
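The diagnostic behind such a claim is a per-frequency error between the target function and the model's current fit. The sketch below shows how that measurement can be set up with an FFT; instead of an actual network, a hand-built low-pass approximation stands in (as an assumption) for an early-training DNN that has captured the low-frequency component but not yet the high-frequency one:

```python
import numpy as np

def frequency_error(target, prediction):
    """Relative error of each Fourier component: |F[pred] - F[target]| / |F[target]|."""
    Ft, Fp = np.fft.rfft(target), np.fft.rfft(prediction)
    return np.abs(Fp - Ft) / (np.abs(Ft) + 1e-12)

n = 256
x = np.linspace(0, 1, n, endpoint=False)
# Target with a dominant low frequency (k=1) and a weaker high frequency (k=20).
target = np.sin(2 * np.pi * x) + 0.3 * np.sin(2 * np.pi * 20 * x)

# Stand-in for an early-training DNN: low frequency fitted, high frequency not.
early = np.sin(2 * np.pi * x)

err = frequency_error(target, early)
print(err[1], err[20])   # near 0 at k=1, near 1 at k=20
```

Tracking `err` over training epochs is how the low-to-high-frequency convergence order described above can be made visible.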


Vision-based vehicle detection approaches have achieved remarkable success in recent years with the development of deep convolutional neural networks (CNNs). However, existing CNN-based algorithms suffer from the problem that convolutional features are scale-sensitive in the object detection task, while traffic images and videos commonly contain vehicles with a large variance of scales. In this paper, we delve into the source of scale sensitivity and reveal two key issues: 1) existing RoI pooling destroys the structure of small-scale objects; 2) the large intra-class distance for a large variance of scales exceeds the representation capability of a single network. Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for fast detection of vehicles with a large variance of scales. First, we present context-aware RoI pooling to maintain the contextual information and original structure of small-scale objects. Second, we present a multi-branch decision network to minimize the intra-class distance of features. These lightweight techniques add zero extra time complexity but bring prominent improvements in detection accuracy. The proposed techniques can be equipped with any deep network architecture and keep it trainable end-to-end. Our SINet achieves state-of-the-art performance in terms of accuracy and speed (up to 37 FPS) on the KITTI benchmark and a new highway dataset, which contains a large variance of scales and extremely small objects.
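To see why standard RoI pooling can destroy small-object structure, it helps to spell out what the operation does: each RoI is quantized into a fixed output grid and max-pooled per cell, so an RoI barely larger than the grid collapses most of its spatial detail. This is a minimal sketch of the standard operation the paper critiques, not of SINet's context-aware variant; the feature map and RoI values are illustrative:

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_size=2):
    """Max-pool one RoI (x0, y0, x1, y1) into an out_size x out_size grid."""
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1]
    h, w = region.shape
    # Quantize the region into out_size bins along each axis.
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out

fmap = np.arange(64, dtype=float).reshape(8, 8)   # toy single-channel feature map
pooled = roi_max_pool(fmap, (0, 0, 4, 4))
print(pooled)  # a 4x4 region collapsed to 2x2: most spatial structure is gone
```

For a small RoI, each output cell aggregates only a couple of feature-map entries, which is the structural loss the context-aware pooling above is designed to avoid.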

Zobeir Raisi, Mohamed A. Naiel, Paul Fieguth, Steven Wardell, John Zelek (June 8, 2020)

Hyper-Parameter Optimization: A Review of Algorithms and Applications
Tong Yu, Hong Zhu (March 12, 2020)

Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang (September 8, 2019)

Yash Srivastava, Vaishnav Murali, Shiv Ram Dubey, Snehasis Mukherjee (August 27, 2019)

Siyu He, Yin Li, Yu Feng, Shirley Ho, Siamak Ravanbakhsh, Wei Chen, Barnabás Póczos (November 15, 2018)

Speeding-up Object Detection Training for Robotics with FALKON
Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale (August 27, 2018)

Training behavior of deep neural network in frequency domain
Zhi-Qin J. Xu, Yaoyu Zhang, Yanyang Xiao (August 21, 2018)

MAT-CNN-SOPC: Motionless Analysis of Traffic Using Convolutional Neural Networks on System-On-a-Programmable-Chip
Somdip Dey, Grigorios Kalliatakis, Sangeet Saha, Amit Kumar Singh, Shoaib Ehsan, Klaus McDonald-Maier (July 5, 2018)

Xiaowei Hu, Xuemiao Xu, Yongjie Xiao, Hao Chen, Shengfeng He, Jing Qin, Pheng-Ann Heng (May 16, 2018)

Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, Luc Van Gool (March 8, 2018)