Electrical and Electronic Engineering | 5 International Conference Announcements

October 12, 2018 · Call4Papers


ICDSP 2019

International Conference on Digital Signal Processing

Full paper deadline: 2018-11-05
Conference dates: 2019-02-23
Difficulty: ★★
Location: Jeju Island, Korea
The 2019 3rd International Conference on Digital Signal Processing (ICDSP 2019) will be held in Jeju Island, Korea, from Saturday, February 23, to Tuesday, February 26, 2019. It is organised mainly by the International Academy of Computing Technology (IACT). The symposium will offer the world's leading scientists and researchers in digital signal processing excellent opportunities for scientific and technological exchange. Jeju Island, the largest island off the coast of the Korean Peninsula, is often called “the Hawaii of Korea” for its mild climate and volcanic landscape, which make it a wonderful conference venue. We cordially invite you to participate in ICDSP 2019 on this beautiful island, and we look forward to seeing you in Jeju.


ICET 2019

International Conference on Electronics Technology

Full paper deadline: 2018-12-05
Conference dates: 2019-05-10
Difficulty: ★★
Location: Chengdu, China
The 2nd International Conference on Electronics Technology (ICET 2019) will be held in Chengdu, China, during May 10-13, 2019. This event is organized by Sichuan Institute of Electronics, with the support of University of Electronic Science and Technology of China, Sichuan University of China, Southwest Jiaotong University of China and Singapore Institute of Electronics.

The ICET conference provides a forum for the exchange of information between senior and young scientists, from academic communities and the electronics industry around the world, on topics related to their experimental and theoretical work in the broad field of electronics, micro/nanoelectronics technology, and electronic packaging.

Within a unique combination of poster exhibitions, oral paper presentations, invited talks, and individual workshops, senior and junior researchers from all over the world come together to discuss scientific problems and teaching experiences, and to plan and organize international cooperation and student exchanges in a convenient, multicultural atmosphere.


IRS 2019

International Radar Symposium

Full paper deadline: 2018-12-14
Conference dates: 2019-06-26
Difficulty: ★★★
The International Radar Symposium aims to provide a forum for both academic and industrial professionals in radar from all over the world and to bring together academicians, researchers, engineers, system analysts, graduate and undergraduate students with government and non-government organizations to share and discuss both theoretical and practical knowledge. We invite everybody to submit outstanding and valuable original research papers and participate in the technical exhibition during the conference.

The scope of the Symposium includes, but is not limited to, the following major topics.
During submission, authors are requested to indicate which main topic is addressed by their contribution. Conference subjects include:

Multi-Channel and Array Processing
Adaptive Signal Processing / STAP
SAR / ISAR Imaging
Compressive Sensing
Cognitive Radar
Localisation and Tracking
Ground Moving Target Indication
Sensor Data Fusion
Passive, Bistatic and Multi-Static Radar
Forward Scattering Radar
HF and Over-the-Horizon Radar
Millimeter Wave and THz Radar
MIMO Radar
UWB and Noise Radar
Antennas, Arrays and Beamforming
Propagation of Radar Signals
Polarimetric Radar / Radar Polarimetry
Radar and Clutter Modelling
Ground / Airborne / Spaceborne Radar
Radar Remote Sensing
Automotive and Maritime Radar
Weather Radar


ECT 2019

European Conference on Thermoelectrics

Full paper deadline: 2019-02-01
Conference dates: 2019-09-23
Difficulty: ★★
Location: Limassol, Cyprus
It is our great pleasure to host the 17th European Conference on Thermoelectrics (ECT2019), which will be held in Limassol, Cyprus, on September 23-25, 2019.

It is Cyprus’ turn to continue the tradition and host this conference, which typically attracts experts, scientists and engineers from research and industry across Europe and beyond. The series of European Conferences on Thermoelectrics (ECT) is promoted by the European Thermoelectric Society with the aim of disseminating recent scientific and technical advances in the field of thermoelectrics. ECT further aims to strengthen communication between research institutions and industry to promote thermoelectric applications, and to provide a forum for the exchange of information and achievements. Physics, engineering, materials science and chemistry are combined in this interdisciplinary meeting, enabling us to discuss promising thermoelectric alloys, devices and applications!

You are thus all invited to attend the conference and discuss state-of-the-art technology, the latest advances in materials science, device and system design, as well as market opportunities. At the same time you will be able to enjoy Cyprus, an island with a long history at the crossroads of civilizations. Its unique combination of world-class hotels and resorts, Mediterranean charm, and sunny, clear skies provides a perfect setting for a productive and successful ECT meeting. Get ready to enjoy the sea, the sun and the terrific Cypriot cuisine!


AES 2019

The 7th Advanced Electromagnetics Symposium

Full paper deadline: 2019-02-25
Conference dates: 2019-07-24
Difficulty: ★★★
Location: Lisbon, Portugal
Be a part of AES 2019, the 7th Advanced Electromagnetics Symposium and take the opportunity to meet, interact and network with the experts of Electromagnetics. The program will facilitate discussions on various relevant topics of the subject among the participants in a dynamic setting. The program will also feature keynote and invited speakers addressing the most pressing issues of the subject and best practices to inspire the participants.

Additionally, through its unique from-Conference-to-Journal-Publication concept, AES offers an opportunity for authors to submit their papers to a special issue in Advanced Electromagnetics journal.

AES 2019 will be held in conjunction with the 9th International Conference on Metamaterials, Photonic Crystals and Plasmonics (META 2019). A registered participant in one of the events can attend sessions in both events but can only present papers in the event for which they are registered. To present papers in both events, one needs to register for each.

Download the Call4Papers App for more details!


Convolutional neural networks (CNNs) can model complicated non-linear relations between images. However, they are notoriously sensitive to small changes in the input. Most CNNs trained to describe image-to-image mappings generate temporally unstable results when applied to video sequences, leading to flickering artifacts and other inconsistencies over time. In order to use CNNs for video material, previous methods have relied on estimating dense frame-to-frame motion information (optical flow) in the training and/or the inference phase, or by exploring recurrent learning structures. We take a different approach to the problem, posing temporal stability as a regularization of the cost function. The regularization is formulated to account for different types of motion that can occur between frames, so that temporally stable CNNs can be trained without the need for video material or expensive motion estimation. The training can be performed as a fine-tuning operation, without architectural modifications of the CNN. Our evaluation shows that the training strategy leads to large improvements in temporal smoothness. Moreover, in situations where the quantity of training data is limited, the regularization can help in boosting the generalization performance to a much larger extent than what is possible with naïve augmentation strategies.
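The core idea of the regularizer can be illustrated with a toy NumPy sketch: penalize the change in the model's output when the input undergoes a small, plausible inter-frame motion, so no video data or optical flow is needed. The `shift` motion model, the stand-in `model`, and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def shift(img, dx=1):
    """Toy motion model: horizontal translation by dx pixels."""
    return np.roll(img, dx, axis=1)

def model(img, w):
    """Stand-in for a CNN: a simple elementwise mapping."""
    return w * img

def stabilized_loss(img, target, w, lam=1.0):
    # Ordinary single-frame task loss (here: mean squared error).
    task = np.mean((model(img, w) - target) ** 2)
    # Temporal-stability regularizer: the output should change little
    # when the input is perturbed by a small inter-frame motion.
    reg = np.mean((model(shift(img), w) - model(img, w)) ** 2)
    return task + lam * reg
```

With `lam = 0` this reduces to the ordinary loss; increasing `lam` trades task accuracy for temporal smoothness, and, as in the abstract, the objective can be applied as a fine-tuning step without architectural changes.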


Automatic generation of video captions is a fundamental challenge in computer vision. Recent techniques typically employ a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for video captioning. These methods mainly focus on tailoring sequence learning through RNNs for better caption generation, whereas off-the-shelf visual features are borrowed from CNNs. We argue that careful designing of visual features for this task is equally important, and present a visual feature encoding technique to generate semantically rich captions using Gated Recurrent Units (GRUs). Our method embeds rich temporal dynamics in visual features by hierarchically applying Short Fourier Transform to CNN features of the whole video. It additionally derives high-level semantics from an object detector to enrich the representation with spatial dynamics of the detected objects. The final representation is projected to a compact space and fed to a language model. By learning a relatively simple language model comprising two GRU layers, we establish a new state-of-the-art on the MSVD and MSR-VTT datasets for the METEOR and ROUGE_L metrics.
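A rough sketch of the hierarchical Fourier encoding idea, assuming per-frame CNN features of shape `[T, D]`: split the clip into progressively finer temporal segments, take an FFT along time in each segment, and keep a few low-frequency magnitudes. The level count, segmenting scheme, and coefficient count below are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def fourier_encode(feats, levels=3, k=2):
    """Encode the temporal evolution of per-frame CNN features
    (shape [T, D]) with hierarchical short Fourier transforms.

    At level l the clip is split into 2**l segments; for each segment
    we keep the magnitudes of the first k FFT coefficients along time.
    """
    T, D = feats.shape
    chunks = []
    for level in range(levels):
        for seg in np.array_split(feats, 2 ** level, axis=0):
            spec = np.fft.rfft(seg, axis=0)   # FFT over the time axis
            mags = np.abs(spec[:k])           # low-frequency magnitudes
            if mags.shape[0] < k:             # pad very short segments
                mags = np.vstack([mags, np.zeros((k - mags.shape[0], D))])
            chunks.append(mags.reshape(-1))
    # Fixed-length clip descriptor, ready to project and feed to GRUs.
    return np.concatenate(chunks)
```

The result is a fixed-size vector regardless of clip length, which is what lets it be projected to a compact space and fed to the GRU language model.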


In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the AlphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special value in real applications. Explaining the logic of the AlphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game.


Our goal is for a robot to execute a previously unseen task based on a single video demonstration of the task. The success of our approach relies on the principle of transferring knowledge from seen tasks to unseen ones with similar semantics. More importantly, we hypothesize that to successfully execute a complex task from a single video demonstration, it is necessary to explicitly incorporate compositionality to the model. To test our hypothesis, we propose Neural Task Graph (NTG) Networks, which use task graph as the intermediate representation to modularize the representations of both the video demonstration and the derived policy. We show this formulation achieves strong inter-task generalization on two complex tasks: Block Stacking in BulletPhysics and Object Collection in AI2-THOR. We further show that the same principle is applicable to real-world videos. We show that NTG can improve data efficiency of few-shot activity understanding in the Breakfast Dataset.


This paper presents a method of learning qualitatively interpretable models in object detection using popular two-stage region-based ConvNet detection systems (i.e., R-CNN). R-CNN consists of a region proposal network and an RoI (Region-of-Interest) prediction network. By interpretable models, we focus on weakly-supervised extractive rationale generation, that is, learning to unfold latent discriminative part configurations of object instances automatically and simultaneously in detection, without using any supervision for part configurations. We utilize a top-down hierarchical and compositional grammar model embedded in a directed acyclic AND-OR Graph (AOG) to explore and unfold the space of latent part configurations of RoIs. We propose an AOGParsing operator to substitute the RoIPooling operator widely used in R-CNN, so the proposed method is applicable to many state-of-the-art ConvNet-based detection systems. The AOGParsing operator aims to harness both the explainable rigor of top-down hierarchical and compositional grammar models and the discriminative power of bottom-up deep neural networks through end-to-end training. In detection, a bounding box is interpreted by the best parse tree derived from the AOG on the fly, which is treated as the extractive rationale generated for interpreting the detection. In learning, we propose a folding-unfolding method to train the AOG and ConvNet end-to-end. In experiments, we build on top of R-FCN and test the proposed method on the PASCAL VOC 2007 and 2012 datasets, with performance comparable to state-of-the-art methods.
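The "best parse tree" over an AND-OR graph can be illustrated with a toy dynamic program, under the usual AOG semantics: an OR-node selects its highest-scoring child, an AND-node composes all of its children, and terminal nodes carry bottom-up scores. The node encoding and scores below are invented for illustration; the paper's AOGParsing operator performs this over RoI grids inside the network.

```python
# Toy AND-OR graph: ('T', score) is a terminal part hypothesis,
# ('AND', children) composes parts, ('OR', children) chooses among
# alternative decompositions.
def best_parse(node):
    """Return (score, parse_tree) for the best derivation under node."""
    kind = node[0]
    if kind == 'T':                      # terminal: bottom-up score
        return node[1], node
    if kind == 'AND':                    # composition: sum child scores
        scored = [best_parse(c) for c in node[1]]
        return sum(s for s, _ in scored), ('AND', [t for _, t in scored])
    if kind == 'OR':                     # alternative: keep the best child
        return max((best_parse(c) for c in node[1]), key=lambda r: r[0])
    raise ValueError(f'unknown node kind: {kind}')
```

For example, if a left/right part decomposition scores 0.7 + 0.8 while a holistic hypothesis scores 1.0, the AND branch wins, and its parse tree is the extractive rationale for the detection.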
