## The Physical Interpretation of the Fourier and Laplace Transforms, and the Difference Between Them

February 5, 2018 · 算法与数学之美

What relationship exists between the Z transform and the Fourier transform? The physical meaning of the Fourier transform is very clear: it decomposes a signal, usually represented in the time domain, into a superposition of sinusoids, each of which is completely characterized by its amplitude, frequency, and phase. The result of a Fourier transform is usually called the spectrum, comprising an amplitude spectrum and a phase spectrum, which describe how amplitude and phase are distributed over frequency. In the natural world, frequency carries a clear physical meaning. Take sound: male voices tend to be deep and resonant mainly because they contain more low-frequency components, while female voices tend to be higher and clearer mainly because they contain more high-frequency components.
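The decomposition described above is easy to verify numerically. The following sketch (using NumPy, with made-up signal parameters) builds a signal from two sinusoids and recovers their amplitudes and frequencies from the amplitude spectrum:

```python
import numpy as np

# Hypothetical example: a signal made of two sinusoids, sampled at fs Hz.
fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)    # 1 second of samples
# A 50 Hz sinusoid (amplitude 1.0) plus a 120 Hz sinusoid (amplitude 0.5)
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)                         # one-sided spectrum
freqs = np.fft.rfftfreq(len(x), 1 / fs)    # frequency axis (Hz)
amplitude = 2 * np.abs(X) / len(x)         # amplitude spectrum
phase = np.angle(X)                        # phase spectrum

# The two sinusoids show up as peaks at exactly 50 Hz and 120 Hz.
peaks = freqs[amplitude > 0.1]
print(peaks)   # [ 50. 120.]
```

Because both frequencies fall exactly on FFT bins here, the peaks are sharp; for arbitrary frequencies a window function would be needed to control spectral leakage.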

The Z transform can be regarded as the Laplace transform applied to discrete signals and systems; seen this way, both the importance of the Z transform and its relationship to the Fourier transform become easy to understand. The Z plane of the Z transform and the S plane of the Laplace transform are related by the mapping z = exp(sT), where T is the sampling period. Evaluating the Z transform on the unit circle yields the discrete-time Fourier transform (DTFT).

Editor: Gemini

The potential of graph convolutional neural networks for the task of zero-shot learning has been demonstrated recently. These models are highly sample efficient as related concepts in the graph structure share statistical strength allowing generalization to new classes when faced with a lack of data. However, knowledge from distant nodes can get diluted when propagating through intermediate nodes, because current approaches to zero-shot learning use graph propagation schemes that perform Laplacian smoothing at each layer. We show that extensive smoothing does not help the task of regressing classifier weights in zero-shot learning. In order to still incorporate information from distant nodes and utilize the graph structure, we propose an Attentive Dense Graph Propagation Module (ADGPM). ADGPM allows us to exploit the hierarchical graph structure of the knowledge graph through additional connections. These connections are added based on a node's relationship to its ancestors and descendants and an attention scheme is further used to weigh their contribution depending on the distance to the node. Finally, we illustrate that finetuning of the feature representation after training the ADGPM leads to considerable improvements. Our method achieves competitive results, outperforming previous zero-shot learning approaches.
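The core idea of the dense propagation module can be sketched in a few lines. The code below is purely illustrative (the names `alpha` and `adj_at_distance`, the toy chain graph, and the aggregation rule are our assumptions, not the authors' implementation): each node aggregates features from nodes at every hop distance in a single step, weighting each distance by an attention coefficient, instead of diluting distant information through stacked smoothing layers:

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 5, 4
H = rng.normal(size=(num_nodes, dim))          # node features

# Toy chain graph 0-1-2-3-4; adj_at_distance[d-1][i, j] = 1 iff node j is
# exactly d hops away from node i.
A = np.eye(num_nodes, k=1) + np.eye(num_nodes, k=-1)
max_dist = 3
adj_at_distance = []
reach = np.eye(num_nodes)                      # nodes reachable so far
seen = np.eye(num_nodes)
for d in range(1, max_dist + 1):
    reach = (reach @ A > 0).astype(float)
    hop_d = np.clip(reach - seen, 0, 1)        # exactly-d-hop connections
    adj_at_distance.append(hop_d)
    seen = np.clip(seen + hop_d, 0, 1)

alpha = np.array([0.5, 0.3, 0.2])              # per-distance attention (toy values)

# One dense propagation step: aggregate distance-weighted, degree-normalized
# neighbor features on top of each node's own features.
agg = H.copy()
for d, hop in enumerate(adj_at_distance):
    deg = hop.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                        # avoid divide-by-zero
    agg += alpha[d] * (hop @ H) / deg
print(agg.shape)   # (5, 4)
```

In the paper the attention weights are learned and the dense connections follow ancestor/descendant relations in the knowledge graph; here they are fixed constants on an undirected chain for brevity.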

We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, using only the word embedding of the category and its relationship to other categories for which visual data are provided. The key to dealing with an unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing a visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2–3% on some metrics to a whopping 20% on a few).
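The mechanism described in this abstract reduces to a simple computation. The sketch below (toy shapes, a random untrained weight matrix, and a hand-made chain graph standing in for the knowledge graph; not the paper's implementation) shows a single graph-convolution layer mapping word embeddings of categories to classifier-weight vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, emb_dim, clf_dim = 6, 8, 10
H = rng.normal(size=(num_classes, emb_dim))      # semantic (word) embeddings

# Toy knowledge graph over the categories (a chain, for illustration)
A = np.zeros((num_classes, num_classes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A_hat = A + np.eye(num_classes)                  # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))         # row normalization

W = rng.normal(size=(emb_dim, clf_dim)) * 0.1    # layer weights (untrained)

def gcn_layer(H, W):
    # Propagate embeddings over the graph, project, apply ReLU:
    # H' = ReLU(D^{-1} * A_hat * H * W)
    return np.maximum(D_inv @ A_hat @ H @ W, 0)

classifiers = gcn_layer(H, W)                    # one classifier row per class
print(classifiers.shape)   # (6, 10)
```

In training, the rows of `classifiers` for seen categories would be regressed against ground-truth classifier weights to fit `W`; the rows for unseen categories are then the zero-shot classifiers used at test time.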
