It has been shown that encoding images and videos as Symmetric Positive Definite (SPD) matrices, and accounting for the Riemannian geometry of the resulting space, can improve classification performance. Manifold geometry is typically taken into account by embedding the manifold into a tangent space or into a Reproducing Kernel Hilbert Space (RKHS). Recently, it was shown that embedding such manifolds into a Random Projection Space (RPS), rather than an RKHS or a tangent space, yields higher classification and clustering performance. However, depending on the structure and dimensionality of the randomly generated hyperplanes, classification performance over an RPS may vary significantly. In addition, fine-tuning an RPS is data-expensive (it requires held-out validation data), time-consuming, and resource-demanding. In this paper, we introduce an approach to learn an optimized kernel-based projection (with fixed dimensionality) by employing the concept of subspace clustering. Specifically, we encode the association of each data point with its underlying subspace to generate meaningful hyperplanes. Further, we adopt dictionary learning, sparse coding, and discriminative analysis for the optimized kernel-based projection space (OPS) on SPD manifolds. We validate our algorithm on several classification tasks. The experimental results demonstrate that the proposed method outperforms state-of-the-art methods on such manifolds.
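As a concrete illustration of the tangent-space embedding mentioned above, the following is a minimal sketch (not the paper's method) of the standard log-Euclidean approach: an SPD matrix is mapped to the tangent space at the identity via the matrix logarithm and then vectorized, so that ordinary Euclidean classifiers can be applied. The function name `spd_log_embedding` and the weighting scheme are illustrative choices, not identifiers from the paper.

```python
import numpy as np
from scipy.linalg import logm

def spd_log_embedding(S):
    """Map an SPD matrix to the tangent space at the identity via the
    matrix logarithm (log-Euclidean embedding), then vectorize the
    upper triangle so Euclidean tools apply."""
    L = np.real(logm(S))                 # symmetric for SPD input
    iu = np.triu_indices_from(L)
    # Weight off-diagonal entries by sqrt(2) so the vector's Euclidean
    # norm equals the matrix's Frobenius norm.
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return w * L[iu]

# Example: build a 3x3 SPD matrix as A @ A.T + I and embed it.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = A @ A.T + np.eye(3)
v = spd_log_embedding(S)
print(v.shape)  # (6,) = d*(d+1)/2 entries for d = 3
```

A d x d SPD matrix thus becomes a d(d+1)/2-dimensional vector; both the RPS and the learned OPS discussed in this paper are alternatives to this fixed, data-independent embedding.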