Dimensionality reduction is a key step in the learning process and plays an essential role in many applications. The most popular methods in this field, such as SVD, PCA, and LDA, can only be applied to data in vector format. This means that higher-order data such as matrices or, more generally, tensors must be unfolded into vectors. As a result, the spatial relationships among features are ignored and the risk of over-fitting increases. To address these issues, methods such as Generalized Low-Rank Approximation of Matrices (GLRAM) and Multilinear PCA (MPCA) have been proposed in recent years; they handle the data in its native format. These methods therefore preserve the spatial relationships of features and can reduce the risk of overfitting, and their time and space complexities are lower than those of vector-based methods. However, because they use fewer parameters, the search space of the multilinear approach is much smaller than that of the vector-based approach. To overcome this drawback of multilinear methods such as GLRAM, we propose a new method that generalizes GLRAM and, while preserving its merits, offers a larger search space. Experimental results confirm the quality of the proposed method. Moreover, applying this approach to other multilinear dimensionality reduction methods such as MPCA and MLDA is straightforward.
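To make the contrast with vector-based methods concrete, the following is a minimal sketch of the standard alternating scheme for GLRAM (not the generalization proposed in this paper): given matrices A_i of size n×m, it finds column-orthonormal transforms L (n×r1) and R (m×r2) so that each A_i is represented by the small matrix L^T A_i R instead of a flattened length-(n·m) vector. The names `glram`, `r1`, `r2`, and `iters` are our own illustrative choices.

```python
import numpy as np

def glram(As, r1, r2, iters=10):
    """Alternating maximization of sum_i ||L^T A_i R||_F^2
    over column-orthonormal L (n x r1) and R (m x r2).
    Illustrative sketch of classical GLRAM, not the paper's method."""
    n, m = As[0].shape
    R = np.eye(m, r2)  # simple initialization
    for _ in range(iters):
        # Fix R: L spans the top-r1 eigenvectors of sum_i A_i R R^T A_i^T
        ML = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(ML)[1][:, -r1:]  # eigh sorts ascending
        # Fix L: R spans the top-r2 eigenvectors of sum_i A_i^T L L^T A_i
        MR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(MR)[1][:, -r2:]
    return L, R

# Each A_i is reduced to the r1 x r2 core matrix L.T @ A_i @ R,
# so the two spatial modes of the matrix are transformed separately,
# rather than being destroyed by vectorization as in PCA.
```

Note that only one shared pair (L, R) is learned for the whole collection, which is the source of the small parameter count (n·r1 + m·r2 instead of the n·m·k parameters of a vectorized projection) and hence of the smaller search space discussed above.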