We introduce two nonlinear sufficient dimension reduction methods for regressions with tensor-valued predictors. Our goal is twofold: first, to preserve the tensor structure, and in particular the meaning of the tensor modes, when performing dimension reduction, for improved interpretation; second, to substantially reduce the number of parameters involved in dimension reduction, thereby achieving model parsimony and enhancing estimation accuracy. Our two tensor dimension reduction methods echo the two commonly used tensor decomposition mechanisms: one is the Tucker decomposition, which reduces a larger tensor to a smaller one; the other is the CP decomposition, which represents an arbitrary tensor as a sum of rank-one tensors. We establish the Fisher consistency of our methods at the population level, as well as their consistency and convergence rates at the sample level. Both methods are easy to implement numerically: the Tucker form can be implemented through a sequence of least-squares steps, and the CP form can be implemented through a sequence of singular value decompositions. We investigate the finite-sample performance of our methods and show substantial improvements in accuracy over existing methods in simulations and two data applications.
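As a point of reference for the two decomposition mechanisms invoked above (this is a generic illustration, not the estimators proposed in the paper), the following minimal NumPy sketch computes a truncated higher-order SVD, an SVD-based instance of the Tucker decomposition that compresses a larger tensor into a smaller core plus one factor matrix per mode; the helper names `unfold` and `hosvd` are hypothetical.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: one SVD per mode yields a factor matrix;
    the core is T multiplied along each mode by the transposed factor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Mode-n product with U.T: contract mode `mode` of the core with U.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Example: compress a 10 x 8 x 6 tensor to a 3 x 3 x 2 core.
X = np.random.randn(10, 8, 6)
core, factors = hosvd(X, ranks=(3, 3, 2))
print(core.shape)                   # (3, 3, 2)
print([U.shape for U in factors])   # [(10, 3), (8, 3), (6, 2)]
```

A CP decomposition would instead approximate `X` by a sum of rank-one tensors, each the outer product of one column from a factor matrix per mode.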