Despite their widespread success, deep neural networks are still rarely applied to functional data. The infinite dimensionality of functional data means standard learning algorithms can be applied only after appropriate dimension reduction, typically achieved via basis expansions. Currently, these bases are chosen a priori, without the information of the task at hand, and thus may not be effective for the designated task. We instead propose to adaptively learn these bases in an end-to-end fashion. We introduce neural networks that employ a new Basis Layer whose hidden units are each basis functions themselves, implemented as micro neural networks. Our architecture learns to apply parsimonious dimension reduction to functional inputs, focusing only on information relevant to the target rather than on irrelevant variation in the input function. Across numerous classification and regression tasks with functional data, our method empirically outperforms other types of neural networks, and we prove that our approach is statistically consistent with low generalization error. Code is available at: \url{https://github.com/jwyyy/AdaFNN}.
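The core idea can be illustrated with a minimal NumPy sketch: each hidden unit of the Basis Layer is a tiny network $b_i(t)$ evaluated on the observation grid, and the unit's output is a numerical inner product $\langle x, b_i\rangle$ with the input function. All names, the micro-network size, and the two-layer tanh form below are illustrative assumptions, not the paper's exact implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

def micro_basis(t, W1, b1, w2, b2):
    # A tiny MLP b(t): evaluates one learned basis function on timestamps t.
    h = np.tanh(t[:, None] * W1 + b1)  # hidden activations, shape (len(t), hidden)
    return h @ w2 + b2                 # basis values, shape (len(t),)

def basis_layer(x, t, params):
    # Each hidden unit is a basis function; its score is the inner product
    # <x, b_i>, approximated by an average over the sampling grid t.
    scores = []
    for (W1, b1, w2, b2) in params:
        b = micro_basis(t, W1, b1, w2, b2)
        scores.append(np.mean(x * b))  # Riemann approximation of the integral
    return np.array(scores)

# Example: one function observed on a grid, reduced to K = 3 adaptive scores
# that a downstream network would consume; in training, the micro-network
# weights are learned end-to-end with that network.
t = np.linspace(0.0, 1.0, 100)
x = np.sin(2 * np.pi * t)
K, hidden = 3, 8
params = [(rng.normal(size=hidden), rng.normal(size=hidden),
           rng.normal(size=hidden), rng.normal())
          for _ in range(K)]
scores = basis_layer(x, t, params)
print(scores.shape)  # (3,)
```

Because the bases are ordinary network parameters, gradients from the task loss flow through the inner products into each micro network, which is what lets the learned dimension reduction focus on target-relevant variation.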