In Bayesian statistics, a hyperparameter is a parameter of a prior distribution; the term distinguishes it from the parameters of the model for the underlying system under analysis.
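For example, in a Beta-Bernoulli model the prior Beta(α, β) over a coin's bias θ has hyperparameters α and β, while θ itself is a model parameter. A minimal sketch of the conjugate update (plain Python with SciPy; the observation counts are made up for illustration):

```python
from scipy.stats import beta

# Hyperparameters of the Beta prior over the Bernoulli parameter theta.
a, b = 2.0, 2.0

# Hypothetical observations: 7 successes, 3 failures.
successes, failures = 7, 3

# Conjugacy: the posterior is again a Beta distribution, with the
# hyperparameters shifted by the observed counts.
posterior = beta(a + successes, b + failures)
print(posterior.mean())  # (2 + 7) / (2 + 2 + 7 + 3) ≈ 0.643
```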

VIP Content

Domain adaptation (DA) offers a valuable means of reusing data and models for new problem domains. However, robust techniques have not yet been developed for time series data with varying degrees of data availability. In this paper, we make three main contributions to fill this gap. We propose a novel Convolutional deep Domain Adaptation model for Time Series data (CoDATS) that significantly improves accuracy and training time over state-of-the-art DA strategies on real-world sensor data benchmarks. By utilizing data from multiple source domains, we increase the usefulness of CoDATS to further improve accuracy over prior single-source methods, particularly on complex time series datasets with high variability between domains. Second, we propose a novel Domain Adaptation with Weak Supervision (DA-WS) method that utilizes weak supervision in the form of target-domain label distributions, which may be easier to collect than other data labels. Third, we perform comprehensive experiments on diverse real-world datasets to evaluate the effectiveness of our domain adaptation and weak supervision methods. Results show that CoDATS for single-source DA significantly improves over the state-of-the-art, and we achieve additional accuracy improvements by using data from multiple source domains and weak supervisory signals.
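The abstract does not spell out the training mechanism, but convolutional domain-adversarial models of this kind are commonly built from a feature extractor, a label classifier, and a domain classifier joined by a gradient reversal layer. The PyTorch sketch below shows only that general pattern; it is not the authors' exact CoDATS architecture, and all layer sizes, class counts, and tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the
    backward pass, so the feature extractor is pushed toward
    domain-invariant features while the domain classifier tries to
    tell domains apart."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# Illustrative shapes: univariate series of length 128, 6 task classes,
# 2 domains (source vs. target).
features = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)
label_clf = nn.Linear(64, 6)    # task (label) classifier
domain_clf = nn.Linear(64, 2)   # domain classifier, fed through the GRL

x = torch.randn(32, 1, 128)     # a batch of time series
z = features(x)
y_logits = label_clf(z)                          # trained on labeled source data
d_logits = domain_clf(GradReverse.apply(z, 1.0)) # trained on source + target data
```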


Latest Content

Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as lifelong learning. Here, we propose an online hyperparameter optimization algorithm that is asymptotically exact and computationally tractable, both theoretically and practically. Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs). It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously, without repeatedly rolling out iterative optimization. This procedure yields systematically better generalization performance than standard methods, at a fraction of the wall-clock time.
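The abstract leaves the update rule unstated; as a simplified stand-in for the paper's RNN-based method, the flavor of tuning a hyperparameter online, in lockstep with the parameters, can be shown with plain hypergradient descent on a learning rate (all names and constants below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize f(w) = 0.5 * ||A w - b||^2.
A = rng.normal(size=(100, 10))
b = rng.normal(size=100)
grad = lambda w: A.T @ (A @ w - b)

w = np.zeros(10)
lr = 1e-3          # hyperparameter, tuned online
meta_lr = 1e-7     # step size for the hyperparameter update
g_prev = np.zeros_like(w)

for step in range(500):
    g = grad(w)
    # Since w_t = w_{t-1} - lr * g_{t-1}, the hypergradient is
    # d f(w_t) / d lr = -g_t . g_{t-1}; descending it gives:
    lr += meta_lr * (g @ g_prev)
    # Parameter and hyperparameter are updated in the same pass,
    # with no inner optimization loop to unroll.
    w -= lr * g
    g_prev = g
```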

