Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as lifelong learning. Here, we propose an online hyperparameter optimization algorithm that is asymptotically exact and computationally tractable, both theoretically and practically. Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs). It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously, without repeatedly rolling out iterative optimization. This procedure yields systematically better generalization performance compared to standard methods, at a fraction of the wallclock time.
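To make the online tuning idea concrete, below is a minimal sketch of one simple, well-known member of this family: one-step online hypergradient descent on the learning rate, where the hyperparameter is updated from gradients already computed during training rather than by re-running the optimization. This is an illustrative instance under stated assumptions, not the paper's exact algorithm; the toy regression problem, the one-step truncation of the hypergradient, and the hyper-learning-rate `beta` are all assumptions chosen for illustration.

```python
import numpy as np

# Toy problem: linear regression with loss L(w) = 0.5 * ||X w - y||^2 / n.
# (Assumed setup for illustration only.)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=200)

def grad(w):
    # Gradient of the mean squared error loss with respect to w.
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(10)
eta = 1e-3              # hyperparameter: learning rate, tuned online
beta = 1e-2             # hyper-learning-rate (assumed value for illustration)
g_prev = np.zeros_like(w)

for t in range(500):
    g = grad(w)
    # One-step hypergradient: since w_t = w_{t-1} - eta * g_prev,
    # dL(w_t)/d(eta) = -g . g_prev when the dependence of w_{t-1} on eta
    # is truncated. Gradient descent on eta therefore adds beta * g . g_prev.
    eta += beta * (g @ g_prev)
    eta = max(eta, 1e-6)  # keep the step size positive
    w -= eta * g          # ordinary parameter update with the adapted eta
    g_prev = g
```

Note how hyperparameter and parameter updates are interleaved within a single training run; the RNN analogy in the paper generalizes this one-step truncation to longer-range, asymptotically exact hypergradient accumulation.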