Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black boxes themselves. This makes it difficult to understand the decision process that led to the selected configuration, reduces trust in HPO, and thus hinders its broad adoption. Here, we study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots. However, if such methods are naively applied to the experimental data of the HPO process in a post-hoc manner, the underlying sampling bias of the optimizer can distort interpretations. We propose a modified HPO method that efficiently balances the search for the global optimum w.r.t. predictive performance and the reliable estimation of IML explanations of an underlying black-box function by coupling Bayesian optimization and Bayesian Algorithm Execution. On benchmark cases of both synthetic objectives and HPO of a neural network, we demonstrate that our method returns more reliable explanations of the underlying black box without a loss of optimization performance.
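To make the post-hoc interpretation setting concrete, the following is a minimal sketch of a partial dependence estimate computed from evaluated configurations via a surrogate model. The toy objective, the two hyperparameters, and the use of a random forest surrogate are illustrative assumptions, not the method proposed in the paper; in particular, this naive estimate is exactly the kind that the optimizer's sampling bias can distort.

```python
# Illustrative sketch (assumed setup): estimate the partial dependence of a
# validation loss on one hyperparameter from post-hoc HPO data, using a
# random forest as a surrogate model for the black-box objective.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend these are 200 evaluated configurations (e.g. learning rate and
# weight decay, both rescaled to [0, 1]) logged during an HPO run.
X = rng.uniform(0.0, 1.0, size=(200, 2))
# Toy validation loss with its optimum at (0.3, 0.7) -- an assumption.
y = (X[:, 0] - 0.3) ** 2 + 0.5 * (X[:, 1] - 0.7) ** 2

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid):
    """Sweep one feature over `grid`, averaging predictions over the data
    to marginalize out all other features."""
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                          # fix feature of interest
        pd_values.append(model.predict(Xv).mean())  # average over the rest
    return np.array(pd_values)

grid = np.linspace(0.0, 1.0, 20)
pdp = partial_dependence(surrogate, X, feature=0, grid=grid)
print(grid[np.argmin(pdp)])  # PDP minimum should lie near 0.3
```

If the optimizer had concentrated its samples in one region instead of sampling uniformly, the averaging step would marginalize over a biased empirical distribution, and the resulting curve could be misleading, which is the distortion the abstract refers to.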