This paper introduces a class of mixed-integer formulations for trained ReLU neural networks. The approach balances model size and tightness by partitioning node inputs into groups and forming the convex hull over the partitions via disjunctive programming. At one extreme, one partition per input recovers the convex hull of a node, i.e., the tightest possible formulation for each node. For fewer partitions, we develop smaller relaxations that approximate the convex hull and show that they outperform existing formulations. Specifically, we propose theoretically motivated strategies for partitioning variables and validate these strategies with extensive computational experiments. Furthermore, the proposed scheme complements known algorithmic approaches; for example, optimization-based bound tightening captures dependencies within a partition.
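For concreteness, here is a minimal background sketch of what a node-level formulation looks like; the notation (weights $w$, bias $b$, pre-activation bounds $L \le w^\top x + b \le U$, binary indicator $\sigma$) is assumed for illustration and is not taken from the abstract. A single ReLU node $y = \max(0, w^\top x + b)$ admits the standard big-M mixed-integer formulation
\begin{align*}
y &\ge w^\top x + b, \\
y &\le w^\top x + b - L(1-\sigma), \\
y &\le U\sigma, \\
y &\ge 0, \qquad \sigma \in \{0,1\},
\end{align*}
which is exact but generally not tight in its continuous relaxation. The partition-based formulations summarized above can be read as interpolating between this single-binary big-M model and the node's full convex hull, applying the disjunctive hull construction to group-wise partial sums of the inputs rather than to every input individually.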