Matrix preconditioning is a critical technique for accelerating the solution of linear systems, and its performance heavily depends on the selection of preconditioning parameters. Traditional parameter-selection approaches typically prescribe fixed constants for specific scenarios; they rely on domain expertise and ignore instance-specific features of individual problems, which limits their performance. In contrast, machine learning (ML) approaches, though promising, are hindered by high inference costs and limited interpretability. To combine the strengths of both approaches, we propose a symbolic discovery framework, Symbolic Matrix Preconditioning (SymMaP), that learns efficient symbolic expressions for preconditioning parameters. Specifically, we employ a neural network to search the high-dimensional discrete space of expressions for those that accurately predict the optimal parameters. The learned expressions offer high inference efficiency and strong interpretability (they are concise symbolic formulas), making them simple and reliable to deploy. Experimental results show that SymMaP consistently outperforms traditional strategies across various benchmarks.
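As a rough illustration of the intended deployment pattern (not the authors' code), the Python/SciPy sketch below shows how a learned symbolic expression could map cheap matrix features to a preconditioning parameter, here an SSOR relaxation factor, before a preconditioned Krylov solve. The feature set and the formula inside `symbolic_omega` are hypothetical placeholders, not an expression actually discovered by SymMaP.

```python
# Minimal sketch, assuming a learned symbolic formula for the SSOR relaxation
# factor omega. The features and the formula below are illustrative placeholders.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def symbolic_omega(A):
    """Hypothetical learned expression: cheap matrix features -> omega in (0, 2)."""
    diag = np.abs(A.diagonal())
    offdiag = np.asarray(abs(A).sum(axis=1)).ravel() - diag
    dominance = float(np.mean(offdiag / diag))   # feature x1: relative off-diagonal mass
    density = A.nnz / A.shape[0]                 # feature x2: nonzeros per row
    # Placeholder symbolic formula in (x1, x2); SymMaP would discover its own.
    return float(np.clip(1.0 + 0.8 * dominance / (1.0 + 0.1 * density), 0.1, 1.9))

# Small test problem: 2-D Poisson matrix with a random right-hand side.
n = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()
b = np.random.default_rng(0).standard_normal(A.shape[0])

omega = symbolic_omega(A)

# SSOR preconditioner M = w/(2-w) * (D/w + L) D^{-1} (D/w + U),
# applied through two sparse triangular solves.
D = sp.diags(A.diagonal())
lower = (D / omega + sp.tril(A, k=-1)).tocsr()
upper = (D / omega + sp.triu(A, k=1)).tocsr()
scale = (2.0 - omega) / omega

def apply_M_inv(r):
    y = spla.spsolve_triangular(lower, r, lower=True)
    return scale * spla.spsolve_triangular(upper, D @ y, lower=False)

M = spla.LinearOperator(A.shape, matvec=apply_M_inv)
x, info = spla.cg(A, b, M=M, maxiter=500)
print(f"omega = {omega:.3f}, CG converged: {info == 0}")
```

Because the parameter is produced by a closed-form expression rather than a neural network forward pass, the per-instance overhead at solve time is negligible, which is the efficiency and interpretability argument made above.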