Seminar No. 2929: A GPU-Based Halpern Peaceman-Rachford (HPR) Method for Solving Convex Programming

Created: 2025/10/30  谭福平

Title: A GPU Based Halpern Peaceman-Rachford (HPR) Method for Solving Convex Programming

Speaker: Prof. 赵欣苑 (Beijing University of Technology)

Time: Friday, October 31, 2025, 13:30

Place: Room F309, Main Campus

Inviter: Prof. 徐姿


Abstract: We study the acceleration of a preconditioned alternating direction method of multipliers (pADMM), where the proximal terms are convex quadratic functions, for solving linearly constrained convex optimization problems. Our approach begins by reformulating pADMM as a proximal point method (PPM) with a positive semidefinite preconditioner, which may be degenerate due to the lack of strong convexity in the original proximal terms. We then accelerate pADMM by accelerating the resulting degenerate PPM (dPPM). In particular, we first develop an accelerated dPPM by incorporating Halpern iteration, achieving non-asymptotic convergence guarantees. Building upon this result, we further design an accelerated pADMM that enjoys non-asymptotic and nonergodic convergence rates under practical stopping criteria, including the Karush–Kuhn–Tucker (KKT) residual and the primal objective gap. Extensive GPU-based experiments on large-scale linear programming and convex composite quadratic programming benchmarks demonstrate the substantial advantages of our Halpern Peaceman–Rachford (HPR) method—a special case of the Halpern-accelerated pADMM applied to the dual formulation—over state-of-the-art solvers, including the award-winning PDLP, as well as PDQP, SCS, CuClarabel, and Gurobi, in delivering high-accuracy solutions.
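To make the Halpern acceleration named in the abstract concrete, here is a minimal sketch of the Halpern iteration z_{k+1} = (1/(k+2)) z_0 + ((k+1)/(k+2)) T(z_k) for a nonexpansive operator T. The operator used below (a plane rotation, which is nonexpansive with unique fixed point 0) is a toy stand-in chosen for illustration; it is not the dPPM operator of the talk, and the function names are hypothetical. A rotation is a useful test case because plain fixed-point (Picard) iteration on it never converges, while Halpern iteration does, with a fixed-point residual decaying like O(1/k).

```python
import cmath

def halpern(T, z0, iters):
    """Halpern iteration with anchor z0 and step weights 1/(k+2).

    Converges to the fixed point of the nonexpansive map T nearest to z0,
    with fixed-point residual |z_k - T(z_k)| <= 2|z0 - z*| / (k+1).
    """
    z = z0
    for k in range(iters):
        lam = 1.0 / (k + 2)              # anchor weight, vanishing as k grows
        z = lam * z0 + (1.0 - lam) * T(z)
    return z

# Toy nonexpansive operator: rotation by 0.5 rad in the complex plane.
# Its only fixed point is 0, so Halpern iterates should approach 0.
rot = cmath.exp(0.5j)
T = lambda z: rot * z

z0 = 1.0 + 0.0j
z = halpern(T, z0, 1000)
res = abs(z - T(z))                      # fixed-point residual
print(res)                               # small: bounded by 2/1001 in theory
```

Note that plain iteration z = T(z) would keep |z| = 1 forever here; the vanishing anchor term is what forces convergence, which is the same mechanism the talk applies to the dPPM reformulation of pADMM.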

