Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group-fair federated learning. However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type framework with inexact local proximal solutions and explicit fairness regularization, which we call \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on the local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$; that is, the algorithm is asymptotically stationary and its convergence guarantee is free of a variance-induced noise floor.
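For concreteness, a Robbins-Monro schedule refers to step sizes satisfying the classical summability conditions below; the notation $\eta_r$ for the round-$r$ step size and the specific polynomial decay shown on the right are illustrative assumptions, not necessarily the exact schedule analyzed here:
\begin{equation*}
  % Classical Robbins-Monro conditions: steps decay slowly enough to
  % reach any stationary point, yet fast enough to suppress noise.
  \sum_{r=0}^{\infty} \eta_r = \infty,
  \qquad
  \sum_{r=0}^{\infty} \eta_r^2 < \infty,
  \qquad
  \text{e.g., } \eta_r = \frac{\eta_0}{r+1}, \quad \eta_0 > 0.
\end{equation*}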