We develop a general framework for estimating function-valued parameters under equality or inequality constraints in infinite-dimensional statistical models. Such constrained learning problems are common across many areas of statistics and machine learning, where estimated parameters must satisfy structural requirements such as moment restrictions, policy benchmarks, calibration criteria, or fairness considerations. To address these problems, we characterize the solution as the minimizer of a penalized population risk using a Lagrange-type formulation, and analyze it through a statistical functional lens. Central to our approach is a constraint-specific path through the unconstrained parameter space that defines the constrained solutions. For a broad class of constraint-risk pairs, this path admits closed-form expressions and reveals how constraints shape optimal adjustments. When closed forms are unavailable, we derive recursive representations that support tractable estimation. Our results also suggest natural estimators of the constrained parameter, constructed by combining estimates of unconstrained components of the data-generating distribution. Thus, our procedure can be integrated with any statistical learning approach and implemented using standard software. We provide general conditions under which the resulting estimators achieve optimal risk and constraint satisfaction, and we demonstrate the flexibility and effectiveness of the proposed method through various examples, simulations, and real-data applications.
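To make the Lagrange-type characterization concrete, the display below is an illustrative sketch only; the notation ($\Theta$ for the unconstrained parameter space, $R$ for the population risk, $\Phi$ for an equality-constraint functional, and $\lambda$ for the multiplier) is assumed for exposition and is not taken verbatim from the paper.

\[
\theta_{\lambda} \;:=\; \operatorname*{arg\,min}_{\theta \in \Theta}\; \bigl\{\, R(\theta) \;+\; \lambda\, \Phi(\theta) \,\bigr\},
\qquad
\theta^{*} \;=\; \theta_{\lambda^{*}},
\quad \text{where } \lambda^{*} \text{ is chosen so that } \Phi\bigl(\theta_{\lambda^{*}}\bigr) = 0 .
\]

Under this reading, the map $\lambda \mapsto \theta_{\lambda}$ is the constraint-specific path through the unconstrained parameter space, and the constrained solution is the point on that path at which the constraint is exactly satisfied (for inequality constraints, the analogous condition is the complementary-slackness requirement $\lambda^{*} \ge 0$, $\lambda^{*}\,\Phi(\theta_{\lambda^{*}}) = 0$).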