We revisit the problem of mean estimation in the Gaussian sequence model with $\ell_p$ constraints for $p \in [0, \infty]$. We demonstrate two phenomena in the behavior of the maximum likelihood estimator (MLE), which depend on the noise level, the radius of the (quasi)norm constraint, the dimension, and the norm index $p$. First, if $p$ lies between $0$ and $1 + \Theta(\tfrac{1}{\log d})$, inclusive, or if it is greater than or equal to $2$, the MLE is minimax rate-optimal for all noise levels and all constraint radii. On the other hand, for the remaining norm indices -- namely, if $p$ lies strictly between $1 + \Theta(\tfrac{1}{\log d})$ and $2$ -- we exhibit a more striking behavior: the MLE is minimax rate-suboptimal, despite its nonlinearity in the observations, for essentially all noise levels and constraint radii for which nonlinear estimates are necessary for minimax-optimal estimation. Our results imply that when given $n$ independent and identically distributed Gaussian samples, the MLE can be suboptimal by a polynomial factor in the sample size. Our lower bounds are constructive: whenever the MLE is rate-suboptimal, we provide explicit instances on which it provably incurs suboptimal risk. Finally, in the non-convex case -- namely, when $p < 1$ -- we develop sharp local Gaussian width bounds, which may be of independent interest.
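To make the setup concrete: in this model one observes $y = \theta^* + \sigma z$ with $z \sim \mathcal{N}(0, I_d)$, and the constrained MLE is the Euclidean projection of $y$ onto the $\ell_p$ ball of radius $R$, i.e., $\widehat{\theta}_{\mathrm{MLE}} \in \arg\min_{\|\theta\|_p \le R} \|y - \theta\|_2$. The following is a minimal Python sketch of this projection, not taken from the paper; the helper name `mle_lp_ball` and all parameter values are illustrative, and the generic solver is only reliable in the convex regime $p \ge 1$.

```python
import numpy as np
from scipy.optimize import minimize

def mle_lp_ball(y, p, R):
    """Euclidean projection of y onto the l_p ball of radius R,
    i.e., the constrained MLE in the Gaussian sequence model.
    The feasible set is convex only for p >= 1; for p < 1 a local
    solver like this can at best return a stationary point."""
    d = y.shape[0]
    obj = lambda t: 0.5 * np.sum((t - y) ** 2)   # least squares = Gaussian log-likelihood
    grad = lambda t: t - y
    # Inequality constraint in SLSQP convention: fun(t) >= 0,
    # here R^p - ||t||_p^p >= 0.
    cons = {"type": "ineq",
            "fun": lambda t: R ** p - np.sum(np.abs(t) ** p)}
    res = minimize(obj, x0=np.zeros(d), jac=grad,
                   constraints=[cons], method="SLSQP")
    return res.x

# Illustrative simulation: theta* on the l_p sphere, y = theta* + sigma * z.
rng = np.random.default_rng(0)
d, sigma, R, p = 50, 1.0, 5.0, 1.5   # p in the intermediate regime (1, 2)
theta_star = rng.normal(size=d)
theta_star *= R / np.sum(np.abs(theta_star) ** p) ** (1 / p)
y = theta_star + sigma * rng.normal(size=d)
theta_hat = mle_lp_ball(y, p, R)
print("squared error:", np.sum((theta_hat - theta_star) ** 2))
```

For $p < 1$ the constraint set is non-convex, so no off-the-shelf convex solver applies; this is the regime where the local Gaussian width bounds mentioned above become relevant.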