The conditional sampling model, introduced by Canonne, Ron and Servedio (SODA 2014, SIAM J.\ Comput.\ 2015) and independently by Chakraborty, Fischer, Goldhirsh and Matsliah (ITCS 2013, SIAM J.\ Comput.\ 2016), is a common framework underlying a number of studies (and variant models) of strengthened models of distribution testing. A core task in these investigations is estimating the mass of individual elements. The above works, as well as the improvement of Kumar, Meel and Pote (AISTATS 2025), all yield algorithms whose conditional sample complexity is polylogarithmic in the domain size $N$. In this work we shatter the polylogarithmic barrier, and provide an estimator for the mass of individual elements that uses only $O(\log \log N) + O(\mathrm{poly}(1/\varepsilon))$ conditional samples. This in particular yields an improvement (and in some cases a unifying framework) for a number of related tasks, such as testing by learning of any label-invariant property, and distance estimation between two (unknown) distributions. The work of Chakraborty, Chakraborty and Kumar (SODA 2024) contains lower bounds for some of the above tasks. We derive from their work a nearly matching lower bound of $\tilde\Omega(\log\log N)$ for the estimation task. We also show that the full power of the conditional model is indeed required for the double-logarithmic bound. For the testing of label-invariant properties, we exponentially improve the previous lower bound from double-logarithmic to $\Omega(\log N)$ conditional samples, whereas our testing-by-learning algorithm provides an upper bound of $O(\mathrm{poly}(1/\varepsilon)\cdot\log N \log \log N)$.
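For readers unfamiliar with the model, the following is a minimal sketch of the conditional sampling (COND) oracle underlying the results above, as defined in the cited works: the tester specifies a subset $S$ of the domain $[N]$ of positive mass, and receives an element $i \in S$ with probability $\mu(i)/\mu(S)$. The distribution `mu` and the name `cond_sample` are illustrative choices, not part of the paper.

```python
import random

def cond_sample(mu, S):
    """Draw one conditional sample: an element i of S with probability mu(i)/mu(S).

    `mu` is a dict mapping domain elements to probabilities summing to 1;
    `S` is any subset of the domain with positive mass under `mu`.
    """
    support = [i for i in S if mu.get(i, 0.0) > 0.0]
    total = sum(mu[i] for i in support)
    if total == 0.0:
        raise ValueError("conditioning set has zero mass")
    r = random.uniform(0.0, total)
    acc = 0.0
    for i in support:
        acc += mu[i]
        if r <= acc:
            return i
    return support[-1]  # guard against floating-point rounding

# Example: a distribution on N = 8 elements, conditioned on the set {0, 3, 5}.
N = 8
mu = {i: (i + 1) / sum(range(1, N + 1)) for i in range(N)}
print(cond_sample(mu, {0, 3, 5}))
```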