We introduce the problem of learning mixtures of $k$ subcubes over $\{0,1\}^n$, which contains many classic learning theory problems as special cases (and is itself a special case of others). We give a surprising $n^{O(\log k)}$-time learning algorithm based on higher-order multilinear moments. It is not possible to learn the parameters of the mixture, because the same distribution can be represented by quite different models. Instead, we develop a framework for reasoning about how multilinear moments can pinpoint essential features of the mixture, such as the number of components. We also give applications of our algorithm to learning decision trees with stochastic transitions (which also capture interesting scenarios where the transitions are deterministic but there are latent variables). Using our algorithm for learning mixtures of subcubes, we can approximate the Bayes optimal classifier within additive error $\epsilon$ on $k$-leaf decision trees with at most $s$ stochastic transitions on any root-to-leaf path in $n^{O(s + \log k)}\cdot\text{poly}(1/\epsilon)$ time. In this stochastic setting, the classic Occam algorithms for learning decision trees with zero stochastic transitions break down, while the low-degree algorithm of Linial et al. inherently has a quasipolynomial dependence on $1/\epsilon$. In contrast, as we will show, mixtures of $k$ subcubes are uniquely determined by their degree-$2 \log k$ moments and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on $1/\epsilon$ of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in being able to accommodate stochastic transitions. Using our multilinear moment techniques, we also give the first improved upper and lower bounds since the work of Feldman et al. for the related but harder problem of learning mixtures of binary product distributions.
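To make the problem setup concrete, the following is a minimal illustrative sketch (not code from the paper) of the generative model: each of the $k$ components is a subcube of $\{0,1\}^n$, i.e. a product distribution that fixes some coordinates to 0 or 1 and leaves the rest uniform. The function name `sample_mixture` and the dictionary encoding of a subcube are assumptions made for illustration.

```python
import random

def sample_mixture(weights, subcubes, n):
    """Draw one sample from a mixture of subcubes over {0,1}^n.

    weights:  mixing weights of the k components (sum to 1)
    subcubes: one dict per component mapping a coordinate index to
              its fixed bit; unlisted coordinates are uniform in {0,1}
    """
    # Pick a component according to the mixing weights.
    i = random.choices(range(len(weights)), weights=weights)[0]
    fixed = subcubes[i]
    # Fixed coordinates take their assigned bit; free ones are uniform.
    return [fixed.get(j, random.randint(0, 1)) for j in range(n)]

# Example: a mixture of k = 2 subcubes over {0,1}^4. The first
# component fixes x_0 = 1 and x_1 = 0; the second fixes only x_2 = 1.
weights = [0.5, 0.5]
subcubes = [{0: 1, 1: 0}, {2: 1}]
x = sample_mixture(weights, subcubes, 4)
```

The learner observes only samples like `x`; the identifiability subtlety mentioned above is that two quite different choices of `weights` and `subcubes` can induce exactly the same distribution on samples.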