We study computable probably approximately correct (CPAC) learning, in which learners are required to be computable functions. It has previously been observed that the Fundamental Theorem of Statistical Learning, which characterizes PAC learnability by finiteness of the Vapnik-Chervonenkis (VC) dimension, no longer holds in this framework. Recent works recovered analogs of the Fundamental Theorem in the computable setting, for instance by introducing an effective VC-dimension. Guided by this, we investigate the connection between CPAC learning and recursively enumerably representable (RER) classes, whose members can be algorithmically listed. Our results show that the effective VC-dimension can take arbitrary values above the traditional one, even for RER classes, which yields a whole family of (non-)examples for various notions of CPAC learning. Yet the two dimensions coincide for classes satisfying sufficiently strong notions of CPAC learning. We then observe that CPAC learnability can also be characterized via containment of RER classes that realize the same samples. Furthermore, we show that CPAC learnable classes satisfying a unique identification property are necessarily RER. Finally, we establish that agnostic learnability can be guaranteed for RER classes by considering the relaxed notion of nonuniform CPAC learning.