We study a class of neuro-symbolic generative models in which neural networks are used both for inference and as priors over symbolic, data-generating programs. As generative models, these programs capture compositional structures in a naturally explainable form. To tackle the challenge of performing program induction as an 'inner loop' to learning, we propose the Memoised Wake-Sleep (MWS) algorithm, which extends Wake-Sleep by explicitly storing and reusing the best programs discovered by the inference network throughout training. We use MWS to learn accurate, explainable models in three challenging domains: stroke-based character modelling, cellular automata, and few-shot learning on a novel dataset of real-world string concepts.
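The memoisation step described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the names `Memory` and `joint_score`, the integer "programs", and the scoring rule are all assumptions made for the example. The key property it demonstrates is that each datapoint keeps a small set of the best-scoring programs found so far, so the memory only ever improves as the inference network proposes new candidates.

```python
# Hypothetical sketch of the memoisation idea in Memoised Wake-Sleep (MWS):
# per datapoint, store the top-k programs (latent hypotheses) ranked by a
# joint score standing in for log p(program, data). All names are illustrative.

class Memory:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = {}  # datapoint id -> list of (score, program), best first

    def update(self, x_id, program, score):
        entries = self.store.setdefault(x_id, [])
        if program not in [p for _, p in entries]:  # avoid duplicate programs
            entries.append((score, program))
            entries.sort(key=lambda e: e[0], reverse=True)
            del entries[self.capacity:]  # keep only the top-k

    def best(self, x_id):
        return self.store[x_id][0][1]


def joint_score(program, x):
    # toy stand-in for log p(program, x): reward programs close to x
    return -abs(program - x)


# "wake" phase: proposals (here enumerated deterministically, standing in for
# samples from a recognition network) are scored and memoised
mem = Memory(capacity=3)
x = 10
for proposal in range(21):
    mem.update("x0", proposal, joint_score(proposal, x))

print(mem.best("x0"))  # → 10
```

In the actual algorithm the stored programs are reused both as training targets for the generative model and as high-quality proposals, which is what makes program induction tractable as an inner loop.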