While the Joint-Embedding Predictive Architecture (JEPA) has emerged as a powerful framework for learning rich latent representations, it fundamentally lacks generative ability. Meanwhile, latent space reasoning approaches for Transformer models, such as COCONUT, do improve performance, but they ultimately rely on token-by-token generation, which still accumulates compounding error and depends on context information for reasoning insights. To address these limitations, we propose JEPA-Reasoner, a novel JEPA model endowed with generative ability that reasons in latent space. We pair it with a separate action-taker model, Talker, to produce human-readable sentences. Our approach demonstrates that decoupling latent space reasoning from token generation enables JEPA-Reasoner to produce mixed latent vectors that may lay the foundation for multi-threaded reasoning, while performing autoregressive generation with superior robustness to compounding error.