Learning disentangled representations is important in representation learning; it aims to learn a low-dimensional representation of data in which each dimension corresponds to one underlying generative factor. Because generative factors may be causally related, causal disentangled representation learning has received widespread attention. In this paper, we first propose new flows, called causal flows, that incorporate causal structure information into the model. Building on the variational autoencoder (VAE) commonly used in disentangled representation learning, we design a new model, CF-VAE, which enhances the disentanglement ability of the VAE encoder by utilizing causal flows. By further introducing supervision from ground-truth factors, we establish the disentanglement identifiability of our model. Experimental results on both synthetic and real datasets show that CF-VAE achieves causal disentanglement and supports intervention experiments. Moreover, CF-VAE exhibits strong performance on downstream tasks and shows potential for learning the causal structure among factors.
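To make the idea of a causal flow concrete, the following is a minimal, hypothetical NumPy sketch (not the paper's implementation): an affine autoregressive transform whose dependencies are masked by the adjacency matrix of an assumed causal DAG, so each latent dimension is transformed conditioned only on its causal parents. The matrix `A` and the weight names `w_mu`/`w_sig` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Assumed causal DAG over factors, as a strictly lower-triangular
# adjacency matrix: A[i, j] = 1 means factor j is a parent of factor i.
A = np.array([[0, 0, 0],
              [1, 0, 0],   # factor 1 <- factor 0
              [1, 0, 0]])  # factor 2 <- factor 0

# Masked weights: zeroing by A restricts each row to the parents only.
w_mu = rng.normal(size=(d, d)) * A
w_sig = rng.normal(size=(d, d)) * A

def causal_flow(eps):
    """Map noise eps to z in topological order of the DAG.

    z_i = eps_i * exp(log_sigma_i(pa(i))) + mu_i(pa(i)),
    where mu_i and log_sigma_i depend only on the parents of i.
    """
    z = np.zeros_like(eps)
    for i in range(d):          # assumes indices follow a topological order
        mu = w_mu[i] @ z        # uses only already-computed parent values
        log_sig = w_sig[i] @ z
        z[i] = eps[i] * np.exp(log_sig) + mu
    return z

eps = rng.normal(size=d)
z = causal_flow(eps)
```

Because the transform is lower-triangular, its log-determinant is simply the sum of the `log_sig` terms, which keeps the flow's density computation cheap; a root node (row of zeros in `A`) passes its noise through unchanged.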