Sampling tasks have been successful in establishing quantum advantages, both in theory and in experiments. This has fueled the use of quantum computers for generative modeling, i.e., for producing samples that follow the probability distribution underlying a given dataset. In particular, the ability to build generative models on classically hard distributions would immediately preclude classical simulability, owing to known theoretical separations. In this work, we study quantum generative models from the perspective of their output distributions, showing that models whose outputs anticoncentrate are not trainable on average, including those exhibiting quantum advantage. In contrast, models that output data from sparse distributions can be trained. We consider special cases that enhance trainability and observe that this opens the path to classical algorithms for surrogate sampling. This observed trade-off is linked to the verification of quantum processes. We conclude that quantum advantage can still be found in generative models, although its source must be distinct from anticoncentration.