The resurgence of self-supervised learning, whereby a deep learning model generates its own supervisory signal from the data, promises a scalable way to tackle the dramatically increasing size of real-world data sets without human annotation. However, these methods are so computationally demanding that, for state-of-the-art performance, classical hardware requirements represent a significant bottleneck to further progress. Here we take the first steps towards understanding whether quantum neural networks could meet the demand for more powerful architectures, and we test their effectiveness in proof-of-principle hybrid experiments. Interestingly, we observe a numerical advantage for the learning of visual representations using small-scale quantum neural networks over equivalently structured classical networks, even when the quantum circuits are sampled with only 100 shots. Furthermore, we apply our best quantum model to classify unseen images on the ibmq_paris quantum computer and find that current noisy devices can already achieve accuracy equal to that of the equivalent classical model on downstream tasks.
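The abstract's low-shot regime can be made concrete with a toy simulation. The sketch below is illustrative only and is not the paper's architecture: it simulates, in plain NumPy, a hypothetical 2-qubit parameterized circuit (single-qubit RY rotations followed by a CNOT) and estimates a Pauli-Z expectation value from just 100 measurement shots, the same shot budget mentioned above. All function and variable names are assumptions for this example.

```python
import numpy as np

# Illustrative sketch (not the paper's model): a 2-qubit parameterized
# circuit simulated exactly, then sampled with a finite shot budget.
rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 (left bit) as control, qubit 1 as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit_state(params):
    """|psi> = CNOT (RY(p0) x RY(p1)) |00>."""
    psi = np.zeros(4)
    psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    return CNOT @ psi

def sampled_expectation_z0(params, shots=100):
    """Estimate <Z> on qubit 0 from a finite number of shots."""
    probs = circuit_state(params) ** 2          # Born-rule probabilities
    counts = rng.multinomial(shots, probs)      # simulated measurement record
    # Basis order |00>, |01>, |10>, |11>; qubit 0 is the left bit.
    z0 = np.array([1, 1, -1, -1])
    return (counts @ z0) / shots

est = sampled_expectation_z0(np.array([0.3, 1.1]), shots=100)
print(est)  # shot-noisy estimate; the exact value here is cos(0.3)
```

With only 100 shots the estimate fluctuates around the exact expectation cos(0.3) ≈ 0.955; increasing `shots` shrinks the statistical error at the usual 1/sqrt(shots) rate, which is the trade-off the abstract's 100-shot result speaks to.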