Quantum machine learning (QML) promises compact and expressive representations, but suffers from the measurement bottleneck: a narrow quantum-to-classical readout that limits performance and amplifies privacy risk. We propose a lightweight residual hybrid architecture that concatenates quantum features with the raw inputs before classification, bypassing the bottleneck without increasing quantum complexity. Experiments show that our model outperforms pure quantum and prior hybrid models in both centralized and federated settings, achieving up to a +55% accuracy improvement over quantum baselines while retaining low communication cost and enhanced privacy robustness. Ablation studies confirm the effectiveness of the residual connection at the quantum-classical interface. Our method offers a practical, near-term pathway for integrating quantum models into privacy-sensitive, resource-constrained settings such as federated edge learning.
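As a minimal sketch of the residual connection described above, the classifier can be viewed as a classical head acting on the concatenation of the quantum readout with the raw input. The class and parameter names below (ResidualHybridClassifier, quantum_encoder, quantum_dim) are hypothetical, and the stand-in encoder is a plain classical module used only so the example runs; in practice it would be a differentiable wrapper around a variational quantum circuit.

```python
import torch
import torch.nn as nn


class ResidualHybridClassifier(nn.Module):
    """Illustrative sketch: classify on [quantum features || raw input].

    The concatenation reintroduces the raw input alongside the narrow
    quantum-to-classical readout, so the classifier is not limited to
    the measured quantum features alone.
    """

    def __init__(self, input_dim, quantum_dim, num_classes, quantum_encoder):
        super().__init__()
        # Any differentiable module mapping (batch, input_dim) -> (batch, quantum_dim);
        # in a real system this would expose a variational quantum circuit.
        self.quantum_encoder = quantum_encoder
        self.classifier = nn.Linear(input_dim + quantum_dim, num_classes)

    def forward(self, x):
        q = self.quantum_encoder(x)          # narrow quantum-to-classical readout
        h = torch.cat([q, x], dim=-1)        # residual concatenation with raw input
        return self.classifier(h)


# Usage with a purely classical stand-in for the quantum encoder (demonstration only).
mock_encoder = nn.Sequential(nn.Linear(8, 4), nn.Tanh())
model = ResidualHybridClassifier(input_dim=8, quantum_dim=4, num_classes=3,
                                 quantum_encoder=mock_encoder)
logits = model(torch.randn(16, 8))           # shape: (16, 3)
```

Because the quantum circuit's size and measurement budget are unchanged, the only added cost is the slightly wider classical linear layer, which is consistent with the low communication overhead claimed for the federated setting.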