Quantum machine learning (QML) promises compact and expressive representations, but it suffers from the measurement bottleneck: a narrow quantum-to-classical readout that limits performance and amplifies privacy risk. We propose a lightweight residual hybrid architecture that concatenates quantum features with the raw inputs before classification, bypassing the bottleneck without increasing quantum complexity. Experiments show that our model outperforms pure quantum and prior hybrid models in both centralized and federated settings, achieving up to a 55% accuracy improvement over quantum baselines while retaining low communication cost and improved privacy robustness. Ablation studies confirm the effectiveness of the residual connection at the quantum-classical interface. Our method offers a practical, near-term pathway for integrating quantum models into privacy-sensitive, resource-constrained settings such as federated edge learning.
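To make the residual idea concrete, the following is a minimal sketch of a hybrid classifier that concatenates quantum circuit outputs with the raw input before the classification head. It assumes a PennyLane + PyTorch stack; the circuit layout and all names (`n_qubits`, `ResidualHybridClassifier`, etc.) are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of the residual quantum-classical interface (hypothetical code,
# not the authors' implementation). Assumes PennyLane and PyTorch are installed.
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 4  # assumed circuit width

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_circuit(inputs, weights):
    # Encode classical features, apply a trainable variational block,
    # then read out one expectation value per qubit (the narrow readout).
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class ResidualHybridClassifier(nn.Module):
    def __init__(self, in_dim, n_classes, q_layers=2):
        super().__init__()
        weight_shapes = {"weights": (q_layers, n_qubits)}
        self.qlayer = qml.qnn.TorchLayer(quantum_circuit, weight_shapes)
        # The head sees quantum features *and* the raw input (residual path),
        # so information lost at the measurement bottleneck can still reach it.
        self.head = nn.Linear(in_dim + n_qubits, n_classes)

    def forward(self, x):
        # Feed (part of) the input through the circuit to obtain quantum features.
        q_feat = self.qlayer(x[:, :n_qubits])
        # Residual concatenation at the quantum-classical interface.
        return self.head(torch.cat([x, q_feat], dim=-1))
```

Removing the raw-input branch from the concatenation recovers a conventional hybrid model, which is the comparison the ablation studies target.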