Split inference (SI) partitions deep neural networks into distributed sub-models, enabling collaborative learning without directly sharing raw data. However, SI remains vulnerable to Data Reconstruction Attacks (DRAs), where adversaries exploit exposed smashed data to recover private inputs. Despite substantial progress in attack-defense methodologies, the fundamental quantification of privacy risks is still underdeveloped. This paper establishes an information-theoretic framework for privacy leakage in SI, defining leakage as the adversary's certainty about the private input and deriving both average-case and worst-case error lower bounds. We further introduce Fisher-approximated Shannon information (FSInfo), a new privacy metric based on Fisher Information (FI) that enables operational and tractable computation of privacy leakage. Building on this metric, we develop FSInfoGuard, a defense mechanism that achieves a strong privacy-utility tradeoff. Our empirical study shows that FSInfo is an effective privacy metric across datasets, models, and defense strengths: it provides accurate privacy estimates that support the design of defenses outperforming existing approaches in both privacy protection and utility preservation. The code is available at https://github.com/SASA-cloud/FSInfo.
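The exact formulation of FSInfo is developed in the body of the paper; as intuition for how Fisher Information can bound an adversary's reconstruction error, the sketch below estimates the FI matrix of the smashed data about the input for a toy client-side encoder, assuming additive Gaussian noise on the smashed data. The encoder, noise model, and the log-determinant leakage proxy are illustrative assumptions, not the paper's method.

```python
import numpy as np

def encoder(x, W):
    # toy client-side sub-model: one linear layer followed by tanh
    return np.tanh(W @ x)

def fisher_information(x, W, sigma=0.1, eps=1e-5):
    """Approximate the Fisher information of the smashed data about x.

    Assumption (for this sketch only): the smashed data is observed with
    additive Gaussian noise N(0, sigma^2 I). Under that model the FI matrix
    is I(x) = J^T J / sigma^2, where J is the Jacobian of the encoder at x,
    estimated here by forward finite differences.
    """
    f0 = encoder(x, W)
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (encoder(xp, W) - f0) / eps
    return J.T @ J / sigma**2

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # private input
W = rng.normal(size=(6, 4))     # client-side weights
I = fisher_information(x, W)

# A scalar leakage proxy: the FI log-determinant. By Cramér-Rao-style
# reasoning, larger FI tightens the adversary's error lower bound,
# i.e. indicates more leakage.
sign, logdet = np.linalg.slogdet(I)
```

Higher `logdet` would flag a more vulnerable split point; a defense can inject noise or regularize the encoder to drive it down, trading utility for privacy.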