In this paper, we investigate the problem of stochastic multi-armed bandits (MAB) in the (local) differential privacy (DP/LDP) model. Unlike previous results, which assume bounded or sub-Gaussian reward distributions, we focus on the setting where each arm's reward distribution only has a finite $(1+v)$-th moment for some $v\in (0, 1]$. In the first part, we study the problem in the central $\epsilon$-DP model. We first provide a near-optimal result by developing a private and robust Upper Confidence Bound (UCB) algorithm. We then improve this result via a private and robust version of the Successive Elimination (SE) algorithm. Finally, we establish a lower bound showing that the instance-dependent regret of our improved algorithm is optimal. In the second part, we study the problem in the $\epsilon$-LDP model. We propose an algorithm that can be seen as a locally private and robust version of the SE algorithm, and we prove that it achieves (near-)optimal rates for both instance-dependent and instance-independent regret. Our results reveal differences between private MAB with bounded/sub-Gaussian rewards and private MAB with heavy-tailed rewards. To achieve these (near-)optimal rates, we develop several new hard instances and private robust estimators as byproducts, which may be of use in other related problems. Finally, experiments support our theoretical findings and demonstrate the effectiveness of our algorithms.
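To give a flavor of the private robust estimators involved, the following is a minimal sketch (not the paper's exact estimator) of a standard $\epsilon$-DP mean estimate for heavy-tailed samples: truncate each sample to a bounded range, average, and add Laplace noise calibrated to the sensitivity of the truncated mean. The function name, the truncation level $B$, and the noise calibration are illustrative choices, not quantities taken from the paper.

```python
import numpy as np

def private_truncated_mean(x, B, eps, rng=None):
    """Illustrative eps-DP mean estimate for heavy-tailed samples.

    Each sample is clipped to [-B, B], so changing one sample moves the
    empirical mean by at most 2B/n; adding Laplace noise with scale
    2B/(n*eps) then yields eps-differential privacy for the output.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    clipped = np.clip(x, -B, B)                      # robustness to heavy tails
    noise = rng.laplace(loc=0.0, scale=2 * B / (n * eps))  # privacy noise
    return clipped.mean() + noise
```

In a bandit algorithm such as the private SE variants above, an estimate of this kind would replace the ordinary empirical mean of each arm, with $B$ chosen to balance the truncation bias (controlled by the $(1+v)$-th moment bound) against the added privacy noise.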