Topic: Locally Differentially Private (Contextual) Bandits Learning
First, we propose a simple black-box reduction framework that can solve a large family of context-free bandit learning problems with LDP guarantees. Based on our framework, we improve the best-known results for private bandit learning with one-point feedback (e.g., private bandit convex optimization) and obtain the first results for bandit convex optimization (BCO) with multi-point feedback under LDP. The LDP guarantee and the black-box nature make our framework more attractive in practical applications than previous specially designed algorithms for context-free bandits under the relatively weaker notion of differential privacy (DP). Furthermore, we extend our algorithm to generalized linear bandits with a regret bound of Õ(T^{3/4}/ε) under (ε, δ)-LDP, which is conjectured to be optimal. Note that, given the existing Ω(T) lower bound for DP contextual linear bandits, our result shows a fundamental difference between LDP and DP contextual bandits.
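Under LDP, each user perturbs their own feedback before it ever reaches the learner. A standard way to do this for bounded single-point reward feedback is the Laplace mechanism; the sketch below is a minimal illustration of that idea only (the function names, bounds, and the choice of the Laplace mechanism are assumptions for exposition, not the paper's specific construction):

```python
import random
import math

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def privatize_reward(reward, eps, lo=0.0, hi=1.0):
    """Release a single-point reward under eps-LDP via the Laplace mechanism.

    The reward is clipped to [lo, hi], so its sensitivity is hi - lo,
    and Laplace noise with scale (hi - lo) / eps gives eps-LDP.
    """
    clipped = min(max(reward, lo), hi)
    return clipped + laplace_noise((hi - lo) / eps)
```

Because the noise is zero-mean, the privatized reward is an unbiased estimate of the clipped reward, which is what lets a black-box reduction feed it into an ordinary (non-private) bandit algorithm.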
This paper experiments with the number of fully-connected layers in a deep convolutional neural network applied to the classification of fundus retinal images. The images analysed come from ODIR 2019 (Peking University International Competition on Ocular Disease Intelligent Recognition), which includes images of several eye diseases (cataract, glaucoma, myopia, diabetic retinopathy, age-related macular degeneration (AMD), hypertension) as well as normal cases. This work focuses on the classification of Normal, Cataract, AMD and Myopia. The feature extraction (convolutional) part of the neural network is kept fixed while the feature mapping (fully-connected) part is varied. Different data sets, each differing in the number of classes it contains, are also explored on these networks. This paper therefore aims to find the relationship between the number of classes and the number of fully-connected layers. It was found that the effect of increasing the number of fully-connected layers of a neural network depends on the type of data set being used. For simple, linearly separable data sets, adding a fully-connected layer is worth exploring and can result in better training accuracy, but no direct correlation was found. However, as the complexity of the data set increases (more overlapping classes), increasing the number of fully-connected layers causes the neural network to stop learning. The more complex the data set, the sooner this phenomenon occurs.
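The experimental setup described above, a fixed convolutional feature extractor followed by a fully-connected head of variable depth, can be sketched as follows. This is a minimal dependency-free illustration of the architecture pattern; the layer sizes, initialization, and the `build_fc_head` helper are assumptions for exposition, not the paper's actual network:

```python
import random

def linear(x, w, b):
    # Fully-connected layer: y = W x + b.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def build_fc_head(in_dim, hidden_dim, n_classes, n_fc_layers, rng):
    """Stack of n_fc_layers linear layers mapping conv features to class logits."""
    dims = [in_dim] + [hidden_dim] * (n_fc_layers - 1) + [n_classes]
    return [([[rng.gauss(0.0, 0.1) for _ in range(d_in)] for _ in range(d_out)],
             [0.0] * d_out)
            for d_in, d_out in zip(dims, dims[1:])]

def forward(features, head):
    # `features` stands in for the (frozen) convolutional extractor's output.
    x = features
    for i, (w, b) in enumerate(head):
        x = linear(x, w, b)
        if i < len(head) - 1:  # ReLU between hidden layers only
            x = relu(x)
    return x

rng = random.Random(0)
# 4 output classes, matching Normal / Cataract / AMD / Myopia.
head = build_fc_head(in_dim=64, hidden_dim=32, n_classes=4, n_fc_layers=3, rng=rng)
logits = forward([0.5] * 64, head)
```

Varying only `n_fc_layers` while holding the feature extractor fixed isolates the effect of fully-connected depth, which is the comparison the paper makes across data sets of different class counts.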