Peking University was founded in 1898 as the Imperial University of Peking (Jingshi Daxuetang). It was the first national university in modern China and the earliest school established under the name and status of a "university"; its founding marked the beginning of modern higher education in China. It was the only institution of the period created as the nation's highest seat of learning, and it initially also served as the country's highest educational administrative organ, exercising the functions of a ministry of education and overseeing education nationwide. Peking University pioneered university-level instruction in the humanities, sciences, law and political science, commerce, agriculture, and medicine among Chinese institutions, making it a founder of modern Chinese higher education.


Topic: Locally Differentially Private (Contextual) Bandits Learning

Abstract:

First, we propose a simple black-box reduction framework that can solve a large family of context-free bandit learning problems with LDP guarantees. Based on this framework, we improve the best-known results for private bandit learning with one-point feedback (e.g., private bandit convex optimization) and obtain the first results for bandit convex optimization (BCO) with multi-point feedback under LDP. The LDP guarantee and the black-box nature make our framework more attractive in practice than previous specially designed, and relatively weaker, differentially private (DP) context-free bandit algorithms. Furthermore, we extend our algorithm to generalized linear bandits with a regret bound of Õ(T^{3/4}/ε) under (ε, δ)-LDP, which is conjectured to be optimal. Note that, given the existing Ω(T) lower bound for DP contextual linear bandits, our result shows a fundamental difference between LDP and DP contextual bandits.
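The core idea of such a black-box reduction can be sketched in a few lines: each reward is perturbed with Laplace noise on the user's side before any learner sees it, so an arbitrary off-the-shelf bandit algorithm can simply be run on the privatized feedback. The snippet below is a minimal, hypothetical illustration (the function names and the ε-greedy learner are assumptions for exposition, not the paper's algorithm):

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via inverse-CDF transform.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(reward, eps, bound=1.0, rng=None):
    """Local randomizer: the user adds Laplace(2*bound/eps) noise to a
    reward in [-bound, bound] (sensitivity 2*bound) before it leaves the
    device, giving an eps-LDP report of that single reward."""
    rng = rng or random
    return reward + laplace_noise(2.0 * bound / eps, rng)

def ldp_epsilon_greedy(true_means, T, eps, explore=0.1, seed=0):
    """Black-box reduction sketch: run an ordinary epsilon-greedy bandit
    learner, but feed it only privatized rewards. Returns pull counts."""
    rng = random.Random(seed)
    K = len(true_means)
    counts, sums = [0] * K, [0.0] * K
    for _ in range(T):
        if rng.random() < explore or 0 in counts:
            arm = rng.randrange(K)          # explore
        else:                               # exploit noisy empirical means
            arm = max(range(K), key=lambda a: sums[a] / counts[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += privatize(reward, eps, bound=1.0, rng=rng)
    return counts
```

Because the Laplace noise is zero-mean, the learner's empirical means remain unbiased; privacy only inflates their variance, which is where the extra regret cost comes from.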


Latest Papers

This paper experiments with the number of fully-connected layers in a deep convolutional neural network applied to the classification of fundus retinal images. The images analysed come from ODIR 2019 (the Peking University International Competition on Ocular Disease Intelligent Recognition) [9], which includes various eye diseases (cataract, glaucoma, myopia, diabetic retinopathy, age-related macular degeneration (AMD), hypertension) as well as normal cases. This work focuses on classifying the Normal, Cataract, AMD, and Myopia classes. The feature-extraction (convolutional) part of the network is kept fixed while the feature-mapping (fully-connected) part is varied. Several data sets, each differing in the number of classes it contains, are also explored with these networks. The paper thus aims to find the relationship between the number of classes and the number of fully-connected layers. It was found that the effect of increasing the number of fully-connected layers depends on the type of data set used. For simple, linearly separable data sets, adding fully-connected layers is worth exploring and can improve training accuracy, although no direct correlation was found. However, as the complexity of the data set increases (more overlapping classes), adding fully-connected layers causes the network to stop learning, and this happens sooner the more complex the data set is.
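The experimental setup — a fixed convolutional feature extractor with a classification head of variable depth — can be sketched with a small helper that computes the linear-layer shapes for a head with a given number of fully-connected layers. This is a hypothetical, stdlib-only illustration; the function name, the hidden width of 256, and the 4-class output are assumptions, not values from the paper:

```python
def build_fc_head(in_features, n_fc_layers, hidden=256, n_classes=4):
    """Return (in, out) shape pairs for a classification head with
    n_fc_layers fully-connected layers. The convolutional feature
    extractor stays fixed across runs; only this head changes."""
    if n_fc_layers < 1:
        raise ValueError("need at least one fully-connected layer")
    # First layer maps the flattened conv features; intermediate layers
    # keep a constant hidden width; the last layer maps to the classes.
    dims = [in_features] + [hidden] * (n_fc_layers - 1) + [n_classes]
    return list(zip(dims[:-1], dims[1:]))
```

For example, `build_fc_head(512, 1)` yields `[(512, 4)]` (a single linear classifier), while `build_fc_head(512, 3)` yields `[(512, 256), (256, 256), (256, 4)]` — the same extractor output feeding a deeper head, which is exactly the variable the experiment sweeps.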
