Breaking | DeepMind wins the UAI 2018 Best Paper Award with "A Dual Approach to Scalable Verification of Deep Networks"

[Introduction] UAI, the Conference on Uncertainty in Artificial Intelligence, is dedicated to uncertainty in AI, focusing on problems of knowledge representation, acquisition, and reasoning under uncertainty. UAI 2018 was held August 6-10 in Monterey, California, USA.


DeepMind's paper on AI safety, "A Dual Approach to Scalable Verification of Deep Networks", won the conference's Best Paper Award. Congratulations!

Paper link:

http://www.zhuanzhi.ai/paper/3f9af35857a91073012f6c4c6bce1186



DeepMind CEO Demis Hassabis also offered his congratulations: this is an important step toward provable guarantees of robustness for general models!



Paper abstract:

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
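The core idea in the abstract, dualizing the verification problem so that any choice of multipliers yields a sound upper bound that can be tightened and stopped at any time, can be illustrated on a toy problem. The sketch below is not the paper's formulation for neural networks; it is a minimal, assumed example (linear objective over a box with one extra linear constraint) showing how weak duality gives an anytime-valid bound under subgradient descent on the dual variable:

```python
import numpy as np

# Toy Lagrangian-relaxation sketch (NOT the paper's exact method):
# maximize c^T x over the box [-1, 1]^n subject to a^T x <= b0.
# Dualizing the constraint with multiplier lam >= 0 gives, for EVERY lam,
# a valid upper bound on the true maximum (weak duality):
#   g(lam) = max_{x in box} c^T x + lam * (b0 - a^T x)
#          = lam * b0 + sum_i |c_i - lam * a_i|
# Minimizing g over lam >= 0 tightens the bound, and the loop can be
# stopped at any iteration with a sound certificate -- the "anytime"
# property the abstract describes.

rng = np.random.default_rng(0)
n = 5
c = rng.normal(size=n)   # hypothetical objective direction
a = rng.normal(size=n)   # hypothetical constraint normal
b0 = 0.5                 # constraint offset (x = 0 is feasible)

def dual_bound(lam):
    """g(lam): an upper bound on the constrained maximum for any lam >= 0."""
    return lam * b0 + np.abs(c - lam * a).sum()

def dual_subgradient(lam):
    """Subgradient of g at lam: b0 - a^T x*, where x* attains the inner max."""
    x_star = np.sign(c - lam * a)  # box maximizer of (c - lam * a)^T x
    return b0 - a @ x_star

lam, step = 0.0, 0.1
bounds = []
for t in range(200):
    bounds.append(dual_bound(lam))
    # Projected subgradient step with a diminishing step size; the
    # projection max(0, .) keeps the multiplier dual-feasible.
    lam = max(0.0, lam - step / (1 + t) * dual_subgradient(lam))

print(f"initial bound: {bounds[0]:.3f}, best bound: {min(bounds):.3f}")
```

Every value in `bounds` is a certified upper bound, so the running minimum only improves; the paper applies this same weak-duality structure to the layer-by-layer constraints of a neural network, which is what makes the resulting verifier scalable and anytime.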

