In science, computing, and engineering, a black box is a device, system, or object that can be viewed purely in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is "opaque" (black). Almost anything can be described as a black box: a transistor, an engine, an algorithm, the human brain, an institution, or a government. To analyse something modelled as an open system with the typical "black box approach", only its stimulus/response behaviour is considered, in order to infer the (unknown) contents of the box. The usual representation of such a black-box system is a data-flow diagram centred on the box. The opposite of a black box is a system whose inner components or logic are available for inspection, usually referred to as a white box (sometimes also known as a "clear box" or "glass box").
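The stimulus/response idea above can be sketched in a few lines of Python. Everything here is illustrative: `opaque_system` is a hypothetical stand-in for the box, and the observer only ever sees its input/output pairs, from which a model of the (unknown) box is inferred.

```python
# Hypothetical black box: the observer cannot see this definition,
# only query it with stimuli and record the responses.
def opaque_system(x):
    return 3 * x + 2  # hidden transfer characteristic

# Probe the box with chosen stimuli and record the responses.
stimuli = [0, 1, 2, 3]
responses = [opaque_system(x) for x in stimuli]

# From the input/output pairs alone, infer a model of the (unknown) box,
# here by fitting a line through two observations.
slope = (responses[1] - responses[0]) / (stimuli[1] - stimuli[0])
intercept = responses[0] - slope * stimuli[0]
print(slope, intercept)  # → 3.0 2.0
```

The inferred slope and intercept reproduce the hidden transfer characteristic, even though the internals were never inspected.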

VIP Content

Topic: Imitation Attacks and Defenses for Black-box Machine Translation Systems

Abstract: We consider an adversary looking to steal a black-box machine translation (MT) system, for financial gain or to exploit model errors. We first show that black-box MT systems can be stolen by querying them with monolingual sentences and training an imitation model to mimic their outputs. Through simulated experiments, we demonstrate that MT model stealing is possible even when the imitation model's input data or architecture differs from the victim's. Applying these ideas, we train imitation models that come within 0.6 BLEU of three production MT systems on both high-resource and low-resource language pairs. We then leverage the similarity of the imitation models to transfer adversarial examples to the production systems. We use gradient-based attacks that expose inputs which cause semantically incorrect translations, dropped content, and vulgar model outputs. To mitigate these vulnerabilities, we propose a defense that modifies the translation outputs in order to misdirect the optimization of imitation models. This defense degrades the imitation model's BLEU and the attack transfer rates, at some cost in BLEU and inference speed.
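The first step of the attack described above, querying the black box with monolingual sentences and training an imitation model on its outputs, can be sketched as follows. This is a toy illustration, not the paper's method: `victim_translate` is a hypothetical stand-in for a production MT system, and a word-level lookup table stands in for the neural imitation model.

```python
# Hypothetical victim "MT system": as in the black-box setting,
# the adversary can only query it, never see its parameters.
def victim_translate(sentence):
    table = {"hello": "bonjour", "world": "monde", "cat": "chat"}
    return " ".join(table.get(w, w) for w in sentence.split())

# Step 1: query the black box with monolingual source sentences.
monolingual = ["hello world", "hello cat"]
distilled = [(src, victim_translate(src)) for src in monolingual]

# Step 2: train an imitation model on the harvested (input, output) pairs.
# A word-level lookup table stands in for training a neural MT model here.
imitation = {}
for src, tgt in distilled:
    for s, t in zip(src.split(), tgt.split()):
        imitation[s] = t

def imitation_translate(sentence):
    return " ".join(imitation.get(w, w) for w in sentence.split())

print(imitation_translate("hello world"))  # → bonjour monde
```

Once the imitation model matches the victim on such queries, it can serve as a local surrogate, which is what makes the gradient-based attack transfer described in the abstract possible.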


Latest Content

A new modification of the Neural Additive Model (NAM) called SurvNAM, together with further modifications of it, is proposed to explain predictions of a black-box machine learning survival model. The method is based on applying the original NAM to the explanation problem in the framework of survival analysis. The basic idea behind SurvNAM is to train the network by means of a specific expected loss function which takes into account peculiarities of the survival model's predictions and is based on approximating the black-box model by an extension of the Cox proportional hazards model which uses the well-known Generalized Additive Model (GAM) in place of the simple linear combination of covariates. The proposed method SurvNAM supports both local and global explanation. For the local explanation, a set of examples around the explained example is randomly generated; the global explanation uses the whole training dataset. The proposed modifications of SurvNAM are based on Lasso-based regularization of the GAM functions and on a special representation of the GAM functions as weighted linear and non-linear parts, implemented as a shortcut connection. A number of numerical experiments illustrate the efficiency of SurvNAM.
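The GAM-style surrogate described above can be sketched in a few lines. This is a minimal illustration, not the SurvNAM implementation: the shape functions and weights are invented for the example, with each covariate's shape function formed as a weighted linear part plus a non-linear part joined by a shortcut connection, and the additive predictor plugged into a Cox-style hazard ratio.

```python
import math

# One GAM shape function per covariate. The shortcut connection lets the
# linear term bypass the non-linear transform, as in the proposed
# modification of SurvNAM (tanh stands in for a trained subnetwork).
def shape_function(x, w_linear, w_nonlinear):
    return w_linear * x + w_nonlinear * math.tanh(x)

def cox_gam_risk(covariates, weights):
    # Cox-style hazard ratio: the exponent of the additive GAM predictor
    # replaces the simple linear combination of covariates.
    additive = sum(shape_function(x, wl, wn)
                   for x, (wl, wn) in zip(covariates, weights))
    return math.exp(additive)

# Illustrative covariates and per-feature (linear, non-linear) weights.
risk = cox_gam_risk([0.5, -1.0], [(0.2, 0.8), (0.4, 0.3)])
print(risk)
```

Because the predictor is additive, each covariate's contribution can be plotted as its own shape function, which is what makes such a surrogate explainable while it approximates the black-box survival model.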

