State-of-the-art convolutional neural networks excel at machine learning tasks such as face recognition and object classification, but suffer significantly under adversarial attack. It is crucial that safety-critical systems deploying machine learning models use robust models that can handle both the wide variability of the real world and malicious actors who may mount adversarial attacks. In this study, we investigate eye closedness detection to prevent vehicle accidents related to driver disengagement and drowsiness. We focus on adversarial attacks in this application domain, but emphasize that the methodology applies to many other domains. We develop two models to detect eye closedness: the first trained on eye images and the second on face images. We adversarially attack the models with the Projected Gradient Descent, Fast Gradient Sign, and DeepFool methods and report the adversarial success rate. We also study the effect of training data augmentation. Finally, we adversarially train the same models on perturbed images and report the success rate of this defense against the attacks. We hope our study lays the groundwork for preventing potential vehicle accidents by capturing drivers' face images and alerting drivers when their eyes close due to drowsiness.
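To make the attack setting concrete, a minimal sketch of the Fast Gradient Sign Method follows. It perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇_x L(x, y)). The sketch uses a toy logistic-regression classifier with made-up weights and features rather than the CNNs studied in the paper; all names and values here are hypothetical illustrations, not the paper's actual models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """FGSM against a toy logistic-regression classifier (hypothetical stand-in
    for an eye-closedness CNN). For binary cross-entropy loss with prediction
    p = sigmoid(w @ x), the gradient of the loss w.r.t. x is (p - y) * w."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    # Step of size eps in the direction that increases the loss.
    return x + eps * np.sign(grad_x)

# Toy weights and input (hypothetical values for illustration only).
w = np.array([1.5, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])
y = 1.0  # true label: e.g., "eyes closed"

x_adv = fgsm_attack(x, y, w, eps=0.1)
```

The perturbation is bounded by ε in the L∞ norm, yet it reliably pushes the model's confidence in the true label down; PGD iterates this step with projection back onto the ε-ball, and DeepFool instead searches for the smallest perturbation crossing the decision boundary.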