As the internet continues to be populated with new devices and emerging technologies, the attack surface grows exponentially. Technology is shifting toward a profit-driven Internet of Things market in which security is an afterthought. Traditional defense approaches are no longer sufficient to detect both known and unknown attacks with high accuracy, whereas machine learning intrusion detection systems have proven successful at identifying unknown attacks with high precision. Nevertheless, machine learning models are themselves vulnerable to attack. Adversarial examples can be used to evaluate the robustness of a model before it is deployed, and they are critical to building a model that remains robust in an adversarial environment. Our work evaluates the robustness of both traditional machine learning and deep learning models using the Bot-IoT dataset. Our methodology comprises two main approaches: label poisoning, used to cause the model to misclassify, and the fast gradient sign method, used to evade detection. The experiments demonstrate that an attacker can manipulate or circumvent detection with significant probability.
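As a minimal illustration of the second approach, the fast gradient sign method perturbs an input in the direction of the sign of the loss gradient, x' = x + ε·sign(∇ₓL(x, y)). The sketch below applies it to a toy linear classifier with NumPy; the weights, inputs, and ε here are illustrative stand-ins, not values from the paper's Bot-IoT experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a logistic-regression classifier.

    For binary cross-entropy loss, the gradient w.r.t. the input is
    (sigmoid(w.x + b) - y) * w, so the attack adds eps * sign(grad).
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy detector: flags a sample as an attack (label 1) when w @ x + b > 0.
w = np.array([2.0, -1.0])
b = -0.5
x = np.array([1.0, 0.5])          # score = 1.0 > 0, detected as an attack

# Perturb the malicious sample so the detector's loss on the true
# label y=1 increases, pushing the score toward the benign side.
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.6)

print("original score:", w @ x + b)       # positive: detected
print("adversarial score:", w @ x_adv + b)  # negative: evades detection
```

In an intrusion-detection setting the same idea applies to traffic features rather than raw pixels; in practice the perturbation would also need to be projected back onto feasible feature values (e.g., non-negative packet counts) for the evasion to be realizable.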