With deep neural networks (DNNs) increasingly embedded in modern society, ensuring their safety has become a critical and urgent issue. In response, substantial effort has been devoted to the red-blue adversarial framework, in which the red team focuses on identifying vulnerabilities in DNNs and the blue team on mitigating them. However, existing approaches on both sides remain computationally intensive, constraining their applicability to large-scale models. To overcome this limitation, this thesis develops time-efficient methods for evaluating and enhancing the adversarial robustness of DNNs.