Chinese Title: | Research and Implementation of Adversarial Training for Object Detectors |
Name: | |
Confidentiality Level: | Public |
Thesis Language: | chi |
Discipline Code: | 080901 |
Discipline: | |
Student Type: | Bachelor |
Degree: | Bachelor of Science |
Degree Year: | 2023 |
Campus: | |
School: | |
First Supervisor: | |
First Supervisor's Affiliation: | |
Second Supervisor: | |
Submission Date: | 2023-06-18 |
Defense Date: | 2023-05-16 |
English Title: | Adversarial Training for Object Detectors |
Chinese Keywords: | |
English Keywords: | Adversarial training ; Object detection ; Adversarial robustness |
Chinese Abstract: |
Object detection is a fundamental task in computer vision and plays a key role in many visual applications, including autonomous driving, surveillance, and robotics. However, with the rise of adversarial attacks, the security of object detectors faces severe challenges. Adversarial training, which minimizes the interference of adversarial attacks on a model and improves its adversarial robustness, can effectively defend against such attacks. However, when an object detector is adversarially trained with conventional methods, its accuracy on clean samples drops sharply while the adversarial robustness gained in return is extremely limited, a phenomenon known as the "robustness bottleneck". To address this problem, this thesis proposes an improved adversarial training method. Specifically, it adopts adversarial training with progressively increasing attack strength and removes the BN (batch normalization) layers from the object detector, effectively mitigating the "robustness bottleneck". On the object detection dataset PASCAL VOC, the proposed method significantly improves the robustness of the SSD detector against various adversarial attacks while maintaining high accuracy on clean samples. |
English Abstract: |
Object detection is a fundamental task in computer vision, playing a critical role in many visual applications such as autonomous driving, surveillance, and robotics. However, the rise of adversarial attacks poses a severe challenge to the security of object detectors. Adversarial training is commonly used to enhance the adversarial robustness of models against such attacks. However, traditional adversarial training methods for object detectors suffer from a "robustness bottleneck" phenomenon, where detection accuracy on clean samples drops dramatically while the adversarial robustness gained is limited. To address this issue, we propose an improved adversarial training method for object detectors. Specifically, we remove the batch normalization layers from the object detector and gradually increase the strength of the adversarial attacks during training, which alleviates the "robustness bottleneck" phenomenon. Our experimental results on the PASCAL VOC dataset demonstrate that the proposed method significantly improves the adversarial robustness of the SSD detector against various types of attacks while maintaining high accuracy on clean samples. |
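The abstracts outline the training procedure: train the detector, with its batch normalization layers removed, on adversarial examples whose perturbation budget grows over the course of training. Below is a minimal PyTorch sketch of such a loop, assuming an L-infinity PGD attack and a linear epsilon schedule; `detector`, `detection_loss`, `train_loader`, `eps_max`, and all hyperparameters are illustrative placeholders, not the thesis's actual configuration.

```python
# Sketch only: `detector` is assumed to be an SSD-style model with no
# BatchNorm layers (per the abstract) whose loss is differentiable
# w.r.t. the input images; `detection_loss` and `train_loader` are
# hypothetical placeholders.
import torch

def pgd_attack(model, loss_fn, images, targets, eps, alpha, steps):
    """Projected gradient descent under an L-inf budget of `eps`."""
    adv = images.clone().detach()
    adv += torch.empty_like(adv).uniform_(-eps, eps)  # random start
    adv = adv.clamp(0.0, 1.0)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), targets)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                  # ascend the loss
            adv = images + (adv - images).clamp(-eps, eps)   # project to ball
            adv = adv.clamp(0.0, 1.0)
        adv = adv.detach()
    return adv

def adversarial_train(detector, detection_loss, train_loader,
                      epochs, eps_max=8 / 255):
    optimizer = torch.optim.SGD(detector.parameters(), lr=1e-3, momentum=0.9)
    for epoch in range(epochs):
        # Progressive schedule: attack strength grows linearly to eps_max.
        eps = eps_max * (epoch + 1) / epochs
        for images, targets in train_loader:
            adv = pgd_attack(detector, detection_loss, images, targets,
                             eps=eps, alpha=eps / 4, steps=4)
            optimizer.zero_grad()
            detection_loss(detector(adv), targets).backward()
            optimizer.step()
```

The linear ramp is one plausible reading of "gradually increase the strength of the adversarial attacks"; a step or cosine schedule would fit the same description.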
Total References: | 35 |
Call Number: | 本080901/23063 |
Open Access Date: | 2024-06-17 |