Abstract
Deep neural networks (DNNs) are widely deployed for tasks such as object detection in various security domains. However, these models are susceptible to backdoor attacks. While backdoor attacks on classification models have been studied extensively, object detection models have received limited attention. Moreover, previous studies have predominantly focused on backdoor attacks in digital environments, overlooking real-world conditions. Notably, the efficacy of a backdoor attack in real-world scenarios can be significantly degraded by physical factors such as distance and illumination. In this article, we introduce a variable-size backdoor trigger that adapts to objects of different sizes, mitigating the disruption caused by varying distances between the viewing point and the targeted object. Additionally, we propose malicious adversarial training for backdoor training, enabling the backdoored object detector to learn trigger features in the presence of physical noise. Experimental results demonstrate that our robust backdoor attack (RBA) improves the attack success rate in real-world settings.