Abstract
Computer vision systems, such as smart traffic surveillance and facial recognition, have significantly changed our daily lives. The neural networks in such systems are known to suffer from backdoor attacks, which cause a backdoored model to behave normally on benign samples but maliciously on controlled samples (i.e., samples with triggers applied to activate the backdoor). However, it is generally believed that most triggers, when used in physical attacks, are noticeable to victim users and not robust across settings such as different angles, distances, and lighting conditions. In this paper, we leverage electromagnetic interference (EMI) to induce a specific pattern distortion in images captured by a camera system and use that distortion as the backdoor trigger. To avoid the overhead of manually collecting poisoned images, we introduce a simulation-based sample generation approach that converts clean images into poisoned ones by simulating the distortion EMI produces in the camera system. We further propose a contrast loss function that enhances the generalization of backdoor features, improving the triggers' ability to activate the embedded backdoors. Extensive physical experiments with diverse deep neural networks across various camera systems and practical environments achieve a 92.54% average backdoor success rate.
