Abstract
The security of intelligent transportation systems is a critical research focus. Deep neural networks (DNNs) are widely deployed in intelligent transportation systems, yet as "black-box" models they have long been threatened by adversarial attacks. Whether adversarial attacks can further compromise the security of intelligent transportation systems through these DNNs remains an open question. We observe that existing adversarial attack methods focus heavily on attack success rate while neglecting attack forms and scenarios. We therefore propose a flexible adversarial attack method against multi-object tracking models, named the Free Tracker Hijacking Attack (FTHA). FTHA integrates three basic attack forms (adding trajectories, deleting trajectories, and moving trajectories), posing a clearer potential threat to intelligent transportation systems. Our experimental results show that FTHA achieves significant and stable attacks across multiple scenarios, with every frame successfully attacked. Moreover, the confidence scores of the false trajectories generally reach around 0.8, on par with, or even exceeding, those of the real trajectories. In the long run, our research supports "black-box intelligence security research" for intelligent transportation systems and creates conditions for developing targeted defense technologies. Our code is available at: https://github.com/fffqh/FTHA.
