
Towards Robust Dual-Trigger Physical Backdoor Attacks against Multi-Object Tracking

In recent years, backdoor attacks have posed a significant threat to the security of deep models. Attackers can induce erroneous behavior in victim models through carefully designed triggers. However, existing backdoor attacks primarily originate from image classification tasks and are designed for the digital world, while research on physical backdoor attacks against multi-object tracking (MOT) is scarce. Moreover, existing methods typically implant a single trigger in the victim model, limiting the diversity of attacks. To fill this gap, we propose a robust dual-trigger physical backdoor attack framework against MOT under a poison-only scenario. Specifically, we do not require knowledge of the training components of the model; we only need to implant two triggers with different backdoor effects (disappearance and resizing) in a few video frames. To enhance attack effectiveness and prevent confusion between these two backdoor effects, we introduce a contrastive loss to optimize the triggers, and we apply various transformations to the triggers to simulate physical-world conditions, thereby further improving the robustness of the attacks. Extensive experiments in both the digital and physical worlds demonstrate that our attacks significantly degrade the performance of state-of-the-art MOT trackers. Additionally, we discuss the transferability and imperceptibility of our attacks and their robustness against various potential backdoor defenses.
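The abstract's two technical ingredients can be illustrated with a minimal sketch: a hinge-style contrastive term that pushes the two triggers' feature embeddings apart (so the disappearance and resizing effects do not get confused), and random transformations applied to the trigger patches to simulate physical-world variation. The hinge formulation, the margin value, and the specific transform set are assumptions for illustration only, and a simple flattening stand-in replaces the tracker's real feature extractor.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_physical_transform(patch, rng):
    """Simulate physical-world variation (hypothetical transform set):
    random brightness scaling, additive sensor noise, and a random flip."""
    out = patch * rng.uniform(0.7, 1.3)              # lighting change
    out = out + rng.normal(0.0, 0.02, patch.shape)   # sensor noise
    if rng.random() < 0.5:
        out = out[:, ::-1]                           # viewpoint flip
    return np.clip(out, 0.0, 1.0)

def contrastive_loss(emb_a, emb_b, margin=1.0):
    """Hinge-style contrastive term: penalize the pair only when the
    two trigger embeddings are closer than `margin`."""
    dist = np.linalg.norm(emb_a - emb_b)
    return max(0.0, margin - dist) ** 2

def embed(patch):
    """Stand-in feature extractor; in practice this would be the
    tracker's backbone features."""
    return patch.reshape(-1)

# Two hypothetical 8x8 grayscale trigger patches (disappearance / resizing).
trigger_disappear = rng.random((8, 8))
trigger_resize = rng.random((8, 8))

# Average the contrastive term over several random physical transforms,
# so the separation between the two triggers survives real-world distortion.
loss = float(np.mean([
    contrastive_loss(embed(random_physical_transform(trigger_disappear, rng)),
                     embed(random_physical_transform(trigger_resize, rng)))
    for _ in range(16)
]))
print(round(loss, 4))
```

In an actual attack pipeline this loss would be minimized with respect to the trigger pixels (e.g. by gradient descent through the feature extractor), alongside the poisoning objective; the sketch only evaluates the term once to show its shape.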

Related Results

Backdoor DNFs
We introduce backdoor DNFs as a tool to measure the theoretical hardness of CNF formulas. Like backdoor sets and backdoor trees, backdoor DNFs are defined relative to a tractable ...

CSP beyond tractable constraint languages
The constraint satisfaction problem (CSP) is among the most studied computational problems. While NP-hard, many tractable subproblems have been identified (Bulatov 2017, Zh...

Sub-Band Backdoor Attack in Remote Sensing Imagery
Remote sensing datasets usually have a wide range of spatial and spectral resolutions. They provide unique advantages in surveillance systems, and many government organizations use...

AdaPT: Adaptive Position Trigger for Improving Backdoors Attacks in Transfer Learning
Backdoor attacks in neural networks have emerged as one of the most critical and dangerous threats to AI security, attracting extensive research attention in recent years. Most exi...

IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions
Recent work has shown that deep neural networks are vulnerable to backdoor attacks. In comparison with the success of backdoor-attack methods, existing backdoor-defense methods fac...

Detection and Mitigation of Backdoor Attacks Using Python Watchdog
The number of cyber attacks is increasing. This happens thoroughly, both at the international and national levels. Technology, techniques, and methods of carrying out cyber attacks...

Evaluating the Science to Inform the Physical Activity Guidelines for Americans Midcourse Report
The Physical Activity Guidelines for Americans (Guidelines) advises older adults to be as active as possible. Yet, despite the well documented benefits of physical a...

A Stealthy Backdoor Attack for Code Models
Recent studies have shown that code models are susceptible to backdoor attacks. When injected with a backdoor, the victim code model can function normally on benig...