Sub-Band Backdoor Attack in Remote Sensing Imagery
Remote sensing datasets usually have a wide range of spatial and spectral resolutions. They provide unique advantages in surveillance systems, and many government organizations use remote sensing multispectral imagery to monitor security-critical infrastructure or targets. Artificial Intelligence (AI) has advanced rapidly in recent years and has been widely applied to remote sensing image analysis, achieving state-of-the-art (SOTA) performance. However, AI models are vulnerable and can be easily deceived or poisoned. A malicious user may poison an AI model by creating a stealthy backdoor. A backdoored AI model performs well on clean data but behaves abnormally when a planted trigger appears in the data. Backdoor attacks have been extensively studied in machine learning-based computer vision applications with natural images. However, much less research has been conducted on remote sensing imagery, which typically consists of many more bands in addition to the red, green, and blue bands found in natural images. In this paper, we first extensively studied a popular backdoor attack, BadNets, applied to a remote sensing dataset, where the trigger was planted in all of the bands in the data. Our results showed that SOTA defense mechanisms, including Neural Cleanse, TABOR, Activation Clustering, Fine-Pruning, GangSweep, Strip, DeepInspect, and Pixel Backdoor, had difficulties detecting and mitigating the backdoor attack. We then proposed an explainable AI-guided backdoor attack specifically for remote sensing imagery by placing triggers in the image sub-bands. Our proposed attack model poses even stronger challenges to these SOTA defense mechanisms, and no method was able to defend against it. These results send an alarming message about the catastrophic effects backdoor attacks may have on satellite imagery.
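To make the distinction between the all-band (BadNets-style) and sub-band trigger placement concrete, the following is a minimal sketch of the poisoning step for a multispectral image stored as an (H, W, C) array. The function name, patch size, and defaults are illustrative assumptions, not the paper's actual code; real poisoning would also relabel the poisoned samples to the attacker's target class.

```python
import numpy as np

def poison_multispectral(image, target_bands=None, patch_size=3, patch_value=1.0):
    """Plant a small square trigger patch in the bottom-right corner.

    image: (H, W, C) float array of a multispectral sample.
    target_bands: None plants the trigger in all C bands (BadNets-style);
        a list of band indices mimics the sub-band variant, which leaves
        the remaining bands untouched and is harder to spot.
    """
    poisoned = image.copy()
    bands = range(image.shape[2]) if target_bands is None else target_bands
    for b in bands:
        # Overwrite a patch_size x patch_size corner patch in band b.
        poisoned[-patch_size:, -patch_size:, b] = patch_value
    return poisoned

# Example: a 6-band image, trigger planted only in bands 1 and 4.
rng = np.random.default_rng(0)
sample = rng.random((64, 64, 6)).astype(np.float32)
poisoned_sample = poison_multispectral(sample, target_bands=[1, 4])
```

In a full attack, a small fraction of the training set (often around 1%) would be poisoned this way and assigned the target label, so the trained model associates the trigger with that class while clean accuracy stays high.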