Backdoor DNFs
We introduce backdoor DNFs as a tool to measure the theoretical hardness of CNF formulas. Like backdoor sets and backdoor trees, backdoor DNFs are defined relative to a tractable class of CNF formulas. Each conjunctive term of a backdoor DNF defines a partial assignment that moves the input CNF formula into the base class. Backdoor DNFs are more expressive, and potentially smaller, than their predecessors, backdoor sets and backdoor trees. We establish the fixed-parameter tractability of the backdoor DNF detection problem. Our results hold for the fundamental base classes Horn and 2CNF, as well as their combination. We complement our theoretical findings with an empirical study. Our experiments show that backdoor DNFs provide a significant improvement over their predecessors.
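The core mechanism can be illustrated with a small sketch (not code from the paper): a CNF formula is a list of clauses over integer literals, a partial assignment is applied by dropping satisfied clauses and removing falsified literals, and membership in the Horn base class means every clause has at most one positive literal. Each term of a backdoor DNF corresponds to one such partial assignment that lands the formula in the base class.

```python
def apply_assignment(cnf, assignment):
    """Apply a partial assignment {var: bool} to a CNF.
    Satisfied clauses are dropped; falsified literals are removed."""
    result = []
    for clause in cnf:
        reduced = []
        satisfied = False
        for lit in clause:
            var = abs(lit)
            if var in assignment:
                if (lit > 0) == assignment[var]:
                    satisfied = True  # clause satisfied, drop it
                    break
                # literal falsified under the assignment: omit it
            else:
                reduced.append(lit)
        if not satisfied:
            result.append(tuple(reduced))
    return result

def is_horn(cnf):
    """Horn base class: at most one positive literal per clause."""
    return all(sum(1 for lit in clause if lit > 0) <= 1 for clause in cnf)

# Example: F = (x1 v x2 v -x3) & (x2 v x3) is not Horn,
# but both terms of the (trivial) backdoor DNF  x2 v -x2
# move F into the Horn class.
F = [(1, 2, -3), (2, 3)]
print(is_horn(F))                              # False
print(is_horn(apply_assignment(F, {2: True})))   # True
print(is_horn(apply_assignment(F, {2: False})))  # True
```

Here the two terms together come from a backdoor set of size one ({x2}); the point of backdoor DNFs is that, in general, the terms need not arise from a single variable set and can therefore be fewer or shorter.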
International Joint Conferences on Artificial Intelligence Organization

