Adversarial Attacks and Defenses: A Roundup of 2022 Top-Venue Papers (AAAI, ACM, ECCV, NeurIPS, ICLR, CVPR)

AAAI 2022 Paper Roundup

  • attack

Learning to Learn Transferable Attack

Towards Transferable Adversarial Attacks on Vision Transformers

Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks

Shape Prior Guided Attack: Sparser Perturbations on 3D Point Clouds

Adversarial Attack for Asynchronous Event-Based Data

CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets

TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text

Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning

Attacking Video Recognition Models with Bullet-Screen Comments

Context-Aware Transfer Attacks for Object Detection

A Fusion-Denoising Attack on InstaHide with Data Augmentation

FCA: Learning a 3D Full-Coverage Vehicle Camouflage for Multi-View Physical Adversarial Attack

Backdoor Attacks on the DNN Interpretation System

Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs

Synthetic Disinformation Attacks on Automated Fact Verification Systems

Adversarial Bone Length Attack on Action Recognition

Improved Gradient Based Adversarial Attacks for Quantized Networks

Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification

Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search

Boosting the Transferability of Video Adversarial Examples via Temporal Translation

Learning Universal Adversarial Perturbation by Adversarial Example

Making Adversarial Examples More Transferable and Indistinguishable

Vision Transformers are Robust Learners
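
Many of the attack entries above study transferability, and most build on the iterative FGSM family. As shared background (not the method of any single paper listed), here is a minimal MI-FGSM sketch; the PyTorch model interface, budget `eps`, and step count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    """Minimal MI-FGSM sketch (Dong et al., 2018). Assumes `model`
    maps image batches in [0, 1] to logits; eps/steps are placeholders."""
    alpha = eps / steps                      # per-step size
    g = torch.zeros_like(x)                  # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # L1-normalize the gradient, then accumulate momentum
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        # ascend along the sign, then project back into the eps-ball
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```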

  • defense

Certified Robustness of Nearest Neighbors Against Data Poisoning and Backdoor Attacks

Preemptive Image Robustification for Protecting Users Against Man-in-the-Middle Adversarial Attacks

Practical Fixed-Parameter Algorithms for Defending Active Directory Style Attack Graphs

When Can the Defender Effectively Deceive Attackers in Security Games?

Robust Heterogeneous Graph Neural Networks against Adversarial Attacks

Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation

Consistency Regularization for Adversarial Robustness

Adversarial Robustness in Multi-Task Learning: Promises and Illusions

LogicDef: An Interpretable Defense Framework Against Adversarial Examples via Inductive Scene Graph Reasoning

Efficient Robust Training via Backward Smoothing

Input-Specific Robustness Certification for Randomized Smoothing

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
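
The last two defense entries concern probabilistic certification. A common starting point for such certification is randomized smoothing (Cohen et al., 2019); below is a minimal prediction sketch under stated assumptions (single-image batch, placeholder noise level and vote count), not the certification procedure of either paper.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, num_classes, sigma=0.25, n=100):
    """Randomized smoothing sketch (Cohen et al., 2019). Assumes `x` is
    a single image batch of shape (1, C, H, W). The prediction is the
    majority vote of the base classifier under Gaussian input noise;
    with vote fraction p > 1/2 (a lower confidence bound in practice),
    the smoothed classifier is certified within L2 radius
    sigma * Phi^{-1}(p)."""
    votes = torch.zeros(num_classes)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        votes[model(noisy).argmax(dim=1).item()] += 1  # one vote per draw
    return int(votes.argmax())
```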

CVPR 2022 Paper Roundup

  • attack

Adversarial Texture for Fooling Person Detectors in the Physical World

Adversarial Eigen Attack on Black-Box Models

Bounded Adversarial Attack on Deep Content Features

Backdoor Attacks on Self-Supervised Learning

Bandits for Structure Perturbation-Based Black-Box Attacks To Graph Neural Networks With Theoretical Guarantees

Boosting Black-Box Attack With Partially Transferred Conditional Adversarial Distribution

BppAttack: Stealthy and Efficient Trojan Attacks Against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning

Cross-Modal Transferable Adversarial Attacks From Images to Videos

Can You Spot the Chameleon? Adversarially Camouflaging Images From Co-Salient Object Detection

DTA: Physical Camouflage Attacks using Differentiable Transformation Network

DST: Dynamic Substitute Training for Data-Free Black-Box Attack

Dual Adversarial Adaptation for Cross-Device Real-World Image Super-Resolution

DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors

Exploring Effective Data for Surrogate Training Towards Black-Box Attack

Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity

Fairness-Aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models

FIBA: Frequency-Injection Based Backdoor Attack in Medical Image Analysis

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness

Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

Shape-Invariant 3D Adversarial Point Clouds

Stereoscopic Universal Perturbations Across Different Architectures and Datasets

Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon

Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

Label-Only Model Inversion Attacks via Boundary Repulsion

Improving Adversarial Transferability via Neuron Attribution-Based Attacks

Investigating Top-k White-Box and Transferable Black-Box Attack

Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network

Zero-Query Transfer Attacks on Context-Aware Object Detectors

Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free

Towards Efficient Data Free Blackbox Adversarial Attack

Transferable Sparse Adversarial Attack

Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks

DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints

Exploring Frequency Adversarial Attacks for Face Forgery Detection

360-Attack: Distortion-Aware Perturbations From Perspective-Views

  • defense

Enhancing Adversarial Training With Second-Order Statistics of Weights

Enhancing Adversarial Robustness for Deep Metric Learning

Improving Robustness Against Stealthy Weight Bit-Flip Attacks by Output Code Matching

Improving Adversarially Robust Few-Shot Image Classification With Generalizable Representations

Subspace Adversarial Training

Segment and Complete: Defending Object Detectors Against Adversarial Patch Attacks With Robust Patch Detection

Self-Supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection

Towards Practical Certifiable Patch Defense with Vision Transformer

Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack

LAS-AT: Adversarial Training with Learnable Attack Strategy

Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks With Implicit Gradients

ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning

Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond

Defensive Patches for Robust Recognition in the Physical World

Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training

On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles

EyePAD++: A Distillation-Based Approach for Joint Eye Authentication and Presentation Attack Detection Using Periocular Images

  • other

Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond

Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart

Robust Combination of Distributed Gradients Under Adversarial Perturbations

WarpingGAN: Warping Multiple Uniform Priors for Adversarial 3D Point Cloud Generation

Leveraging Adversarial Examples To Quantify Membership Information Leakage

ACM 2022 Paper Roundup

Imitated Detectors: Stealing Knowledge of Black-box Object Detectors

Generating Transferable Adversarial Examples against Vision Transformers

ECCV 2022 Paper Roundup

  • attack

Frequency Domain Model Augmentation for Adversarial Attack

A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow

Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches

Shape Matters: Deformable Patch Attack

LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity

Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks

Adaptive Image Transformations for Transfer-based Adversarial Attack

AdvDO: Realistic Adversarial Attacks for Trajectory Prediction

Triangle Attack: A Query-efficient Decision-based Adversarial Attack

Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation

Exploiting the local parabolic landscapes of adversarial losses to accelerate black-box adversarial attack

A Large-scale Multiple-objective Method for Black-box Attack against Object Detection

Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal

GradAuto: Energy-oriented Attack on Dynamic Neural Networks

SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness

TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations

Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack

  • defense

Improving Robustness by Enhancing Weak Subnets

Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness

Prior-Guided Adversarial Initialization for Fast Adversarial Training

Enhanced Accuracy and Robustness via Multi-Teacher Adversarial Distillation

Learning Robust and Lightweight Model through Separable Structured Transformations

All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines

Robustness and Beyond: Unleashing Efficient Adversarial Training

One Size Does NOT Fit All: Data-Adaptive Adversarial Training

Revisiting Outer Optimization in Adversarial Training

Scaling Adversarial Training to Large Perturbation Bounds

ViP: Unified Certified Detection and Recovery for Patch Attack with Vision Transformers

Effective Presentation Attack Detection Driven by Face Related Task

Adversarially-Aware Robust Object Detector

Towards Efficient Adversarial Training on Vision Transformers
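
Many of the defense entries above are variants of adversarial training, which alternates an inner maximization (crafting a perturbation) with an outer training step. A minimal PGD-based sketch (Madry et al., 2018) follows as background, not any listed paper's method; `model`, `loader`, and `optimizer` are assumed PyTorch objects and the budgets are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_delta(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Inner maximization: L-infinity PGD with a random start."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
    return delta.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """Outer minimization: one epoch of training on perturbed inputs.
    For brevity the attack is crafted in train mode; real code often
    switches to eval mode to freeze batch-norm statistics."""
    model.train()
    for x, y in loader:
        delta = pgd_delta(model, x, y)        # craft worst-case noise
        optimizer.zero_grad()
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        loss.backward()
        optimizer.step()
```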

  • other

RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN

An Invisible Black-box Backdoor Attack through Frequency Domain

Exploring the Devil in Graph Spectral Domain for 3D Point Cloud Attacks

Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning

Zero-Shot Attribute Attacks on Fine-Grained Recognition Models

An Impartial Take to the CNN vs Transformer Robustness Contest

ICLR 2022 Paper Roundup

ICLR 2022 Conference | OpenReview

  • attack

On Improving Adversarial Transferability of Vision Transformers

Online Adversarial Attacks

Attacking deep networks with surrogate-based adversarial black-box methods is easy

Rethinking Adversarial Transferability from a Data Distribution Perspective

Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models

Data Poisoning Won’t Save You From Facial Recognition

Transferable Adversarial Attack based on Integrated Gradients

Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains

How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

  • defense

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective

Reverse Engineering of Imperceptible Adversarial Image Perturbations

Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks

Towards Evaluating the Robustness of Neural Networks Learned by Transduction

Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios

Backdoor Defense via Decoupling the Training Process

Adversarial Unlearning of Backdoors via Implicit Hypergradient

Towards Understanding the Robustness Against Evasion Attack on Categorical Data

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations

AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis

Adversarial Robustness Through the Lens of Causality

Fast AdvProp

Self-ensemble Adversarial Training for Improved Robustness

Trigger Hunting with a Topological Prior for Trojan Detection

Provably Robust Adversarial Examples

A Unified Wasserstein Distributional Robustness Framework for Adversarial Training

On the Certified Robustness for Ensemble Models and Beyond

Defending Against Image Corruptions Through Adversarial Augmentations

Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness

On the Convergence of Certified Robust Training with Interval Bound Propagation

Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?

Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100

Exploring Memorization in Adversarial Training

Adversarially Robust Conformal Prediction
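
One certification entry above analyzes training with interval bound propagation (IBP). As background, here is a minimal sketch of how IBP pushes elementwise input bounds through a single affine layer (Gowal et al., 2018); the tensor shapes are illustrative assumptions.

```python
import torch

def ibp_linear(W, b, lo, hi):
    """One interval-bound-propagation step through an affine layer
    (Gowal et al., 2018): given elementwise input bounds lo <= x <= hi,
    return sound elementwise bounds on x @ W.T + b. Shapes: W (out, in),
    b (out,), lo/hi (batch, in) -- all illustrative assumptions."""
    mid = (hi + lo) / 2          # interval center
    rad = (hi - lo) / 2          # interval radius
    out_mid = mid @ W.t() + b
    out_rad = rad @ W.abs().t()  # radius is scaled by |W|
    return out_mid - out_rad, out_mid + out_rad
```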

NeurIPS 2022 Paper Roundup

  • attack

On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses

Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks

GAMA: Generative Adversarial Multi-Object Scene Attacks

BadPrompt: Backdoor Attacks on Continuous Prompts

VoiceBox: Privacy through Real-Time Adversarial Attacks with Audio-to-Audio Models

Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias

Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal

Revisiting Injective Attacks on Recommender Systems

Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class

Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework

Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation

Blackbox Attacks via Surrogate Ensemble Search

Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks

Towards Lightweight Black-Box Attack Against Deep Neural Networks

Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models

One-shot Neural Backdoor Erasing via Adversarial Weight Masking

Pre-trained Adversarial Perturbations

Isometric 3D Adversarial Examples in the Physical World

Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition

  • defense

MORA: Improving Ensemble Robustness Evaluation with Model Reweighing Attack

Adversarial Robustness is at Odds with Lazy Training

Defending Against Adversarial Attacks via Neural Dynamic System

A2: Efficient Automated Attacker for Boosting Adversarial Training

Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attack

Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks

Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork

Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning

Formulating Robustness Against Unforeseen Attacks

Alleviating Adversarial Attacks on Variational Autoencoders with MCMC

Adversarial training for high-stakes reliability

Phase Transition from Clean Training to Adversarial Training

Why Do Artificially Generated Data Help Adversarial Robustness

Toward Robust Spiking Neural Network Against Adversarial Perturbation

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples

SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training

A Closer Look at the Adversarial Robustness of Deep Equilibrium Models

Make Some Noise: Reliable and Efficient Single-Step Adversarial Training

CalFAT: Calibrated Federated Adversarial Training with Label Skewness

Enhance the Visual Representation via Discrete Adversarial Training

Explicit Tradeoffs between Adversarial and Natural Distributional Robustness

Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting

Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization

Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness

Stability Analysis and Generalization Bounds of Adversarial Training

Efficient and Effective Augmentation Strategy for Adversarial Training

Improving Adversarial Robustness of Vision Transformers

Random Normalization Aggregation for Adversarial Defense

DISCO: Adversarial Defense with Local Implicit Functions

Synergy-of-Experts: Collaborate to Improve Adversarial Robustness

ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints

Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness

  • other

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples

Can Adversarial Training Be Manipulated By Non-Robust Features?

A Characterization of Semi-Supervised Adversarially Robust PAC Learnability

Are AlphaZero-like Agents Robust to Adversarial Perturbations?

On the Adversarial Robustness of Mixture of Experts

Increasing Confidence in Adversarial Robustness Evaluations

What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness?

Follow-up

This roundup will be updated over time with papers read during 2022–2023.
