
Research Series (12): CVPR 2020 Papers by Topic - Adversarial Examples


Contents

1. Adversarial Examples (With Code)

1.1 Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

PAPER LINK
CODE

1.2 One Man’s Trash Is Another Man’s Treasure: Resisting Adversarial Examples by Adversarial Examples

PAPER LINK
CODE

1.3 ColorFool: Semantic Adversarial Colorization

PAPER LINK
CODE

1.4 Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

PAPER LINK
CODE

1.5 Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

PAPER LINK
CODE

1.6 Efficient Adversarial Training with Transferable Adversarial Examples

PAPER LINK
CODE

1.7 Modeling Biological Immunity to Adversarial Examples

PAPER LINK
CODE

1.8 Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes

PAPER LINK
CODE

1.9 (Oral) A Self-supervised Approach for Adversarial Robustness

PAPER LINK
CODE

1.10 When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks

PAPER LINK
CODE

1.11 Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder

PAPER LINK
CODE

1.12 Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory

PAPER LINK
CODE

1.13 Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm

PAPER LINK
CODE

1.14 LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks

PAPER LINK
CODE

2. Adversarial Examples (Without Code)

2.1 Polishing Decision-Based Adversarial Noise With a Customized Sampling

PAPER LINK

2.2 Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations

PAPER LINK

2.3 Single-Step Adversarial Training With Dropout Scheduling

PAPER LINK

2.4 Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization

PAPER LINK

2.5 Boosting the Transferability of Adversarial Samples via Attention

PAPER LINK

2.6 Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness

PAPER LINK

2.7 On Isometry Robustness of Deep 3D Point Cloud Models Under Adversarial Attacks

PAPER LINK

2.8 Adversarial Examples Improve Image Recognition

PAPER LINK

2.9 Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction

PAPER LINK

2.10 Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles

PAPER LINK

2.11 Benchmarking Adversarial Robustness on Image Classification

PAPER LINK

2.12 DaST: Data-Free Substitute Training for Adversarial Attacks

PAPER LINK

2.13 Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks

PAPER LINK

2.14 Exploiting Joint Robustness to Adversarial Perturbations

PAPER LINK

2.15 GeoDA: A Geometric Framework for Black-Box Adversarial Attacks

PAPER LINK

2.16 What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images

PAPER LINK

2.17 Physically Realizable Adversarial Examples for LiDAR Object Detection

PAPER LINK

2.18 One-Shot Adversarial Attacks on Visual Tracking With Dual Attention

PAPER LINK

2.19 Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack

PAPER LINK

2.20 Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations

PAPER LINK

2.21 Robust Superpixel-Guided Attentional Adversarial Attack

PAPER LINK

2.22 ILFO: Adversarial Attack on Adaptive Neural Networks

PAPER LINK

2.23 PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving

PAPER LINK

2.24 Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors

PAPER LINK
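
As general background for readers new to the area, below is a minimal sketch of the classic fast gradient sign method (FGSM) for crafting an adversarial example in PyTorch. This is an illustrative assumption for orientation only, not the method of any specific paper in this list; it assumes a pretrained classifier `model` (in eval mode) and an image batch `x` scaled to [0, 1] with integer labels `y`.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step L_inf attack: nudge each pixel in the sign of the loss gradient.

    Assumes `model` is a torch.nn.Module classifier and `x` is in [0, 1].
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true labels
    loss.backward()                           # populates x_adv.grad
    # Step in the direction that increases the loss, then clamp to the valid image range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

Many of the attack papers above start from this kind of gradient-based perturbation and then constrain it differently (perceptual color distance, semantic colorization, physical realizability, and so on), while the defense papers aim to keep accuracy high under such perturbations.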
