Adversarial examples are a type of attack on machine learning (ML) systems that causes misclassification of inputs. 《Adversarial Examples and Metrics》
An adversarial sample is an input crafted to cause deep learning algorithms to misclassify. 《The Limitations of Deep Learning in Adversarial Settings》
Deep neural networks (DNNs) are challenged by their vulnerability to adversarial examples, which are crafted by adding small, human-imperceptible noises to legitimate examples yet make a model output attacker-desired, inaccurate predictions. Adversarial attacks serve as an important surrogate for evaluating the robustness of deep learning models before they are deployed. 《Boosting Adversarial Attacks with Momentum》
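The attack named in that title makes the "surrogate for robustness evaluation" concrete. Below is a minimal sketch of the momentum iterative method (MI-FGSM) described in 《Boosting Adversarial Attacks with Momentum》: an iterative sign-gradient attack whose gradient is accumulated across steps with a decay factor. The model, loss_fn, x, y arguments, the eps/steps/mu settings, and the assumption of NCHW image batches with pixels in [0, 1] are illustrative placeholders, not anything fixed by the quote.

import torch

def mi_fgsm(model, loss_fn, x, y, eps=8/255, steps=10, mu=1.0):
    # Sketch of MI-FGSM; assumes x is an NCHW image batch with pixels in [0, 1].
    x = x.detach()
    alpha = eps / steps               # per-step size
    g = torch.zeros_like(x)           # accumulated (momentum) gradient
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # L1-normalize the raw gradient, then accumulate with decay factor mu
        # (small constant added to avoid division by zero).
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()

Accumulating L1-normalized gradients stabilizes the update direction across iterations, which is the paper's stated reason momentum improves the transferability of the resulting adversarial examples.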
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples: inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. 《Towards Deep Learning Models Resistant to Adversarial Attacks》
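For reference, the saddle-point formulation from that paper states the threat model explicitly: the adversary chooses the worst-case perturbation delta within an l-infinity ball of radius epsilon around each input, and robust training minimizes the resulting worst-case loss:

\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \max_{\|\delta\|_{\infty} \le \epsilon} L(\theta, x + \delta, y) \Big]

Here \theta denotes the model parameters, \mathcal{D} the data distribution, and L the classification loss; the inner maximization is what an attack approximates, and the outer minimization is adversarial training.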
Convolutional neural networks can be easily attacked by adversarial examples, which are computed by adding small perturbations to clean inputs. 《Smooth Adversarial Training》
An adversarial sample can be defined as one that appears to humans (or advanced cognitive systems) to be drawn from a particular class but falls into a different class in the feature space. Adversarial samples are strategically modified samples, crafted with the purpose of fooling the classifier at hand. 《Towards Crafting Text Adversarial Samples》
Adversarial examples are typically constructed by adding malicious perturbations to the original images, and the added perturbations are not easily perceived by humans. 《Generative Networks for Adversarial Examples with Weighted Perturbations》
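As a minimal illustration of "adding perturbations that are not easily perceived," the sketch below uses the one-step fast gradient sign method (FGSM, Goodfellow et al.), which is not from the papers quoted above but is the simplest instance of this construction: each pixel moves by at most eps in the direction that increases the loss. model and loss_fn stand for any differentiable classifier and loss; inputs are assumed to lie in [0, 1].

import torch

def fgsm(model, loss_fn, x, y, eps=8/255):
    # One-step FGSM: shift each pixel by +/- eps in the direction
    # that increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return torch.clamp(x_adv, 0, 1).detach()  # keep pixels in [0, 1]

For small eps (e.g., 8/255 on standard image benchmarks), the perturbed image is visually near-indistinguishable from the original yet is frequently misclassified, which is exactly the behavior the definitions above describe.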