This talk was part of The Multifaceted Complexity of Machine Learning

The Mysteries of Adversarial Robustness for Non-parametric Methods and Neural Networks

Kamalika Chaudhuri, University of California, San Diego

Friday, April 16, 2021



Abstract: Adversarial examples are small, imperceptible perturbations to legitimate test inputs that cause machine learning classifiers to misclassify. While recent work has proposed many attacks and defenses, exactly why adversarial examples arise remains a mystery. In this talk, we will take a closer look at this question. We will consider non-parametric methods and define a large-sample limit for adversarially robust classification that is analogous to the Bayes optimal classifier. We will then show that adversarial robustness in non-parametric methods is mostly a consequence of the training method. If time permits, we will look at what these findings mean for neural networks.
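
For readers unfamiliar with the idea, one standard way to formalize such a large-sample limit is through astuteness, as in the non-parametric robustness literature; the notation below (ast_r, the ball B(x, r)) is illustrative and may differ from the definitions used in the talk:

\[
\mathrm{ast}_r(f) \;=\; \Pr_{(x,y)\sim \mathcal{D}}\bigl[\, f(x') = y \ \text{ for all } x' \in B(x, r) \,\bigr],
\]

where \( B(x, r) \) is the ball of radius \( r \) around the input \( x \). A classifier maximizing \( \mathrm{ast}_r \) plays the role of the robust analogue of the Bayes optimal classifier; setting \( r = 0 \) recovers the usual Bayes optimal.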