
Universal adversarial perturbations
In today’s paper, Moosavi-Dezfooli et al. show us how to create a _single_ perturbation that causes the vast majority of input images to be misclassified.
adversarial-classification  spam  image-recognition  ml  machine-learning  dnns  neural-networks  images  classification  perturbation  papers 
13 days ago by jm
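The paper builds its universal perturbation iteratively, aggregating DeepFool steps over a sample of images against a deep network. As a much simpler illustration that one fixed, input-agnostic perturbation can flip many predictions at once, here is a toy with a linear classifier (every name and constant below is illustrative, not from the paper; in the binary linear case a single direction can only flip one side of the boundary, so the fooling rate tops out around half, whereas the paper reports far higher rates on multiclass deep nets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier" standing in for a deep network.
d = 20
w = rng.normal(size=d)
b = 0.1

def predict(X):
    return np.sign(X @ w + b)

X = rng.normal(size=(500, d))   # stand-in for "natural" inputs
clean = predict(X)

# One perturbation v for ALL inputs: step along -w (or +w, whichever
# side of the boundary holds more points), with L2 norm capped at eps.
eps = 2.5
direction = -np.sign(np.sum(clean)) * w / np.linalg.norm(w)
v = eps * direction

fool_rate = np.mean(predict(X + v) != clean)
```

Adding the same `v` to every input flips a large fraction of predictions, even though `v` was computed without looking at any individual input.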
When DNNs go wrong – adversarial examples and what we can learn from them
Excellent paper.
[The] results suggest that classifiers based on modern machine learning techniques, even those that obtain excellent performance on the test set, are not learning the true underlying concepts that determine the correct output label. Instead, these algorithms have built a Potemkin village that works well on naturally occurring data, but is exposed as a fake when one visits points in space that do not have high probability in the data distribution.
ai  deep-learning  dnns  neural-networks  adversarial-classification  classification  classifiers  machine-learning  papers 
february 2017 by jm
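That "Potemkin village" effect can be sketched in a few lines: a linear model reaches near-perfect test accuracy on natural data, yet collapses on nearby points that sit off the data distribution. This is a minimal illustration using an FGSM-style perturbation (the fast gradient sign method of Goodfellow et al.), not the paper's own experiments; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian classes whose means differ slightly on every feature.
n, d = 400, 100
y = np.repeat([-1.0, 1.0], n // 2)
X = rng.normal(size=(n, d)) + 0.25 * y[:, None]

# Logistic regression trained by gradient descent.
w = np.zeros(d)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-y * (X @ w)))      # P(correct label)
    w -= 0.5 * (((p - 1.0) * y) @ X) / n

# Excellent accuracy on a fresh sample of natural data...
X_test = rng.normal(size=(n, d)) + 0.25 * y[:, None]
acc_clean = np.mean(np.sign(X_test @ w) == y)

# ...but an FGSM-style step — nudge every feature against the example's
# label — moves each point off the data distribution while staying close
# in L-infinity, and accuracy collapses.
eps = 0.5
p = 1.0 / (1.0 + np.exp(-y * (X_test @ w)))
grad_x = -((1.0 - p) * y)[:, None] * w          # d(loss_i)/dx_i
X_adv = X_test + eps * np.sign(grad_x)
acc_adv = np.mean(np.sign(X_adv @ w) == y)
```

The perturbed points look nothing like a new concept to a human observer, yet the model's test-set competence says nothing about them: it never learned the underlying concept, only a surface that fits the high-probability region.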
