Fooling Neural Networks in the Physical World with 3D Adversarial Objects · labsix
november 2017 by jm
This is amazingly weird stuff. Fooling NNs with adversarial objects:
tags: ai, deep-learning, 3d-printing, objects, security, hacking, rifles, models, turtles, adversarial-classification, classification, google, inceptionv3, images, image-classification
Here is a 3D-printed turtle that is classified at every viewpoint as a “rifle” by Google’s InceptionV3 image classifier, whereas the unperturbed turtle is consistently classified as “turtle”.
We do this using a new algorithm for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation, and we use it to generate both 2D printouts and 3D models that fool a standard neural network at any angle. Our process works for arbitrary 3D models - not just turtles! We also made a baseball that classifies as an espresso at every angle! The examples still fool the neural network when we put them in front of semantically relevant backgrounds; for example, you’d never see a rifle underwater, or an espresso in a baseball mitt.
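The underlying idea (Expectation Over Transformation) is simple to sketch: instead of optimizing the targeted misclassification loss on a single image, average it over randomly sampled transformations, so the perturbation survives rotation, zoom, blur, and so on. Below is a minimal, hedged PyTorch sketch of that loop, not the labsix implementation: the pretrained InceptionV3, the placeholder input, the "rifle" class id, and the rotation-only transformation distribution are all assumptions for illustration.

```python
import math
import random

import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical setup: a pretrained InceptionV3 stands in for the attacked
# classifier. x_orig is a placeholder (1, 3, 299, 299) "turtle" image in
# [0, 1]; a real attack would load and normalize an actual photo.
model = models.inception_v3(weights="IMAGENET1K_V1").eval()
x_orig = torch.rand(1, 3, 299, 299)
target = torch.tensor([764])  # assumed ImageNet class id for "rifle"

def random_transform(x):
    """Sample one transformation t ~ T: here just a small random rotation.
    The post's distribution also covers blur, zoom, and translation."""
    a = random.uniform(-0.4, 0.4)  # radians
    theta = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                          [math.sin(a),  math.cos(a), 0.0]]).unsqueeze(0)
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

delta = torch.zeros_like(x_orig, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(500):
    opt.zero_grad()
    # Expectation over transformations: average the targeted loss across
    # several sampled transformations of the perturbed input, so the
    # adversarial perturbation works from (approximately) any viewpoint.
    loss = sum(
        F.cross_entropy(
            model(random_transform((x_orig + delta).clamp(0, 1))), target)
        for _ in range(8)) / 8
    loss.backward()
    opt.step()
    delta.data.clamp_(-0.05, 0.05)  # keep the perturbation visually small
```

The 3D case replaces the 2D rotation with a differentiable renderer sampling random poses and lighting, but the expectation-over-transformations loss is the same trick.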