jm + neural-networks   9

A history of the neural net/tank legend in AI, and other examples of reward hacking
@gwern: "A history of the neural net/tank legend in AI: https://t.co/2s4AOGMS3a (Feel free to suggest more sightings or examples of reward hacking!)"
gwern  history  ai  machine-learning  ml  genetic-algorithms  neural-networks  perceptron  learning  training  data  reward-hacking 
2 days ago by jm
Universal adversarial perturbations
In today’s paper, Moosavi-Dezfooli et al. show us how to create a _single_ perturbation that causes the vast majority of input images to be misclassified.
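The "universal" part of the idea is easy to demonstrate on a toy model. A hedged numpy sketch (the paper's iterative projection algorithm is replaced here by a closed-form perturbation for a toy linear classifier; all names, sizes, and constants are illustrative assumptions, not from the paper):

```python
import numpy as np

# One perturbation v is added to EVERY input, unlike per-image attacks.
# For a linear toy classifier, stepping against the weight vector's sign
# flips most inputs at once; this stands in for the paper's algorithm.

rng = np.random.default_rng(1)
w = rng.normal(size=50)                      # toy classifier weights
X = rng.normal(size=(200, 50)) + 0.5 * w     # batch biased toward class 1

labels = (X @ w > 0).astype(int)             # classifier's own outputs
v = -0.8 * np.sign(w)                        # a single perturbation for all inputs
flipped = ((X + v) @ w > 0).astype(int)

fool_rate = np.mean(flipped != labels)
print(f"fooling rate: {fool_rate:.0%}")
```

The point of the sketch is only that one fixed `v` changes the prediction on most of the batch; the paper shows the same phenomenon holds for deep nets on natural images.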
adversarial-classification  spam  image-recognition  ml  machine-learning  dnns  neural-networks  images  classification  perturbation  papers 
5 weeks ago by jm
AI Movie Posters - mickstorm.com
Neural-network-generated movie posters. "What would you do to gave you?"
fun  generators  neural-networks  funny  movies  posters 
july 2017 by jm
When DNNs go wrong – adversarial examples and what we can learn from them
Excellent paper.
[The] results suggest that classifiers based on modern machine learning techniques, even those that obtain excellent performance on the test set, are not learning the true underlying concepts that determine the correct output label. Instead, these algorithms have built a Potemkin village that works well on naturally occurring data, but is exposed as a fake when one visits points in space that do not have high probability in the data distribution.
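Those low-probability points can be reached with something as simple as a fast-gradient-sign step. A minimal numpy sketch on a toy logistic-regression classifier (the weights, input, and epsilon are illustrative assumptions, not from the paper):

```python
import numpy as np

# FGSM-style perturbation: move each input dimension by eps in the
# direction that most changes the classifier's score. For logistic
# regression that direction is simply the sign of the weight vector.

rng = np.random.default_rng(0)
w = rng.normal(size=100)          # toy classifier weights
x = rng.normal(size=100)          # a "natural" input

def predict(x, w):
    return 1.0 / (1.0 + np.exp(-(w @ x)))   # sigmoid score

eps = 0.25
x_adv = x - eps * np.sign(w)      # push the score toward 0

print(predict(x, w), predict(x_adv, w))
```

Each coordinate moves by at most `eps`, so `x_adv` looks essentially like `x`, yet the score drops sharply: the classifier's decision depends on directions the data distribution never exercises.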
ai  deep-learning  dnns  neural-networks  adversarial-classification  classification  classifiers  machine-learning  papers 
february 2017 by jm
Fast Forward Labs: Fashion Goes Deep: Data Science at Lyst
this is more than just data science really -- this is proper machine learning, with deep learning and a convolutional neural network. serious business
lyst  machine-learning  data-science  ml  neural-networks  supervised-learning  unsupervised-learning  deep-learning 
december 2015 by jm
jwz on Inceptionism
"Shoggoth ovipositors":
So then they reach inside to one of the layers and spin the knob randomly to fuck it up. Lower layers are edges and curves. Higher layers are faces, eyes and shoggoth ovipositors. [....] But the best part is not when they just glitch an image -- which is a fun kind of embossing at one end, and the "extra eyes" filter at the other -- but is when they take a net trained on some particular set of objects and feed it static, then zoom in, and feed the output back in repeatedly. That's when you converge upon the platonic ideal of those objects, which -- it turns out -- tend to be Giger nightmare landscapes. Who knew. (I knew.)


This stuff is still boggling my mind. All those doggy faces! That is one dog-obsessed ANN.
neural-networks  ai  jwz  funny  shoggoths  image-recognition  hr-giger  art  inceptionism 
june 2015 by jm
Inceptionism: Going Deeper into Neural Networks
This is amazing, and a little scary.
If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.

An enlightening comment from the G+ thread:

This is the most fun we've had in the office in a while. We've even made some of those 'Inceptionistic' art pieces into giant posters. Beyond the eye candy, there is actually something deeply interesting in this line of work: neural networks have a bad reputation for being strange black boxes that are opaque to inspection. I have never understood those charges: any other model (GMM, SVM, Random Forests) of any sufficient complexity for a real task is completely opaque for very fundamental reasons: their non-linear structure makes it hard to project back the function they represent into their input space and make sense of it. Not so with backprop, as this blog post shows eloquently: you can query the model and ask what it believes it is seeing or 'wants' to see simply by following gradients. This 'guided hallucination' technique is very powerful and the gorgeous visualizations it generates are very evocative of what's really going on in the network.
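The "follow gradients" loop described above fits in a few lines. A toy numpy sketch, where a single tanh unit stands in for a deep-net layer (everything here is an illustrative assumption, not Google's code): start from near-static input and repeatedly nudge it along the gradient that increases the unit's activation — "whatever you see there, I want more of it."

```python
import numpy as np

# Gradient ascent on the INPUT, not the weights: amplify whatever the
# chosen unit already responds to until the input becomes that unit's
# "platonic ideal".

rng = np.random.default_rng(2)
w = rng.normal(size=64)           # weights of the unit we amplify
x = rng.normal(size=64) * 0.01    # start from near-static input

def activation(x):
    return np.tanh(w @ x)

for _ in range(100):
    grad = (1 - activation(x) ** 2) * w   # d tanh(w.x) / dx
    x += 0.1 * grad                        # ascend on the input

print(activation(x))
```

In DeepDream proper the "unit" is a whole layer of a trained convnet and the gradient comes from backprop, but the loop — forward pass, gradient w.r.t. the image, small step, repeat — is exactly this.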
art  machine-learning  algorithm  inceptionism  research  google  neural-networks  learning  dreams  feedback  graphics 
june 2015 by jm
Forecast Blog
Forecast.io are doing such a great job of applying modern machine-learning to traditional weather data. "Quicksilver" is their neural-net-adjusted global temperature geodata, and here's how it's built
quicksilver  forecast  forecast.io  neural-networks  ai  machine-learning  algorithms  weather  geodata  earth  temperature 
august 2013 by jm
_Building High-level Features Using Large Scale Unsupervised Learning_ [paper, PDF]
"We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art."
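The core mechanism — learning features from reconstruction alone, no labels — can be sketched at toy scale. A hedged numpy sketch of a plain tied-weight autoencoder trained by SGD (the paper's 9 layers, sparsity, pooling, contrast normalization, and 1,000-machine asynchronous setup are all omitted; sizes and rates here are toy assumptions):

```python
import numpy as np

# Unsupervised feature learning in miniature: a one-hidden-layer,
# tied-weight autoencoder trained by plain SGD to reconstruct
# unlabeled inputs. No labels appear anywhere in the loop.

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))        # stand-in for "unlabeled images"
W = rng.normal(size=(20, 8)) * 0.1    # encoder weights (decoder is W.T)

def recon_error(X, W):
    H = np.tanh(X @ W)                       # encode
    return np.mean((H @ W.T - X) ** 2)       # tied-weight decode

err0 = recon_error(X, W)
lr = 0.005
for _ in range(300):
    x = X[rng.integers(len(X))]
    h = np.tanh(x @ W)
    r = h @ W.T - x                          # reconstruction residual
    dh = (r @ W) * (1 - h ** 2)              # backprop through tanh
    W -= lr * (np.outer(x, dh) + np.outer(r, h))  # SGD step, tied weights

print(err0, recon_error(X, W))
```

At the paper's scale the hidden units, trained the same way on raw images, end up functioning as face/cat/body detectors without any labels — that is the surprising result.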
algorithms  machine-learning  neural-networks  sgd  labelling  training  unlabelled-learning  google  research  papers  pdf 
june 2012 by jm