Feature Visualization


93 bookmarks. First posted by danbri november 2017.


We did something similar in Feature Visualization () with this diagra…
from twitter_favs
9 weeks ago by shakeel
How neural networks build up their understanding of images
ai  deeplearning  machinelearning  visualization  primer  learning  neuralnetworks 
february 2018 by lidel
How neural networks build up their understanding of images
machinelearning  visualization  deeplearning  ai  explainable 
february 2018 by gnuf
Very well-written survey on new neural network visualization techniques.
ai 
january 2018 by alexbecker
There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution.

Feature visualization answers questions about what a network — or parts of a network — are looking for by generating examples.

Attribution studies what part of an example is responsible for the network activating a particular way.

This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
ai  visualization 
december 2017 by mdimmic
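
The excerpt above describes feature visualization as "generating examples" that show what a unit is looking for. A minimal sketch of that idea, by optimization, is below: gradient ascent on the input image to maximize one channel's activation. The tiny untrained network, the layer, and the channel index are stand-in assumptions for illustration, not the article's actual setup (which uses a trained GoogLeNet).

```python
# Sketch: feature visualization via activation maximization (assumptions noted above).
import torch
import torch.nn as nn

# Stand-in network; in practice you would load a trained model instead.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image is optimized

# Start from random noise and optimize the image itself.
img = torch.randn(1, 3, 128, 128, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

channel = 7  # hypothetical channel whose response we maximize
for step in range(256):
    optimizer.zero_grad()
    acts = model(img)                # forward pass
    loss = -acts[0, channel].mean()  # negative mean activation of one channel
    loss.backward()                  # gradients flow back to the pixels
    optimizer.step()

# `img` now (roughly) shows what this channel responds to.
```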
Edges -> Textures -> Patterns -> Parts -> Objects
deeplearning 
december 2017 by davewsmith
Neural feature visualization has made great progress over the last few years. As a community, we’ve developed principled ways to create compelling visualizations. We’ve mapped out a number of important challenges and found ways of addressing them.

In the quest to make neural networks interpretable, feature visualization stands out as one of the most promising and developed research directions. By itself, feature visualization will never give a completely satisfactory understanding. We see it as one of the fundamental building blocks that, combined with additional tools, will empower humans to understand these systems.
ai  visualization  research  Emergence 
december 2017 by janpeuker
nice format for online publications too
ml 
november 2017 by smmaurer
How neural networks build up their understanding of images.

"These patterns seem to be the images kind of cheating, finding ways to activate neurons that don’t occur in real life. If you optimize long enough, you’ll tend to see some of what the neuron genuinely detects as well, but the image is dominated by these high frequency patterns."
ai 
november 2017 by hanyu
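
The quote above describes optimized images "cheating" with high-frequency patterns. One common remedy, discussed in the article as frequency penalization, is to add a regularizer that discourages high-frequency content, for example a total-variation penalty. A minimal sketch is below; the weight value is an arbitrary assumption and would need tuning per model and layer.

```python
# Sketch: total-variation penalty to suppress high-frequency "cheating" patterns.
import torch

def total_variation(img: torch.Tensor) -> torch.Tensor:
    """Mean absolute difference between neighboring pixels, vertically and horizontally."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

# Inside the optimization loop, the objective becomes:
#   loss = -activation + tv_weight * total_variation(img)
tv_weight = 0.1  # hypothetical weight; higher values give smoother, lower-frequency images
```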
How neural networks build up their understanding of images
machinelearning  visualization 
november 2017 by vrt
"This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process."
neural-net  analysis  visualization 
november 2017 by arsyed
This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
machinelearning  deeplearning  ai  visualization  features 
november 2017 by drmeme
This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
neural-networks  inverse-problems  generative-art  to-write-about 
november 2017 by Vaguery
How neural networks build up their understanding of images
deeplearning  machinelearning  visualisation  neuralnetworks  google  science  computing 
november 2017 by garrettc
How neural networks build up their understanding of images
deep-learning  visualization  machine-learning  neural-networks 
november 2017 by mark.larios
How neural networks build up their understanding of images
visualization  ai  deeplearning  media  images 
november 2017 by peterb
There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution.

This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
visualization  neural-network 
november 2017 by Finkregh
I was reading latest and realized I no longer needed to open up tab after tab goog…
from twitter
november 2017 by mchung
RT : Feature Visualization - a new Distill article by & .
from twitter_favs
november 2017 by amitkaps
How neural networks build up their understanding of images
november 2017 by cwilkes
How neural networks build up their understanding of images
november 2017 by martinbalfanz
How neural networks build up their understanding of images
visualization  ai  neuralnetwork 
november 2017 by cothrun
RT @ch402: What do neural nets see? You may be surprised. @zzznah @ludwigschubert & I explore.
machine_learning  TensorFlow  visualization 
november 2017 by amy
How neural networks build up their understanding of images
hackernews  machinelearning 
november 2017 by briandk
New (absolutely beautiful) paper on feature visualization, by , &
from twitter_favs
november 2017 by randallr