Feature Visualization


85 bookmarks. First posted by danbri 11 weeks ago.


Very well-written survey on new neural network visualization techniques.
ai 
19 days ago by alexbecker
There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution.

Feature visualization answers questions about what a network — or parts of a network — are looking for by generating examples.

Attribution studies what part of an example is responsible for the network activating a particular way.

This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
ai  visualization 
5 weeks ago by mdimmic
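
As a concrete illustration of what "generating examples" means here, below is a minimal sketch of feature visualization by optimization in PyTorch. This is an assumption-laden toy, not the article's method: the model (torchvision's VGG16), layer index, channel, step count, and learning rate are all placeholder choices.

```python
import torch
from torchvision import models

# Activation maximization: optimize an input image so that one channel of an
# intermediate layer fires strongly. Layer index 10 and channel 42 are
# arbitrary placeholders, not choices taken from the article.
model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the image is being optimized

feats = {}
model.features[10].register_forward_hook(lambda m, i, o: feats.update(out=o))

img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(256):
    opt.zero_grad()
    model(img)
    loss = -feats["out"][0, 42].mean()   # ascend the channel's mean activation
    loss.backward()
    opt.step()
```

Run naively like this, the optimizer tends to find the high-frequency "cheating" patterns quoted in a later entry, which is why the article spends much of its length on regularizing this optimization.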
Edges -> Textures -> Patterns -> Parts -> Objects
deeplearning 
6 weeks ago by davewsmith
Neural feature visualization has made great progress over the last few years. As a community, we’ve developed principled ways to create compelling visualizations. We’ve mapped out a number of important challenges and found ways of addressing them.

In the quest to make neural networks interpretable, feature visualization stands out as one of the most promising and developed research directions. By itself, feature visualization will never give a completely satisfactory understanding. We see it as one of the fundamental building blocks that, combined with additional tools, will empower humans to understand these systems.
ai  visualization  research  Emergence 
7 weeks ago by janpeuker
nice format for online publications too
ml 
7 weeks ago by smmaurer
How neural networks build up their understanding of images.

"These patterns seem to be the images kind of cheating, finding ways to activate neurons that don’t occur in real life. If you optimize long enough, you’ll tend to see some of what the neuron genuinely detects as well, but the image is dominated by these high frequency patterns."
ai 
9 weeks ago by hanyu
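
One family of regularizers the article surveys against this cheating is transformation robustness: randomly transform the image each optimization step so that pixel-exact noise patterns stop paying off. A minimal sketch under the same placeholder assumptions (model, layer, channel) as the earlier snippet, using only random jitter:

```python
import torch
from torchvision import models

# Transformation robustness via random jitter: shifting the image a few
# pixels each step penalizes solutions that depend on exact pixel positions.
model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

feats = {}
model.features[10].register_forward_hook(lambda m, i, o: feats.update(out=o))

img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(256):
    opt.zero_grad()
    dx, dy = torch.randint(-8, 9, (2,)).tolist()          # random shift
    model(torch.roll(img, shifts=(dx, dy), dims=(2, 3)))  # jittered forward pass
    (-feats["out"][0, 42].mean()).backward()
    opt.step()
```

Jitter is just the simplest member of the family; the article also covers rotation and scaling, frequency-based preconditioning, and learned priors.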
How neural networks build up their understanding of images
machinelearning  visualization 
10 weeks ago by vrt
"This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process."
neural-net  analysis  visualization 
10 weeks ago by arsyed
This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
machinelearning  deeplearning  ai  visualization  features 
10 weeks ago by drmeme
This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
neural-networks  inverse-problems  generative-art  to-write-about 
10 weeks ago by Vaguery
How neural networks build up their understanding of images
deeplearning  machinelearning  visualisation  neuralnetworks  google  science  computing 
10 weeks ago by garrettc
How neural networks build up their understanding of images
deep-learning  visualization  machine-learning  neural-networks 
10 weeks ago by mark.larios
How neural networks build up their understanding of images
visualization  ai  deeplearning  media  images 
10 weeks ago by peterb
There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution.

This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
visualization  neural-network 
10 weeks ago by Finkregh
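
For contrast with feature visualization, the simplest baseline in the attribution thread mentioned above is vanilla gradient saliency: the gradient of a class score with respect to the input pixels marks which pixels most influence that score. A hedged sketch; the class index and the random stand-in input are placeholders:

```python
import torch
from torchvision import models

# Vanilla gradient saliency: d(score)/d(pixels) as a crude attribution map.
model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
score = model(x)[0, 207]                            # placeholder class index
score.backward()
saliency = x.grad.abs().max(dim=1).values           # per-pixel importance map
```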
I was reading latest and realized I no longer needed to open up tab after tab goog…
from twitter
10 weeks ago by mchung
How neural networks build up their understanding of images
10 weeks ago by cwilkes
How neural networks build up their understanding of images
10 weeks ago by martinbalfanz
How neural networks build up their understanding of images
visualization  ai  neuralnetwork 
10 weeks ago by cothrun
RT @ch402: What do neural nets see? You may be surprised. @zzznah @ludwigschubert & I explore.
machine_learning  TensorFlow  visualization 
10 weeks ago by amy
How neural networks build up their understanding of images
hackernews  machinelearning 
10 weeks ago by briandk
New (absolutely beautiful) paper on feature visualization, by @ch402, @zzznah & @ludwigschubert
from twitter_favs
10 weeks ago by randallr
“Examples like these suggest neurons are not necessarily the right semantic units for understanding neural nets” 🤔
from twitter
11 weeks ago by nirum
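
The quoted line refers to the article's observation that meaningful features may live along directions in activation space, combinations of many neurons, rather than in individual channels. The same optimization applies unchanged if the objective is a projection onto an arbitrary direction; a sketch, reusing the placeholder layer from the earlier snippets (its 256 channels are a property of that particular VGG16 layer):

```python
import torch
from torchvision import models

# Visualizing a direction in activation space: maximize the projection of the
# layer's channel vector onto a random 256-dimensional direction instead of a
# single channel's activation.
model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

feats = {}
model.features[10].register_forward_hook(lambda m, i, o: feats.update(out=o))

direction = torch.randn(256)                 # random direction over channels
img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(256):
    opt.zero_grad()
    model(img)
    act = feats["out"][0]                                  # (256, H, W)
    proj = (act * direction[:, None, None]).sum(0).mean()
    (-proj).backward()
    opt.step()
```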
latest piece on feature visualization in neural networks is AMAZINGGGG
from twitter_favs
11 weeks ago by unthinkingly
What do neural nets see? You may be surprised. @zzznah @ludwigschubert & I explore.
from twitter_favs
11 weeks ago by hustwj