The Building Blocks of Interpretability


55 bookmarks. First posted by jwtulp march 2018.


This article about interpreting neural networks is really interesting
from twitter_favs
6 weeks ago by bob
On making DNNs legible
ml  ai  nn  critical  visibility 
september 2018 by gerwitz
Interpretability techniques are normally studied in isolation.
We explore the powerful interfaces that arise when you combine them — and the rich structure of this combinatorial space.
june 2018 by rexarski
Lunch reading:
from twitter
may 2018 by vruba
Lunch reading:
from twitter_favs
may 2018 by lalavalse
With the growing success of neural networks, there is a corresponding need to be able to explain their decisions — including building confidence about how they will behave in the real-world, detecting model bias, and for scientific curiosity. In order to do so, we need to both construct deep abstractions and reify (or instantiate) them in rich interfaces [1] . With a few exceptions [2, 3, 4] , existing work on interpretability fails to do these in concert.
neural  networks  ai  machine  learning 
march 2018 by starrjulie
Making sense of neural networks
from twitter
march 2018 by jamescampbell
Beautiful use of DeepDream visualisations to explore how image-recognition neural nets work - https://t.co/bk4nXeV5Zc

— 𝕄𝕚𝕜𝕖 𝕃𝕪𝕟𝕔𝕙 (@bombinans) March 9, 2018
twitter 
march 2018 by mikelynch
In our view, features do not need to be flawless detectors for it to be useful to think of them as such. In fact, it can be interesting to identify when a detector misfires.

With regard to attribution, recent work suggests that many of our current techniques are unreliable. One might even wonder whether the idea is fundamentally flawed, since a function’s output can be the result of non-linear interactions between its inputs. One way these interactions can manifest is attribution being “path-dependent”. A natural response would be for interfaces to explicitly surface this information: how path-dependent is the attribution? A deeper concern, however, is whether this path-dependence dominates the attribution.
ai  documentation  Emergence 
march 2018 by janpeuker
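The excerpt above suggests that it can be interesting to identify when a detector misfires. A minimal sketch of one way to look for that, not the article's own method: rank dataset images by how strongly they activate a single channel and inspect the top hits whose labels do not match the concept we attribute to the channel. The layer name, channel index, and dataset path below are placeholders, and a pretrained torchvision GoogLeNet (the InceptionV1-style network the article studies) stands in for the model.

```python
# Sketch: rank images by how strongly they activate one channel, then eyeball
# the strongest activations for "misfires" (strong activation, unexpected label).
# Requires torchvision >= 0.13 for the weights= argument.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

model = models.googlenet(weights="DEFAULT").eval()
layer = model.inception4c          # layer of interest (assumption)
channel = 42                       # channel hypothesised to be a detector (assumption)

acts = []
def hook(_module, _inp, out):
    # Mean spatial activation of the chosen channel, one scalar per image.
    acts.append(out[:, channel].mean(dim=(1, 2)).detach())
handle = layer.register_forward_hook(hook)

tfm = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                 T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
dataset = ImageFolder("path/to/images", transform=tfm)   # placeholder path
loader = DataLoader(dataset, batch_size=32)

labels = []
with torch.no_grad():
    for x, y in loader:
        model(x)
        labels.append(y)
handle.remove()

scores = torch.cat(acts)
labels = torch.cat(labels)
top = scores.topk(20).indices
# Images here activate the channel strongly; any whose labels do not match the
# concept we attribute to the channel are candidate misfires worth inspecting.
for i in top.tolist():
    print(dataset.samples[i][0], dataset.classes[int(labels[i])], float(scores[i]))
```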
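To make the path-dependence concern concrete, here is a rough sketch, again not the article's method, of one way an interface could surface it: integrate gradients along two different straight-line paths (from a black baseline and from a random baseline, both arbitrary choices) and report how much the resulting attribution maps agree. The model, target class, and input below are stand-ins.

```python
# Sketch of surfacing path-dependence: integrate gradients along two different
# straight-line paths and compare the resulting attribution maps.
import torch
import torch.nn.functional as F
import torchvision.models as models

def path_attribution(model, x, baseline, target, steps=50):
    """Riemann-sum approximation of gradients integrated along the straight
    line from `baseline` to `x`, scaled by (x - baseline)."""
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        point.requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target]
        grad, = torch.autograd.grad(score, point)
        total += grad
    return (x - baseline) * total / steps

model = models.googlenet(weights="DEFAULT").eval()
x = torch.rand(3, 224, 224)   # stand-in for a preprocessed image
target = 207                  # arbitrary class index (assumption)

attr_black = path_attribution(model, x, torch.zeros_like(x), target)
attr_rand  = path_attribution(model, x, torch.rand_like(x), target)

# One crude "how path-dependent is it?" number: cosine similarity of the two maps.
# Low agreement suggests the attribution is strongly path-dependent.
cos = F.cosine_similarity(attr_black.flatten(), attr_rand.flatten(), dim=0)
print(f"agreement between paths: {cos.item():.3f}")
```

A fuller interface might show both attribution maps side by side rather than a single agreement score, so the viewer can see where the paths disagree.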
Interesting visualisation of neural networks
deep_learning  ai  visualisation 
march 2018 by edzard
Beyond excited to share the latest article: Building Blocks of Interpretability
from twitter_favs
march 2018 by jwtulp