Do neural networks have explainability like decision trees do?



With decision trees, we can understand the tree structure and visualize how the tree makes its decisions. So decision trees have explainability (their output can be explained easily).
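For example, with scikit-learn a small tree's learned rules can be printed directly as human-readable splits (a minimal sketch; the iris dataset and `max_depth=2` are just illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# Keep the tree small so the printed rules stay readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# export_text renders the learned splits as if/else rules.
rules = export_text(tree, feature_names=list(iris.feature_names))
print(rules)
```

Each path from the root to a leaf is a literal explanation of a prediction, which is exactly the property the question is asking about.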

Do we have explainability in Neural Networks like with Decision Trees?


Posted 2017-05-22T10:29:21.267

Reputation: 131


A recent model-agnostic framework is the LIME model.

– Emre – 2017-05-22T16:56:54.260

In the field of object recognition/classification using neural networks, heatmaps are popular for visualizing/explaining a decision. Tutorials and interactive demonstrations are available.

– Nikolas Rieble – 2017-05-23T08:10:30.400



I disagree with the previous answer and with your suggestion for two reasons:

1) Decision trees are based on simple logical decisions which, combined together, can make more complex decisions. BUT if your input has 1000 dimensions and the learned features are highly non-linear, you get a really big and deep decision tree which you won't be able to read/understand just by looking at the nodes.

2) Neural networks are similar in the sense that the function they learn is understandable only if they are very small. Once they get big, you need other tricks to understand them. As @SmallChess suggested, you can read the article Visualizing and Understanding Convolutional Networks, which explains, for the particular case of convolutional neural networks, how you can read the weights to understand things like "it detected a car in this picture, mainly because of the wheels, not the rest of the components".

These visualizations have helped many researchers understand weaknesses in their neural architectures and improve their training algorithms.
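A much simpler cousin of those visualization techniques is input-gradient saliency: ask which input features the output is most sensitive to. A toy numpy sketch with a made-up one-hidden-layer network (this is not the deconvnet method from the paper, just the simplest related idea):

```python
import numpy as np

# Tiny fixed network: 4 input features -> 3 tanh hidden units -> scalar output.
# All weights here are made up for illustration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=3)

x = rng.normal(size=4)

# Forward pass.
h = np.tanh(x @ W1)
y = w2 @ h

# Backward pass by hand: dy/dx = W1 @ (w2 * (1 - h**2)).
saliency = W1 @ (w2 * (1.0 - h ** 2))

# |saliency[i]| ranks how strongly feature i influences this prediction.
print(np.abs(saliency).argsort()[::-1])
```

The same gradient-with-respect-to-the-input trick, applied to image pixels, is what produces the heatmaps mentioned in the comments above.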



Reputation: 1 267

:-) I found the paper harder to understand than the deep convolutional network itself. It's a very mathematical paper. – SmallChess – 2017-05-22T12:40:56.207

Sorry, I cited the wrong article :-) I just changed it; this one is more graphical. The idea of reversing the convnet is not really hard if you know how convnets work. In the same way, Google Deep Dream uses back-propagation to project a particular output into the input space. – Robin – 2017-05-22T12:47:13.943

There is a video where Matt Zeiler explains many of these ideas, called Deconvolutional Networks. – Alex – 2018-05-08T13:30:42.490


No. Neural networks are generally difficult to understand: you gain predictive power at the cost of model complexity. While it's possible to visualize the NN weights graphically, they don't tell you exactly how a decision is made. Good luck trying to understand a deep network.

There is a popular Python package (with an accompanying paper) that can approximate a NN locally with a simpler, interpretable model. You may want to take a look.
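The core idea behind such local surrogate models (LIME-style, as mentioned in the comments above) can be sketched with plain numpy: sample perturbations around one instance, weight them by proximity, and fit a weighted linear model whose coefficients serve as the local explanation. The `black_box` function below is a hypothetical stand-in for any trained model:

```python
import numpy as np

def black_box(X):
    # Hypothetical opaque model: a nonlinear function of two features.
    return np.tanh(2.0 * X[:, 0]) + X[:, 1] ** 2

def explain_locally(f, x, n_samples=5000, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Sample perturbations around the instance we want to explain.
    X = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = f(X)
    # Weight samples by proximity to the instance (Gaussian kernel).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares via sqrt-weight trick; column of ones = intercept.
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
    return coef[1:]  # per-feature local weights (the "explanation")

x0 = np.array([0.0, 1.0])
coef = explain_locally(black_box, x0)
print(coef)
```

The coefficients approximate the model's local gradient at `x0`, which is what makes the explanation faithful in a neighborhood of that one prediction even though the global model stays opaque.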



Reputation: 3 050

haha. I know what it feels like. hugs :D – Dawny33 – 2017-05-22T11:25:26.050

DeepLIFT is a NN-specific local explanation tool. It works by propagating the difference in activation between the instance you want to explain and a reference instance. Choosing a reference is a bit tricky, but the tool appears to be interpretable and scalable overall. It can be used on tabular data.
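The difference-from-reference idea is easiest to see in the linear case, where the attribution decomposes exactly. A toy sketch (not the real DeepLIFT implementation; weights, inputs, and the all-zeros reference are made up for illustration):

```python
import numpy as np

# Hypothetical linear model: f(x) = w @ x + b.
w = np.array([0.5, -2.0, 1.0])
b = 0.3

def f(x):
    return w @ x + b

x = np.array([1.0, 0.5, 2.0])       # instance to explain
x_ref = np.array([0.0, 0.0, 0.0])   # reference instance (e.g. all-zeros)

# Attribute the output difference to each feature; for a linear model the
# contributions sum exactly to f(x) - f(x_ref).
contributions = w * (x - x_ref)
print(contributions, contributions.sum(), f(x) - f(x_ref))
```

DeepLIFT's rules generalize this exact-decomposition property layer by layer through a nonlinear network, which is why the choice of reference matters so much.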



Reputation: 1 874