I disagree with the previous answer and with your suggestion for two reasons:
1) Decision trees are based on simple logical decisions which, combined together, can express more complex decisions. BUT if your input has 1000 dimensions and the function to learn is highly non-linear, you get a really big and heavy decision tree which you won't be able to read or understand just by looking at the nodes.
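To make point 1 concrete, here is a minimal sketch (the data, dimensions, and labelling rule are all illustrative, assuming scikit-learn is available): even a modest 50-dimensional problem with a non-linear labelling rule already produces a fully-grown tree with far more nodes than anyone can read.

```python
# Illustrative sketch: an unconstrained decision tree fit to
# high-dimensional, non-linear data grows too large to interpret.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # 50-dimensional input

# A non-linear labelling rule that the tree can only approximate
# with many axis-aligned splits.
y = (np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] > 0).astype(int)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.tree_.node_count)  # typically hundreds of nodes
```

Limiting `max_depth` shrinks the tree back to something readable, but then it no longer captures the non-linear structure, which is exactly the trade-off above.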
2) Neural networks are similar in the sense that the function they learn is understandable only if they are very small. When they get big, you need other tricks to understand them. As @SmallChess suggested, you can read the article called Visualizing and Understanding Convolutional Networks, which explains, for the particular case of convolutional neural networks, how you can interpret the learned features to understand things like "it detected a car in this picture, mainly because of the wheels, not the rest of the components".
These visualizations have helped many researchers actually understand the weaknesses of their neural architectures and improve their training algorithms.