Convolutional Neural Networks have consistently outperformed other methods for image recognition and related tasks. In fact, beyond a certain image size it is not practical to train a fully-connected network on raw pixels due to the enormous number of parameters that would be required. The number of parameters in a `CNN`, however, grows much more slowly as you increase the input image size, since CNNs usually do not require larger filters, and the number of parameters per filter is independent of input size.
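To make the comparison concrete, here is a minimal sketch of the parameter counts for a single fully-connected layer versus a single convolutional layer (the layer sizes chosen here are illustrative, not from any particular model):

```python
def fc_params(height, width, channels, hidden_units):
    # Fully-connected layer: every input pixel connects to every hidden unit,
    # so the count scales with the image's spatial size.
    return height * width * channels * hidden_units + hidden_units  # weights + biases

def conv_params(kernel_size, in_channels, out_channels):
    # Conv layer: parameters depend only on filter size and channel counts,
    # not on the spatial size of the input image.
    return kernel_size * kernel_size * in_channels * out_channels + out_channels

# One layer with 64 hidden units / 64 filters on an RGB image:
print(fc_params(32, 32, 3, 64))    # 196672 parameters at 32x32
print(fc_params(256, 256, 3, 64))  # 12582976 parameters at 256x256
print(conv_params(3, 3, 64))       # 1792 parameters at any image size
```

Growing the image from 32×32 to 256×256 multiplies the fully-connected layer's parameter count by 64, while the convolutional layer's count does not change at all.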

With that said, a much more remarkable demonstration of the power of the `CNN` architecture for image processing appears in the recent paper *Deep Image Prior*. The abstract reads:

> Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a **randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting**. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
>
> Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.

So they were able to demonstrate that the raw network *prior to any training* was powerful enough for sophisticated tasks, which is the strongest and most direct illustration of the power of the specific architecture itself that I have seen.

The answer is not so clear for time series prediction, since `CNNs` with various tricks have performed on par with `RNNs` on some datasets. In general, though, `RNNs` seem much better suited to the problem, since they can handle inputs of arbitrary length and can store state.
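Both of those properties follow from the recurrence itself. Here is a minimal sketch of a single-unit recurrent cell (the weights are arbitrary illustrative constants, not a trained model): the same step function is applied at every time step, so one cell handles sequences of any length, and the state it carries forward is how past inputs influence later outputs.

```python
import math

def rnn_step(state, x, w_state=0.5, w_input=1.0):
    # One recurrent step: the new state mixes the previous state with the
    # current input, so information from earlier time steps persists.
    return math.tanh(w_state * state + w_input * x)

def run(sequence):
    state = 0.0
    for x in sequence:          # the same cell unrolls over any sequence length
        state = rnn_step(state, x)
    return state

print(run([0.1, 0.2]))                  # short input
print(run([0.1, 0.2, 0.3, 0.4, 0.5]))  # longer input, same cell, no new parameters
```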

However, either method will significantly outperform earlier approaches such as feeding a fixed-size sliding window into a fully-connected network. Even when a fully-connected network is combined with a state-preserving model such as `ARIMA`, it still cannot match the performance of `CNNs` or `RNNs`. See *Deep Learning for Time-Series Analysis*.
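For reference, the sliding-window approach mentioned above amounts to slicing the series into fixed-size (input, target) pairs so a fixed-input model can consume them. A minimal sketch (window and horizon values are illustrative):

```python
def sliding_windows(series, window, horizon=1):
    # Turn a 1-D series into (input window, target) pairs for a
    # fixed-input model such as a fully-connected network.
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]              # fixed-size input window
        y = series[i + window + horizon - 1]  # value to predict
        pairs.append((x, y))
    return pairs

series = [1, 2, 3, 4, 5, 6]
for x, y in sliding_windows(series, window=3):
    print(x, "->", y)
# [1, 2, 3] -> 4
# [2, 3, 4] -> 5
# [3, 4, 5] -> 6
```

The fixed window size is exactly the limitation: anything further back than `window` steps is invisible to the model, which is the context an `RNN`'s state (or a `CNN`'s dilated receptive field) is meant to recover.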

Awesome; are there other classes/applications? – Open Food Broker – 2017-12-03T09:36:38.230


Yes there are many. Types of ANNs and ANN applications on Wikipedia are good starting points. The list is far too long to enumerate here. You can also follow /r/MachineLearning for lots of news on the latest and most exciting applications of neural networks.

– Imran – 2017-12-03T09:51:29.200

Still, I miss something like a matrix (ANN type × best-performing application domain). – Open Food Broker – 2017-12-03T10:31:42.147