In theory, most neural networks can approximate any continuous function on compact subsets of $\mathbb{R}^n$, provided the activation functions satisfy certain mild conditions. This is known as the *universal approximation theorem* (UAT), though the word *universal* is somewhat misleading: there are far more discontinuous functions than continuous ones, even if certain discontinuous functions can be approximated by continuous ones. The UAT shows the theoretical power of neural networks and their purpose: they represent and approximate functions. If you want to know more about the details of the UAT for different neural network architectures, see this answer.
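As a toy illustration of the UAT idea (a sketch, not any particular published construction): a one-hidden-layer ReLU network can represent any piecewise-linear interpolant exactly, so approximating a continuous function like $\sin$ on $[0, \pi]$ reduces to placing enough knots. The function names here (`build_relu_net`, etc.) are made up for this example.

```python
import math

def relu(x):
    return max(0.0, x)

def build_relu_net(f, a, b, n_units):
    """Construct a one-hidden-layer ReLU network that linearly
    interpolates f at n_units + 1 evenly spaced knots on [a, b].
    Returns a function x -> network output."""
    knots = [a + (b - a) * i / n_units for i in range(n_units + 1)]
    values = [f(k) for k in knots]
    # Slopes of the piecewise-linear interpolant on each segment.
    slopes = [(values[i + 1] - values[i]) / (knots[i + 1] - knots[i])
              for i in range(n_units)]
    # Each hidden unit contributes the *change* in slope at its knot:
    # net(x) = f(a) + sum_i w_i * relu(x - knots[i]).
    weights = [slopes[0]] + [slopes[i] - slopes[i - 1]
                             for i in range(1, n_units)]

    def net(x):
        return values[0] + sum(w * relu(x - k)
                               for w, k in zip(weights, knots))
    return net

approx = build_relu_net(math.sin, 0.0, math.pi, 50)
max_err = max(abs(approx(x) - math.sin(x))
              for x in [math.pi * i / 1000 for i in range(1001)])
print(f"max error with 50 ReLU units: {max_err:.5f}")
```

With 50 units the maximum error is below $10^{-3}$; adding more units shrinks it further, which is exactly the UAT's promise for continuous functions on a compact set.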

However, in practice, neural networks trained with gradient descent and backpropagation face several issues and challenges, some of which are due to the training procedure and not just the architecture of the neural network or available data.

For example, it is well known that neural networks are prone to catastrophic forgetting (or interference), which means that they aren't particularly suited for **incremental learning** tasks, although some more sophisticated incremental learning algorithms based on neural networks have already been developed.
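A minimal pure-Python sketch of the effect (a deliberately tiny one-parameter "network", not a realistic setting): train on task A, then continue training on task B only, and the performance on task A collapses because the shared parameter is overwritten.

```python
def sgd(w, data, lr=0.1, steps=200):
    """Plain gradient descent on squared error for the model y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x / 10, 2 * (x / 10)) for x in range(1, 11)]   # target: y = 2x
task_b = [(x / 10, -(x / 10)) for x in range(1, 11)]      # target: y = -x

w = 0.0
w = sgd(w, task_a)
loss_a_before = loss(w, task_a)   # near zero: task A is learned
w = sgd(w, task_b)                # continue training on task B only
loss_a_after = loss(w, task_a)    # task A performance collapses
print(f"loss on task A: before={loss_a_before:.4f}, after={loss_a_after:.4f}")
```

Real networks have many parameters rather than one, but the mechanism is the same: nothing in plain gradient descent protects the parameters that encoded the old task.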

Neural networks can also be sensitive to their inputs, i.e. a small change in the input can drastically change the network's output. This is partly because they learn a function that isn't really the function you expect them to learn. A system based on such a neural network can therefore potentially be hacked or fooled, so neural networks are probably not well suited for **safety-critical applications**. This issue is related to the low interpretability and explainability of neural networks, which is why they are often described as black-box models.
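The intuition behind this sensitivity can be shown with a linear model in high dimensions (a simplified sketch of the standard adversarial-perturbation argument, not a full attack on a trained network): a perturbation that is tiny in every coordinate, but aligned with the sign of each weight, shifts the output by $\epsilon \sum_i |w_i|$, which grows with the input dimension.

```python
import random

random.seed(0)
dim = 1000
# A linear "classifier" with many small weights of magnitude 0.02.
w = [random.choice([-0.02, 0.02]) for _ in range(dim)]
x = [random.uniform(-1, 1) for _ in range(dim)]

def score(v):
    """Dot product with the weights; the sign would be the label."""
    return sum(wi * vi for wi, vi in zip(w, v))

eps = 0.05  # tiny change per input dimension
x_adv = [xi + eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

shift = score(x_adv) - score(x)
print(f"score shift: {shift:.4f}")
# Each coordinate moved by only 0.05, yet the score shifts by
# eps * sum(|w_i|) = 0.05 * 1000 * 0.02 = 1.0, often enough to flip
# the predicted label.
```

With 1000 input dimensions, 1000 imperceptible nudges add up to a large change in the output, which is one reason small, targeted input changes can fool a model.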

Bayesian neural networks (BNNs) can potentially mitigate these problems, but they are unlikely to be the ultimate or complete solution. A Bayesian neural network maintains a distribution over each of its parameters (weights), rather than a point estimate. In principle, this can provide stronger uncertainty guarantees, but, in practice, this is not yet the case.
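The core idea can be sketched with a one-weight "Bayesian" model (the posterior here is simply assumed to be a Gaussian, standing in for one actually inferred from data): predictions are made by sampling weights and reporting the spread of the resulting outputs as an uncertainty estimate.

```python
import random
import statistics

random.seed(1)

# Toy posterior over the single weight of the model y = w * x:
# a Gaussian with mean 2.0 and std 0.3 (assumed, not inferred).
w_mean, w_std = 2.0, 0.3

def predict(x, n_samples=2000):
    """Monte Carlo prediction: sample weights from the posterior,
    return the mean and std of the outputs as an uncertainty estimate."""
    ys = [random.gauss(w_mean, w_std) * x for _ in range(n_samples)]
    return statistics.mean(ys), statistics.stdev(ys)

for x in (0.5, 1.0, 5.0):
    mu, sigma = predict(x)
    print(f"x={x}: prediction {mu:.2f} +/- {sigma:.2f}")
# The predictive spread grows with |x| (roughly 0.3 * |x| here), so
# the model reports more uncertainty further from the origin.
```

A point-estimate network would output a single number for each input; the distributional version at least signals *how much* it is guessing, which is the kind of uncertainty information the paragraph above refers to.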

Furthermore, neural networks often require a lot of data in order to approximate the desired function accurately, so in cases where **data is scarce** neural networks may not be appropriate. Moreover, training neural networks (especially deep architectures) also **requires a lot of computational resources**. Inference can sometimes be problematic too: when you need real-time predictions, it can also be expensive.

To conclude, neural networks are just function approximators: they approximate a *specific* function (or set of functions, in the case of Bayesian neural networks), given a specific configuration of the parameters. They can't do more than that. They cannot magically do something they have not been trained to do, and you usually don't really know the specific function the neural network is representing (hence the expression *black-box model*), apart from knowing your training dataset, which can itself contain spurious information, among other issues.

Can you explain what kind of answer you're exactly looking for and why the existing answers are not sufficient? Is it because you're looking for more specific applications where neural networks are not yet state-of-the-art? – nbro – 2020-03-22T02:34:54.710

Precisely, I'm looking for a regression task with no time-dependent/sequential component where neural nets are not state-of-the-art. A lot of the answers below are classification problems. – AIM_BLB – 2020-03-22T02:42:02.940

Originally, you were looking for general problems where neural networks don't do well. That's a question. Another question is the one you opened the bounty for. When you have a different *specific* question, which is the case now, you should create a new post (I think), so that people provide specific answers. For example, this new answer addresses your original question, but not your new specific question. Now that you opened the bounty, it's too late, so, hopefully, someone will provide a specific answer to the question you opened the bounty for. – nbro – 2020-03-23T01:23:18.157

So, please, do not change the original question, otherwise you invalidate the current answers. It's ok to have that specification now, because a potential answer will also answer your original question, but, please, next time, bear in mind that, if you have a new different (although very similar) question, I think you should create a new post and ask there the new question. – nbro – 2020-03-23T01:31:53.110

Thanks for the tip nbro. I was looking for both types of answers but had only received the general type; it's my fault for not being clearer in my original formulation. – AIM_BLB – 2020-03-23T07:58:48.773