First of all, there is no real 'intelligence' innate to artificial Neural Networks (NNs).
All they do is try to approximate a mathematical function with a certain degree of generalization (hopefully without learning the given dataset by heart, i.e. without overfitting).
The more nodes (or neurons) you include in the network, the more complex the functions are that the network can learn to approximate. It's similar to high-school math: the higher the degree of a polynomial, the more closely it can be adjusted to fit some observations to be modeled; the main difference being that NNs commonly include non-linearities and are trained via some variant of stochastic gradient descent.
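The polynomial analogy can be sketched in a few lines of NumPy. The noisy sine-wave data below is made up purely for illustration; the point is only that a higher-degree polynomial achieves a lower training error on the same data:

```python
import numpy as np

# Toy data (invented for this illustration): noisy samples of a sine wave.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 30)
y = np.sin(x) + rng.normal(0, 0.1, size=x.shape)

# Fit polynomials of increasing degree via least squares and compare
# the training error: more degrees of freedom -> lower training MSE.
train_mse = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    train_mse[degree] = np.mean((y - np.polyval(coeffs, x)) ** 2)
    print(f"degree {degree}: training MSE = {train_mse[degree]:.4f}")
```

Note that this only shows the fit to the *training* data improving; exactly as with an oversized NN, the degree-9 polynomial is already starting to chase the noise.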
So, yes: the more nodes a model possesses, the higher its so-called model capacity, i.e. the more degrees of freedom the model has to fit some function. After all, NNs are said to be universal function approximators, given that they have enough internal nodes in their hidden layer(s) to fit the given function.
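To make this concrete, here is a minimal sketch (my own illustration, not a prescribed method) of a single-hidden-layer network with a tanh non-linearity approximating sin(x). For simplicity it uses plain full-batch gradient descent rather than a stochastic variant; all sizes and hyperparameters are made up:

```python
import numpy as np

# Target function to approximate: f(x) = sin(x) on [-pi, pi].
rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

n_hidden = 32  # the "nodes" of the single hidden layer; more -> more capacity
W1 = rng.normal(0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

def forward(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

initial_mse = np.mean((forward(x) - y) ** 2)

lr = 0.05
for step in range(5000):
    # forward pass
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # backward pass (gradients of the mean squared error)
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    # gradient step
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

final_mse = np.mean((forward(x) - y) ** 2)
print(f"MSE before training: {initial_mse:.4f}, after: {final_mse:.4f}")
```

With enough hidden nodes the fit can be made arbitrarily good on this interval; with too few (try `n_hidden = 1`), the network simply lacks the capacity.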
In practice, however, you don't want to blow up a model architecture unnecessarily, since this commonly results in overfitting, if it doesn't destabilize the training procedure altogether.
Generally, the larger the model to be trained, the higher the computational cost of training it.
A common suggestion is to reduce the number of nodes per layer at the expense of increasing the network's depth, i.e. the number of hidden layers. Often, this helps reduce the demand for excessively many nodes overall.
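One way to see the trade-off is to count parameters. The sketch below (layer sizes are made up for illustration) compares one wide hidden layer against several narrow ones in a fully connected network:

```python
def mlp_param_count(layer_sizes):
    """Total number of weights and biases in a fully connected network,
    given the layer widths, e.g. [10, 512, 1] for one hidden layer."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# One wide hidden layer of 512 nodes vs. three narrow layers of 32 nodes:
wide = mlp_param_count([10, 512, 1])
deep = mlp_param_count([10, 32, 32, 32, 1])
print(f"wide: {wide} parameters, deep: {deep} parameters")
```

Here the deeper, narrower network has far fewer parameters, yet the stacked non-linearities often let it represent comparably complex functions.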