NEAT uses genetic algorithms both to search for improved connection weights and for improved architectures.
Whilst it is possible to train a NEAT-generated neural network using backpropagation of error gradients, libraries implementing the "original" NEAT algorithm typically do not support it.
There are a couple of reasons:
There is often no training data in the supervised-learning sense. The fitness function of a NEAT system can be any measure of performance at some task; in the general case it is computed by running an environment simulation that interacts with an agent controlled by the NN, so there is no training data per se.
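As a concrete (and entirely hypothetical) illustration of fitness without training data, here is a toy environment where fitness is simply how many steps the controller keeps the state within bounds. The dynamics and the stand-in policy are invented for the sketch; in a real NEAT run the lambda would be replaced by an evolved network's activation function:

```python
def simulate(policy, steps=200):
    """Run a toy 1-D 'keep the state near zero' environment (hypothetical,
    standing in for e.g. pole balancing). `policy` maps observation -> action
    in [0, 1]; fitness is the number of steps survived."""
    x, v = 0.5, 0.0                              # position, velocity
    fitness = 0.0
    for _ in range(steps):
        action = policy([x, v])                  # the evolved network goes here
        v += 0.1 * (action - 0.5) - 0.05 * x     # crude made-up dynamics
        x += v
        if abs(x) > 1.0:                         # "fell over": episode ends
            break
        fitness += 1.0                           # reward = steps survived
    return fitness

# A hand-written stand-in for an evolved controller:
fitness = simulate(lambda obs: 0.5 - 0.3 * obs[0])
```

Note there is no dataset anywhere: the only training signal is the scalar returned by the simulation.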
Evolved network topologies do not conform to the stacked-layer models that mainstream ML frameworks, designed for typical deep learning architectures, are built around.
Both of these issues can be resolved with a little effort. For instance, for the second issue there are frameworks that will work with arbitrary feed-forward connection graphs; they are just a little more niche than e.g. Keras. However, NEAT is often used precisely because it can solve problems without needing to frame them as supervised learning or reinforcement learning.
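As a sketch of why the second issue is only a minor obstacle: evaluating an arbitrary feed-forward connection graph needs nothing more than a traversal in dependency order, with no layer structure required. The node-id and edge conventions here are invented for the example:

```python
import math
from collections import defaultdict

def activate(inputs, connections, output_nodes):
    """Evaluate an arbitrary feed-forward graph (no layers required).
    `connections` is a list of (src, dst, weight) edges; `inputs` maps
    input node ids to values. The graph is assumed acyclic."""
    incoming = defaultdict(list)
    for src, dst, w in connections:
        incoming[dst].append((src, w))

    values = dict(inputs)                        # input nodes are pre-filled

    def node_value(n):
        # Recursively evaluate a node from its incoming connections.
        if n not in values:
            s = sum(node_value(src) * w for src, w in incoming[n])
            values[n] = 1.0 / (1.0 + math.exp(-s))   # sigmoid activation
        return values[n]

    return [node_value(n) for n in output_nodes]

# Nodes 0 and 1 are inputs, 2 is hidden, 3 is the output;
# note the skip connection 0 -> 3, which a layered model would not express.
conns = [(0, 2, 1.0), (1, 2, 1.0), (2, 3, 2.0), (0, 3, -1.0)]
out = activate({0: 1.0, 1: 0.0}, conns, [3])
```

This is essentially what NEAT libraries do internally when they "activate" a genome's phenotype network.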
Other than the hard work of putting it together, there is nothing stopping you from creating a train-on-data stage and alternating between the two approaches, perhaps with a controllable weighting between evolution and gradient-based training. To add the gradient-based training, either:
a) Your original problem is a classification or regression task. In that case your fitness function and training loss function could be the same quantity.
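A minimal sketch of (a), assuming a regression task with mean squared error: the quantity gradient descent minimises is just the negation of the quantity evolution maximises. The `predict` callable and the dataset are hypothetical stand-ins:

```python
def fitness_from_loss(predict, data):
    """Fitness for evolution and loss for gradient descent as two views of
    the same quantity: fitness is negated mean squared error over the data."""
    mse = sum((predict(x) - y) ** 2 for x, y in data) / len(data)
    return -mse   # evolution maximises this; gradient descent minimises mse

# Toy dataset: y = 2x, and two stand-in "networks" to compare.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
f_good = fitness_from_loss(lambda x: 2.0 * x, data)   # perfect fit
f_bad = fitness_from_loss(lambda x: 0.0, data)        # constant predictor
```

Because the two objectives are literally the same number (up to sign), alternating between evolutionary selection and gradient steps cannot pull the search in contradictory directions.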
b) Your original problem is one of controlling an agent in an environment. Here you could potentially use something like the REINFORCE algorithm, based on the most recent episode results, to provide gradients for training the NN. Other policy gradient methods could also be a good fit, since a NEAT network typically outputs a policy rather than a value prediction.
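A minimal sketch of the REINFORCE part of (b), on a deliberately tiny problem: a two-armed bandit with a one-parameter Bernoulli policy standing in for the evolved network. The update is reward times the score function (the gradient of the log-probability of the sampled action), which for a Bernoulli policy is simply `a - p`:

```python
import math
import random

def reinforce_step(theta, episodes=200, lr=0.1):
    """One REINFORCE update for a one-parameter Bernoulli policy on a toy
    two-armed bandit (hypothetical task): arm 1 pays 1.0, arm 0 pays 0.2."""
    grad = 0.0
    for _ in range(episodes):
        p = 1.0 / (1.0 + math.exp(-theta))   # P(choose arm 1)
        a = 1 if random.random() < p else 0
        reward = 1.0 if a == 1 else 0.2
        grad += reward * (a - p)             # r * d/dtheta log pi(a | theta)
    return theta + lr * grad / episodes      # ascend the average gradient

random.seed(0)
theta = 0.0
for _ in range(100):
    theta = reinforce_step(theta)
p_final = 1.0 / (1.0 + math.exp(-theta))     # should now favour arm 1
```

In the combined scheme, an analogous update would be applied to each evolved network's connection weights between generations; the many-episodes-per-network requirement is exactly what makes this expensive across a whole NEAT population.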
I have never tried either of these (nor done much with NEAT at all beyond demos and a bit of theory). For (a) I would expect the combination to work, but would wonder why you were bothering with NEAT in the first place. For (b) I am less sure you would get useful results, because REINFORCE relies on multiple runs with the same network, whilst NEAT relies on stochastic search across multiple networks. Applying REINFORCE training across a whole population could also be very CPU intensive.