Yes, it may. In machine learning there is a technique called *early stopping*: you plot the error rate on the training and validation data, with the `number of epochs` on the horizontal axis and the `error rate` on the vertical axis, and you stop training at the point where the validation error is at its minimum. If you keep increasing the number of epochs past that point, you will end up with an over-fitted model.
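As a minimal sketch of that idea (with made-up error values and a hypothetical `patience` threshold), early stopping just means halting once the validation error has stopped improving:

```python
# Minimal early-stopping sketch: stop when the validation error has not
# improved for `patience` consecutive epochs, and report the best epoch.
def early_stop_epoch(val_errors, patience=3):
    best_err = float("inf")
    best_epoch = 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs: stop training
    return best_epoch, best_err

# Synthetic U-shaped validation curve: error falls, then rises (over-fitting).
errors = [0.9, 0.7, 0.5, 0.4, 0.45, 0.5, 0.6, 0.7]
print(early_stop_epoch(errors))  # → (3, 0.4), the minimum of the curve
```

In a real training loop you would also save the model weights at the best epoch, so you can restore them after stopping.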

In the deep-learning era, early stopping is less customary. There are several reasons for this, one being that deep-learning approaches need so much data and rely on stochastic-gradient-like optimizations, so the plot described above would be very noisy. Deep models can still over-fit if you train too long on the training data, so other techniques are used to avoid the problem. Adding noise to different parts of the model, e.g. dropout, or using batch normalization with a moderate batch size, helps these models avoid over-fitting even after many epochs.
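To make the noise-injection idea concrete, here is a sketch of (inverted) dropout in plain NumPy; the keep/drop probability `p` and the fixed seed are illustrative choices, not part of any particular framework:

```python
import numpy as np

# Inverted dropout: each activation is zeroed with probability p during
# training, and the survivors are scaled by 1/(1-p) so the expected value
# of the layer's output is unchanged (no rescaling needed at test time).
def dropout(activations, p=0.5, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(activations.shape) >= p  # keep each unit with prob. 1-p
    return activations * mask / (1.0 - p)

x = np.ones(10)
print(dropout(x, p=0.5))  # each entry is either 0.0 (dropped) or 2.0 (kept, rescaled)
```

Because a different random mask is drawn each step, the network cannot rely on any single co-adapted unit, which is why dropout acts as a regularizer.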

In general, too many epochs may cause your model to over-fit the training data. It means that your model does not *learn* the data, it memorizes it. To investigate whether it over-fits, track the accuracy on the validation data at each epoch (or even each iteration).
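One simple way to do that check, sketched with hypothetical per-epoch numbers and an arbitrary gap threshold: flag the epochs where training accuracy has pulled well ahead of validation accuracy.

```python
# Flag epochs where the train/validation accuracy gap exceeds a threshold,
# a rough signal that the model has started memorizing the training data.
def overfit_epochs(train_acc, val_acc, gap=0.10):
    return [e for e, (t, v) in enumerate(zip(train_acc, val_acc)) if t - v > gap]

# Hypothetical learning curves: training keeps improving, validation stalls.
train = [0.60, 0.72, 0.83, 0.91, 0.96, 0.99]
val   = [0.58, 0.70, 0.78, 0.80, 0.79, 0.77]
print(overfit_epochs(train, val))  # → [3, 4, 5]
```

The exact threshold is a judgment call; the useful signal is the trend, i.e. validation accuracy flattening or dropping while training accuracy keeps rising.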