Yes, you can mix different kinds of inputs as long as the features are on similar scales, which is achieved by normalising the feature vectors.
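A minimal sketch of what that normalisation looks like in practice (assuming scikit-learn is available; the feature values below are made up):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical mixed-scale features: column 0 is age in years, column 1 is income in dollars
X = np.array([[25,  40_000.0],
              [32,  85_000.0],
              [47, 120_000.0],
              [51,  60_000.0]])

scaler = StandardScaler()            # fit on training data only
X_scaled = scaler.fit_transform(X)   # each column now has mean 0 and std 1

print(X_scaled.mean(axis=0))  # ~[0, 0]
print(X_scaled.std(axis=0))   # ~[1, 1]
```

After this transform every feature contributes on a comparable scale, regardless of its original units.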
I assume that by 'too much input' you mean too many features.
If you mean the size of the input data (the number of training examples), then no: dataset size is not directly related to overfitting. Overfitting depends on model complexity. It happens when the model fits the noise in the training data and becomes so specific to it that it cannot generalise well to new, unseen data.
Any model that is "sufficiently" complex (e.g. one with many hidden layers, a large number of neurons per layer, and unregularised weights) can easily reach a very small loss on the training data (unless it gets stuck in a sub-optimal local minimum), yet still give poor accuracy on test data. If anything, it is the lack of data that often leads to overfitting, because the model has to learn from very few, less diverse examples. It's like showing a child a sample of balls containing only white and orange table-tennis balls, and then asking him/her to identify a blue ball.
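Here is a rough sketch of the complexity argument, using polynomial fitting as a stand-in for an unregularised network (the sine-curve data and the degrees 3 and 9 are my own choices for illustration): a model with far more capacity than the data warrants drives the training error toward zero while the test error grows.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, size=10)  # noisy training points
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)                                   # the true underlying curve

for degree in (3, 9):  # modest vs. excessive capacity
    p = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((p(x_train) - y_train) ** 2)
    test_mse = np.mean((p(x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-9 polynomial can pass through all ten training points (near-zero training error) but wiggles wildly between them, so its test error is much worse than the degree-3 fit.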
Too many features, on the other hand, can lead to overfitting because they increase model complexity. There is also a greater chance of redundant features, and of features that have nothing to do with the quantity being predicted.
For example, if you are predicting the quality of a tennis ball, the ball's colour is an irrelevant feature, but the network will still learn from whatever is in the training examples: perhaps people prefer playing with yellow balls, so those balls are used more often and wear out sooner, and the network ends up associating the colour yellow with poor quality.
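A hypothetical illustration of that spurious correlation, with entirely synthetic data (the feature names and numbers are invented for the sketch): colour carries no causal information about quality, but because yellow balls in this particular sample happen to be more worn, the fitted model still assigns a non-trivial weight to the colour column.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200
bounce = rng.normal(0.8, 0.1, n)        # genuinely relevant feature
is_yellow = rng.integers(0, 2, n)       # irrelevant feature (colour)

# Quality really depends on bounce, but yellow balls in this sample are also more worn out:
quality = (bounce - 0.05 * is_yellow + rng.normal(0, 0.05, n)) > 0.78

X = np.column_stack([bounce, is_yellow])
model = LogisticRegression().fit(X, quality)
print(model.coef_)  # the second coefficient (colour) is non-zero, learned from a coincidence in the data
```

With more diverse data, or with the irrelevant feature removed, that coefficient would shrink toward zero; with few examples and many such features, the model happily keeps these accidental patterns.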