To answer your question, it is important to understand the frame of reference you are looking for. If you want to know what you are philosophically trying to achieve in model fitting, check out Rubens' answer; he does a good job of explaining that context.
However, in practice your question is almost entirely defined by business objectives.
To give a concrete example, let's say you are a loan officer: you issue loans of \$3,000, and when people pay you back you make \$50. Naturally, you are trying to build a model that predicts whether a person will default on their loan. Let's keep this simple and say that the outcomes are either full payment or default.
From a business perspective, you can sum up a model's performance with a contingency matrix:
When the model predicts someone is going to default, do they? To determine the downsides of over and under fitting, I find it helpful to think of it as an optimization problem, because in each cross section of predicted versus actual model performance there is either a cost or a profit to be made:
In this example, predicting a default that really is a default means avoiding any risk, and predicting a non-default that doesn't default makes \$50 per loan issued. Where things get dicey is when you are wrong: if a customer defaults when you predicted non-default, you lose the entire loan principal, and if you predict default when a customer actually would not have defaulted, you suffer \$50 of missed opportunity. The numbers here are not important, just the approach.
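The payoff-per-cell idea above can be sketched in a few lines of Python. The payoff values come from the loan example (\$3,000 principal, \$50 profit); the cell counts below are made-up numbers purely for illustration:

```python
# Hypothetical payoff for each (predicted, actual) cell of the
# contingency matrix, using the loan numbers from the example.
PAYOFF = {
    ("default", "default"): 0,         # loan declined: risk avoided, nothing earned
    ("no_default", "no_default"): 50,  # loan repaid: $50 profit
    ("no_default", "default"): -3000,  # approved a bad loan: principal lost
    ("default", "no_default"): -50,    # declined a good customer: missed $50
}

def expected_profit(counts):
    """Total profit given counts of (predicted, actual) outcomes."""
    return sum(PAYOFF[cell] * n for cell, n in counts.items())

# 100 loans: 80 correctly approved, 5 correctly declined,
# 10 good customers declined, 5 bad loans approved.
counts = {
    ("no_default", "no_default"): 80,
    ("default", "default"): 5,
    ("default", "no_default"): 10,
    ("no_default", "default"): 5,
}
print(expected_profit(counts))  # 80*50 - 10*50 - 5*3000 = -11500
```

Note how a handful of missed defaults dominates the total: that asymmetry is exactly why the business cost of each cell, not raw accuracy, should drive the model.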
With this framework we can now begin to understand the difficulties associated with over and under fitting.
Over fitting in this case means that your model works far better on your development/test data than it does in production. To put it another way, your model in production will far underperform what you saw in development. This false confidence will probably cause you to take on far riskier loans than you otherwise would, and leaves you very vulnerable to losing money.
On the other hand, under fitting in this context leaves you with a model that just does a poor job of matching reality. The results can be wildly unpredictable (the opposite of what you want from a predictive model), and commonly what happens is that standards are tightened up to compensate, leading to fewer customers overall and to good customers being turned away.
Under fitting suffers a kind of opposite difficulty to over fitting: it gives you lower confidence. Insidiously, the lack of predictability still leads you to take on unexpected risk, all of which is bad news.
In my experience, the best way to avoid both of these situations is to validate your model on data that is completely outside the scope of your training data, so you can have some confidence that you have a representative sample of what you will see 'in the wild'.
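As a minimal sketch of that idea, here is a toy holdout check. The data, the credit-score "model" (a simple threshold rule), and all the numbers are invented for illustration; the point is only the comparison between training and holdout performance:

```python
import random

random.seed(0)

# Synthetic loan records: (credit_score, defaulted). Scores below 600
# default 60% of the time -- purely illustrative, not real loan data.
scores = [random.gauss(650, 80) for _ in range(1000)]
data = [(s, s < 600 and random.random() < 0.6) for s in scores]

# Hold out records the model never sees during development -- ideally
# from a later time period or a different population than training.
train, holdout = data[:700], data[700:]

def fit_threshold(records):
    """Pick the score cutoff that maximizes accuracy on the given records."""
    return max(range(500, 800, 10),
               key=lambda t: sum((s < t) == d for s, d in records))

def accuracy(threshold, records):
    """Fraction of records where 'score below threshold' matches the outcome."""
    return sum((s < threshold) == d for s, d in records) / len(records)

t = fit_threshold(train)
print(f"train accuracy:   {accuracy(t, train):.3f}")
print(f"holdout accuracy: {accuracy(t, holdout):.3f}")
# A large gap between the two numbers is the classic signature of over fitting;
# both numbers being poor is the signature of under fitting.
```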
Additionally, it is always good practice to revalidate your models periodically, to determine how quickly your model is degrading and whether it is still accomplishing your objectives.
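One way to make that periodic revalidation concrete is a small drift check that re-scores the model on each fresh batch of outcomes and flags months that fall too far below the first observed baseline. Everything here (the function names, the 5-point alert threshold, the sample batches) is a hypothetical sketch, not any particular library's API:

```python
def revalidate(accuracy_fn, monthly_batches, alert_threshold=0.05):
    """Re-score the model on each month's outcomes and flag degradation.

    accuracy_fn:     callable returning accuracy for one batch of records.
    monthly_batches: list of (month, records) pairs, oldest first.
    Returns a list of (month, accuracy, degraded) tuples.
    """
    baseline = None
    report = []
    for month, batch in monthly_batches:
        acc = accuracy_fn(batch)
        if baseline is None:
            baseline = acc  # first observed month becomes the reference point
        degraded = (baseline - acc) > alert_threshold
        report.append((month, acc, degraded))
    return report

# Each batch is a list of 1/0 flags: "was this prediction correct?"
batches = [("Jan", [1, 1, 1, 1, 0]),
           ("Feb", [1, 1, 1, 0, 0]),
           ("Mar", [1, 1, 0, 0, 0])]
report = revalidate(lambda b: sum(b) / len(b), batches)
for month, acc, degraded in report:
    print(month, f"{acc:.2f}", "DEGRADED" if degraded else "ok")
```

In practice the batches would be real loan outcomes as they mature, and a flagged month is the signal to retrain or retire the model before the false confidence described above costs you money.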
Just to sum things up: your model is under fitted when it does a poor job of predicting both the development and the production data, and over fitted when it predicts the development data well but falls apart in production.