There are a few reasons.
From a purely pragmatic perspective, it's a matter of time constraints. The time needed to solve a model grows far, far faster than the precision gained, and whatever level of precision you settle on is a subjective choice anyway.
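As a rough back-of-the-envelope sketch (the scaling factors below are my own illustrative assumptions, not figures from the question): for a 3D finite-element model, each halving of the element size multiplies the number of unknowns by about 8, while the cost of a sparse direct solve grows roughly like the square of the unknown count. The precision gained per refinement is modest; the growth in runtime is not.

```python
# Illustrative only: assumed scaling for a 3D FEM mesh-refinement study.
# Halving the element size -> ~8x more unknowns (2^3 in 3D).
# Sparse direct solve cost taken as ~O(n^2), a common rule of thumb for 3D.
element_size = 1.0
unknowns = 1_000
relative_cost = 1.0
for refinement in range(4):
    print(f"h = {element_size:.3f}: ~{unknowns:>9,} unknowns, "
          f"relative solve cost ~{relative_cost:,.0f}x")
    element_size /= 2          # refine the mesh for more precision
    unknowns *= 8              # 2^3 more elements in 3D
    relative_cost *= 8 ** 2    # assumed ~O(n^2) factorization cost
```

Each refinement cuts the discretization error by a modest factor while the assumed solve cost grows by roughly 64x, which is the "far, far faster" trade-off in practice.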
Compounding this, excessive accuracy is mostly useless. Your model might be 99.999% accurate for the given input values, but the real world is imprecise. Steel's modulus of elasticity, for example, has a tolerance of $\pm5$-$15\%$. So why bother with a super-accurate model if one of your key inputs can be off by 10%? (It goes without saying that the margins of error for other materials, such as concrete or soil, and for other variables, such as loading, are significantly higher.)
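To make that concrete, here is a minimal sketch (all load, length, and section values are made-up assumptions for illustration) that propagates a $\pm10\%$ tolerance on $E$ through the textbook cantilever tip-deflection formula $\delta = PL^3/(3EI)$:

```python
# Minimal sketch: how a +/-10% tolerance on steel's modulus E swamps any
# sub-percent precision in the model itself. Input values are assumptions.
P = 10_000.0        # tip load [N] (assumed)
L = 2.0             # beam length [m] (assumed)
I = 8.0e-6          # second moment of area [m^4] (assumed)
E_nominal = 200e9   # nominal modulus for steel [Pa]

def deflection(E):
    """Tip deflection of a cantilever with a point load at the free end."""
    return P * L**3 / (3 * E * I)

d_nom = deflection(E_nominal)
d_min = deflection(E_nominal * 1.10)   # stiffer steel -> smaller deflection
d_max = deflection(E_nominal * 0.90)   # softer steel -> larger deflection

print(f"nominal deflection: {d_nom * 1000:.2f} mm")
print(f"range from +/-10% on E alone: {d_min * 1000:.2f} to {d_max * 1000:.2f} mm")
```

The spread from the material tolerance alone is on the order of 10% of the answer, so a model that is "accurate" to five more decimal places adds nothing.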
For these reasons, there is no point in being too precise. In fact, it may be beneficial not even to try. The reasons here are mostly psychological: you don't want your model to appear more precise than it is, and you don't want to report results to seven decimal places, because that evokes a false sense of confidence.
The human brain is hardwired to treat 1.2393532697 as a more accurate value than 1.2. That's not actually the case. Given all the real-world uncertainties your model cannot possibly take into account (especially with current hardware limitations), 1.2 is almost certainly just as valid a result as 1.2393532697. So don't delude yourself or whoever reads your results. Just output 1.2, which transparently indicates that you don't really know what's going on after that second digit.
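One practical habit (a sketch of my own, not something prescribed above) is to round reported results to a number of significant figures consistent with the input uncertainty before anyone sees them:

```python
from math import floor, log10

def round_to_sig_figs(x: float, sig: int = 2) -> float:
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

raw_result = 1.2393532697                     # whatever the solver reports
print(round_to_sig_figs(raw_result, sig=2))   # -> 1.2
```

Reporting 1.2 instead of 1.2393532697 communicates honestly that the trailing digits are not supported by the inputs.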
Please define "accuracy" and "too much" here. You could have a model which predicts the uncertainty range to extremely high accuracy, or a model which reduces said uncertainty itself to a very small value. And so on. – Carl Witthoft – 2017-11-29T18:48:04.043
“Everything should be made as simple as possible, but no simpler.” Einstein. – Eric Duminil – 2017-11-30T11:22:04.033
"besides time (or computing power)" It seems all the answers missed this point. – agentp – 2017-11-30T13:44:21.200
@agentp On the contrary, the question answers itself by trying to exclude that. It's a silly thing to be in the question in the first place. – jpmc26 – 2017-11-30T19:40:40.650
Accuracy != Precision. It's the first thing I was taught in physics class. 3 is a more accurate representation of Pi than 3.5794. Given this differentiation, I don't think you are correct in assuming that an over accurate model is ever detrimental. Accurate means close to ground truth. – user247243 – 2017-12-01T07:49:58.107
@user247243 "I don't think you are correct in assuming that an over accurate model is ever detrimental." If one statistical model tells us that we need an 11.5-cup coffee maker and another takes ten times longer to tell us we need an 11.46124-cup coffee maker because our cups are slightly smaller than the norm, we've wasted a bunch of time coming to the same conclusion (that we will buy a 12-cup machine). – Myles – 2017-12-01T13:39:23.947
@Myles The problem is, the detrimental case you have listed is purely a time/computing power issue. There is no other detriment to using a model like that. OP has also specifically said that time and computation aren't being considered here. – JMac – 2017-12-01T19:42:50.810
@JMac Which is why it is a comment rather than an answer. – Myles – 2017-12-01T19:51:03.763
This is seriously the worst "highly upvoted" question I've ever seen. It is flat-out confusing. – agentp – 2017-12-02T01:42:06.307