There are a few reasons.
From a purely pragmatic perspective, it's a matter of time constraints. The time required to solve a model grows far, far faster than the precision gained, and whatever level of precision you settle on is subjective anyway.
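To make that scaling concrete, here is a rough back-of-the-envelope sketch (my own illustration, with assumed exponents): refining a 3D finite-element mesh, taking the discretization error as $O(h^2)$ and the solver cost as $O(n^2)$ in the number of unknowns $n$:

```python
# Back-of-the-envelope sketch with assumed scalings: discretization error
# ~ h^2 (linear elements) and solver cost ~ n^2 for n ~ h^-3 unknowns in 3D.
# Both exponents are idealizations chosen for illustration.
for level in range(4):
    h = 0.1 / 2**level                 # element size, halved each refinement
    error = h**2                       # error shrinks quadratically
    n = (1.0 / h)**3                   # unknowns grow cubically in 3D
    cost = n**2                        # assumed solver cost
    print(f"h={h:.4f}  error~{error:.1e}  unknowns~{n:.1e}  cost~{cost:.1e}")
```

Under these assumptions, each halving of $h$ buys a 4x error reduction but costs roughly 64x more solve time, which is why nobody refines forever.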
It also helps that excessive accuracy is mostly useless. After all, your model might be 99.999% accurate for the given input values, but the real world is imprecise. Steel's modulus of elasticity has a tolerance of $\pm5$-$15\%$, for example. So why bother with a super-accurate model if one of your key inputs can be off by 10%? (It goes without saying that the margins of error for other materials, such as concrete or soil, and for other variables, such as loading, are significantly higher.)
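To see how little a hyper-accurate model buys you here, consider a minimal sketch (the cantilever formula $\delta = PL^3/(3EI)$ is standard; all the input values are made up) propagating a $\pm10\%$ uncertainty in $E$ through to the result:

```python
# Minimal sketch: tip deflection of a cantilever, delta = P*L^3 / (3*E*I).
# P, L, I are hypothetical values; E is swept over +/-10% of a nominal
# 200 GPa to show how the input tolerance dominates the answer.
P, L, I = 10_000.0, 2.0, 8.0e-6        # N, m, m^4 (made-up inputs)
for E in (0.9 * 200e9, 200e9, 1.1 * 200e9):
    delta = P * L**3 / (3 * E * I)
    print(f"E = {E:.2e} Pa  ->  tip deflection = {delta * 1000:.2f} mm")
```

The deflection swings from about 15.2 to 18.5 mm, a roughly 10% band. A solver that nails the middle value to ten digits tells you nothing more than one that gets two.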
For these reasons, there is no point in being too precise. But beyond that, it can actually be beneficial not to even try to be too precise. The reasons here are mostly psychological: you don't want your model to look too precise, and you don't want to report your results to seven decimal places, because you don't want to evoke a false sense of confidence.
The human brain is hardwired to read 1.2393532697 as a more accurate value than 1.2. But that's simply not the case. Given all the real-world uncertainties your model cannot possibly take into account (especially under current hardware limitations), 1.2 is almost certainly just as valid a result as 1.2393532697. So don't delude yourself or whoever sees your model. Just output 1.2, which transparently indicates that you don't really know what's going on after that second digit.
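A small sketch of that reporting discipline (the 10% uncertainty figure is assumed, carried over from the steel example above):

```python
from math import floor, log10

# Round a raw result to the two significant figures that a ~10% input
# uncertainty actually supports, instead of reporting every solver digit.
raw_result = 1.2393532697
uncertainty = 0.10 * raw_result        # assumed ~10% combined uncertainty

sig_figs = 2
decimals = sig_figs - 1 - floor(log10(abs(raw_result)))
rounded = round(raw_result, decimals)
print(f"{rounded} +/- {uncertainty:.1f}")   # prints: 1.2 +/- 0.1
```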