It's a numerical artifact due to the finite precision of computer arithmetic. Indeed, when one computes

```
SetAccuracy[12.30*12, 15]
```

147.60000000000002

It's notable that the input is written in the decimal system (at least the user thinks of it that way), but the computer works with its binary representation. For example, $\frac{1}{10}$ is an exact number, but 0.1 has an infinite expansion when expressed in binary (i.e., $0.1_{10}=0.000110011001100110011001100110011..._2$; see also here). Hence the computer never works with exactly $\frac{1}{10}$ but with a rounded number:

```
SetAccuracy[0.1, 20]
```

0.1000000000000000056
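The same rounded value can be inspected outside Mathematica. As a sketch, assuming an IEEE 754 double environment (which is also what Mathematica's machine-precision numbers use), Python's `decimal` module converts the stored binary value of `0.1` to decimal exactly:

```python
from decimal import Decimal

# Decimal(float) converts the *stored* binary double to decimal exactly,
# exposing the same rounding that SetAccuracy reveals above.
exact = Decimal(0.1)
print(exact)
# 0.1000000000000000055511151231257827021181583404541015625
```

Rounded to 19 significant digits, this is precisely the `0.1000000000000000056` shown above.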

What you see above in the case of `12.30*12` is most likely the result of such *rounding*. It should also be noted that when a published result is given to, e.g., 20 decimal digits, this is in most cases not meaningful: every machine will give a slightly different output. The same computation on two different computers (or in different software) can give results that differ in the 15th, 12th, or some other decimal place.
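The artifact in the original question is easy to reproduce in any language using IEEE 754 doubles; the following Python sketch (an assumption that the platform uses standard doubles, as essentially all do) shows that the product lands one step above the double nearest to 147.6:

```python
from fractions import Fraction

# 12.3 is stored as a rounded binary double, so the product is computed
# on that rounded value and does not equal the double nearest to 147.6.
product = 12.3 * 12
print(repr(product))     # 147.60000000000002
print(product == 147.6)  # False

# Fraction(float) gives the exact rational value of the stored double,
# making the rounding of 12.3 visible.
print(Fraction(12.3))
```

The exact fraction shows that the stored value of `12.3` is slightly above $\frac{123}{10}$, which is what propagates into the 15th digit of the product.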

It looks like it is an artifact of how numbers are represented in binary internally. – Per Alexandersson – 2016-09-06T13:20:17.853

`Rationalize[12.30]*12` – Feyre – 2016-09-06T13:28:32.977