Why is finite precision a problem in machine learning?



Can you explain what finite precision is? Why is finite precision a problem in machine learning?


Posted 2015-12-08T16:37:35.857




Finite precision is the representation of a number that has been rounded or truncated to a fixed number of digits. There are many cases where this is necessary or appropriate. For example, 1/3 and the transcendental numbers $e$ and $\pi$ all have infinite decimal representations. In the programming language C, a `double` value is 8 bytes (64 bits) and precise to approximately 16 decimal digits. See here.


To concretely represent one of these numbers on a (finite) computer, there must be some sort of compromise. We could write 1/3 to 9 digits as 0.333333333, which is slightly less than 1/3.
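A quick sketch of this compromise in Python, whose `float` is a 64-bit IEEE 754 double with roughly 16 significant decimal digits:

```python
# 1/3 has no finite binary representation; the stored value is the
# nearest 64-bit double, accurate to about 16 significant digits.
third = 1 / 3
print(f"{third:.20f}")  # digits beyond ~16 are rounding artifacts

# The same compromise surfaces in ordinary decimal arithmetic:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```

Neither 0.1, 0.2, nor 0.3 is exactly representable in binary, so the sum of the stored approximations lands on a different double than the stored approximation of 0.3.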

These compromises compound under arithmetic operations, and numerically unstable algorithms amplify the resulting errors. This is why PCA is often computed via SVD rather than via an explicitly formed covariance matrix, whose computation is numerically unstable.
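A minimal illustration of why forming second-moment quantities like a covariance matrix directly can be unstable. The two variance formulas below are mathematically identical, but the first subtracts two large, nearly equal numbers (catastrophic cancellation), while the second centers the data first (the data values here are made up for illustration):

```python
def var_naive(xs):
    # E[x^2] - E[x]^2: subtracts two huge, nearly equal quantities,
    # so almost all significant digits cancel away.
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def var_two_pass(xs):
    # mean of (x - mu)^2: centering first keeps the numbers small.
    n = len(xs)
    mu = sum(xs) / n
    return sum((x - mu) ** 2 for x in xs) / n

data = [1e8 + 1.0, 1e8 + 2.0, 1e8 + 3.0]  # large mean, tiny spread
print(var_naive(data))     # wildly wrong: the true value 2/3 is lost
print(var_two_pass(data))  # 0.6666666666666666
```

Forming a covariance matrix entrywise runs into exactly this kind of cancellation, whereas an SVD works on the (centered) data matrix directly.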



In the naive Bayes classifier you will often see the product of probabilities transformed into a sum of logarithms, which is less prone to rounding errors and to underflow.
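A small sketch of why the log transform matters (the probabilities are made-up placeholders for per-word likelihoods): multiplying many small probabilities underflows a 64-bit double (smallest normal value is about 1e-308) to exactly zero, while the equivalent sum of logs stays in a comfortable range.

```python
import math

# Pretend each of 1000 words contributes a likelihood of 0.01.
probs = [0.01] * 1000

# Direct product: 0.01**1000 = 1e-2000 underflows to 0.0,
# destroying any ability to compare classes.
product = 1.0
for p in probs:
    product *= p
print(product)  # 0.0

# Sum of logs: the same score on a log scale, no underflow.
log_score = sum(math.log(p) for p in probs)
print(log_score)  # about -4605.17
```

Since log is monotonic, comparing log-scores between classes gives the same argmax as comparing the original products would, had they been computable.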





Thanks. Can you please explain how SVD solves the problem in PCA and how taking a sum of logs reduces the problem? Where is this sum of logs used in the naive Bayes classifier? – GeorgeOfTheRF – 2015-12-09T02:53:17.627

These are more in-depth questions, but I can provide some pointers. It "solves" the problem because PCA can be obtained from the SVD. See here for an excellent article: http://arxiv.org/pdf/1404.1100.pdf. SVD is preferred because it avoids explicitly forming the covariance matrix. Sum of logs in naive Bayes: http://blog.datumbox.com/machine-learning-tutorial-the-naive-bayes-text-classifier/

– None – 2015-12-09T03:37:40.210


One simple example: the vanishing gradient problem in deep learning. It is not primarily a finite-precision problem, but finite precision is part of it.
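A rough sketch of the mechanism (a toy calculation, not a full network): backpropagation through a stack of sigmoid layers multiplies one sigmoid derivative per layer, and that derivative is at most 0.25, so the gradient shrinks geometrically with depth.

```python
import math

def dsigmoid(x):
    # Derivative of the logistic sigmoid; its maximum is 0.25, at x = 0.
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

# Best case for every layer: each factor is the maximum 0.25.
grad = 1.0
for layer in range(50):
    grad *= dsigmoid(0.0)
print(grad)  # 0.25**50, about 7.9e-31: far too small to drive learning
```

Mathematically the gradient is tiny but nonzero; with enough layers it eventually underflows the double range to exactly 0.0, which is where finite precision turns a small gradient into no gradient at all.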

Martin Thoma

