Why is eigenvalue computation faster for inexact arithmetic?


I have been trying to compute the eigenvalues of a rather sizable matrix A, about $500 \times 500$ (but sparse). I asked Mathematica to compute Eigenvalues[A] and left it to work. After a night of computation, Mathematica had still failed to produce an answer. Just out of curiosity, I tried to see what would happen if I replaced A by its numeric approximation (before, A had integer entries, hence was exact). (I didn't quite expect it to make a difference; I was just trying some random things.) Much to my surprise, the answer appeared immediately. Hence my question: how does it happen that Mathematica produces the answer so much faster for inexact input? And to what extent can this output be trusted?

Jakub Konieczny

Posted 2013-05-07T07:22:08.927

Reputation: 443

A simple answer is that inexact values make use of built-in hardware support for arithmetic. – image_doctor – 2013-05-07T07:28:23.837

For a matrix that large, it is often infeasible to ask for exact solutions. "To what extent can this output be trusted?" - I don't trust output entirely, but there is the procedure of using Mathematica's arbitrary-precision capabilities, asking for the eigenvalues to 20 digits (Eigenvalues[N[A, 20]]), 30 digits, ... and so on. If there is some agreement between these results, then I trust the results a little bit more. – J. M.'s ennui – 2013-05-07T07:42:22.253
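That precision-agreement check might look like the following sketch (here A is the matrix from the question, and the digit counts are only illustrative):

```mathematica
(* Recompute the eigenvalues at two different working precisions;     *)
(* agreement between the runs lends confidence in the shared digits.  *)
eigs20 = Eigenvalues[N[A, 20]];
eigs30 = Eigenvalues[N[A, 30]];

(* Eigenvalues sorts results by decreasing absolute value, so assuming
   the two orderings coincide, the elementwise difference is tiny when
   the computations agree: *)
Max[Abs[eigs20 - eigs30]]
```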

In addition to the complexity of solving for roots, manipulation of exact values is, in general, time-consuming. In efficiently vectorized machine-precision code, a CPU may perform up to something like 16 basic arithmetic operations per clock cycle, thanks to the direct machine representation of the data. With exact non-integer values, the cost of a single arithmetic operation probably starts in the low dozens of clock cycles. So, unless you specifically require exact symbolic results, avoid them. Often finding them is not even feasible. Sometimes tools such as RootApproximant may save you, though. – kirma – 2013-05-07T09:03:23.093

Try a simple experiment. Do these computations on a smaller matrix, one for which the exact computation finishes in reasonable time. Then look at the results. Look very hard at the results. They will give a clue as to why one takes hugely longer than the other. – Daniel Lichtblau – 2013-05-07T14:00:07.840
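A sketch of that experiment, using a hypothetical small random integer matrix (the size and the matrix itself are illustrative):

```mathematica
(* A small exact integer matrix, for which the exact computation
   actually finishes *)
m = RandomInteger[{-9, 9}, {6, 6}];

AbsoluteTiming[Eigenvalues[m]]     (* exact input: algebraic numbers over
                                      a degree-6 characteristic polynomial *)
AbsoluteTiming[Eigenvalues[N[m]]]  (* inexact input: plain machine-precision
                                      numbers *)
```

Inspecting the two outputs shows why the exact case is so much more expensive: each exact eigenvalue is carried around as an algebraic number rather than as a single floating-point value.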



Calculating eigenvalues involves solving for the roots of the characteristic polynomial, whose degree equals the size of the matrix. When you input real numbers, Mathematica can search for the roots of the polynomial using numerical techniques. When you input exact integers (or rationals, probably) it tries to find exact answers for the roots of the polynomial. This is a much more difficult task, since closed-form solutions in radicals exist, in general, only up to degree four.
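A small example makes the difference visible (a hypothetical $3 \times 3$ companion matrix whose characteristic polynomial is $x^3 - x - 1$):

```mathematica
m = {{0, 1, 0}, {0, 0, 1}, {1, 1, 0}};

Eigenvalues[m]     (* exact input: the eigenvalues come back as Root[]
                      objects, i.e. exact algebraic numbers *)
Eigenvalues[N[m]]  (* inexact input: machine-precision approximations *)
```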

In terms of trusting the answers, J.M.'s suggestions are good. It would also make sense to check the answers using independent methods. For instance, the product of the eigenvalues equals the determinant. So you could take the eigenvalues you get and multiply them together; if the product is close to the Det, this provides some confidence. If they differ substantially, the answer should be treated with suspicion.
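That check is a one-liner (A as in the question); the trace gives a second, even cheaper consistency test, since the sum of the eigenvalues equals the trace. Note that for a $500 \times 500$ matrix both the determinant and the product can be astronomically large, so a relative comparison may be more meaningful than an absolute one.

```mathematica
eigs = Eigenvalues[N[A]];

Abs[Times @@ eigs - Det[N[A]]]  (* product of eigenvalues vs. determinant *)
Abs[Total[eigs] - Tr[A]]        (* sum of eigenvalues vs. trace *)
```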

bill s

Posted 2013-05-07T07:22:08.927

Reputation: 62 963

Actually, for quintics and higher degrees, Mathematica is certainly forced to express the results in terms of Root[], whereupon the effort is in determining the (coefficients of the) minimal polynomial corresponding to the eigenvalue, plus root isolation. All this takes a nontrivial amount of work. – J. M.'s ennui – 2013-05-07T11:45:27.087

At least for inexact arithmetic, I'm pretty sure Mathematica never looks for the roots of a polynomial (because that's not numerically stable). It probably uses an iterative method like Jacobi's. – Niki Estner – 2013-05-07T12:34:13.953

I remember taking a class on numerical methods. We read a paper called "18 Ways not to Calculate the Matrix Exponential" -- the various complexities of eigenvector calculations also played a large role. So yes, I'm sure they are doing all sorts of clever things. – bill s – 2013-05-07T12:44:55.807

@nikie, at least for eigenvalues, it uses the QR algorithm in the inexact case. – J. M.'s ennui – 2013-05-07T12:46:43.237

Nice summary of the QR algorithm at wikipedia: http://en.wikipedia.org/wiki/QR_algorithm

– bill s – 2013-05-07T13:35:43.730

@bill, nineteen ways, actually. ;) See this and its update.

– J. M.'s ennui – 2013-05-07T20:43:42.260

@J.M. Thanks -- makes one wonder what is going on inside MatrixExp. Is there an easy way to find this kind of thing out? For instance -- how did you know that the answer for eigenvalues is the QR decomposition? – bill s – 2013-05-08T04:56:58.857

Oh, for MatrixExp[], that depends on whether the matrix given is exact or inexact. For eigenvalues in inexact arithmetic, Mathematica uses LAPACK internally, and the QR algorithm is the method used for dense problems in that package. (Sparse matrices are a totally different matter.)

– J. M.'s ennui – 2013-05-08T05:25:39.050

(P.S. the QR decomposition and the QR algorithm are two different things; the former is a finite algorithm, while the latter is iterative, and uses the QR decomposition as a building block.) – J. M.'s ennui – 2013-05-08T05:26:53.007
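To illustrate the distinction, here is a bare-bones unshifted QR iteration, a sketch only, with none of the shifts, deflation, or Hessenberg reduction that make LAPACK's implementation practical:

```mathematica
(* One step of the QR algorithm: factor A (a QR decomposition), then
   form A' = R.Q, which is similar to A and so has the same eigenvalues.
   Iterating drives many matrices toward triangular form, exposing the
   eigenvalues on the diagonal. *)
qrEigenvalues[a_, iters_: 200] := Module[{m = N[a], q, r},
  Do[
   {q, r} = QRDecomposition[m];     (* m == ConjugateTranspose[q].r *)
   m = r.ConjugateTranspose[q],
   {iters}];
  Diagonal[m]]  (* approximate eigenvalues, for well-behaved spectra *)
```

Note that Mathematica's QRDecomposition returns {q, r} with the input equal to ConjugateTranspose[q].r, hence the transpose in the update step.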