## Compiling LinearSolve[] or creating a compilable procedural version of it


Earlier today I had a discussion with a representative at Premier Support about the 2 questions I've asked here over the past couple of days:

Neither the conversation nor the answers to the above questions have produced a solution as good as compiling my function to C code. This leaves me with a problem: the function I need to deploy uses 4 high-level Mathematica functions which don't (or at least don't readily) compile:

CholeskyDecomposition[]
IdentityMatrix[]
LinearSolve[]
Simplify[]


Yesterday I made some progress developing/adapting solutions for procedural versions of CholeskyDecomposition[] and IdentityMatrix[], so these at least should compile.
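As a cross-check on that kind of procedural port, here is a loop-only Cholesky sketch in Python (the function name and test matrix are mine; note that Mathematica's CholeskyDecomposition[] returns the upper-triangular factor, while this sketch builds the lower one):

```python
import numpy as np

def cholesky_procedural(a):
    """Plain-loop Cholesky factorization: returns lower-triangular L
    with L @ L.T == a. Assumes a is symmetric positive definite."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i, k] * L[j, k] for k in range(j))
            if i == j:
                pivot = a[i, i] - s
                if pivot <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i, i] = np.sqrt(pivot)
            else:
                L[i, j] = (a[i, j] - s) / L[j, j]
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = cholesky_procedural(A)
# L @ L.T reproduces A
```

The same fixed-size loops and scalar arithmetic are exactly what Compile handles well, which is why this style of port works.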

Given the anticipated use of my function, maybe I can get away without using Simplify[].

This would leave me still needing a procedural equivalent to LinearSolve[] or some means of compiling it.

Maybe a simpler task, because I don't need a myLinearSolve[] to do everything Mathematica's version does.

Interestingly, in my conversation with the guy from support, he suggested that, given the limited nature of what I wanted to do, setting options on LinearSolve[] should give me a reduced-scope version of LinearSolve[] that Mathematica could compile. He couldn't specify which options.

So let's explore this.

A typical use of LinearSolve[] in my current function looks like this:

c = {0.516344, 0.287671, 0.216344, 0.176796, 1};
A = {{0, 1, 1/2, 1/2, 1/2}, {0, 0, Sqrt[3]/2, 1/(2 Sqrt[3]), 1/(2 Sqrt[3])}, {0, 0, 0, Sqrt[2/3], 1/(2 Sqrt[6])}, {0, 0, 0, 0, Sqrt[5/2]/2}, {1, 1, 1, 1, 1}};

LinearSolve[A, c]

{0.173339, 0.206027, 0.187944, 0.209058, 0.223631}


No symbolic processing. No imaginary numbers. No SparseArray objects. A is always a square matrix; c is always a vector; and the output is always a real vector.
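For reference, this numeric use case can be reproduced outside Mathematica; a NumPy sketch (with the Sqrt[...] entries of A reconstructed from the regular-simplex pattern of the matrix, which reproduces the printed solution vector) is:

```python
import numpy as np

# Reconstructed numeric form of the example system.
s3, s6 = np.sqrt(3.0), np.sqrt(6.0)
A = np.array([
    [0.0, 1.0, 1/2,           1/2,           1/2],
    [0.0, 0.0, s3/2,          1/(2*s3),      1/(2*s3)],
    [0.0, 0.0, 0.0,           np.sqrt(2/3),  1/(2*s6)],
    [0.0, 0.0, 0.0,           0.0,           np.sqrt(5/2)/2],
    [1.0, 1.0, 1.0,           1.0,           1.0],
])
c = np.array([0.516344, 0.287671, 0.216344, 0.176796, 1.0])
x = np.linalg.solve(A, c)
# x ≈ {0.173339, 0.206027, 0.187944, 0.209058, 0.223631}
```

Because the last row of A is all ones and the last entry of c is 1, the solution components always sum to 1.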

Question 1 -- Does anyone have any insight into how to set options for the above use of LinearSolve[] so it will compile or if this is even possible?

Note: Oleksandr's answer in Why is MainEvaluate being used when LinearSolve can be compiled? may have some bearing on this.

...

If no way forward comes from the above, I may still have a chance to implement a limited procedurally based proceduralLinearSolve[].

If very distant memory serves me, one can use the inverse of a square matrix for some problems addressable with LinearSolve[]. This might give me a more specific way forward, except Mathematica's implementation of it, Inverse[], isn't on the list of compilable functions either.
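The relationship is just x = A⁻¹c, so both routes give the same answer; a small NumPy sketch (with arbitrary test data of my own) illustrates it, though solving directly is the numerically preferred route:

```python
import numpy as np

# Two routes to the same solution of A x = c: an explicit inverse,
# versus solving the system directly. Direct solving avoids forming
# the inverse and is generally more accurate and cheaper.
rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 4 * np.eye(4)   # well-conditioned test matrix
c = rng.random(4)

x_inverse = np.linalg.inv(A) @ c
x_solve = np.linalg.solve(A, c)
# both satisfy A @ x == c
```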

I have found some code for calculating the inverse of a square matrix. Not even certain of the language, but I can probably follow its logic and port it to a proceduralInverse[] function in Mathematica.

This background takes me to my second question...

Question 2 -- Does creating a proceduralLinearSolve[] or proceduralInverse[] seem like a viable way to go (has anyone tried this kind of thing and succeeded) and/or can anyone point me in the direction of resources or solutions that could help me do this?

9

These functions are not compilable because they are calls to the Intel MKL. Don't waste your time coding them yourself, and for a small function there is no reason to use a high-performance BLAS library either. Use LAPACK/CLAPACK (example) or Eigen. The response of the support guy is curious and, I think, incorrect.

– Oleksandr R. – 2013-01-14T20:27:37.947

Depending on how systematic the elements you are passing to LinearSolve are, you could just solve it once and compile the resulting formula for the solution instead of trying to compile the function itself. – jVincent – 2013-01-14T21:51:51.640

Using Inverse is not advisable. Typically, one uses LinearSolve to avoid inverting matrices. – asim – 2013-01-14T23:06:39.613

@OleksandrR. -- Thanks for the suggestions. I have hoped that I could rely on Mathematica for all my development needs. It just seems odd that with all the amazing things Mathematica does, it gets tripped up on deploying solutions. – Jagra – 2013-01-14T23:58:41.560

@jVincent -- Interesting idea. Not certain how to apply it in my immediate case as the LinearSolve returns a vector of reals, but I'll give it more thought. Thx – Jagra – 2013-01-15T00:02:53.113

@Jagra You shouldn't be concerned with the return value. The question is what your input matrix looks like. For instance, if it was {{a,b},{0,d}} with rhs {r1,r2}, where you know that a and d are non-zero, then you can call it once to see that your solution will always be: {(d r1 - b r2)/(a d), r2/d}. Now this method of course will not work if you have an arbitrary matrix, but if you have a pattern and only some inputs, you can just solve once and compile the resulting definition. – jVincent – 2013-01-15T00:14:36.403
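That pre-solved formula is easy to check numerically; a quick NumPy sketch with arbitrary test values of my own:

```python
import numpy as np

# jVincent's example: for {{a, b}, {0, d}} with rhs {r1, r2}, the
# pre-solved formula is {(d*r1 - b*r2)/(a*d), r2/d}.
a, b, d = 2.0, 3.0, 5.0
r1, r2 = 7.0, 11.0
closed_form = np.array([(d*r1 - b*r2)/(a*d), r2/d])
direct = np.linalg.solve(np.array([[a, b], [0.0, d]]),
                         np.array([r1, r2]))
# the closed form and the direct solve agree
```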

@jVincent -- Ah! I think I see how that could work. If I follow you, in my case the size of the matrix would determine the formula. Seems like I could precompute and compile a range of formulas for different size matrices and select the one to use in the function. I think this deserves consideration as a possible answer if you want to flesh it out a bit. – Jagra – 2013-01-15T01:02:00.123

@Jagra could you provide an update about what the general shape of your matrices is? – jVincent – 2013-01-15T01:14:06.063

@jVincent -- Already in there. "A" always a square matrix. In the example shown 5 x 5, but in typical use it might run from 2 x 2 to 20 x 20. – Jagra – 2013-01-15T02:11:37.907

@Jagra But you never change any elements of it? Then it's trivial to just solve it, since you don't even need to check that pivots are non-zero. – jVincent – 2013-01-15T10:17:24.077

@jVincent -- The structure as described in my comment remains the same: a square matrix and an equal-length vector. The values within the matrix and vector change constantly. I don't think I should ever have a 0 value in the matrix. – Jagra – 2013-01-15T13:30:29.600

@Jagra What I was asking was how it changes. You posted a constant matrix and a constant array. They by definition never change. If you could, for instance, provide a matrix of the form {{a,b},{0,4}} and say that a and b take varying values, that would be a useful description. Just saying "the values within the matrix and vector change constantly" is not useful: how do they change, do they all change, which ones change, etc. This is the information needed in order to judge whether you just need a single solution or a complete framework like LAPACK. – jVincent – 2013-01-15T13:44:33.300

@jVincent -- Let me take another try at explaining the structure of the matrices (A above), as I misspoke and confused things in my last comment. In all cases the following holds: A[[1,1]] = 0, A[[2,1;;2]]={0,0}, A[[3,1;;3]]= {0,0,0}. This pattern continues except for the last row in the matrix, where all values will equal 1. All other values in the matrix may change. For the vector, the last element will always equal 1; all others can vary. Hope this helps. – Jagra – 2013-01-15T15:56:44.650

9

In many cases you don't really need to have LinearSolve compiled, since you are dealing with, for instance, only a specific type of matrix, or maybe just want to solve the same system with different right-hand sides. In such cases you can evaluate the definitions once and compile the resulting expressions. Here is a rough example where I just injected two unknowns into your matrix.

c = {0.516344, 0.287671, 0.216344, 0.176796, 1};
A = {{0, b, 1/2, 1/2, 1/2}, {0, 0, Sqrt[3]/2, a/(2 Sqrt[3]),
   1/(2 Sqrt[3])}, {0, 0, 0, Sqrt[2/3], 1/(2 Sqrt[6])}, {0, 0, 0, 0,
   Sqrt[5/2]/2}, {1, 1, 1, 1, 1}};

fun[a_, b_] = LinearSolve[A, c]
funC = Compile[{{a, _Real}, {b, _Real}}, Evaluate[fun[a, b]]]

CompiledFunction[{a,b},
Block[{Compile$2,Compile$9},
Compile$2=1/a; Compile$9=1. b;
{0.0696861 Compile$2 (-2.4565+4.44393 a-0.5 b+1. a b), 0.0348431 Compile$2 (4.913 +Compile$9), -0.0696861 (-3.69701+Compile$9),
0.209058,
0.223631}] ,-CompiledCode-]


However, when doing this, it's important to remember that you need to check your inputs yourself. For instance, in the case of A={{p1,0},{0,p2}}, you would need to check that neither p1 nor p2 is zero.
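A minimal sketch of that guard, in Python with test values of my own (the function name is hypothetical):

```python
import numpy as np

# With a pre-solved formula you lose LinearSolve's own error handling,
# so pivots must be guarded by hand. For the diagonal case
# {{p1, 0}, {0, p2}}, the pre-solved formula is simply {r1/p1, r2/p2}.
def solve_diag(p1, p2, r1, r2):
    if p1 == 0.0 or p2 == 0.0:
        raise ZeroDivisionError("zero pivot: system is singular")
    return np.array([r1 / p1, r2 / p2])

x = solve_diag(2.0, 4.0, 6.0, 8.0)
# x == [3.0, 2.0]
```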

13

Rob Knapp has some excellent publicly available notebooks for some of the things you might want. Here are three in particular that may be useful.

http://www.mathematica-journal.com/issue/v7i4/features/knapp/

http://library.wolfram.com/infocenter/Conferences/288/

http://library.wolfram.com/infocenter/Conferences/7968/

Among other things they give explicit implementations of Cholesky decomposition and linear solving, very likely suitable to run through Compile with C as target.

Thanks for the links. I've started to breathe easier. – Jagra – 2013-01-14T22:05:41.997

Daniel -- I found Rob's Cholesky Decomposition and it looks much more robust than what I've come up with so far. He runs through a number of solving methods; did you have an idea of which of the ones he demonstrates would apply? – Jagra – 2013-01-15T01:04:47.637

Not offhand. Probably will depend on details of your intended application, typical inputs, etc. – Daniel Lichtblau – 2013-01-15T14:26:16.190

10

Using the great resource for core numerical algorithms below as a basis, I managed to implement a compiled linear solve which doesn't call MainEvaluate (so it is quite fast).

I needed a linear solve for an optimization whose objective function requires inverting matrices. I was hesitant to use C++ and preferred to stay in Mathematica.

Resources

Example

A=RandomReal[UniformDistribution[],{3,3}];
b={1,2,3};
A.LinearSolveCompiled[A,b]


Implementation

LUCompiled =
Compile[{{A0,_Real,2}},
Module[{i,j,k,p,r,A,n,L,c,U, P},

A=A0;
n=Length@A;
r=Table[j,{j,n}];

For[p=1,p<=n-1,p++,
For[j=p+1,j<=n,j++,
If[Abs[A[[r[[j]],p]]]>Abs[A[[r[[p]],p]]],
r[[{j,p}]]=r[[{p,j}]];
];
];

For[k=p+1,k<=n,k++,
A[[r[[k]],p]]=A[[r[[k]],p]]/A[[r[[p]],p]];

For[c=p+1,c<=n,c++,
A[[r[[k]],c]]=A[[r[[k]],c]]-A[[r[[k]],p]]*A[[r[[p]],c]];
];
];
];

L=Table[0.,{n},{n}];
For[i=1,i<=n,i++,
L[[i,i]]=1.;
];

P = L;
P = P[[r]];

U=A[[r]];

For[i=1,i<=n,i++,
For[j=1,j<=i-1,j++,
L[[i,j]]=A[[r[[i]],j]];
U[[i,j]]=0.;
];
];

Join[L,U,P]
]
];

BackSubCompiled=
Compile[{{a,_Real,2},{b,_Real,1}},
Module[{i,j,x,sum,n},

n=Length@a;
x=Table[0.,{n}];

For[i=n,1<=i,i--,

sum=0.;
For[j=i+1,j<=n,j++,
sum+=a[[i,j]]*x[[j]];
];

x[[i]]=(b[[i]]-sum)/a[[i,i]];
];

x
]
];

ForeSubCompiled=
Compile[{{a,_Real,2},{b,_Real,1}},
Module[{i,j,x,sum,n},

n=Length@a;
x=Table[0.,{n}];

For[i = 1, i <= n, i++,

sum=0.;
For[j=1,j<=i-1,j++,
sum+=a[[i,j]]*x[[j]];
];

x[[i]] = (b[[i]] - sum)/a[[i,i]]
];

x
]
];

(*Solves x so that A.x = b*)
LinearSolveCompiled=
Compile[{{A,_Real,2},{b,_Real,1}},
Module[{n,luResult, L, U, P, x1, x2},

n=Length@A;
luResult=LUCompiled[A];

L=luResult[[;;n]];
U=luResult[[n+1;;2n]];
P=luResult[[2n+1;;3n]];

x1=ForeSubCompiled[L,P.b];
x2=BackSubCompiled[U,x1];

x2
]
,
CompilationOptions->{"InlineCompiledFunctions"->True,"InlineExternalDefinitions"->True}
];
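For cross-checking the logic, the same pivoted-LU pipeline (factor, permute, forward-substitute, back-substitute) can be sketched in Python; the function names and test system here are mine:

```python
import numpy as np

def lu_partial_pivot(a0):
    """Doolittle LU with row pivoting, mirroring LUCompiled above:
    returns (L, U, P) with P @ A == L @ U."""
    a = np.array(a0, dtype=float)
    n = len(a)
    r = list(range(n))                      # row-permutation bookkeeping
    for p in range(n - 1):
        # move the largest remaining pivot in column p to position p
        for j in range(p + 1, n):
            if abs(a[r[j], p]) > abs(a[r[p], p]):
                r[j], r[p] = r[p], r[j]
        for k in range(p + 1, n):
            a[r[k], p] /= a[r[p], p]        # store multiplier in place
            for c in range(p + 1, n):
                a[r[k], c] -= a[r[k], p] * a[r[p], c]
    P = np.eye(n)[r]
    U = np.triu(a[r])
    L = np.tril(a[r], -1) + np.eye(n)
    return L, U, P

def fore_sub(a, b):   # solve lower-triangular a x = b
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - a[i, :i] @ x[:i]) / a[i, i]
    return x

def back_sub(a, b):   # solve upper-triangular a x = b
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - a[i, i+1:] @ x[i+1:]) / a[i, i]
    return x

def linear_solve(A, b):
    L, U, P = lu_partial_pivot(A)
    return back_sub(U, fore_sub(L, P @ b))

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])
x = linear_solve(A, b)
# A @ x reproduces b
```

Note that the Mathematica version returns Join[L, U, P] as one stacked matrix only because Compile needs a single rectangular result; the Python sketch can simply return a tuple.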


Checking for calls to MainEvaluate

FastCompiledFunctionQ[function_CompiledFunction]:=
(
Needs["CompiledFunctionTools"];
StringFreeQ[CompiledFunctionTools`CompilePrint@function, "MainEvaluate"]
);

LinearSolveCompiled // FastCompiledFunctionQ (*True*)