Earlier today I had a discussion with a representative at Premier Support about the two questions I've asked here over the past couple of days:

- Seeking strategies to deploy a function securely without a front end
- Can one launch a Player Pro kernel independently of the front end?

Neither the conversation nor the answers to the above questions have produced a solution as good as compiling my function to C code. This leaves me with a problem: the function I need to deploy uses four high-level Mathematica functions which don't (or maybe don't readily) compile:

```
CholeskyDecomposition[]
IdentityMatrix[]
LinearSolve[]
Simplify[]
```

Yesterday I made some progress developing/adapting solutions for procedural versions of `CholeskyDecomposition[]` and `IdentityMatrix[]`, so these at least should compile.

Given the anticipated use of my function, maybe I can get away without using `Simplify[]`.

This would leave me still needing a procedural equivalent to `LinearSolve[]`, or some means of compiling it. This may prove simpler than it sounds, because I don't need a `myLinearSolve[]` to do everything Mathematica's version does.
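To make that scope concrete, here is a rough, untested sketch of what such a reduced solver might look like: Gaussian elimination with partial pivoting plus back substitution, written (as far as I can tell) using only constructs on the compilable-functions list. The name `proceduralLinearSolve` and the structure are mine, and there is no singularity handling, so treat this as a starting point rather than a finished implementation:

```
(* Sketch: Compile-friendly linear solver for a real square matrix a
   and real vector c. Gaussian elimination with partial pivoting,
   then back substitution. No checks for singular matrices. *)
proceduralLinearSolve = Compile[{{a, _Real, 2}, {c, _Real, 1}},
  Module[{m = a, b = c, n = Length[c], piv, t, f, x},
   x = Table[0., {n}];
   Do[
    (* partial pivoting: pick the row with the largest |entry| in column k *)
    piv = k;
    Do[If[Abs[m[[i, k]]] > Abs[m[[piv, k]]], piv = i], {i, k + 1, n}];
    If[piv != k,
     Do[t = m[[k, j]]; m[[k, j]] = m[[piv, j]]; m[[piv, j]] = t, {j, k, n}];
     t = b[[k]]; b[[k]] = b[[piv]]; b[[piv]] = t];
    (* eliminate entries below the pivot *)
    Do[
     f = m[[i, k]]/m[[k, k]];
     Do[m[[i, j]] -= f m[[k, j]], {j, k, n}];
     b[[i]] -= f b[[k]],
     {i, k + 1, n}],
    {k, 1, n - 1}];
   (* back substitution *)
   Do[x[[k]] = (b[[k]] - Sum[m[[k, j]] x[[j]], {j, k + 1, n}])/m[[k, k]],
    {k, n, 1, -1}];
   x]];
```

On the kind of input shown below (real square `A`, real `c`), this should agree with `LinearSolve[A, c]`, but I'd welcome corrections if any of these constructs turn out not to compile.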

Interestingly, in my conversation with the guy from support, he suggested that, given the limited nature of what I want to do, setting options on `LinearSolve[]` should give me a reduced-scope version of `LinearSolve[]` that Mathematica could compile. He couldn't specify which options.

So let's explore this.

A typical use of `LinearSolve[]` in my current function looks like this:

```
c = {0.516344, 0.287671, 0.216344, 0.176796, 1};
A = {{0, 1, 1/2, 1/2, 1/2}, {0, 0, Sqrt[3]/2, 1/(2 Sqrt[3]), 1/(2 Sqrt[3])}, {0, 0, 0, Sqrt[2/3], 1/(2 Sqrt[6])}, {0, 0, 0, 0, Sqrt[5/2]/2}, {1, 1, 1, 1, 1}};
LinearSolve[A, c]
{0.173339, 0.206027, 0.187944, 0.209058, 0.223631}
```

No symbolic processing. No imaginary numbers. No `SparseArray` objects. `A` is always a square matrix; `c` is always a vector of the same length; and the output is always a real vector.

**Question 1** -- Does anyone have any insight into how to set options for the above use of `LinearSolve[]` so that it will compile, or whether this is even possible?

*Note: Oleksandr's answer to "Why is MainEvaluate being used when LinearSolve can be compiled?" may have some bearing on this.*
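For reference, this is what I see when I try the naive approach of wrapping the call in `Compile` directly — the compiled function just falls back to the main evaluator, which is the behavior that linked question discusses:

```
Needs["CompiledFunctionTools`"];
cf = Compile[{{a, _Real, 2}, {c, _Real, 1}}, LinearSolve[a, c]];
CompilePrint[cf]
(* the printout contains a MainEvaluate instruction, i.e. LinearSolve
   itself is not compiled, only called back out to the kernel *)
```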

...

If no way forward comes from the above, I may still have a chance to implement a limited, procedurally based `proceduralLinearSolve[]`.

If very distant memory serves, one can use the inverse of a square matrix for some problems addressable with `LinearSolve[]`. This might give me a more specific way forward, except that Mathematica's implementation of it, `Inverse[]`, doesn't fall on the list of compilable functions either.

I have found some code for calculating the inverse of a square matrix. I'm not even certain of the language, but I can probably follow its logic and port it to a `proceduralInverse[]` function in Mathematica.
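In case it helps frame the question, here is the shape I have in mind for such a port: Gauss-Jordan elimination on the augmented matrix [A | I]. Again, the name is mine, this is an untested sketch with no singularity checks, and I've included partial pivoting since zeros can occur on the diagonal (as in my example `A` above):

```
(* Sketch: Compile-friendly matrix inverse via Gauss-Jordan elimination
   on the augmented matrix [A | I], with partial pivoting. Assumes a
   real, square, invertible matrix; no singularity handling. *)
proceduralInverse = Compile[{{a, _Real, 2}},
  Module[{n = Length[a], m, piv, f, t},
   (* build the augmented matrix m = [A | I] *)
   m = Table[If[j <= n, a[[i, j]], If[j - n == i, 1., 0.]],
     {i, n}, {j, 2 n}];
   Do[
    (* partial pivoting on column k *)
    piv = k;
    Do[If[Abs[m[[i, k]]] > Abs[m[[piv, k]]], piv = i], {i, k + 1, n}];
    If[piv != k,
     Do[t = m[[k, j]]; m[[k, j]] = m[[piv, j]]; m[[piv, j]] = t,
      {j, 1, 2 n}]];
    (* normalize the pivot row, then clear column k in all other rows *)
    f = m[[k, k]];
    Do[m[[k, j]] /= f, {j, 1, 2 n}];
    Do[If[i != k,
      f = m[[i, k]];
      Do[m[[i, j]] -= f m[[k, j]], {j, 1, 2 n}]], {i, 1, n}],
    {k, 1, n}];
   (* the right half of m is now the inverse *)
   Table[m[[i, n + j]], {i, n}, {j, n}]]];
```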

This background takes me to my second question...

**Question 2** -- Does creating a `proceduralLinearSolve[]` or `proceduralInverse[]` seem like a viable way to go (has anyone tried this kind of thing and succeeded), and/or can anyone point me in the direction of resources or solutions that could help me do this?

These functions are not compilable because they are calls to the Intel MKL. Don't waste your time coding them yourself, and for a small function there is no reason to use a high-performance BLAS library either. Use LAPACK/CLAPACK (example) or Eigen. The response of the support guy is curious and, I think, incorrect.

– Oleksandr R. – 2013-01-14T20:27:37.947

Depending on how systematic the elements you are passing to LinearSolve are, you could just solve it once and compile the resulting formula for the solution instead of trying to compile the function itself. – jVincent – 2013-01-14T21:51:51.640

Using `Inverse` is not advisable. Typically, one uses `LinearSolve` to avoid inverting matrices. – asim – 2013-01-14T23:06:39.613

@OleksandrR. -- Thanks for the suggestions. I had hoped that I could rely on Mathematica for all my development needs. It just seems odd that with all the amazing things Mathematica does, it gets tripped up on deploying solutions. – Jagra – 2013-01-14T23:58:41.560

@jVincent -- Interesting idea. Not certain how to apply it in my immediate case, as LinearSolve returns a vector of reals, but I'll give it more thought. Thx – Jagra – 2013-01-15T00:02:53.113

@Jagra You shouldn't be concerned with the return value. The question is what your input matrix looks like. For instance, if it was `{{a, b}, {0, d}}` with rhs `{r1, r2}`, where you know that `a` and `d` are non-zero, then you can call it once to see that your solution will always be `{(d r1 - b r2)/(a d), r2/d}`. Now this method of course will not work if you have an arbitrary matrix, but if you have a pattern and only some inputs, you can just solve once and compile the resulting definition. – jVincent – 2013-01-15T00:14:36.403

@jVincent -- Ah! I think I see how that could work. If I follow you, in my case the size of the matrix would determine the formula. Seems like I could precompute and compile a range of formulas for different size matrices and select the one to use in the function. I think this deserves consideration as a possible answer if you want to flesh it out a bit. – Jagra – 2013-01-15T01:02:00.123

@Jagra could you provide an update about what the general shape of your matrices is? – jVincent – 2013-01-15T01:14:06.063

@jVincent -- Already in there. "A" always a square matrix. In the example shown 5 x 5, but in typical use it might run from 2 x 2 to 20 x 20. – Jagra – 2013-01-15T02:11:37.907

@Jagra But you never change any elements of it? Then it's trivial to just solve it, since you don't even need to check that pivots are non-zero. – jVincent – 2013-01-15T10:17:24.077

@jVincent -- The structure as described in my comment remains the same: square matrix and an equal-length vector. The values within the matrix and vector change constantly. I don't think I should ever have a 0 value in the matrix. – Jagra – 2013-01-15T13:30:29.600

@Jagra What I was asking was how it changes. You posted a constant matrix and a constant array; by definition they never change. If you could, for instance, provide a matrix of the form {{a,b},{0,4}} and say that a and b take different values, that would be a useful description. Just saying "the values within the matrix and vector change constantly" is not useful: how do they change, do they all change, which ones change, etc.? This is the information needed in order to judge whether you just need a single solution or a complete framework like LAPACK. – jVincent – 2013-01-15T13:44:33.300

@jVincent -- Let me take another try at explaining the structure of the matrices (A above), as I misspoke and confused things in my last comment. In all cases the following holds: A[[1,1]] = 0, A[[2,1;;2]] = {0,0}, A[[3,1;;3]] = {0,0,0}. This pattern continues, except for the last row in the matrix, where all values will equal 1. All other values in the matrix may change. For the vector, the last element will always equal 1; all others can vary. Hope this helps. – Jagra – 2013-01-15T15:56:44.650