I have been encountering this problem a lot recently. Since many related questions are linked to this post, I will share my solution here.


## Basic idea

Flatten all the tensors into one single list, and include enough information to reconstruct them.

The first element of my list is the number of tensors / variables to return.

The $2$nd through $(var + 1)$th elements correspond to the rank of each tensor.

The next $\sum_i rank_i$ elements correspond to the dimensions of each tensor ($rank_i$ entries for the $i$th tensor), followed by the flattened entries of all tensors.
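To make the layout concrete, here is a small top-level sketch (the helper name `encode` is my own, not part of the compiled code) that builds this encoding for a list of tensors:

```
(* hypothetical helper: builds {count, ranks.., dims.., flattened data..} *)
encode[tensors_List] :=
 Join[{Length[tensors]},
  TensorRank /@ tensors,
  Join @@ (Dimensions /@ tensors),
  Join @@ (Flatten /@ tensors)]

encode[{{{1, 2}, {3, 4}}, {5., 6., 7.}}]
(* {2, 2, 1, 2, 2, 3, 1, 2, 3, 4, 5., 6., 7.} *)
```

The first `2` is the tensor count, `2, 1` are the ranks, `2, 2, 3` the dimensions, and the rest is the flattened data.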

## Construction inside `Compile`

### 1. Multiple return with different-dimension tensors (of different types)

Note: in my example there are no `Complex` or `True|False` values, but since `Re`, `Im`, and `Boole` are all compilable, such values can be transformed into a real tensor and an integer tensor, respectively.

This example illustrates returning 3 tensors with different dimensions.

```
cf1 = Compile[{},
  Module[{
    m = {{0, 8, 1, 7}, {1, 9, 2, 6}},
    n = {0.301, 0.98},
    p = {{{1, 0}, {2, 7}}, {{2, 0}, {0, 0}}}},
   Join[{3},
    {TensorRank[m]}, {TensorRank[n]}, {TensorRank[p]},
    Dimensions[m], Dimensions[n], Dimensions[p],
    Flatten@m, Flatten@n, Flatten@p]]]
```

### 2. Return a ragged list (of arbitrary length)

This is a common case when a collection of `Position`s should be returned. This example illustrates programmatically adding 1D lists of arbitrary length to the result.

```
cf2 = Compile[{}, Module[{var = 0, rank = {}, dim = {}, res = {}, temp},
   Do[
    temp = RandomReal[{0, 1}, RandomInteger[{1, 10}]];
    var++;
    AppendTo[rank, TensorRank[temp]];
    dim = Join[dim, Flatten@Dimensions[temp]];
    res = Join[res, Flatten@temp],
    {i, 1, 3}];
   Join[{var}, rank, dim, res]]]
```

Neither example contains a call to `MainEvaluate` when examined with `CompilePrint`.
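For reference, this can be checked by loading the standard ``CompiledFunctionTools` `` package and printing the bytecode listing:

```
(* load the compiler tools package and inspect the instruction listing *)
Needs["CompiledFunctionTools`"]
CompilePrint[cf1]
(* the printed listing should contain no MainEvaluate instructions *)
```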

## Extracting the lists

```
extractLists[list_?VectorQ] :=
 Module[{vars = Round@First@list, rank, dim},
  rank = Round@list[[2 ;; 1 + vars]];
  dim = Round@Internal`PartitionRagged[
     list[[2 + vars ;; 1 + vars + Total@rank]], rank];
  MapThread[ArrayReshape,
   {Internal`PartitionRagged[
     list[[2 + vars + Total@rank ;;]], Times @@@ dim], dim}]]
```

The results (the result of `cf2` is random):

```
extractLists[cf1[]]
(*{{{0., 8., 1., 7.}, {1., 9., 2., 6.}}, {0.301,
0.98}, {{{1., 0.}, {2., 7.}}, {{2., 0.}, {0., 0.}}}}*)
extractLists[cf2[]]
(*{{0.895086, 0.716247, 0.626751, 0.457065, 0.709812, 0.118539,
0.504491, 0.40369}, {0.2376}, {0.159539, 0.398285, 0.0233042,
0.246191, 0.351316, 0.580408}}*)
```

## Notes

The type of the result is not preserved (`Integer` is converted to `Real`). This could be handled by adding an extra parameter before the rank info (I did not include it because it is not useful for my cases). Also, I am not sure whether the performance of the code inside `Compile` is optimal; feel free to edit if there are improvements.
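As a sketch of that extra parameter (my own extension, not part of the answer above): one could prepend one type code per tensor right after the count, say $0$ for `Integer` and $1$ for `Real`, and apply `Round` during extraction when the code is $0$:

```
(* hypothetical extension: layout {var, types.., ranks.., dims.., data..};
   type code 0 -> Integer, 1 -> Real *)
restoreType[arr_, 0] := Round[arr]  (* integer tensor: round back *)
restoreType[arr_, 1] := arr         (* real tensor: leave as is *)

(* after extracting the reshaped arrays and the type codes, apply:
   MapThread[restoreType, {arrays, types}] *)
```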

I couldn't be more specific because your method is the one that I'm still using regularly, 4 years later : ) – István Zachar – 2016-07-21T17:23:57.687

I think your code is going to break in this case: `cFunc = Compile[{{a, _Integer, 1}}, Join @@ {1.*a, {a.a}}];` It makes some assumptions about `a` that may not be true. Also, a single call to `MainEvaluate` is not so much of an issue, as long as it is not in a loop. – None – 2012-02-15T15:04:54.803

@ruebenko what is your analysis of my methods? – Mr.Wizard – 2012-02-15T15:11:56.867

@ruebenko: Definitely it is in a loop... But then the problem with Rolf's code is that joining and retrieving partial results from `z` might not be that easy. – István Zachar – 2012-02-15T23:55:47.373

why not? can you be more specific? – Rolf Mertig – 2012-02-16T00:16:14.657

@RolfMertig I am just learning how to use Compile. Why does the code `f:=Compile[{{M,_Integer,2},{m,_Integer}}, Module[{l},l=ConstantArray[{},m]; l[[M[[1]]]]=Transpose@{M[[2]]}; l] ]; f[{{8,9,10},{2,3,1}},10]` return `CompiledFunction::cfn: Numerical error encountered at instruction 8; proceeding with uncompiled evaluation.`? – Leo – 2020-11-21T16:20:10.997

`ConstantArray[{0},m]` will work. But please ask a new question in general if you do have a new, unrelated, question. – Rolf Mertig – 2020-11-22T21:38:29.500