I call this ambiguous, and I think you haven't thought this through. Let us assume it were possible to calculate $\lambda$ by such a direct method. First, we need to speed up your high-level function, which you probably have from Rosetta Code. As the sequences grow larger and larger, `Split` becomes a bottleneck, and I don't know how to speed it up other than by rewriting it as a low-level compiled function.

Therefore, here is an implementation that calculates the run-length encoding with an admittedly ugly combination of two loops:

```
rl = Compile[{{l, _Integer, 1}},
   Module[{
     b = Internal`Bag[Most[{0}]], (* empty integer bag *)
     n = Length[l], count, value, i = 1},
    While[i <= n,
     count = 1;
     value = l[[i++]];
     (* count how often the current value repeats *)
     While[i <= n && value == l[[i]],
      count++;
      i++
      ];
     Internal`StuffBag[b, count];
     Internal`StuffBag[b, value];
     ];
    Internal`BagPart[b, All]
    ], CompilationTarget -> "C", RuntimeOptions -> "Speed"
   ]
```

For comparison, let's take a slightly adapted version of the core of your function

```
RunLengthEncode[x_List] := Flatten[Through[{Length, First}[#1]] & /@ Split[x]]
```
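As a quick sanity check (this small example is my addition), both versions should turn `{1, 1, 2, 1}` into its look-and-say successor, read off as "two 1s, one 2, one 1":

```
RunLengthEncode[{1, 1, 2, 1}]
(* {2, 1, 1, 2, 1, 1} *)

rl[{1, 1, 2, 1}]
(* {2, 1, 1, 2, 1, 1} *)
```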

While yours takes

```
res1 = Nest[RunLengthEncode, {1}, 60]; // AbsoluteTiming
(* {35.2265, Null} *)
```

the compiled version only needs

```
res2 = Nest[rl, {1}, 60]; // AbsoluteTiming
(* {0.92388, Null} *)
```

and the results are the same

```
res1 == res2
(* True *)
```

Speed, unfortunately, is not our biggest problem. Let me compute the byte count of the results for all sequences up to the 50th

```
sizes = Table[ByteCount[Nest[rl, {1}, n]], {n, 1, 50}];
```

and fit an exponential model to the data

```
nlm = NonlinearModelFit[sizes, a*Exp[b x], {a, b}, x]
```
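As a rough plausibility check on the fit (my addition): the sequence length grows by roughly a factor of $\lambda$ per step, and the byte count is proportional to the length, so the fitted per-step growth factor `Exp[b]` should land in the vicinity of Conway's constant, about 1.30:

```
Exp[b] /. nlm["BestFitParameters"]
```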

Without quantifying the quality of the model fit, let us instead just look at the combined plot:

```
Show[
ListLinePlot[sizes, PlotStyle -> Thick],
Plot[nlm[x], {x, 0, 50}, PlotStyle -> Directive[Dotted, Red]]
]
```

That looks promising if you'd like to eat up all your system memory, but since we are a very optimistic bunch, let's estimate at which sequence we will need 16 GB of precious RAM

```
FindRoot[Normal[nlm]/2^30 - 16, {x, 60}]
(* {x -> 78.3621} *)
```
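Before committing machine time, we can also ask the fitted model how much memory a given step should need (an extra check of mine; the prediction is only as trustworthy as the fit):

```
(* predicted byte count at n = 75, converted to GB *)
nlm[75]/2.^30
```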

This means that at around 79 steps, we will cross the 16 GB line. Let me remind you that you are using memoization to store all previous results, so your computer is going to burst into flames even sooner. I have 32 GB, so mine will probably die a bit later. Let's be cautious and calculate `n=75`

```
seq75 = Nest[rl, {1}, 75]; // AbsoluteTiming
(* {44.1891, Null} *)
```

This monster already uses 6.6 GB of RAM and has 881,752,750 elements. Let's calculate $\lambda$ using the 76th sequence

```
N[Length[rl[seq75]]/Length[seq75], 15]
(* 1.30358560492156 *)
```

Still as bad as your approximation. Therefore, I don't think that speeding up your approach will help you. I believe a completely different method is required to get $\lambda$ with more precision.

Possibly better suited to [Math.SE]? – Mr.Wizard – 2017-03-12T14:12:43.427

A minor point, but instead of reversing {First, Length} why not just do {Length, First} ? – Simon Woods – 2017-03-12T14:32:53.213

If you're able to get $\lambda$ as a `Root` object, why not use `N` to get the digits? – Chip Hurst – 2017-03-13T14:00:18.257

You could use the continued fraction form. The sequence can be found here: https://oeis.org/A014715. The following (JavaScript) code works correctly up to 190 digits: `c=[1,3,3,2,2...] // about 200 elements` `b=r=BigInt(10)**BigInt(200); for (i=c.length-1; i>=0; i--) { r = BigInt(c[i])*b + b*b/r; } // r now contains 10**200 * Conway's constant` – Peter Robinson – 2018-08-03T10:19:04.547