
I'd like a low-discrepancy sequence of points over the cube $[-1,1]^3$, but I don't want to commit to a fixed number $n$ of points beforehand; rather, I want to see how the numerical-integration estimates develop as the number of low-discrepancy points increases.

I'd like to avoid having to start all over again if the results with a fixed $n$ are unsatisfactory. Of course, one could simply employ random numbers, but then the convergence behavior would be poorer.

Since, in his answer below, Martin Roberts advances a very interesting, appealing approach to the open-ended low-discrepancy problem, I'd like to point to an (ongoing) implementation of his approach that I've just reported in https://arxiv.org/abs/1809.09040 . In Sec. XI (p. 19) and Figs. 5 and 6 there, I analyze two problems, one with sampling dimension $d=36$ and one with $d=64$, both using the parameter $\boldsymbol{\alpha}_0$ set to $0$ and also to $\frac{1}{2}$. To convert the quasi-uniformly distributed points yielded by Roberts's algorithm to quasi-uniformly distributed normal variates, I use the code developed by Henrik Schumacher in his answer to Can I use Compile to speed up InverseCDF?
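As a rough illustration of that inverse-CDF step (a Python sketch of my own, not Schumacher's compiled Mathematica code), the conversion is just the standard-normal quantile function applied coordinate-wise:

```python
from statistics import NormalDist

def to_normal_variates(points):
    """Map quasi-uniform points with coordinates in (0,1) to
    quasi-normal variates via the inverse CDF (probit) transform."""
    nd = NormalDist()  # standard normal: mean 0, standard deviation 1
    return [[nd.inv_cdf(u) for u in p] for p in points]

# u = 0.5 maps to 0.0, the median of the standard normal
print(to_normal_variates([[0.5, 0.25, 0.75]]))
```

Any sufficiently accurate quantile implementation will do; the point is only that the transform preserves the low-discrepancy structure of the input sequence.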

Three years after posting the original question, I've been pursuing a problem that involves implementing the precise 3D procedure Martin Roberts gave in his extensive answer below. However, in the interest of increased accuracy/precision, I chose (using WorkingPrecision->20) the value $\phi_3=1.2207440846057594754$ for the "generalized golden ratio" constant, rather than the indicated 1.2207440846. So, for the recurrence sequence $R:\mathbf{t}_{n+1} =\mathbf{t}_n+ \boldsymbol{\alpha} \bmod 1$, rather than \begin{equation} \alpha= (0.819173,0.671044,0.549700), \end{equation} I employ \begin{equation} \alpha =(0.81917251339616443970, 0.6710436067037892084, 0.5497004779019702669). \end{equation}

I've now run 1,800,000,000 iterations of the procedure, recording results at intervals of 100 million. I'm estimating nine quantities, for three of which I already know the exact values, namely \begin{equation} \left\{\frac{8 \pi }{27 \sqrt{3}},\frac{1}{81} \left(27+\sqrt{3} \log \left(97+56 \sqrt{3}\right)\right),\frac{2}{81} \left(4 \sqrt{3} \pi -21\right)\right\}= \end{equation} \begin{equation} \{0.53742203384717565944, 0.44597718463717723667, 0.018903515328657140917\}. \end{equation}

Now, after 1,700,000,000 iterations, the ratios of my estimates to the three known values were \begin{equation} \{0.99999434335413677936,1.0000002215189648742,0.99998070678044792723\}, \end{equation} while after 1,800,000,000 iterations the ratios were \begin{equation} \{0.99999956436935409259,1.0000009222826007255,0.99995924493777645856\}. \end{equation} So the first of the three estimates improves, while the other two "deteriorate".
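For reference, the recurrence above can be sketched in code. The following Python sketch (my illustration; the actual computation was done in Mathematica) derives $\phi_3$ as the positive root of $x^4 = x + 1$ by fixed-point iteration and generates points of the $R_3$ sequence in $[0,1)^3$; rescaling to $[-1,1]^3$ is a one-line affine map.

```python
def phi(d, iterations=30):
    """Generalized golden ratio: the positive root of x**(d+1) = x + 1,
    found by the fixed-point iteration x <- (1 + x)**(1/(d+1))."""
    x = 2.0
    for _ in range(iterations):
        x = (1.0 + x) ** (1.0 / (d + 1))
    return x

def r_sequence(n, d=3, seed=0.5):
    """First n points of the R_d sequence t_k = frac(seed + k * alpha),
    where alpha_j = phi_d**(-(j+1)).  Points lie in [0,1)^d."""
    g = phi(d)
    alpha = [(1.0 / g) ** (j + 1) for j in range(d)]
    return [[(seed + (k + 1) * a) % 1.0 for a in alpha] for k in range(n)]

pts = r_sequence(5)                                # quasirandom points in [0,1)^3
cube = [[2.0 * x - 1.0 for x in p] for p in pts]   # rescale to [-1,1]^3
```

In exact arithmetic the recurrence $\mathbf{t}_{n+1} = \mathbf{t}_n + \boldsymbol{\alpha} \bmod 1$ and the closed form $\mathbf{t}_n = \operatorname{frac}(\mathbf{t}_0 + n\boldsymbol{\alpha})$ agree; in floating point the closed form has the advantage of not accumulating one rounding error per iteration.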

So, what is the relation of my initial choice of WorkingPrecision->20 to the quality of the estimates? In particular, my concern is that, since the command FractionalPart is applied at every iteration, at some iteration one might conceivably obtain, say, 0.0000000000000001 rather than 0.9999999999999999, leading to subsequent "severe" (?) deterioration. If I had instead chosen, say, WorkingPrecision->48, might the quality have been different? At the least, I would have increased confidence. (Maybe I should redo the calculations and see what happens.)
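To probe this concern concretely, here is a small Python experiment (my illustration, not the original Mathematica run): it iterates the scalar recurrence $t \leftarrow \operatorname{frac}(t + \alpha)$ in double precision for the first component of $\boldsymbol{\alpha}$, and compares the result with a 50-digit reference computed via the closed form $\operatorname{frac}(n\alpha)$, which incurs no per-step accumulation.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50-digit reference arithmetic

# First component of alpha, to the 20 digits used above
ALPHA_STR = "0.81917251339616443970"

def drift(n):
    """Gap between n double-precision steps of t <- frac(t + alpha)
    and the closed form frac(n * alpha) evaluated at 50 digits."""
    a_f = float(ALPHA_STR)
    a_d = Decimal(ALPHA_STR)
    t = 0.0
    for _ in range(n):
        t = (t + a_f) % 1.0          # one rounding error per step
    ref = (Decimal(n) * a_d) % 1     # no per-step accumulation
    return abs(Decimal(repr(t)) - ref)

for n in (10**3, 10**5, 10**6):
    print(n, drift(n))
```

Heuristically, each step contributes a rounding error of order $10^{-16}$, so even after $1.8\times 10^9$ iterations the drift in each coordinate should stay below roughly $10^{-7}$ in the worst case (and typically much less, since the errors partly cancel); a wraparound from 0.999... to 0.000... merely relabels the same point modulo 1 by an amount of that order.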

So, to what extent is the Martin Roberts procedure (and/or other quasirandom ones) subject to accuracy/precision issues? How "robust" is it?

Here's a ListPlot of the accuracy of the three estimates at intervals of one hundred million points, up to 2,100,000,000.

Looks like the procedure is working rather well.

What does "low-discrepancy" mean? – David G. Stork – 2017-04-12T20:23:39.650

"A sequence of n-tuples that fills n-space more uniformly than uncorrelated random points, sometimes also called a low-discrepancy sequence. Although the ordinary uniform random numbers and quasirandom sequences both produce uniformly distributed sequences, there is a big difference between the two." (http://mathworld.wolfram.com/QuasirandomSequence.html)

– Paul B. Slater – 2017-04-12T20:34:16.040

`RandomReal[]` supports the Sobol' and Niederreiter sequences via `"MKL"`; see the docs for details. – J. M.'s ennui – 2017-04-12T23:46:08.713

Thanks, J.M. Can you provide a more specific pointer to the docs in question? – Paul B. Slater – 2017-04-13T01:52:13.987

See this. (Also, entering "Sobol" or "Niederreiter" in the docs' search box would have returned this part of the docs among the results.) – J. M.'s ennui – 2017-04-14T18:50:34.770

Thanks, J. M. By the docs' search box, I take it you mean the search box on the Wolfram.com site. OK, one can generate fixed-length sequences, but what about open-ended ones? If one generates ten sequences of length 1,000, say, does the resulting 10,000-length sequence have the same desirable low-discrepancy properties as an ab initio 10,000-length sequence? – Paul B. Slater – 2017-04-15T14:40:43.537

As per private communication, I think the comparison of errors between n=1.7 billion and n=1.8 billion may not tell the whole truth. Errors in Monte Carlo integration (even QMC) do not strictly monotonically decrease. – Martin Roberts – 2020-04-19T15:21:33.537