I'll give this a shot, since I'm sufficiently disturbed by the advice given in some of the other answers.

Let $\vec{X},\vec{Y}$ be infinite bit sequences generated by two RNGs (not necessarily PRNGs which are deterministic once initial state is known),
and we're considering the possibility of using the sequence $\vec{X} \oplus \vec{Y}$ with the hope of improving behavior in some sense.
There are lots of different ways in which $\vec{X} \oplus \vec{Y}$ could be considered better or worse compared to each of $\vec{X}$ and $\vec{Y}$;
here are a small handful that I believe are meaningful, useful, and consistent with normal usage of the words "better" and "worse":

- (0) Probability of true randomness of the sequence increases or decreases
- (1) Probability of observable non-randomness increases or decreases
(with respect to some observer applying some given amount of scrutiny, presumably)
- (2) Severity/obviousness of observable non-randomness increases or decreases.

First let's think about (0), which is the only one of the three that has any hope of being made precise.
Notice that if, in fact, either of the two input RNGs really is truly random, unbiased, and independent of the other,
then the XOR result will be truly random and unbiased as well.
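A one-line calculation makes this concrete: for each bit position $i$, if $X_i$ is uniform and independent of $Y_i$, then whatever the bias $q = \Pr(Y_i = 1)$,
$$
\Pr(X_i \oplus Y_i = 1) = \Pr(X_i = 1)(1-q) + \Pr(X_i = 0)\,q = \tfrac12(1-q) + \tfrac12 q = \tfrac12,
$$
and the same argument applied jointly across positions (the one-time-pad argument) shows the whole XORed sequence is uniform.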
With that in mind, consider the case when you believe $\vec{X},\vec{Y}$ to be truly random unbiased isolated bit streams, but you're not completely sure.
If $\varepsilon_X,\varepsilon_Y$ are the respective probabilities that you're wrong about each of them,
then the probability that $\vec{X} \oplus \vec{Y}$ is not-truly-random is then
$\leq \varepsilon_X \varepsilon_Y \lt \min\{\varepsilon_X,\varepsilon_Y\}$,
in fact *much* less since $\varepsilon_X,\varepsilon_Y$ are assumed very close to 0 ("you believe them to be truly random").
And in fact it's even better than that, when we also take into account the possibility of $\vec{X},\vec{Y}$ being truly independent even when neither is truly
random:
$$
\begin{aligned}
\Pr(\vec{X} \oplus \vec{Y} \text{ not truly random}) \leq \min\{ &\Pr(\vec{X} \text{ not truly random}), \\
&\Pr(\vec{Y} \text{ not truly random}), \\
&\Pr(\vec{X},\vec{Y} \text{ dependent})\}.
\end{aligned}
$$
Therefore we can conclude that in sense (0), XOR can't hurt, and could potentially help a lot.

However, (0) isn't interesting for PRNGs, since in the case of PRNGs none of the sequences in question have any chance of being truly random.

Therefore for this question, which is in fact about PRNGs, we must be talking about something like (1) or (2).
Since those are in terms of properties and quantities like "observable", "severe", "obvious", "apparent",
we're now talking about Kolmogorov complexity, and I'm not going to try to make that precise.
But I will go so far as to make the hopefully uncontroversial assertion that, by such a measure,
"01100110..." (period 4) is better than "01010101..." (period 2), which in turn is better than "00000000..." (constant).

Now, one might guess that (1) and (2) will follow the same trend as (0), and that therefore the conclusion "XOR can't hurt" might still hold.
However, note the significant possibility that neither $\vec{X}$ nor $\vec{Y}$ was observably non-random,
but that correlations between them cause $\vec{X} \oplus \vec{Y}$ to be observably non-random.
The most severe case of this, of course, is when $\vec{X} = \vec{Y}$ (or $\vec{X} = \mathrm{not}(\vec{Y})$), in which case $\vec{X} \oplus \vec{Y}$ is constant, the worst of all possible outcomes;
in general, it's easy to see that, regardless of how good $\vec{X}$ and $\vec{Y}$ are, $\vec{X}$ and $\vec{Y}$ need to be "close" to independent in order for their xor to be not-observably-nonrandom.
In fact, being not-observably-dependent can reasonably be defined as $\vec{X} \oplus \vec{Y}$ being not-observably-nonrandom.

Such surprise dependence turns out to be a really big problem.

## An example of what goes wrong

The question states "I'm excluding the common example of several linear feedback shift registers working together as they're from the same family".
But I'm going to exclude that exclusion for the time being,
in order to give a very simple clear real-life example of the kind of thing that can go wrong with XORing.

My example will be an old implementation of rand() that was on some version of Unix circa 1983.
IIRC, this implementation of the rand() function had the following properties:

- the value of each call to rand() was 15 pseudo-random bits, that is, an integer in the range [0, 32767].
- successive return values alternated even-odd-even-odd; that is, the least-significant-bit alternated 0-1-0-1...
- the next-to-least-significant bit had period 4, the next after that had period 8, ... so the highest-order bit had period $2^{15}$.
- therefore the sequence of 15-bit return values of rand() was periodic with period $2^{15}$.

I've been unable to locate the original source code,
but from piecing together a couple of posts in
https://groups.google.com/forum/#!topic/comp.os.vms/9k4W6KrRV3A I'm guessing that
it did precisely the following (C code), which agrees with my memory of the properties above:

```c
#define RAND_MAX 32767

static unsigned int next = 1;

/* Linear congruential generator: the returned value is simply the low
   15 bits of the state, which is what causes the short bit periods. */
int rand(void)
{
    next = next * 1103515245 + 12345;
    return (next & RAND_MAX);
}

void srand(unsigned int seed)
{
    next = seed;
}
```

As one might imagine, trying to use this rand() in various ways led to an assortment of disappointments.

For example, at one point I tried simulating a sequence of random coin flips by repeatedly taking:

```
rand() & 1
```

i.e. the least significant bit. The result was simple alternation heads-tails-heads-tails.
That was hard to believe at first (must be a bug in my program!), but after I convinced myself it was true, I tried
using the next-least-significant bit instead. That's not much better, as noted earlier-- that bit is periodic with period 4.
Continuing to explore successively higher bits revealed the pattern I noted earlier: that is, each next higher-order bit had twice the period of the previous,
so in this respect the highest-order bit was the most useful of all of them. Note however that there was no black-and-white threshold
"bit $i$ is useful, bit $i-1$ is not useful" here; all we can really say is the numbered bit positions had varying degrees of usefulness/uselessness.

I also tried things like scrambling the results further, or XORing together values returned from multiple calls to rand().
XORing pairs of successive rand() values was a disaster, of course-- it resulted in all odd numbers!
For my purposes (namely producing an "apparently random" sequence of coin flips), the constant-parity result of the XOR
was even worse than the alternating even-odd behavior of the original.

A slight variation puts this into the original framework: that is, let $\vec{X}$ be the sequence of 15-bit values returned by rand() with a given seed $s_X$,
and $\vec{Y}$ the sequence from a different seed $s_Y$. Again, $\vec{X} \oplus \vec{Y}$ will be a sequence of either all-even or all-odd numbers,
which is worse than the original alternating even/odd behavior.

In other words, this is an example where XOR made things worse in the sense of (1) and (2), by any reasonable interpretation.
It's worse in several other ways as well:

- (3) The XORed least-significant bit is obviously biased, i.e. has unequal frequencies of 0's and 1's,
unlike every bit position in either of the inputs, all of which are unbiased.
- (4) In fact, for
*every* bit position, there are pairs of seeds for which that bit position is biased in the XOR result,
and for every pair of seeds, there are (at least 5) bit positions that are biased in the XOR result.
- (5) The period of the entire sequence of 15-bit values in the XOR result is either 1 or $2^{14}$, compared to $2^{15}$ for the originals.

None of (3),(4),(5) is obvious, but they are all easily verifiable.

Finally, let's consider re-introducing the prohibition of PRNGs from the same family.
The problem here, I think, is that it's never really clear whether two PRNGs are "from the same family"
until someone starts using the XOR and notices (or an attacker notices) that things got worse in the sense of (1) and (2) --
i.e. until non-random patterns in the output cross the threshold from not-noticed to noticed/embarrassing/disastrous -- and at that point it's too late.

I'm alarmed by other answers here which give unqualified advice "XOR can't hurt" on the basis of theoretical measures which appear to me
to do a poor job of modelling what most people consider to be "good" and "bad" about PRNGs in real life.
That advice is contradicted by clear and blatant examples in which XOR makes things worse, such as the rand() example given above.
While it's conceivable that relatively "strong" PRNGs, when XORed, could consistently behave in the opposite way to the toy rand() above,
thereby making XOR a good idea for them, I've seen no evidence in that direction, theoretical or empirical, so it seems unreasonable to me to assume
that happens.

Personally, having been bitten by surprise by XORing rand()s in my youth, and by countless other assorted surprise correlations throughout my life,
I have little reason to think the outcome will be different if I try similar tactics again.
That is why I, personally, would be very reluctant to XOR together multiple PRNGs unless very extensive analysis and vetting has been done to give me some confidence that it might be safe to do so for the particular RNGs in question. As a potential cure for when I have low confidence in one or more of the individual PRNGs, XORing them is unlikely to increase my confidence, so I'm unlikely to use it for such a purpose. I imagine the answer to your question is that this is a widely held sentiment.

The answer might depend on the application. What do you want to use the pseudorandom sequence for? – Yuval Filmus – 2016-05-20T11:48:23.180

I think this is a good question. I guess that the XOR product of an "independent" set of non-trivial PRNGs will be at least as hard to predict as the result of any one of them. (But not necessarily any more random, and also dependent on your definition of "independent", and of course slower.) – mwfearnley – 2016-05-20T14:52:45.890

Have you found Fortuna (https://en.wikipedia.org/wiki/Fortuna_%28PRNG%29)? It sounds like it's close to what you describe, in that it aggregates various random sources into one. – Little Code – 2016-05-20T18:05:26.687

I don't want to compete with the good answers here, but one practical reason we don't do this is because doing so makes it very easy to develop the illusion of better random behaviors. If one goes down this path, it becomes very easy to mess up and actually weaken your PRNG. Thus, budding developers are encouraged not to go down this path, simply to avoid putting themselves in difficult positions. – Cort Ammon – 2016-05-20T20:05:46.897

@LittleCode Actually it sounds different altogether. Fortuna outputs data from a single hash function. It just messes about with a lot of weak entropy collection mechanisms before (re)hashing it through a single output function. My question related to outputting from several functions (why not 10 of them?). If this is a fill device, speed is irrelevant anyway. – Paul Uszak – 2016-05-21T22:15:09.070

The late George Marsaglia, a noted researcher in the field of PRNGs who invented many new PRNG types such as multiply-with-carry and xor-shift, did precisely this when he proposed the KISS generator in the 1990s, which is a combination of three PRNGs of different types. I have been using KISS successfully for the past twenty years, not for cryptography, of course. A useful secondary source with regard to KISS is a 2011 paper by Greg Rose, in which he points out an issue with one of the constituent PRNGs, which doesn't invalidate the combining concept. – njuffa – 2016-05-21T22:17:48.347

Knuth relates that naively combining pseudorandom number generators (using one random number to choose which generator to use) resulted in a function which converges to a fixed value! So, back in the days just before the microcomputer revolution, he warned us never to mix random generators. – JDługosz – 2016-05-22T21:19:02.080