
Many attempts to interpret quantum mechanics do so by looking at three nested systems. The largest system is essentially the universe or the environment; the smallest system is the one being observed, which follows the laws of the theory; and the middle system contains the measurement device or the observer.

At least Schrödinger remarked in his famous article with the cat that the properties of bosons and fermions might call this setup into question (even if he wasn't very committal about whether bosons and fermions are really the last word), because they don't seem to allow for such a clean separation.

But because already the interpretation of probability theory can cause controversies, I wonder whether this same setup with three nested systems could also be used to interpret classical probability theory. One might even try to interpret a classical static deterministic and causal theory with this setup; after all, even the interpretation of causality can cause controversies. But why should one expect to learn anything at all about causal or probabilistic theories by looking at three nested systems? Is it because we can see ourselves as sitting in the middle between the universe and what we observe, or does this setup have something to do with quantum mechanics itself, or is the setup simply misguided?

You are mistaken in thinking that quantum mechanics is a collection of solid facts that need interpretation. The facts themselves have to be organized differently, following some great insight. Moreover, quantum mechanics is not just small things with an observer; QM is all around us. All macroscopic phenomena are related to QM, more or less closely. Look at superconductivity: a pure QM effect, and macroscopic. There is also superfluidity, which is also macroscopic. And so much more. QM has nothing to do with probability anymore; those are the old days. QM is about quantization, stability, discreteness, and so on. – Asphir Dom – 2014-10-06T10:51:59.633

@AsphirDom You may be right about the old days. Schrödinger's article is from 1935, the two texts from Everett which I read are from 1956 and 1957, Bohm's first development of decoherence dates back to 1952 (in his article A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables", which I didn't read), and Zeh's article On the Interpretation of Measurement in Quantum Theory, which finally led to the general acceptance of decoherence, is from 1970. But some of the textbooks I read are newer than 2010 and still introduce these concepts by investigating nested systems. – Thomas Klimpel – 2014-10-06T11:42:04.923

The question seems to ask about the process of scientific measurements? That is an ancient concept related to "control" in science. – vzn – 2014-12-12T19:30:17.273

@vzn For the process of scientific measurements, I actually prefer the frequentist interpretation. It should be applicable to this case, and it is much easier for me to understand than any of the other interpretations of probability. – Thomas Klimpel – 2014-12-12T19:45:07.603

But if I have to predict the weather for tomorrow, then the frequentist interpretation is no longer helpful, because that experiment can neither be controlled nor repeated. Hence I need a different interpretation of probability for that case. – Thomas Klimpel – 2014-12-12T19:48:02.677