07 Feb 2014
The following question was put by a high school student on Physics Stack Exchange:
|I’m doing a research for my stats class in high school and I chose quantum mechanics as my subject. I narrowed down to electron localization in an atom and radial probability distribution. However, I can’t find any data to support my claims that probability and statistics are very important in quantum mechanics. Is there any data that I can analyse to prove my claims that is suitable for me?
|I introduced the double slit experiment, Schrodinger’s equations, wave function and superposition.
and I answered thus:
I’d like to offer perhaps a slightly different viewpoint on your question and maybe turn it around a little. Probability is hard. Very hard. Defining the foundations of probability and statistics so that they are altogether sound and rigorous is actually a work in progress; it is definitely not complete. On the other hand, quantum mechanics is easy. Very easy! I’m being “slightly” tongue in cheek here of course, but what I’m basically getting at is this:
In quantum mechanics you can always ask Nature for the answer by doing an experiment and seeing what the outcome is.
So many philosophers and mathematicians who think very hard about the foundations of probability these days actually use real physical examples from quantum mechanics to inform their thinking. The reason why this is a useful thing to do should be clear: quantum mechanical systems experimentally seem to be “probabilistic” at a very basic level. So it is natural that someone should think of them as models to inspire abstract mathematical ideas and definitions.
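To make that experimental probabilism concrete, here is a minimal Python sketch (my illustration, not data from any real experiment): the Born rule says that measuring a quantum state yields outcome $i$ with probability equal to the squared magnitude of its amplitude, so repeated simulated “shots” produce outcome frequencies near those probabilities. The function name `measure` and the shot count are of course just illustrative choices.

```python
import random

def measure(amplitudes, n_shots, seed=0):
    """Simulate repeated measurements of a quantum state given its
    amplitudes: the Born rule says outcome i occurs with
    probability |amplitude_i|^2."""
    probs = [abs(a) ** 2 for a in amplitudes]
    total = sum(probs)                 # normalise, in case the state isn't
    probs = [p / total for p in probs]
    rng = random.Random(seed)
    counts = [0] * len(probs)
    for _ in range(n_shots):
        r = rng.random()
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1            # guard against float round-off

    return counts

# An equal superposition (|0> + |1>)/sqrt(2): each outcome in roughly
# half the shots, but any single shot is irreducibly random.
print(measure([2 ** -0.5, 2 ** -0.5], 1000))
```

No single measurement tells you the amplitudes; only the statistics over many shots do, which is exactly why probability sits at the heart of the theory.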
The physicists Richard Feynman and Albert Hibbs showed a great deal of foresight in the 1960s when they expressed the view, in their book “Quantum Mechanics and Path Integrals”, that quantum mechanics represents a replacement of probability theory itself. I like to think that what they were getting at was something like what I am saying here: we should henceforth think of quantum mechanics as the real world foundation whence to derive abstract probability notions.
So quantum mechanics is more basic because it is a concrete, real world behaviour whereon we can ground abstract mathematical notions.
For references showing some of the open problems in defining “probability” rigorously, including examples from quantum mechanics used by philosophers, see how you go with the articles below from one of my favourite websites, The Stanford Encyclopedia of Philosophy:
1. Interpretations of Probability; and
2. Chance versus Randomness.
Do not worry if you do not understand everything. The interesting point is that probability and statistics are not as “cut and dried” as they are often presented in high school (if anyone tells you it’s all easy, you can just say to them “pants on fire!”), and that is why it is a highly useful thing to use a real world “model” like quantum mechanics to help philosophically and with questions of foundations. As I used to say to my daughter when she was learning to read: sometimes, if you can’t understand everything, let the words wash over you and see what sense your mind makes of them as it mulls. I do this all the time: I think I understand about 5% of what I read at first reading when reading journals in my field, and it would not be a stretch to say that I need to read papers on average of the order of twenty times to get the gist!
Just a head start: you will come across the “frequentist” and “subjectivist” notions of probability. The former defines probabilities as the frequencies observed in an experiment as the number of trials $\to\infty$; the latter, subjectivist notion treats probability as an “intuitive” measure of likelihood not gleaned from experiment and often assigned by considerations such as “symmetry”: for example, we hold up the notion of a “fair die” for which, “by symmetry”, all outcomes are equally likely, and we beget this notion abstractly, independently of any experiment, even before we know whether such a thing is physically possible.
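The frequentist idea can be illustrated with a short simulation (a toy sketch of mine, with an arbitrary seed and arbitrary trial counts): the observed relative frequencies of a simulated fair die drift towards the symmetry-assigned, subjectivist value of $1/6$ as the number of trials grows.

```python
import random

def die_frequencies(n_trials, seed=0):
    """Roll a simulated fair six-sided die n_trials times and return
    the relative frequency of each face (the frequentist estimate)."""
    rng = random.Random(seed)
    counts = [0] * 6
    for _ in range(n_trials):
        counts[rng.randrange(6)] += 1
    return [c / n_trials for c in counts]

# The subjectivist assigns 1/6 to each face "by symmetry" before any
# experiment; the frequentist estimate wanders towards 1/6 with more trials.
for n in (60, 6000, 600000):
    print(n, [round(f, 3) for f in die_frequencies(n)])
```

Notice that the two notions meet only in the limit: for any finite number of rolls the frequencies scatter around $1/6$, and pinning down exactly what the limiting statement means is part of what makes the foundations hard.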
That’s not to say at all that statistics is shaky or bunk. Far from it. Many, if not most, modern philosophers of science agree that one of the main things that defines science is its falsifiability, a concept introduced by Karl Popper in his epistemology of “falsificationism”. True science must foretell propositions about the world which can, even if only in principle, be shown true or false by an experiment. The statistical notion of hypothesis testing is nothing less than the quantification of Popper’s revolutionary idea, and as such it is statistics which is the Queen of Sciences (rather than simply mathematics, as Carl Friedrich Gauss said). When applied properly, we have found experimentally that, notwithstanding the foundational issues, there is no more powerful way known to humanity for finding out the truth. Given the importance of statistics and its foundational relationship with the scientific method, the rigorous underpinning of probability must be an exciting area to work in, and absolutely vital research to further.
Lastly: there is also quite a beautiful, readable exposition of the subjectivist (Bayesian) / frequentist “dichotomy” of interpretations in probability and statistics to be found in the opening section of E. T. Jaynes, “Information Theory and Statistical Mechanics”, Phys. Rev. 106, number 4, pp 620-630, 1957. Jaynes also shows how in physics it is impossible to choose one over the other: both are fundamental and indispensable.
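As a toy illustration of hypothesis testing as quantified falsifiability (my own sketch; the coin-flip numbers are made-up examples, not real data): the hypothesis “this coin is fair” forecasts that extreme head counts are improbable, and an exact binomial test quantifies just how improbable the observed data are under that hypothesis, which is what lets an experiment count against it.

```python
from math import comb

def binomial_pvalue(k, n, p=0.5):
    """Two-sided exact binomial test: the probability, under the null
    hypothesis (e.g. 'the coin is fair'), of an outcome at least as
    unlikely as observing k heads in n flips."""
    pmf = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    threshold = pmf[k] * (1 + 1e-12)   # tolerate float round-off
    return sum(q for q in pmf if q <= threshold)

# Popper-style reasoning made quantitative: 62 heads in 100 flips gives
# a small p-value, i.e. evidence against the 'fair coin' hypothesis,
# whereas 50 heads in 100 flips is entirely unremarkable.
print(round(binomial_pvalue(62, 100), 4))
print(round(binomial_pvalue(50, 100), 4))
```

The p-value never proves the hypothesis false outright; it only grades how badly the data sit with it, which is precisely the statistical sharpening of Popper’s in-principle falsifiability.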