Digging further and further back now, for a very early paper on Bell’s Theorem. Honestly, I think the best reason I can come up with for reading this paper is that it just happens to be in the proper queue… so if you could suggest some better papers for me to read, please do! Everyone is free to comment here, or email me directly :)

Moving on to the paper! This paper is an experimental test of Bell’s inequality, the same issue that was raised by EPR (and was mentioned in the last QCJC). This inequality, and the theorem built on it, makes local hidden variable theories mutually exclusive with quantum mechanics: any local hidden variable theory must satisfy Bell’s inequality, while quantum mechanics predicts correlations that violate it. An experimental violation of Bell’s inequality therefore rules out local hidden variable theories. When we speak of Bell states, we mean entangled particles that obey quantum mechanical laws and NOT local hidden variable theory.

A few more words about local hidden variable theories: the original EPR paper dealt with hidden variable theories in general. A hidden variable theory is the idea that our formulation of quantum mechanics is incomplete, and that outcomes actually depend on some underlying “hidden” variables we just don’t know about yet. This position was advanced by Einstein, Podolsky, and Rosen (yes, got the name right this time!), and seemed to rely on common sense. After all, how could a pair of entangled particles transmit information faster than the speed of light? But, in time, this “spooky action at a distance” has been shown by experiment to be real and verifiable.

This paper is one such demonstration of particles that violate Bell’s inequality, thus showing themselves to be quantum mechanical and not governed by a local hidden variable theory. But before we go further, we should state how Bell’s inequality is actually measured. (Aside: I thought I was quite familiar with the ideas of Bell’s Theorem when I started writing. It turns out I had a good grasp of the *philosophical* implications, but wasn’t as thorough with the physics…)

Bell’s theorem by itself is simply derived from standard probabilities. If we take the original inequality, we have that

N(A, not B) + N(B, not C) >= N(A, not C)

We can prove this in three steps. Start from the trivial fact that counts are non-negative:

N(A, not B, C) + N(not A, B, not C) >= 0

Add N(A, not B, not C) + N(A, B, not C) to both sides:

N(A, not B, C) + N(A, not B, not C) + N(A, B, not C) + N(not A, B, not C) >= N(A, not B, not C) + N(A, B, not C)

Now group the left side as [N(A, not B, C) + N(A, not B, not C)] + [N(A, B, not C) + N(not A, B, not C)], and notice that each bracket, as well as the right side, is just a count summed over the unmentioned variable:

N(A, not B) + N(B, not C) >= N(A, not C)

QED! Very simple, very direct. So long as A, B, and C are well-defined properties that each particle either has or does not have, no assignment of values can violate this inequality. We have shown it to be true simply by counting members of sets.
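To make the counting argument concrete, here is a quick sketch (my own illustration, not from the paper) that builds arbitrary populations of particles carrying definite hidden values for A, B, and C, and checks that the inequality always holds:

```python
import random
from itertools import product

# Each "particle" carries definite hidden values for A, B, C (True = "up").
population = list(product([True, False], repeat=3))  # all 8 possible value sets

def check(counts):
    """counts[(a, b, c)] = how many particles carry that set of hidden values.
    Returns True if N(A, not B) + N(B, not C) >= N(A, not C)."""
    n_a_notb = sum(n for (a, b, c), n in counts.items() if a and not b)
    n_b_notc = sum(n for (a, b, c), n in counts.items() if b and not c)
    n_a_notc = sum(n for (a, b, c), n in counts.items() if a and not c)
    return n_a_notb + n_b_notc >= n_a_notc

# Any population of particles with definite values satisfies the inequality.
random.seed(0)
for _ in range(1000):
    counts = {vals: random.randint(0, 100) for vals in population}
    assert check(counts)
print("inequality held for every random population")
```

No matter how the population is distributed, the counting argument wins, which is exactly why a violation is so significant.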

Now, we take A, B, and C to be hidden variables corresponding to the spin of a particle, for instance. We might be interested in the spin along three different axes, such as a 0 degree axis, a 45 degree axis, and a 90 degree axis. When measured, the spin can be either up or down along each of these directions. Furthermore, we know that a pair of antisymmetric particles is being produced. That is, if one particle has spin up at 0 degrees, the other must have spin down at 0 degrees, because at an earlier point they had total spin 0.

Since we have two antisymmetric particles, we can test each particle along a randomly chosen axis, then compile the results for each pair and compute the inequality. Is the number of pairs with spin up at 0 degrees and spin down at 45 degrees, plus the number of pairs with spin up at 45 degrees and spin down at 90 degrees, greater than or equal to the number of pairs with spin up at 0 degrees and spin down at 90 degrees? Or is the inequality violated, as quantum mechanics predicts?
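For contrast, here is a quick sketch (my own, not from the paper) of what quantum mechanics predicts for this measurement scheme, treating the pair as a spin singlet as in the example above. For a singlet, the probability that one particle measures up along its axis while its partner’s anticorrelated result implies it is down along a second axis at angle θ away is (1/2)sin²(θ/2):

```python
import math

def p_up_notup(theta_deg):
    """QM probability for a spin-singlet pair: particle 1 up along axis 1,
    and (inferred from its anticorrelated partner) down along axis 2,
    where the two axes are separated by theta_deg."""
    theta = math.radians(theta_deg)
    return 0.5 * math.sin(theta / 2) ** 2

lhs = p_up_notup(45) + p_up_notup(45)  # N(A, not B) + N(B, not C), as fractions
rhs = p_up_notup(90)                   # N(A, not C)

print(f"LHS = {lhs:.4f}, RHS = {rhs:.4f}")  # LHS = 0.1464, RHS = 0.2500
print("violated" if lhs < rhs else "not violated")
```

Quantum mechanics predicts roughly 0.146 for the left side against 0.25 for the right, a clear violation of the counting inequality.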

After 650ish words, we have finally gotten to this paper! The paper provides a new way of experimentally checking Bell’s theorem. The authors first discuss the earlier tests, which used positronium annihilation – when an electron meets a positron and the two annihilate, producing photons. Then, they discuss a better way of conducting the tests using low-energy photons produced by **atomic radiative cascades**. They claim that photons produced by this method are better suited for testing Bell’s theorem. I’m not entirely sure why this is the case, but it seems to have to do with detector efficiencies and/or efficient polarizers. The authors claim that this method does not require the “strong supplementary assumptions” that would otherwise apply.

The experimenters use the atomic radiative cascade of calcium, which yields two photons with correlated polarizations. They describe how they set up their cascade: irradiating an atomic beam of calcium with a single-mode krypton ion laser and a continuous-wave single-mode Rhodamine dye laser. These two lasers are chosen so that they have parallel polarizations and wavelengths corresponding to the relevant transitions of calcium, allowing “selective excitation”. By controlling these factors, the experimenters can very finely control the photon source, producing more data than previous experiments had.

Next, the paper discusses the optical elements used in this experiment. They discuss the filters used to prevent photon reflections, as well as the two different polarizers constructed to perform the measurements. They give lots of specific information about how the piles of plates, inclined near Brewster’s angle, perform the polarization measurement. They also provide data on the transmittances of each polarizer.

Then, the paper discusses the electronics that make coincidence counting possible. It remarks on the time-to-amplitude converter (TAC) and multichannel analyzer that produce a time-delay spectrum, allowing coincidences to be monitored. This, of course, is crucial to computing Bell’s inequality and determining whether it is violated.
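As a rough sketch of what coincidence counting means here (my own toy illustration with made-up timestamps, not the paper’s actual electronics): each detector produces a stream of arrival times, and a pair of detections counts as a coincidence when the time difference falls within a narrow window.

```python
# Toy coincidence counter: two detectors report photon arrival times (in ns),
# and we count pairs whose time difference falls within a coincidence window.
def count_coincidences(times_a, times_b, window_ns=20.0):
    count = 0
    j = 0
    times_b = sorted(times_b)
    for ta in sorted(times_a):
        # skip detector-B events that arrived too early to match this event
        while j < len(times_b) and times_b[j] < ta - window_ns:
            j += 1
        k = j
        while k < len(times_b) and times_b[k] <= ta + window_ns:
            count += 1
            k += 1
    return count

# Made-up arrival times: three true pairs plus uncorrelated background hits.
det_a = [100.0, 250.0, 400.0, 610.0]
det_b = [105.0, 255.0, 404.0, 900.0]
print(count_coincidences(det_a, det_b))  # -> 3
```

The real experiment does the analogous thing in hardware: the TAC converts each time difference into a pulse height, and the multichannel analyzer histograms those heights into the time-delay spectrum.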

In the end, the group found a violation of Bell’s inequality (and a confirmation of quantum mechanics) by over 13 standard deviations, with the result holding at both short and long separations (up to 6.5 meters). This is great for quantum mechanics!

I think this blog post spent a bit too much time on the theory of Bell’s inequality, which is a shame given how interesting the experimental part is. But hopefully I will have a chance to explore other papers on Bell’s inequality and discover more there!

UPDATE 02/06/2018

So. It’s been almost a year since I last went through these notes, but there is good reason to come back and revisit this paper, as Professor Alain Aspect came to campus and delivered a talk about this exact work. Revisiting my own shoddily prepared notes, I feel shamed into adding some crucial details that I missed the first time I went through this material.

To begin with, perhaps I was lacking in explaining the significance of this result and some of its necessary background. I mentioned the EPR paradox earlier, but why was it such a big deal? The EPR paradox was perhaps the keystone of the Einstein-Bohr debates of the 1930s, in which Einstein tried several times to expose contradictions in quantum mechanics. While he proposed several experiments (gedankenexperiments, to be more accurate) to show that it *was* possible to violate the Heisenberg uncertainty principle or some other aspect of quantum mechanics, Bohr would come back and demonstrate that the proposed experiment had a flaw that prohibited such a measurement. In the end, the EPR paradox stuck because of its apparent violation of locality.

What Einstein proposed with the EPR paper was that there must be a contradiction between locality and quantum mechanics. His thought experiment adheres to a quantum mechanical description of measurement, yet the measurement would appear to violate locality unless there were some additional hidden variable not yet known. And while Bohr sputtered objections to such a theory, it was difficult to come up with a counterargument to an experiment built within a quantum framework.

It wasn’t until the 1960s, 30 years after the original Bohr-Einstein debates, that a resolution was found by John Bell. Why such a long wait, for an inequality that a smart high school student could plausibly derive? Professor Aspect claims it was a matter of interpretation and importance – or lack thereof. He points out that most physicists of the time viewed this debate as abstruse minds arguing over esoteric points of interpretation, rather than something that could fundamentally change the nature of quantum mechanics. After all, the Schrodinger model of QM was working quite well, thank you very much. Both Einstein and Bohr were already in their late 50s at the time, and perhaps that led researchers to avoid wading into a hot debate between two renowned scientists.

After the derivation of the Bell inequality (see above) in 1964, why did it take another 20 years to see the first experimental realizations? One thing I had not realized – the first laser was not fired until 1960, and even then, lasers were still an emerging technology. Creating a laser-driven source that could produce the necessary photon pairs was not as trivial as it might be today!

Aspect’s experiment used lasers to excite calcium atoms because of the particular cascade of transitions available in calcium. This cascade allows an especially large number of accurate signals to be produced on a short time scale, so that statistical experiments could be completed in minutes or hours rather than days or weeks. Professor Aspect shared an interesting note about the polarizing beam splitters used to observe the photons: they were ridiculously expensive at the time, and his lab was unable to procure one. But he had a friend at an optical manufacturing facility who was able to provide one, in exchange for a few bottles of good wine :)

Furthermore, the test proposed by Bell requires the polarization angles of the two detectors to be chosen randomly. Optimally, the choice would be made while the photons are in flight, faster than light could travel between the two sides. This would close the locality loophole – the argument that the photons somehow “know” the polarization angles of the detectors while they are still together, or can transmit that information during their flight. However, the photomultiplier tubes used in these experiments are not lightweight at all – they weighed upwards of 50 pounds and were big and bulky. Instead of physically moving the detectors, the experiment implements a switch that redirects the beam of light to two different photomultipliers, each set at a different angle.
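To get a feel for the timing involved, here is a back-of-the-envelope sketch (my numbers are illustrative, apart from the 6.5 meter separation mentioned above): the switch must change settings faster than light can cross the apparatus.

```python
# Back-of-the-envelope: how fast must the switching be?
C = 299_792_458.0        # speed of light, m/s

separation_m = 6.5       # separation quoted above
transit_ns = separation_m / C * 1e9
print(f"Light transit time over {separation_m} m: {transit_ns:.1f} ns")

# For the choice to be made "in flight", the analyzer setting must change
# on a shorter time scale than the transit time. 10 ns is an assumed,
# illustrative figure, not a number from the paper.
switch_period_ns = 10.0
print("Switching beats light transit:", switch_period_ns < transit_ns)
```

With transit times of only tens of nanoseconds, it is clear why physically rotating 50-pound photomultipliers was never an option, and why a fast optical switch was the way to go.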

It’s a brilliant experiment. Even if the probabilities can be calculated and the optical diagram drawn simply, there was good reason why it took so long to reach a solid statistical validation of the Bell inequality. Not only did the group need to devise a better entangled-photon source, but they also needed to eliminate noise in many of the detection and coincidence channels. Simply a fascinating experiment.

After his talk yesterday on his original 1981 experiment, he followed up today with a fascinating discussion of the Hanbury Brown-Twiss (HBT) effect and the Hong-Ou-Mandel (HOM) effect, both of which he is currently probing with atoms rather than photons. There is now a possible new experiment that could lead to a Bell inequality test for atoms using the HOM effect, an especially fitting conclusion to a long scientific career.