# Polarization states as hidden variables?

This post explores the limits of the physical interpretation of the wavefunction we have been building up in previous posts. It does so by examining whether that interpretation can provide a hidden-variable theory for explaining quantum-mechanical interference. The hidden variable is the polarization state of the photon.

The outcome is as expected: the theory does not work. Hence, this paper clearly shows the limits of any physical or geometric interpretation of the wavefunction.

This post sounds somewhat academic because it is, in fact, the draft of a paper I might try to turn into a journal article. There is a useful addendum to the post below: it offers a more sophisticated analysis of linear and circular polarization states (see: Linear and Circular Polarization States in the Mach-Zehnder Experiment). Have fun with it!

# A physical interpretation of the wavefunction

Duns Scotus wrote: pluralitas non est ponenda sine necessitate. Plurality is not to be posited without necessity. And William of Ockham gave us the intuitive lex parsimoniae: the simplest solution tends to be the correct one. But redundancy in the description does not seem to bother physicists. When explaining the basic axioms of quantum physics in his famous Lectures on quantum mechanics, Richard Feynman writes:

“We are not particularly interested in the mathematical problem of finding the minimum set of independent axioms that will give all the laws as consequences. Redundant truth does not bother us. We are satisfied if we have a set that is complete and not apparently inconsistent.”

Also, most introductory courses on quantum mechanics will show that both ψ = exp(iθ) = exp[i(kx-ωt)] and ψ* = exp(-iθ) = exp[-i(kx-ωt)] = exp[i(ωt-kx)] are acceptable waveforms for a particle that is propagating in the x-direction. Both have the required mathematical properties (as opposed to, say, some real-valued sinusoid). One would then expect some proof of why one is better than the other or, preferably, a discussion of what these two mathematical possibilities might represent. But, no, that does not happen. The physicists conclude that “the choice is a matter of convention and, happily, most physicists use the same convention.”

Instead of making a choice here, we could, perhaps, use the various mathematical possibilities to incorporate spin in the description, as real-life particles – think of electrons and photons here – have two spin states (up or down), as shown below.

Table 1: Matching mathematical possibilities with physical realities?

| Spin and direction | Spin up | Spin down |
| --- | --- | --- |
| Positive x-direction | ψ = exp[i(kx-ωt)] | ψ* = exp[i(ωt-kx)] |
| Negative x-direction | χ = exp[-i(kx+ωt)] | χ* = exp[i(kx+ωt)] |

That would make sense – for several reasons. First, theoretical spin-zero particles do not exist and we should therefore, perhaps, not use the wavefunction to describe them. More importantly, it is relatively easy to show that the weird 720-degree symmetry of spin-1/2 particles collapses into an ordinary 360-degree symmetry and that we, therefore, would have no need to describe them using spinors and other complicated mathematical objects. Indeed, the 720-degree symmetry of the wavefunction for spin-1/2 particles is based on an assumption that the amplitudes C’up = -Cup and C’down = -Cdown represent the same state—the same physical reality. As Feynman puts it: “Both amplitudes are just multiplied by −1 which gives back the original physical system. It is a case of a common phase change.”

In the physical interpretation given in Table 1, these amplitudes do not represent the same state: the minus sign effectively reverses the spin direction. Putting a minus sign in front of the wavefunction amounts to taking its complex conjugate: -ψ = ψ*. But what about the common phase change? There is no common phase change here: Feynman’s argument derives the C’up = -Cup and C’down = -Cdown identities from the following equations: C’up = exp(+iπ)·Cup and C’down = exp(-iπ)·Cdown. The two phase factors are not the same: +π and -π are not the same. In a geometric interpretation of the wavefunction, +π is a counterclockwise rotation over 180 degrees, while -π is a clockwise rotation. We end up at the same point (-1), but it matters how we get there: -1 is a complex number with two different meanings.
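The endpoint-versus-path distinction is easy to illustrate numerically. Here is a small sketch (assuming numpy) that checks both rotations land on -1, but are on opposite sides of the real axis halfway through:

```python
import numpy as np

# Both phase factors land on -1...
plus = np.exp(1j * np.pi)    # counterclockwise rotation over +180 degrees
minus = np.exp(-1j * np.pi)  # clockwise rotation over -180 degrees
print(np.isclose(plus, -1), np.isclose(minus, -1))  # True True

# ...but the paths differ: halfway through, the two rotations sit on
# opposite sides of the real axis.
ccw_half = np.exp(1j * np.pi / 2)   # ≈ +i
cw_half = np.exp(-1j * np.pi / 2)   # ≈ -i
print(ccw_half.imag, cw_half.imag)  # ≈ +1.0 and -1.0
```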

We have written about this at length and, hence, we will not repeat ourselves here. However, this realization – that one of the key propositions in quantum mechanics is basically flawed – led us to try to question an axiom in quantum math that is much more fundamental: the loss of determinism in the description of interference.

The reader should feel reassured: the attempt is, ultimately, not successful—but it is an interesting exercise.

# The loss of determinism in quantum mechanics

The standard MIT course on quantum physics vaguely introduces Bell’s Theorem – labeled as a proof of what is referred to as the inevitable loss of determinism in quantum mechanics – early on. The argument is as follows. If we have a polarizer whose optical axis is aligned with, say, the x-direction, and we have light coming in that is polarized along some other direction, forming an angle α with the x-direction, then we know – from experiment – that the intensity of the light (or the fraction of the beam’s energy, to be precise) that goes through the polarizer will be equal to cos²α.
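As a quick sanity check on that transmission law, here is a sketch (assuming numpy) that treats each photon as an independent yes/no event with probability cos²α; the angle and sample size are arbitrary choices of mine:

```python
import numpy as np

alpha = np.pi / 6               # 30 degrees between polarization and optical axis
p_through = np.cos(alpha) ** 2  # Malus's law: transmitted fraction = cos^2(alpha) = 0.75

# Treat each photon as an independent yes/no event with that probability.
rng = np.random.default_rng(42)
n = 100_000
transmitted = rng.random(n) < p_through
print(transmitted.mean())  # ≈ 0.75: the photon count reproduces the intensity fraction
```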

But, in quantum mechanics, we need to analyze this in terms of photons: a fraction cos²α of the photons must go through (because photons carry energy, and that is the fraction of the energy that is transmitted) and a fraction 1-cos²α must be absorbed. The mentioned MIT course then writes the following:

“If all the photons are identical, why is it that what happens to one photon does not happen to all of them? The answer in quantum mechanics is that there is indeed a loss of determinism. No one can predict if a photon will go through or will get absorbed. The best anyone can do is to predict probabilities. Two escape routes suggest themselves. Perhaps the polarizer is not really a homogeneous object and depending exactly on where the photon is it either gets absorbed or goes through. Experiments show this is not the case.

A more intriguing possibility was suggested by Einstein and others. A possible way out, they claimed, was the existence of hidden variables. The photons, while apparently identical, would have other hidden properties, not currently understood, that would determine with certainty which photon goes through and which photon gets absorbed. Hidden variable theories would seem to be untestable, but surprisingly they can be tested. Through the work of John Bell and others, physicists have devised clever experiments that rule out most versions of hidden variable theories. No one has figured out how to restore determinism to quantum mechanics. It seems to be an impossible task.”

The student is left bewildered here. Are there only two escape routes? And is this really how polarization works? Are all photons identical? The Uncertainty Principle tells us that their momentum, position, or energy will be somewhat random. Hence, we do not need to assume that the polarizer is nonhomogeneous, but we do need to think about what might distinguish the individual photons.

Considering the nature of the problem – a photon goes through or it doesn’t – it would be nice if we could find a binary identifier. The most obvious candidate for a hidden variable would be the polarization direction. If we say that light is polarized along the x-direction, we should, perhaps, distinguish between a plus and a minus direction? Let us explore this idea.

# Linear polarization states

The simple experiment above – light going through a polaroid – involves linearly polarized light. We can easily distinguish between left- and right-hand circular polarization, but if we have linearly polarized light, can we distinguish between a plus and a minus direction? Maybe. Maybe not. We can surely think about different relative phases and how they could potentially affect the interaction with the molecules in the polarizer.

Suppose the light is polarized along the x-direction. We know the component of the electric field vector along the y-axis will then be equal to Ey = 0, and the magnitude of the x-component of E will be given by a sinusoid. However, here we have two distinct possibilities: Ex = cos(ω·t) or, alternatively, Ex = sin(ω·t). These are the same function but – crucially important – with a phase difference of 90°: sin(ω·t) = cos(ω·t - π/2).

Figure 1: Two varieties of linearly polarized light?

Would this matter? Sure. We can easily come up with some classical explanations of why this would matter. Think, for example, of how birefringent material is used to make quarter-wave plates. In fact, the more obvious question is: why would this not make a difference?

Of course, this triggers another question: why would we have two possibilities only? What if we add an additional 90° shift to the phase? We know that cos(ω·t + π) = -cos(ω·t), and we cannot reduce this to cos(ω·t) or sin(ω·t). Hence, if we think in terms of 90° phase differences, then -cos(ω·t) = cos(ω·t + π) and -sin(ω·t) = sin(ω·t + π) are different waveforms too. In fact, why should we think in terms of 90° phase shifts only? Why shouldn’t we think of a continuum of linear polarization states?
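These phase relations are easy to verify numerically. A small sketch (assuming numpy) checking the identities above, plus the fact that all four waveforms carry the same average intensity, so intensity alone cannot distinguish them:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 500, endpoint=False)  # one full period

# sin is cos delayed by a quarter cycle (90 degrees)...
assert np.allclose(np.sin(t), np.cos(t - np.pi / 2))
# ...and a further 180-degree shift flips the sign:
assert np.allclose(np.cos(t + np.pi), -np.cos(t))
assert np.allclose(np.sin(t + np.pi), -np.sin(t))

# All four waveforms carry the same time-averaged intensity (mean of E^2).
for Ex in (np.cos(t), np.sin(t), -np.cos(t), -np.sin(t)):
    print(round(np.mean(Ex ** 2), 6))  # 0.5 each time
```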

We have no sensible answer to that question. We can only say: this is quantum mechanics. We think of a photon as a spin-one particle and, for that matter, as a rather particular one, because it lacks the zero state: its spin is either up, or down. We may now also assume two (linear) polarization states for the molecules in our polarizer and suggest a basic theory of interaction that may or may not explain this very basic fact: a photon gets absorbed, or it gets transmitted. The theory is that if the photon and the molecule are in the same (linear) polarization state, then we will have constructive interference and, somehow, a photon gets through. If the linear polarization states are opposite, then we will have destructive interference and, somehow, the photon is absorbed. Hence, our hidden variables theory for the simple situation that we discussed above (a photon does or does not go through a polarizer) can be summarized as follows:

| Linear polarization state | Incoming photon up (+) | Incoming photon down (-) |
| --- | --- | --- |
| Polarizer molecule up (+) | Constructive interference: photon goes through | Destructive interference: photon is absorbed |
| Polarizer molecule down (-) | Destructive interference: photon is absorbed | Constructive interference: photon goes through |
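The rule in the table boils down to two deterministic lines of code. A minimal sketch (the function name and the ±1 encoding of the states are mine, for illustration only):

```python
def goes_through(photon_state: int, molecule_state: int) -> bool:
    """Toy hidden-variable rule: the photon is transmitted only when its
    linear polarization state (+1 or -1) matches that of the polarizer
    molecule; otherwise it is absorbed."""
    return photon_state == molecule_state

# Fully deterministic: the same hidden variables always give the same outcome.
assert goes_through(+1, +1) and goes_through(-1, -1)          # constructive: through
assert not goes_through(+1, -1) and not goes_through(-1, +1)  # destructive: absorbed
```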

Nice. No loss of determinism here. But does it work? The quantum-mechanical mathematical framework is not there to explain how a polarizer could possibly work. It is there to explain the interference of a particle with itself. In Feynman’s words, this is the phenomenon “which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics.”

So, let us try our new theory of polarization states as a hidden variable on one of those interference experiments. Let us choose the standard one: the Mach-Zehnder interferometer experiment.

# Polarization states as hidden variables in the Mach-Zehnder experiment

The setup of the Mach-Zehnder interferometer is well known and should, therefore, probably not require any explanation. We have two beam splitters (BS1 and BS2) and two perfect mirrors (M1 and M2). An incident beam coming from the left is split at BS1 and recombines at BS2, which sends two outgoing beams to the photon detectors D0 and D1. More importantly, the interferometer can be set up to produce a precise interference effect which ensures all the light goes into D0, as shown below. Alternatively, the setup may be altered to ensure all the light goes into D1.

Figure 2: The Mach-Zehnder interferometer

The classical explanation is easy enough. It is only when we think of the beam as consisting of individual photons that we get in trouble. Each photon must then, somehow, interfere with itself which, in turn, requires the photon to, somehow, go through both branches of the interferometer at the same time. This is solved by the magical concept of the probability amplitude: we think of two contributions a and b (see the illustration above) which, just like waves, interfere to produce the desired result – except that we are told we should not try to think of these contributions as actual waves.

So that is the quantum-mechanical explanation, and it sounds crazy, so we do not want to believe it. Our hidden variable theory should now show that the photon travels along one path only. If the apparatus is set up to get all photons in the D0 detector, then we might, perhaps, have a sequence of events like this:

| Photon polarization | At BS1 | At BS2 | Final result |
| --- | --- | --- | --- |
| Up (+) | Photon is reflected | Photon is reflected | Photon goes to D0 |
| Down (-) | Photon is transmitted | Photon is transmitted | Photon goes to D0 |

Of course, we may also set up the apparatus to get all photons in the D1 detector, in which case the sequence of events might be this:

| Photon polarization | At BS1 | At BS2 | Final result |
| --- | --- | --- | --- |
| Up (+) | Photon is reflected | Photon is transmitted | Photon goes to D1 |
| Down (-) | Photon is transmitted | Photon is reflected | Photon goes to D1 |

This is a nice symmetrical explanation that does not involve any quantum-mechanical weirdness. The problem is: it cannot work. Why not? What happens if we block one of the two paths? For example, let us block the lower path in the setup where all photons went to D0. We know – from experiment – that the outcome will be the following:

| Final result | Probability |
| --- | --- |
| Photon is absorbed at the block | 0.50 |
| Photon goes to D0 | 0.25 |
| Photon goes to D1 | 0.25 |

How is this possible? Before blocking the lower path, no photon went to D1: they all went to D0. If our hidden variable theory were correct, the photons that do not get absorbed should still all go to D0, as shown below.

| Photon polarization | At BS1 | At BS2 | Final result |
| --- | --- | --- | --- |
| Up (+) | Photon is reflected | Photon is reflected | Photon goes to D0 |
| Down (-) | Photon is transmitted | Photon is absorbed at the block (it never reaches BS2) | Photon is absorbed |
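For contrast, the standard amplitude calculation reproduces the experimental numbers. Here is a sketch in numpy using a common beam-splitter convention (each reflection contributes a factor i; equal arm lengths assumed), which gives both the all-in-one-detector result and the 0.50/0.25/0.25 split when one path is blocked:

```python
import numpy as np

# 50/50 beam splitter: transmission amplitude 1/sqrt(2), reflection i/sqrt(2).
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

psi_in = np.array([1.0, 0.0])  # photon enters one input port of BS1

# Unblocked interferometer: BS1, then BS2, with equal arm lengths.
psi_out = BS @ (BS @ psi_in)
print(np.abs(psi_out) ** 2)  # ≈ [0, 1]: every photon ends up in one detector

# Now block the lower arm between BS1 and BS2.
psi_arms = BS @ psi_in
p_absorbed = np.abs(psi_arms[1]) ** 2        # ≈ 0.5: absorbed at the block
psi_blocked = np.array([psi_arms[0], 0.0])   # lower-arm amplitude removed
psi_out_blocked = BS @ psi_blocked
print(p_absorbed, np.abs(psi_out_blocked) ** 2)  # ≈ 0.5 and ≈ [0.25, 0.25]
```

With this convention, the “all photons in D0” setup corresponds to one output port; introducing a phase difference between the arms moves the photons to the other port.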

# Conclusion

Our hidden variable theory does not work. Physical or geometric interpretations of the wavefunction are nice, but they do not explain quantum-mechanical interference. Their value is, therefore, didactic only.

Jean Louis Van Belle, 2 November 2018

# References

This paper discusses general principles in physics only. Hence, references are limited to general textbooks and courses. The two key references are the MIT introductory course on quantum physics and Feynman’s Lectures, both of which can be consulted online. Additional references to other material are given in the footnotes.

 Duns Scotus, Commentaria.

 Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 5, Section 5.

 See, for example, MIT’s edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 4, Section 3.

 Photons are spin-one particles but they do not have a spin-zero state.

 Of course, the formulas only give the elementary wavefunction. The wave packet will be a Fourier sum of such functions.

 Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 6, Section 3.

 Jean Louis Van Belle, Euler’s wavefunction (http://vixra.org/abs/1810.0339, accessed on 30 October 2018)

 See: MIT edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 1, Section 3 (Loss of determinism).

 The z-direction is the direction of wave propagation in this example. In quantum mechanics, we often define the direction of wave propagation as the x-direction. This will, hopefully, not confuse the reader. The choice of axes is usually clear from the context.

 Source of the illustration: https://upload.wikimedia.org/wikipedia/commons/7/71/Sine_cosine_one_period.svg.

 Classical theory assumes an atomic or molecular system will absorb a photon and, therefore, be in an excited state (with higher energy). The atomic or molecular system then goes back into its ground state by emitting another photon with the same energy. Hence, we should probably not think in terms of a specific photon getting through.

 Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 1, Section 1.

 Source of the illustration: MIT edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 1, Section 4 (Quantum Superpositions).

# Feynman’s Seminar on Superconductivity (1)

The ultimate challenge for students of Feynman’s iconic Lectures series is, of course, to understand his final one: A Seminar on Superconductivity. As he notes in his introduction to this formidably dense piece, the text does not present the detail of each and every step in the development and, therefore, we’re not supposed to immediately understand everything. As Feynman puts it: we should just believe (more or less) that things would come out if we would be able to go through each and every step. Well… Let’s see. Feynman throws a lot of stuff in here—including, I suspect, some stuff that may not be directly relevant, but that he sort of couldn’t insert into all of his other Lectures. So where do we start?

It took me one long maddening day to figure out the first formula. It says that the amplitude for a particle to go from a to b in a vector potential A (think of a classical magnetic field) is the amplitude for the same particle to go from a to b when there is no field (A = 0) multiplied by the exponential of the line integral of the vector potential times the electric charge divided by Planck’s constant:

⟨b|a⟩ in A = ⟨b|a⟩A = 0·exp[(iq/ħ)·∫ A·ds]

(with the integral taken along the path from a to b). I stared at this for quite a while, but then I recognized the formula for the magnetic effect on an amplitude, which I described in my previous post, and which tells us that a magnetic field will shift the phase of the amplitude of a particle with an amount equal to:

φ = (q/ħ)·∫ A·ds

Hence, if we write ⟨b|a⟩ for A = 0 as ⟨b|a⟩A = 0 = C·exp(iθ), then ⟨b|a⟩ in A will, naturally, be equal to C·exp[i(θ+φ)] = C·exp(iθ)·exp(iφ) = ⟨b|a⟩A = 0·exp(iφ), and so that explains it. 🙂

Alright… Next. Or… Well… Let us briefly re-examine the concept of the vector potential, because we’ll need it a lot. We introduced it in our post on magnetostatics. Let’s briefly re-cap the development there. In Maxwell’s set of equations, two out of the four equations give us the magnetic field: ∇·B = 0 and c²∇×B = j/ε0. We noted the following in this regard:

1. The ∇·B = 0 equation is true, always, unlike the ∇×E = 0 expression, which is true for electrostatics only (no moving charges). So the ∇·B = 0 equation says the divergence of B is zero, always.
2. The divergence of the curl of a vector field is always zero. Hence, if A is some vector field, then div(curl A) = ∇·(∇×A) = 0, always.
3. We can now apply another theorem: if the divergence of a vector field, say D, is zero – so if ∇·D = 0 – then D will be the curl of some other vector field C, so we can write: D = ∇×C. Applying this to ∇·B = 0, we can write:

If ∇·B = 0, then there is an A such that B = ∇×A

So, in essence, we’re just re-defining the magnetic field (B) in terms of some other vector field. To be precise, we write it as the curl of some other vector field, which we refer to as the (magnetic) vector potential. The components of the magnetic field vector can then be re-written as:

Bx = ∂Az/∂y - ∂Ay/∂z, By = ∂Ax/∂z - ∂Az/∂x, Bz = ∂Ay/∂x - ∂Ax/∂y

We need to note an important point here: the equations above suggest that the components of B depend on position only. In other words, we assume static magnetic fields: they do not change with time. That, in turn, assumes steady currents. We will want to extend the analysis to also include magnetodynamics. It complicates the analysis but… Well… Quantum mechanics is complicated. Let us remind ourselves here of Feynman’s re-formulation of Maxwell’s equations as a set of two equations only – expressed in terms of the magnetic (vector) and the electric potential, together with the ∇·A = -(1/c²)·∂φ/∂t gauge condition:

∇²φ - (1/c²)·∂²φ/∂t² = -ρ/ε0

∇²A - (1/c²)·∂²A/∂t² = -j/(ε0c²)

These equations are wave equations, as you can see by writing out, say, the x-component of the second equation:

∂²Ax/∂x² + ∂²Ax/∂y² + ∂²Ax/∂z² - (1/c²)·∂²Ax/∂t² = -jx/(ε0c²)

It is a wave equation in three dimensions. Note that, even in regions where we do not have any charges or currents, we have non-zero solutions for φ and A. These non-zero solutions represent, effectively, the electric and magnetic fields as they travel through free space. As Feynman notes, the advantage of re-writing Maxwell’s equations as we do above is that the two new equations make it immediately apparent that we’re talking electromagnetic waves, really. As he notes, for many practical purposes, it will still be convenient to use the original equations in terms of E and B, but… Well… Not in quantum mechanics, it turns out. As Feynman puts it: “E and B are on the other side of the mountain we have climbed. Now we are ready to cross over to the other side of the peak. Things will look different—we are ready for some new and beautiful views.”
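As a quick check on the wave-equation claim, here is a sketch using sympy: in the free-space case (ρ = j = 0), any profile f travelling at speed c solves the one-dimensional version of these equations.

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
f = sp.Function('f')

# u(x, t) = f(x - c*t): an arbitrary profile moving in the +x direction at speed c.
u = f(x - c * t)

# Plug it into the homogeneous 1D wave equation: d^2u/dx^2 - (1/c^2) d^2u/dt^2 = 0.
residual = sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2
print(sp.simplify(residual))  # 0
```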

Well… Maybe. Appreciating those views, as part of our study of quantum mechanics, does take time and effort, unfortunately. 😦

### The Schrödinger equation in an electromagnetic field

Feynman then jots down Schrödinger’s equation for the same particle (with charge q) moving in an electromagnetic field that is characterized not only by the (scalar) potential Φ but also by a vector potential A:

i·ħ·∂ψ/∂t = (1/2m)·[(ħ/i)∇ - q·A]²ψ + q·Φ·ψ

Now where does that come from? We know the standard formula in an electric field, right? It’s the formula we used to find the energy states of electrons in a hydrogen atom:

i·ħ·∂ψ/∂t = -(1/2)·(ħ²/m)∇²ψ + V·ψ

Of course, it is easy to see that we replaced V by q·Φ, which makes sense: the potential energy of a charge in an electric field is the product of the charge (q) and the (electric) potential (Φ), because Φ is, obviously, the potential energy of the unit charge. It’s also easy to see we can re-write -ħ²·∇²ψ as [(ħ/i)·∇]·[(ħ/i)·∇]ψ because (1/i)·(1/i) = 1/i² = 1/(-1) = -1. 🙂 Alright. So it’s just that -q·A term in the (ħ/i)∇ - q·A expression that we need to explain now.
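That operator identity is a one-liner to check symbolically. A sketch using sympy, in one dimension and without any vector potential:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x)

# The momentum operator (1D, no field): (hbar/i) d/dx
p = lambda f: (hbar / sp.I) * sp.diff(f, x)

# Applying it twice reproduces -hbar^2 d^2/dx^2, because (1/i)*(1/i) = -1.
lhs = p(p(psi))
rhs = -hbar**2 * sp.diff(psi, x, 2)
print(sp.simplify(lhs - rhs))  # 0
```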

Unfortunately, that explanation is not so easy. Feynman basically re-derives Schrödinger’s equation using his trademark historical argument – which did not include any magnetic field – with a vector potential. The re-derivation is rather annoying, and I didn’t have the courage to go through it myself, so you should – just like me – just believe Feynman when he says that, when there’s a vector potential – i.e. when there’s a magnetic field – the (ħ/i)·∇ operator – which is the momentum operator – ought to be replaced by a new momentum operator:

(ħ/i)·∇ - q·A

So… Well… There we are… 🙂 So far, so good? Well… Maybe.

While, as mentioned, you won’t be interested in the mathematical argument, it is probably worthwhile to reproduce Feynman’s more intuitive explanation of why the operator above is what it is. In other words, let us try to understand that -q·A term. Look at the following situation: we’ve got a solenoid here, and some current I is going through it, so there’s a magnetic field B. Think of the dynamics while we turn on this flux. Maxwell’s second equation (∇×E = -∂B/∂t) tells us the line integral of E around a loop will be equal to the time rate of change of the magnetic flux through that loop. The ∇×E = -∂B/∂t equation is a differential equation, of course, so it doesn’t have the integral, but you get the idea – I hope. Now, using the B = ∇×A equation, we can re-write ∇×E = -∂B/∂t as ∇×E = -∂(∇×A)/∂t. This allows us to write the following:

∇×E = -∂(∇×A)/∂t = -∇×(∂A/∂t) ⇔ E = -∂A/∂t

This is a remarkable expression. Note that its derivation is based on the commutativity of the curl and time-derivative operators, which is a property that can easily be explained: if we have a function in two variables – say x and t – then the order of differentiation doesn’t matter: we can first take the derivative with respect to x and then to t or, alternatively, we can first take the time derivative and then do the ∂/∂x operation. So… Well… The curl is, effectively, a derivative with regard to the spatial variables. OK. So what? What’s the point?

Well… If we’d have some charge q, as shown in the illustration above, that would happen to be there as the flux is being switched on, it will experience a force which is equal to F = qE. We can now integrate this over the time interval (t) during which the flux is being built up to get the following:

∫0t F dt = ∫0t m·(dv/dt) dt = m·vt = ∫0t q·E dt = -∫0t q·(∂A/∂t) dt = -q·At

Assuming v0 and A0 are zero, we may drop the time subscript and simply write:

m·v = -q·A

The point is: during the build-up of the magnetic flux, our charge will pick up some (classical) momentum that is equal to p = m·v = −q·A. So… Well… That sort of explains the additional term in our new momentum operator.
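We can check this numerically: ramp up A at the location of a charge that starts at rest, integrate the force F = qE = -q·∂A/∂t, and compare m·v with -q·A at the end. A sketch assuming numpy; the ramp profile and the numbers are arbitrary choices of mine:

```python
import numpy as np

q, m = 1.5, 2.0                  # arbitrary charge and mass
t = np.linspace(0.0, 1.0, 100_001)
A = 0.3 * t**2                   # smooth ramp of the vector potential, A(0) = 0

E = -np.gradient(A, t)           # induced field: E = -dA/dt
dt = t[1] - t[0]
v = np.cumsum(q * E / m) * dt    # v(t) = (1/m) * integral of q*E, with v(0) = 0

print(m * v[-1], -q * A[-1])     # both ≈ -0.45: the charge picked up m*v = -q*A
```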

Note: For some reason I don’t quite understand, Feynman introduces the weird concept of ‘dynamical momentum’, which he defines as the quantity m·v + q·A, so that quantity must be zero in the analysis above. I quickly googled to see why but didn’t invest too much time in the research here. It’s just… Well… A bit puzzling. I don’t really see the relevance of his point here: I am quite happy to go along with the new operator, as it’s rather obvious that introducing changing magnetic fields must, obviously, also have some impact on our wave equations—in classical as well as in quantum mechanics.

### Local conservation of probability

The title of this section in Feynman’s Lecture (yes, still the same Lecture – we’re not switching topics here) is the equation of continuity for probabilities. I find it brilliant, because it confirms my interpretation of the wave function as describing some kind of energy flow. Let me quote Feynman on his endeavor here:

“An important part of the Schrödinger equation for a single particle is the idea that the probability to find the particle at a position is given by the absolute square of the wave function. It is also characteristic of the quantum mechanics that probability is conserved in a local sense. When the probability of finding the electron somewhere decreases, while the probability of the electron being elsewhere increases (keeping the total probability unchanged), something must be going on in between. In other words, the electron has a continuity in the sense that if the probability decreases at one place and builds up at another place, there must be some kind of flow between. If you put a wall, for example, in the way, it will have an influence and the probabilities will not be the same. So the conservation of probability alone is not the complete statement of the conservation law, just as the conservation of energy alone is not as deep and important as the local conservation of energy. If energy is disappearing, there must be a flow of energy to correspond. In the same way, we would like to find a “current” of probability such that if there is any change in the probability density (the probability of being found in a unit volume), it can be considered as coming from an inflow or an outflow due to some current.”

This is it, really! The wave function does represent some kind of energy flow – between a so-called ‘real’ and a so-called ‘imaginary’ space, which are to be defined in terms of directional versus rotational energy, as I try to point out – admittedly, more by appealing to intuition than to mathematical rigor – in that post of mine on the meaning of the wavefunction.

So what is the flow – or probability current, as Feynman refers to it? Well… Here’s the formula:

J = (ħ/2mi)·(ψ*∇ψ - ψ∇ψ*) - (q/m)·A·|ψ|²

Huh? Yes. Don’t worry too much about it right now. The essential point is to understand what this current – denoted by J – actually stands for: it is defined such that any change in the probability density P = |ψ|² can be thought of as an inflow or outflow of probability, i.e. P and J satisfy the continuity equation ∂P/∂t = -∇·J. So what’s next? Well… Nothing. I’ll actually refer you to Feynman now, because I can’t improve on how he explains how pairs of electrons start behaving when temperatures are low enough to render Boltzmann’s Law irrelevant: the kinetic energy that’s associated with temperature can no longer break up electron pairs if temperature comes close to the zero point.
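For a free plane wave (A = 0), that current reduces to the familiar density-times-velocity form, which is easy to verify symbolically. A quick sympy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m, k, omega = sp.symbols('hbar m k omega', positive=True)

# Free plane wave (A = 0) with unit amplitude.
psi = sp.exp(sp.I * (k * x - omega * t))

# Probability current in the A = 0 case: J = (hbar/2mi)*(psi* dpsi/dx - psi dpsi*/dx)
J = (hbar / (2 * m * sp.I)) * (sp.conjugate(psi) * sp.diff(psi, x)
                               - psi * sp.diff(sp.conjugate(psi), x))

# |psi|^2 = 1 here, and the classical velocity is p/m = hbar*k/m:
print(sp.simplify(J))  # hbar*k/m
```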

Huh? What? Electron pairs? Electrons are not supposed to form pairs, are they? They carry the same charge and are, therefore, supposed to repel each other. Well… Yes and no. In my post on the electron orbitals in a hydrogen atom – which just presented Feynman’s presentation on the subject-matter in a, hopefully, somewhat more readable format – we calculated electron orbitals neglecting spin. In Feynman’s words:

“We make another approximation by forgetting that the electron has spin. […] The non-relativistic Schrödinger equation disregards magnetic effects. [However] Small magnetic effects [do] occur because, from the electron’s point-of-view, the proton is a circulating charge which produces a magnetic field. In this field the electron will have a different energy with its spin up than with it down. [Hence] The energy of the atom will be shifted a little bit from what we will calculate. We will ignore this small energy shift. Also we will imagine that the electron is just like a gyroscope moving around in space always keeping the same direction of spin. Since we will be considering a free atom in space the total angular momentum will be conserved. In our approximation we will assume that the angular momentum of the electron spin stays constant, so all the rest of the angular momentum of the atom—what is usually called “orbital” angular momentum—will also be conserved. To an excellent approximation the electron moves in the hydrogen atom like a particle without spin—the angular momentum of the motion is a constant.”

To an excellent approximation… But… Well… Electrons in a metal do form pairs, because they can give up energy in that way and, hence, are more stable that way. Feynman does not go into the details here – I guess because that’s way beyond the undergrad level – but refers to the Bardeen-Cooper-Schrieffer (BCS) theory instead. The authors of that theory got the Nobel Prize in Physics in 1972 – a decade or so after Feynman wrote this particular Lecture – so I must assume the theory is well accepted now. 🙂

Of course, you’ll shout now: Hey! Hydrogen is not a metal! Well… Think again: one of the latest breakthroughs in physics is making hydrogen behave like a metal. 🙂 And I am really talking about the latest breakthrough: Science just published the findings of this experiment last month! 🙂 🙂 In any case, we’re not talking hydrogen here but superconducting materials, to which – as far as we know – the BCS theory does apply.

So… Well… I am done. I just wanted to show you why it’s important to work your way through Feynman’s last Lecture because… Well… Quantum mechanics does explain everything – although the nitty-gritty of it (the Meissner effect, the London equation, flux quantization, etc.) is rather hard to digest. 😦

Don’t give up! I am struggling with the nitty-gritty too! 🙂