Polarization states as hidden variables?

This post explores the limits of the physical interpretation of the wavefunction we have been building up in previous posts. It does so by examining whether that interpretation can be used to provide a hidden-variable theory of quantum-mechanical interference. The hidden variable is the polarization state of the photon.

The outcome is as expected: the theory does not work. Hence, this paper clearly shows the limits of any physical or geometric interpretation of the wavefunction.

This post sounds somewhat academic because it is, in fact, a draft of a paper I might try to turn into an article for a journal. There is a useful addendum to the post below: it offers a more sophisticated analysis of linear and circular polarization states (see: Linear and Circular Polarization States in the Mach-Zehnder Experiment). Have fun with it !

A physical interpretation of the wavefunction

Duns Scotus wrote: pluralitas non est ponenda sine necessitate. Plurality is not to be posited without necessity.[1] And William of Ockham gave us the intuitive lex parsimoniae: the simplest solution tends to be the correct one.[2] But redundancy in the description does not seem to bother physicists. When explaining the basic axioms of quantum physics in his famous Lectures on quantum mechanics, Richard Feynman writes:

“We are not particularly interested in the mathematical problem of finding the minimum set of independent axioms that will give all the laws as consequences. Redundant truth does not bother us. We are satisfied if we have a set that is complete and not apparently inconsistent.”[3]

Also, most introductory courses on quantum mechanics will show that both ψ = exp(iθ) = exp[i(kx-ωt)] and ψ* = exp(-iθ) = exp[-i(kx-ωt)] = exp[i(ωt-kx)] = -ψ are acceptable waveforms for a particle that is propagating in the x-direction. Both have the required mathematical properties (as opposed to, say, some real-valued sinusoid). One would then expect some proof of why one would be better than the other or, preferably, a discussion of what these two mathematical possibilities might represent—but, no. That does not happen. The physicists conclude that “the choice is a matter of convention and, happily, most physicists use the same convention.”[4]

Instead of making a choice here, we could, perhaps, use the various mathematical possibilities to incorporate spin in the description, as real-life particles – think of electrons and photons here – have two spin states[5] (up or down), as shown below.

Table 1: Matching mathematical possibilities with physical realities?[6]

Spin and direction   | Spin up            | Spin down
Positive x-direction | ψ = exp[i(kx-ωt)]  | ψ* = exp[i(ωt-kx)]
Negative x-direction | χ = exp[i(ωt-kx)]  | χ* = exp[i(kx+ωt)]

That would make sense – for several reasons. First, theoretical spin-zero particles do not exist and we should therefore, perhaps, not use the wavefunction to describe them. More importantly, it is relatively easy to show that the weird 720-degree symmetry of spin-1/2 particles collapses into an ordinary 360-degree symmetry and that we, therefore, would have no need to describe them using spinors and other complicated mathematical objects.[7] Indeed, the 720-degree symmetry of the wavefunction for spin-1/2 particles is based on an assumption that the amplitudes C’up = -Cup and C’down = -Cdown represent the same state—the same physical reality. As Feynman puts it: “Both amplitudes are just multiplied by −1 which gives back the original physical system. It is a case of a common phase change.”[8]

In the physical interpretation given in Table 1, these amplitudes do not represent the same state: the minus sign effectively reverses the spin direction. Putting a minus sign in front of the wavefunction amounts to taking its complex conjugate: -ψ = ψ*. But what about the common phase change? There is no common phase change here: Feynman’s argument derives the C’up = -Cup and C’down = -Cdown identities from the following equations: C’up = e+iπ·Cup and C’down = e-iπ·Cdown. The two phase factors are not the same: +π and -π are not the same. In a geometric interpretation of the wavefunction, +π is a counterclockwise rotation over 180 degrees, while -π is a clockwise rotation. We end up at the same point (-1), but it matters how we get there: -1 is a complex number with two different meanings.
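To make that last point concrete, here is a trivial numerical check (a Python sketch, added for illustration only – it is not part of the argument itself): both phase factors end up at -1, but the intermediate points trace opposite halves of the unit circle, so the two rotations are geometrically distinct.

```python
import numpy as np

# Sample the two phase rotations: 0 to +pi (counterclockwise) and 0 to -pi (clockwise).
phi = np.linspace(0, np.pi, 5)
ccw = np.exp(+1j * phi)   # counterclockwise path
cw = np.exp(-1j * phi)    # clockwise path

# Both paths end at the same point (-1)...
print(np.isclose(ccw[-1], -1), np.isclose(cw[-1], -1))   # True True

# ...but they get there through different points: +i versus -i halfway through, for example.
print(np.round(ccw[2], 3), np.round(cw[2], 3))
```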

We have written about this at length and, hence, we will not repeat ourselves here.[9] However, this realization – that one of the key propositions in quantum mechanics is basically flawed – led us to try to question an axiom in quantum math that is much more fundamental: the loss of determinism in the description of interference.

The reader should feel reassured: the attempt is, ultimately, not successful—but it is an interesting exercise.

The loss of determinism in quantum mechanics

The standard MIT course on quantum physics vaguely introduces Bell’s Theorem – labeled as a proof of what is referred to as the inevitable loss of determinism in quantum mechanics – early on. The argument is as follows. If we have a polarizer whose optical axis is aligned with, say, the x-direction, and we have light coming in that is polarized along some other direction, forming an angle α with the x-direction, then we know – from experiment – that the intensity of the light (or the fraction of the beam’s energy, to be precise) that goes through the polarizer will be equal to cos²α.

But, in quantum mechanics, we need to analyze this in terms of photons: a fraction cos²α of the photons must go through (because photons carry energy and that’s the fraction of the energy that is transmitted) and a fraction 1 − cos²α must be absorbed. The mentioned MIT course then writes the following:

“If all the photons are identical, why is it that what happens to one photon does not happen to all of them? The answer in quantum mechanics is that there is indeed a loss of determinism. No one can predict if a photon will go through or will get absorbed. The best anyone can do is to predict probabilities. Two escape routes suggest themselves. Perhaps the polarizer is not really a homogeneous object and depending exactly on where the photon is it either gets absorbed or goes through. Experiments show this is not the case.

A more intriguing possibility was suggested by Einstein and others. A possible way out, they claimed, was the existence of hidden variables. The photons, while apparently identical, would have other hidden properties, not currently understood, that would determine with certainty which photon goes through and which photon gets absorbed. Hidden variable theories would seem to be untestable, but surprisingly they can be tested. Through the work of John Bell and others, physicists have devised clever experiments that rule out most versions of hidden variable theories. No one has figured out how to restore determinism to quantum mechanics. It seems to be an impossible task.”[10]
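For concreteness, the non-deterministic rule the course describes boils down to this: each photon is transmitted with probability cos²α, independently of all the others. Here is a minimal Monte Carlo sketch of that rule (Python; the random seed and the photon count are arbitrary choices of the sketch):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def transmitted_fraction(alpha, n_photons=100_000):
    """Each photon passes with probability cos^2(alpha) - the standard quantum rule."""
    p = np.cos(alpha) ** 2
    return np.mean(rng.random(n_photons) < p)

for alpha_deg in (0, 30, 45, 60, 90):
    a = np.radians(alpha_deg)
    print(alpha_deg, round(float(transmitted_fraction(a)), 3), round(float(np.cos(a) ** 2), 3))
# The ensemble fraction reproduces the classical intensity law, but the rule
# says nothing about which individual photon gets through.
```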

The student is left bewildered by this argument. Are there only two escape routes? And is this really how polarization works? Are all photons identical? The Uncertainty Principle tells us that their momentum, position, or energy will be somewhat random. Hence, we do not need to assume that the polarizer is nonhomogeneous, but we do need to think about what might distinguish the individual photons.

Considering the nature of the problem – a photon goes through or it doesn’t – it would be nice if we could find a binary identifier. The most obvious candidate for a hidden variable would be the polarization direction. If we say that light is polarized along the x-direction, should we, perhaps, distinguish between a plus and a minus direction? Let us explore this idea.

Linear polarization states

The simple experiment above involves linearly polarized light going through a polaroid. We can easily distinguish between left- and right-hand circular polarization, but if we have linearly polarized light, can we distinguish between a plus and a minus direction? Maybe. Maybe not. We can surely think about different relative phases and how they could potentially affect the interaction with the molecules in the polarizer.

Suppose the light is polarized along the x-direction. We know the component of the electric field vector along the y-axis[11] will then be equal to Ey = 0, and the magnitude of the x-component of E will be given by a sinusoid. However, here we have two distinct possibilities: Ex = cos(ω·t) or, alternatively, Ex = sin(ω·t). These are the same functions but – crucially important – with a phase difference of 90°: sin(ω·t) = cos(ω·t − π/2).

  Figure 1: Two varieties of linearly polarized light?[12]


Would this matter? Sure. We can easily come up with some classical explanations of why this would matter. Think, for example, of how birefringent material is used to make quarter-wave plates. In fact, the more obvious question is: why would this not make a difference?

Of course, this triggers another question: why would we have two possibilities only? What if we add an additional 90° shift to the phase? We know that cos(ω·t + π) = –cos(ω·t). We cannot reduce this to cos(ω·t) or sin(ω·t). Hence, if we think in terms of 90° phase differences, then –cos(ω·t) = cos(ω·t + π)  and –sin(ω·t) = sin(ω·t + π) are different waveforms too. In fact, why should we think in terms of 90° phase shifts only? Why shouldn’t we think of a continuum of linear polarization states?

We have no sensible answer to that question. We can only say: this is quantum mechanics. We think of a photon as a spin-one particle and, for that matter, as a rather particular one, because it misses the zero state: it is either up, or down. We may now also assume two (linear) polarization states for the molecules in our polarizer and suggest a basic theory of interaction that may or may not explain this very basic fact: a photon gets absorbed, or it gets transmitted. The theory is that if the photon and the molecule are in the same (linear) polarization state, then we will have constructive interference and, somehow, a photon gets through.[13] If the linear polarization states are opposite, then we will have destructive interference and, somehow, the photon is absorbed. Hence, our hidden variables theory for the simple situation that we discussed above (a photon does or does not go through a polarizer) can be summarized as follows:

Linear polarization state   | Incoming photon up (+)                          | Incoming photon down (-)
Polarizer molecule up (+)   | Constructive interference: photon goes through  | Destructive interference: photon is absorbed
Polarizer molecule down (-) | Destructive interference: photon is absorbed    | Constructive interference: photon goes through
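The rule in this table is, of course, trivially deterministic. Just to spell it out, here is a minimal sketch (Python, with a hypothetical ±1 encoding for the two polarization states):

```python
def photon_fate(photon_state: int, molecule_state: int) -> str:
    """Hidden-variable rule from the table above: +1 = 'up', -1 = 'down'.
    Matching states -> constructive interference -> photon goes through;
    opposite states -> destructive interference -> photon is absorbed."""
    return "transmitted" if photon_state == molecule_state else "absorbed"

for photon in (+1, -1):
    for molecule in (+1, -1):
        print(photon, molecule, photon_fate(photon, molecule))
```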

Nice. No loss of determinism here. But does it work? The quantum-mechanical mathematical framework is not there to explain how a polarizer could possibly work. It is there to explain the interference of a particle with itself. In Feynman’s words, this is the phenomenon “which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics.”[14]

So, let us try our new theory of polarization states as a hidden variable on one of those interference experiments. Let us choose the standard one: the Mach-Zehnder interferometer experiment.

Polarization states as hidden variables in the Mach-Zehnder experiment

The setup of the Mach-Zehnder interferometer is well known and should, therefore, probably not require any explanation. We have two beam splitters (BS1 and BS2) and two perfect mirrors (M1 and M2). An incident beam coming from the left is split at BS1 and recombines at BS2, which sends two outgoing beams to the photon detectors D0 and D1. More importantly, the interferometer can be set up to produce a precise interference effect which ensures all the light goes into D0, as shown below. Alternatively, the setup may be altered to ensure all the light goes into D1.

Figure 2: The Mach-Zehnder interferometer[15]


The classical explanation is easy enough. It is only when we think of the beam as consisting of individual photons that we get in trouble. Each photon must then, somehow, interfere with itself which, in turn, requires the photon to, somehow, go through both branches of the interferometer at the same time. This is solved by the magical concept of the probability amplitude: we think of two contributions a and b (see the illustration above) which, just like a wave, interfere to produce the desired result—except that we are told that we should not try to think of these contributions as actual waves.
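For reference, that standard amplitude calculation is easy to write down. The sketch below (Python/numpy) uses one common convention for a symmetric 50/50 beam splitter – the matrix (1/√2)·[[1, i], [i, 1]] – and ignores the common phase picked up at the mirrors; those conventions are choices of the sketch, not something dictated by the experiment:

```python
import numpy as np

# One common convention for a symmetric 50/50 beam splitter (an assumption of this sketch).
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

psi_in = np.array([1, 0])    # photon enters BS1 through one input port
psi_mid = BS @ psi_in        # the two contributions a and b on the internal paths
psi_out = BS @ psi_mid       # recombination at BS2 (mirrors add a common phase, ignored here)

print(np.abs(psi_out) ** 2)  # [0. 1.]: all the probability ends up in one detector (call it D0)
```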

That, then, is the quantum-mechanical explanation. It sounds crazy, so we do not want to believe it. Our hidden variable theory should now show that the photon travels along one path only. If the apparatus is set up to get all photons into the D0 detector, then we might, perhaps, have a sequence of events like this:

Photon polarization | At BS1                | At BS2                | Final result
Up (+)              | Photon is reflected   | Photon is reflected   | Photon goes to D0
Down (−)            | Photon is transmitted | Photon is transmitted | Photon goes to D0


Of course, we may also set up the apparatus to get all photons in the D1 detector, in which case the sequence of events might be this:

Photon polarization | At BS1                | At BS2                | Final result
Up (+)              | Photon is reflected   | Photon is transmitted | Photon goes to D1
Down (−)            | Photon is transmitted | Photon is reflected   | Photon goes to D1

This is a nice symmetrical explanation that does not involve any quantum-mechanical weirdness. The problem is: it cannot work. Why not? What happens if we block one of the two paths? For example, let us block the lower path in the setup where all photons went to D0. We know – from experiment – that the outcome will be the following:

Final result                    | Probability
Photon is absorbed at the block | 0.50
Photon goes to D0               | 0.25
Photon goes to D1               | 0.25

How is this possible? Before blocking the lower path, no photon went to D1: they all went to D0. If our hidden variable theory were correct, the photons that do not get absorbed should also go to D0, as shown below.

Photon polarization | At BS1                          | At BS2                        | Final result
Up (+)              | Photon is reflected             | Photon is reflected           | Photon goes to D0
Down (−)            | Photon is absorbed at the block | (photon was already absorbed) | Photon was absorbed
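For completeness: the standard amplitude calculation does reproduce the 0.50/0.25/0.25 split when the lower path is blocked – which is exactly what the deterministic table above cannot do. A minimal sketch, using the same hypothetical beam-splitter convention as before:

```python
import numpy as np

BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)     # same hypothetical 50/50 convention as before

psi_mid = BS @ np.array([1, 0])           # amplitudes on the two paths after BS1
p_absorbed = np.abs(psi_mid[1]) ** 2      # the lower path is blocked: photon absorbed there

psi_blocked = np.array([psi_mid[0], 0])   # only the upper-path amplitude reaches BS2
p_detectors = np.abs(BS @ psi_blocked) ** 2

print(p_absorbed, p_detectors)            # 0.5 [0.25 0.25]
```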

Conclusion

Our hidden variable theory does not work. Physical or geometric interpretations of the wavefunction are nice, but they do not explain quantum-mechanical interference. Their value is, therefore, didactic only.

Jean Louis Van Belle, 2 November 2018

References

This paper discusses general principles in physics only. Hence, references are limited to general textbooks and online courses. The two key references here are the MIT introductory course on quantum physics and Feynman’s Lectures – both of which can be consulted online. Additional references to other material are given in the text itself (see footnotes).

[1] Duns Scotus, Commentaria.

[2] See: https://en.wikipedia.org/wiki/Occam%27s_razor.

[3] Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 5, Section 5.

[4] See, for example, the MIT’s edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 4, Section 3.

[5] Photons are spin-one particles but they do not have a spin-zero state.

[6] Of course, the formulas only give the elementary wavefunction. The wave packet will be a Fourier sum of such functions.

[7] See, for example, https://warwick.ac.uk/fac/sci/physics/staff/academic/mhadley/explanation/spin/, accessed on 30 October 2018.

[8] Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 6, Section 3.

[9] Jean Louis Van Belle, Euler’s wavefunction (http://vixra.org/abs/1810.0339, accessed on 30 October 2018)

[10] See: MIT edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 1, Section 3 (Loss of determinism).

[11] The z-direction is the direction of wave propagation in this example. In quantum mechanics, we often define the direction of wave propagation as the x-direction. This will, hopefully, not confuse the reader. The choice of axes is usually clear from the context.

[12] Source of the illustration: https://upload.wikimedia.org/wikipedia/commons/7/71/Sine_cosine_one_period.svg.

[13] Classical theory assumes an atomic or molecular system will absorb a photon and, therefore, be in an excited state (with higher energy). The atomic or molecular system then goes back into its ground state by emitting another photon with the same energy. Hence, we should probably not think in terms of a specific photon getting through.

[14] Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 1, Section 1.

[15] Source of the illustration: MIT edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 1, Section 4 (Quantum Superpositions).


Surely You’re Joking, Mr Feynman !

I think I cracked the nut. Academics always throw two nasty arguments into the discussion on any geometric or physical interpretations of the wavefunction:

  1. The superposition of wavefunctions is done in the complex space and, hence, the assumption of a real-valued envelope for the wavefunction is not acceptable.
  2. The wavefunction for spin-1/2 particles cannot represent any real object because of its 720-degree symmetry in space. Real objects have the same spatial symmetry as space itself, which is 360 degrees. Hence, physical interpretations of the wavefunction are nonsensical.

Well… I’ve finally managed to deconstruct those arguments – using, paradoxically, Feynman’s own arguments against him. Have a look: click the link to my latest paper ! Enjoy !

The metaphysics of physics

I realized that my last posts were just some crude and rude soundbites, so I thought it would be good to briefly summarize them into something more coherent. Please let me know what you think of it.

The Uncertainty Principle: epistemology versus physics

Anyone who has read anything about quantum physics will know that its concepts and principles are very non-intuitive. Several interpretations have therefore emerged. The mainstream interpretation of quantum mechanics is referred to as the Copenhagen interpretation. It mainly distinguishes itself from more frivolous interpretations (such as the many-worlds and the pilot-wave interpretations) because it is… Well… Less frivolous. Unfortunately, the Copenhagen interpretation itself seems to be subject to interpretation.

One such interpretation may be referred to as radical skepticism – or radical empiricism[1]: we can only say something meaningful about Schrödinger’s cat if we open the box and observe its state. According to this rather particular viewpoint, we cannot be sure of its reality if we don’t make the observation. All we can do is describe its reality by a superposition of the two possible states: dead or alive. That’s Hilbert’s logic[2]: the two states (dead or alive) are mutually exclusive but we add them anyway. If a tree falls in the wood and no one hears it, then it is both standing and not standing. Richard Feynman – who may well be the most eminent representative of mainstream physics – thinks this epistemological position is nonsensical, and I fully agree with him:

“A real tree falling in a real forest makes a sound, of course, even if nobody is there. Even if no one is present to hear it, there are other traces left. The sound will shake some leaves, and if we were careful enough we might find somewhere that some thorn had rubbed against a leaf and made a tiny scratch that could not be explained unless we assumed the leaf were vibrating.” (Feynman’s Lectures, III-2-6)

So what is the mainstream physicist’s interpretation of the Copenhagen interpretation of quantum mechanics then? To fully answer that question, I should encourage the reader to read all of Feynman’s Lectures on quantum mechanics. But then you are reading this because you don’t want to do that, so let me quote from his introductory Lecture on the Uncertainty Principle: “Making an observation affects the phenomenon. The point is that the effect cannot be disregarded or minimized or decreased arbitrarily by rearranging the apparatus. When we look for a certain phenomenon we cannot help but disturb it in a certain minimum way.” (ibidem)

It has nothing to do with consciousness. Reality and consciousness are two very different things. After having concluded the tree did make a noise, even if no one was there to hear it, he wraps up the philosophical discussion as follows: “We might ask: was there a sensation of sound? No, sensations have to do, presumably, with consciousness. And whether ants are conscious and whether there were ants in the forest, or whether the tree was conscious, we do not know. Let us leave the problem in that form.” In short, I think we can all agree that the cat is dead or alive, or that the tree is standing or not standing—regardless of the observer. It’s a binary situation. Not something in-between. The box obscures our view. That’s all. There is nothing more to it.

Of course, in quantum physics, we don’t study cats but look at the behavior of photons and electrons (we limit our analysis to quantum electrodynamics – so we won’t discuss quarks or other sectors of the so-called Standard Model of particle physics). The question then becomes: what can we reasonably say about the electron – or the photon – before we observe it, or before we make any measurement? Think of the Stern-Gerlach experiment, which tells us that we’ll always measure the angular momentum of an electron – along any axis we choose – as either +ħ/2 or, else, as -ħ/2. So what’s its state before it enters the apparatus? Do we have to assume it has some definite angular momentum, and that its value is as binary as the state of our cat (dead or alive, up or down)?

We should probably explain what we mean by a definite angular momentum. It’s a concept from classical physics, and it assumes a precise value (or magnitude) along some precise direction. We may challenge these assumptions. The direction of the angular momentum may be changing all the time, for example. If we think of the electron as a pointlike charge – whizzing around in its own space – then the concept of a precise direction of its angular momentum becomes quite fuzzy, because it changes all the time. And if its direction is fuzzy, then its value will be fuzzy as well. In classical physics, such fuzziness is not allowed, because angular momentum is conserved: it takes an outside force – or torque – to change it. But in quantum physics, we have the Uncertainty Principle: some energy (force over a distance, remember) can be borrowed – so to speak – as long as it’s swiftly being returned – within the quantitative limits set by the Uncertainty Principle: ΔE·Δt = ħ/2.

Mainstream physicists – including Feynman – do not try to think about this. For them, the Stern-Gerlach apparatus is just like Schrödinger’s box: it obscures the view. The cat is dead or alive, and each of the two states has some probability – but they must add up to one – and so they will write the state of the electron before it enters the apparatus as the superposition of the up and down states. I must assume you’ve seen this before:

|ψ〉 = Cup|up〉 + Cdown|down〉

It’s the so-called Dirac or bra-ket notation. Cup is the amplitude for the electron spin to be equal to +ħ/2 along the chosen direction – which we refer to as the z-direction because we will choose our reference frame such that the z-axis coincides with this chosen direction – and, likewise, Cdown is the amplitude for the electron spin to be equal to -ħ/2 (along the same direction, obviously). Cup and Cdown will be functions, and the associated probabilities will vary sinusoidally – with a phase difference so as to make sure both add up to one.
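As an aside, the standard parametrization for a spin-1/2 state prepared along an axis at angle θ to the z-axis is Cup = cos(θ/2) and Cdown = sin(θ/2) (possibly with a relative phase) – that parametrization is textbook material, not something specific to this post. A quick numerical sketch of how the two probabilities vary while always adding up to one:

```python
import numpy as np

theta = np.linspace(0, np.pi, 7)       # angle between the preparation axis and the z-axis
C_up = np.cos(theta / 2)               # amplitude for measuring +hbar/2 along z
C_down = np.sin(theta / 2)             # amplitude for measuring -hbar/2 along z

P_up, P_down = np.abs(C_up) ** 2, np.abs(C_down) ** 2
print(np.allclose(P_up + P_down, 1))   # True: the two probabilities always add up to one
print(np.round(P_up, 3))               # 1.0 -> 0.5 -> 0.0 as theta goes from 0 to pi
```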

The model is consistent, but it feels like a mathematical trick. This description of reality – if that’s what it is – does not feel like a model of a real electron. It’s like reducing the cat in our box to the mentioned fuzzy state of being alive and dead at the same time. Let’s try to come up with something more exciting. 😊

[1] Academics will immediately note that radical empiricism and radical skepticism are very different epistemological positions but we are discussing some basic principles in physics here rather than epistemological theories.

[2] The reference to Hilbert’s logic refers to Hilbert spaces: a Hilbert space is an abstract vector space. Its properties allow us to work with quantum-mechanical states, which become state vectors. You should not confuse them with the real or complex vectors you’re used to. The only thing state vectors have in common with real or complex vectors is that (1) we also need a base (aka a representation in quantum mechanics) to define them and (2) that we can make linear combinations.

The ‘flywheel’ electron model

Physicists describe the reality of electrons by a wavefunction. If you are reading this article, you know what a wavefunction looks like: it is a superposition of elementary wavefunctions. These elementary wavefunctions are written as Ai·exp(-iθi), so they have an amplitude Ai and an argument θi = (Ei/ħ)·t – (pi/ħ)·x. Let’s forget about uncertainty, so we can drop the index (i) and think of a geometric interpretation of A·exp(-iθ) = A·e-iθ.

Here we have a weird thing: physicists think the minus sign in the exponent (-iθ) should always be there: the convention is that we get the imaginary unit (i) by a 90° rotation of the real unit (1) – but the rotation is counterclockwise. I like to think a rotation in the clockwise direction must also describe something real. Hence, if we are seeking a geometric interpretation, then we should explore the two mathematical possibilities: A·e-iθ and A·e+iθ. I like to think these two wavefunctions describe the same electron but with opposite spin. How should we visualize this? I like to think of A·e-iθ and A·e+iθ as two-dimensional harmonic oscillators:

e-iθ = cos(-θ) + i·sin(-θ) = cosθ – i·sinθ

e+iθ = cosθ + i·sinθ
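A trivial numerical check of the two rotation senses (a Python sketch, just to illustrate the point): the cosine components of e+iθ and e-iθ are identical, while the sine components have opposite sign, so the point goes around the circle in opposite directions.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 9)
one_way = np.exp(+1j * theta)      # counterclockwise rotation
other_way = np.exp(-1j * theta)    # clockwise rotation

print(np.allclose(one_way.real, other_way.real))     # True: same cosine component
print(np.allclose(one_way.imag, -other_way.imag))    # True: opposite sine component
```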

So we may want to imagine our electron as a pointlike electric charge (see the green dot in the illustration below) spinning around some center in either of the two possible directions. The cosine keeps track of the oscillation in one dimension, while the sine (plus or minus) keeps track of the oscillation in a direction that is perpendicular to the first one.

Figure 1: A pointlike charge in orbit


So we have a weird oscillator in two dimensions here, and we may calculate the energy in this oscillation. To calculate such energy, we need a mass concept. We only have a charge here, but a (moving) charge has an electromagnetic mass. Now, the electromagnetic mass of the electron’s charge may or may not explain all the mass of the electron (most physicists think it doesn’t) but let’s assume it does for the sake of the model that we’re trying to build up here. The point is: the theory of electromagnetic mass gives us a very simple explanation for the concept of mass here, and so we’ll use it for the time being. So we have some mass oscillating in two directions simultaneously: we basically assume space is, somehow, elastic. We have worked out the V-2 engine metaphor before, so we won’t repeat ourselves here.

Figure 2: A perpetuum mobile?


The model suggests that the following – previously unrelated but structurally similar – formulas may be related:

  1. The energy of an oscillator: E = (1/2)·m·a²·ω²
  2. Kinetic energy: E = (1/2)·m·v²
  3. The rotational (kinetic) energy that’s stored in a flywheel: E = (1/2)·I·ω² = (1/2)·m·r²·ω²
  4. Einstein’s energy-mass equivalence relation: E = m·c²

Of course, we are mixing relativistic and non-relativistic formulas here, and there’s the 1/2 factor – but these are minor issues. For example, we were talking about not one but two oscillators, so we should add their energies: (1/2)·m·a²·ω² + (1/2)·m·a²·ω² = m·a²·ω². Also, one can show that the classical formula for kinetic energy (i.e. E = (1/2)·m·v²) morphs into E = m·c² when we use the relativistically correct force equation for an oscillator. So, yes, our metaphor – or our suggested physical interpretation of the wavefunction, I should say – makes sense.

If you know something about physics, then you know the concept of the electromagnetic mass – its mathematical derivation, that is – gives us the classical electron radius, aka the Thomson radius. It’s the smallest of a trio of radii that are relevant when discussing electrons: the other two are the Bohr radius and the Compton scattering radius. The Thomson radius is used in the context of elastic scattering: the frequency of the incident particle (usually a photon), and the energy of the electron itself, do not change. In contrast, Compton scattering does change the frequency of the photon that is being scattered, and also impacts the energy of our electron. [As for the Bohr radius, you know that’s the radius of an electron orbital, roughly speaking – or the size of a hydrogen atom, I should say.]

Now, if we combine the E = m·a²·ω² and E = m·c² equations, then a·ω must be equal to c, right? Can we show this? Maybe. It is easy to see that we get the desired equality by substituting the Compton scattering radius r = ħ/(m·c) for the amplitude of the oscillation (a), and by using the Planck relation (ω = E/ħ) for the (angular) frequency of the oscillation:

a·ω = [ħ/(m·c)]·[E/ħ] = E/(m·c) = m·c²/(m·c) = c
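The identity is easy to verify numerically for an electron, using CODATA values from scipy.constants (this is only a consistency check of the algebra above, not independent support for the model):

```python
from scipy.constants import hbar, m_e, c

E = m_e * c ** 2          # electron rest energy
a = hbar / (m_e * c)      # (reduced) Compton radius, about 3.86e-13 m
omega = E / hbar          # Planck relation, about 7.76e20 rad/s

print(a * omega)          # 299792458.0 m/s
print(a * omega / c)      # 1.0
```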

We get a wonderfully simple geometric model of an electron here: an electric charge that spins around in a plane. Its radius is the Compton electron radius – which makes sense – and the tangential velocity of our spinning charge is the speed of light – which may or may not make sense. Of course, we need an explanation of why this spinning charge doesn’t radiate its energy away – but then we don’t have such an explanation anyway. All we can say is that the electron charge seems to be spinning in its own space – that it’s racing along a geodesic. It’s just like mass creates its own space here: according to Einstein’s general relativity theory, gravity becomes a pseudo-force—literally: no real force. How? I am not sure: the model here assumes the medium – empty space – is, somehow, perfectly elastic: the electron constantly borrows energy from one direction and then returns it to the other – so to speak. A crazy model, yes – but is there anything better? We only want to present a metaphor here: a possible visualization of quantum-mechanical models.

However, if this model is to represent anything real, then many more questions need to be answered. For starters, let’s think about an interpretation of the results of the Stern-Gerlach experiment.

Precession

A spinning charge is a tiny magnet – and so it’s got a magnetic moment, which we need to explain the Stern-Gerlach experiment. But it doesn’t explain the discrete nature of the electron’s angular momentum: it’s either +ħ/2 or -ħ/2, nothing in-between, and that’s the case along any direction we choose. How can we explain this? Also, space is three-dimensional. Why would electrons spin in a perfect plane? The answer is: they don’t.

Indeed, the corollary of the above-mentioned binary value of the angular momentum is that the angular momentum – or the electron’s spin – is never completely along any direction. This may or may not be explained by the precession of a spinning charge in a field, which is illustrated below (illustration taken from Feynman’s Lectures, II-35-3).

Figure 3: Precession of an electron in a magnetic field

So we do have an oscillation in three dimensions here, really – even if our wavefunction is a two-dimensional mathematical object. Note that the measurement (or the Stern-Gerlach apparatus in this case) establishes a line of sight and, therefore, a reference frame, so ‘up’ and ‘down’, ‘left’ and ‘right’, and ‘in front’ and ‘behind’ get meaning. In other words, we establish a real space. The question then becomes: how and why does an electron sort of snap into place?

The geometry of the situation suggests the logical angle of the angular momentum vector should be 45°. Now, if the value of its z-component (i.e. its projection on the z-axis) is to be equal to ħ/2, then the magnitude of J itself should be larger. To be precise, it should be equal to ħ/√2 ≈ 0.7·ħ (just apply Pythagoras’ Theorem). Is that value compatible with our flywheel model?

Maybe. Let’s see. The classical formula for the magnetic moment is μ = I·A, with I the (effective) current and A the (surface) area. The notation is confusing because I is also used for the moment of inertia, or rotational mass, but… Well… Let’s do the calculation. The effective current is the electron charge (qe) divided by the period (T) of the orbital revolution: I = qe/T. The period of the orbit is the time the electron needs to complete one loop, so T is equal to the circumference of the loop (2π·a) divided by the tangential velocity (vt). Now, we suggest vt = r·ω = a·ω = c and, for a, we still use the Compton radius a = ħ/(m·c). Finally, the formula for the area is A = π·a², so we get:

μ = I·A = [qe/T]·π·a² = [qe·c/(2π·a)]·[π·a²] = [(qe·c)/2]·a = [(qe·c)/2]·[ħ/(m·c)] = [qe/(2m)]·ħ

In a classical analysis, we have the following relation between angular momentum and magnetic moment:

μ = (qe/2m)·J

Hence, we find that the angular momentum J is equal to ħ, so that’s twice the measured value. We’ve got a problem. We would have hoped to find ħ/2 or ħ/√2. Perhaps it’s because a = ħ/(m·c) is the so-called reduced Compton scattering radius…

Well… No.
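The arithmetic itself is easy to check, though: the μ = [qe/(2m)]·ħ value found above is just the Bohr magneton, so the factor-of-two tension shows up directly in the numbers (a Python sketch of the calculation above, using scipy.constants):

```python
from math import pi
from scipy.constants import hbar, m_e, c, e, physical_constants

a = hbar / (m_e * c)                   # Compton radius used as the orbit radius
T = 2 * pi * a / c                     # orbital period for a tangential velocity c
mu = (e / T) * pi * a ** 2             # mu = I*A: effective current times loop area

print(mu)                                        # ~9.274e-24 J/T
print(physical_constants['Bohr magneton'][0])    # the Bohr magneton: the same number
print(2 * m_e * mu / (e * hbar))                 # J/hbar = 1.0, i.e. twice the measured hbar/2
```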

Maybe we’ll find the solution one day. I think it’s already quite nice we have a model that’s accurate up to a factor of 1/2 or 1/√2. 😊

Post scriptum: I’ve turned this into a small article which may or may not be more readable. You can link to it here. Comments are more than welcome.

A Survivor’s Guide to Quantum Mechanics?

When modeling electromagnetic waves, the notion of left versus right circular polarization is quite clear and fully integrated in the mathematical treatment. In contrast, quantum math sticks to the very conventional idea that the imaginary unit (i) is – always! – a counter-clockwise rotation by 90 degrees. We all know that –i would do just as well as an imaginary unit as i, because the definition of the imaginary unit says the only requirement is that its square has to be equal to –1, and (–i)² is also equal to –1.

So we actually have two imaginary units: i and –i. However, physicists stubbornly think there is only one direction for measuring angles, and that is counter-clockwise. That’s a mathematical convention, Professor: it’s something in your head only. It is not real. Nature doesn’t care about our conventions and, therefore, I feel the spin ‘up’ versus spin ‘down’ should correspond to the two mathematical possibilities: if the ‘up’ state is represented by some complex function, then the ‘down’ state should be represented by its complex conjugate.

This ‘additional’ rule wouldn’t change the basic quantum-mechanical rules – which are written in terms of state vectors in a Hilbert space (and, yes, a Hilbert space is as unreal as it sounds: its rules just say you should separate cats and dogs while adding them – which is very sensible advice, of course). However, it would, most probably (just my intuition – I need to prove it), avoid these crazy 720-degree symmetries which inspire the likes of Penrose to say there is no physical interpretation of the wavefunction.

Oh… As for the title of my post… I think it would be a great title for a book – because I’ll need some space to work it all out. 🙂

Quantum math: garbage in, garbage out?

This post is basically a continuation of my previous one but – as you can see from its title – it is much more aggressive in its language, as I was inspired by a very thoughtful comment on my previous post. Another advantage is that it avoids all of the math. 🙂 It’s… Well… I admit it: it’s just a rant. 🙂 [Those who wouldn’t appreciate the casual style of what follows, can download my paper on it – but that’s much longer and also has a lot more math in it – so it’s a much harder read than this ‘rant’.]

My previous post was actually triggered by an attempt to re-read Feynman’s Lectures on Quantum Mechanics, but in reverse order this time: from the last chapter to the first. [In case you doubt, I did follow the correct logical order when working my way through them for the first time because… Well… There is no other way to get through them otherwise. 🙂 ] But then I was looking at Chapter 20. It’s a Lecture on quantum-mechanical operators – so that’s a topic which, in other textbooks, is usually tackled earlier on. When re-reading it, I realize why people quickly turn away from the topic of physics: it’s a lot of mathematical formulas which are supposed to reflect reality but, in practice, few – if any – of the mathematical concepts are actually being explained. Not in the first chapters of a textbook, not in its middle ones, and… Well… Nowhere, really. Why? Well… To be blunt: I think most physicists themselves don’t really understand what they’re talking about. In fact, as I have pointed out a couple of times already, Feynman himself admits so much:

“Atomic behavior appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to.”

So… Well… If you’d be in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, here you have it: if you don’t understand what physicists are trying to tell you, don’t worry about it, because they don’t really understand it themselves. 🙂

Take the example of a physical state, which is represented by a state vector, which we can combine and re-combine using the properties of an abstract Hilbert space. Frankly, I think the word is very misleading, because it actually doesn’t describe an actual physical state. Why? Well… If we look at this so-called physical state from another angle, then we need to transform it using a complicated set of transformation matrices. You’ll say: that’s what we need to do when going from one reference frame to another in classical mechanics as well, isn’t it?

Well… No. In classical mechanics, we’ll describe the physics using geometric vectors in three dimensions and, therefore, the base of our reference frame doesn’t matter: because we’re using real vectors (such as the electric and magnetic field vectors E and B), our orientation vis-à-vis the object – the line of sight, so to speak – doesn’t matter.

In contrast, in quantum mechanics, it does: Schrödinger’s equation – and the wavefunction – has only two degrees of freedom, so to speak: its so-called real and its imaginary dimension. Worse, physicists refuse to give those two dimensions any geometric interpretation. Why? I don’t know. As I show in my previous posts, it would be easy enough, right? We know both dimensions must be perpendicular to each other, so we just need to decide if both of them are going to be perpendicular to our line of sight. That’s it. We’ve only got two possibilities here which – in my humble view – explain why the matter-wave is different from an electromagnetic wave.

I actually can’t quite believe the craziness when it comes to interpreting the wavefunction: we get everything we’d want to know about our particle through these operators (momentum, energy, position, and whatever else you’d need to know), but mainstream physicists still tell us that the wavefunction is, somehow, not representing anything real. It might be because of that weird 720° symmetry – which, as far as I am concerned, confirms that those state vectors are not the right approach: you can’t represent a complex, asymmetrical shape by a ‘flat’ mathematical object!

Huh? Yes. The wavefunction is a ‘flat’ concept: it has two dimensions only, unlike the real vectors physicists use to describe electromagnetic waves (which we may interpret as the wavefunction of the photon). Those have three dimensions, just like the mathematical space we project on events. Because the wavefunction is flat (think of a rotating disk), we have those cumbersome transformation matrices: each time we shift position vis-à-vis the object we’re looking at (das Ding an sich, as Kant would call it), we need to change our description of it. And our description of it – the wavefunction – is all we have, so that’s our reality. However, because that reality changes as per our line of sight, physicists keep saying the wavefunction (or das Ding an sich itself) is, somehow, not real.

Frankly, I do think physicists should take a basic philosophy course: you can’t describe what goes on in three-dimensional space if you’re going to use flat (two-dimensional) concepts, because the objects we’re trying to describe (e.g. non-symmetrical electron orbitals) aren’t flat. Let me quote one of Feynman’s famous lines on philosophers: “These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depth of the problem.” (Feynman’s Lectures, Vol. I, Chapter 16)

Now, I love Feynman’s Lectures but… Well… I’ve gone through them a couple of times now, so I do think I have an appreciation of the subtleties and depth of the problem now. And I tend to agree with some of the smarter philosophers: if you’re going to use ‘flat’ mathematical objects to describe three- or four-dimensional reality, then such approach will only get you where we are right now, and that’s a lot of mathematical mumbo-jumbo for the poor uninitiated. Consistent mumbo-jumbo, for sure, but mumbo-jumbo nevertheless. 🙂 So, yes, I do think we need to re-invent quantum math. 🙂 The description may look more complicated, but it would make more sense.

I mean… If physicists themselves have had continued discussions on the reality of the wavefunction for almost a hundred years now (Schrödinger published his equation in 1926), then… Well… Then the physicists have a problem. Not the philosophers. 🙂 As to what that new description might look like, see my papers on viXra.org. I firmly believe it can be done. This is just a hobby of mine, but… Well… That’s where my attention will go over the coming years. 🙂 Perhaps quaternions are the answer but… Well… I don’t think so either – for reasons I’ll explain later. 🙂

Post scriptum: There are many nice videos on Dirac’s belt trick or, more generally, on 720° symmetries, but this links to one I particularly like. It clearly shows that the 720° symmetry requires, in effect, a special relation between the observer and the object that is being observed. It is, effectively, like there is a leather belt between them or, in this case, we have an arm between the glass and the person who is holding the glass. So it’s not like we are walking around the object (think of the glass of water) and making a full turn around it, so as to get back to where we were. No. We are turning it around by 360°! That’s a very different thing than just looking at it, walking around it, and then looking at it again. That explains the 720° symmetry: we need to turn it around twice to get it back to its original state. So… Well… The description is more about us and what we do with the object than about the object itself. That’s why I think the quantum-mechanical description is defective.

Should we reinvent wavefunction math?

Preliminary note: This post may cause brain damage. 🙂 If you haven’t worked yourself through a good introduction to physics – including the math – you will probably not understand what this is about. So… Well… Sorry. 😦 But if you have… Then this should be very interesting. Let’s go. 🙂

If you know one or two things about quantum math – Schrödinger’s equation and all that – then you’ll agree the math is anything but straightforward. Personally, I find the most annoying thing about wavefunction math to be those transformation matrices: every time we look at the same thing from a different direction, we need to transform the wavefunction using one or more rotation matrices – and that gets quite complicated !

Now, if you have read any of my posts on this or my other blog, then you know I firmly believe the wavefunction represents something real or… Well… Perhaps it’s just the next best thing to reality: we cannot know das Ding an sich, but the wavefunction gives us everything we would want to know about it (linear or angular momentum, energy, and whatever else we have an operator for). So what am I thinking of? Let me first quote Feynman’s summary interpretation of Schrödinger’s equation (Lectures, III-16-1):

“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”

Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. His analysis there is centered on the local conservation of energy, which makes me think Schrödinger’s equation might be an energy diffusion equation. I’ve written about this ad nauseam in the past, and so I’ll just refer you to one of my papers here for the details, and limit this post to the basics, which are as follows.

The wave equation (so that’s Schrödinger’s equation in its non-relativistic form, which is an approximation that is good enough) is written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t) − i·(V/ħ)·ψ(x, t)

The resemblance with the standard diffusion equation (shown below) is, effectively, very obvious:

∂φ(x, t)/∂t = D·∇²φ(x, t) + S

As Feynman notes, it’s just that imaginary coefficient that makes the behavior quite different. How exactly? Well… You know we get all of those complicated electron orbitals (i.e. the various wave functions that satisfy the equation) out of Schrödinger’s differential equation. We can think of these solutions as (complex) standing waves. They basically represent some equilibrium situation, and the main characteristic of each is their energy level. I won’t dwell on this because – as mentioned above – I assume you master the math. Now, you know that – if we would want to interpret these wavefunctions as something real (which is surely what we want to do!) – the real and imaginary component of a wavefunction will be perpendicular to each other. Let me copy the animation for the elementary wavefunction ψ(θ) = a·e−i∙θ = a·e−i∙(E/ħ)·t = a·cos[(E/ħ)∙t] − i·a·sin[(E/ħ)∙t] once more:

[Animation: a point rotating on the unit circle, with its cosine and sine projections]

So… Well… That 90° angle makes me think of the similarity with the mathematical description of an electromagnetic wave. Let me quickly show you why. For a particle moving in free space – with no external force fields acting on it – there is no potential (U = 0) and, therefore, the Vψ term – which is just the equivalent of the sink or source term S in the diffusion equation – disappears. Therefore, Schrödinger’s equation reduces to:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)

Now, the key difference with the diffusion equation – let me write it for you once again: ∂φ(x, t)/∂t = D·∇²φ(x, t) – is that Schrödinger’s equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations:

  1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)

Huh? Yes. These equations are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙(1/2)∙(ħ/meff)∙∇²ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i² = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i²∙d = − d + i∙c. [Now that we’re getting a bit technical, let me note that meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m.] 🙂 OK. Onwards ! 🙂
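Before moving on, the ‘two equations for the price of one’ point is easy to verify numerically. The sketch below (Python; a free particle, arbitrary units with ħ = m = 1, and a simple finite-difference Laplacian – all choices of the sketch, not of the argument) computes ∂ψ/∂t from the complex equation and checks that its real and imaginary parts satisfy the two coupled real equations:

```python
import numpy as np

hbar = m = 1.0                                   # arbitrary units
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

psi = np.exp(-x**2 + 1j * 2 * x)                 # a Gaussian wave packet with some momentum

lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2   # discrete second derivative
dpsi_dt = 1j * 0.5 * (hbar / m) * lap            # free-particle Schroedinger equation

# The single complex equation is equivalent to the two coupled real equations:
print(np.allclose(dpsi_dt.real, -0.5 * (hbar / m) * lap.imag))   # True
print(np.allclose(dpsi_dt.imag, +0.5 * (hbar / m) * lap.real))   # True
```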

The equations above make me think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = c²·∇×B

Now, these equations – and, I must therefore assume, the other equations above as well – effectively describe a propagation mechanism in spacetime, as illustrated below:

[Illustration: the propagation mechanism of the electromagnetic field]

You know how it works for the electromagnetic field: it’s the interplay between circulation and flux. Indeed, circulation around some axis of rotation creates a flux in a direction perpendicular to it, and that flux causes this, and then that, and it all goes round and round and round. 🙂 Something like that. 🙂 I will let you look up how it goes, exactly. The principle is clear enough. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle.

Now, we know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? I firmly believe they do. The obvious question then is the following: why wouldn’t we represent them as vectors, just like E and B? I mean… Representing them as vectors (I mean real vectors here – something with a magnitude and a direction in a real space – as opposed to state vectors from the Hilbert space) would show they are real, and there would be no need for cumbersome transformations when going from one representational base to another. In fact, that’s why vector notation was invented (sort of): we don’t need to worry about the coordinate frame. It’s much easier to write physical laws in vector notation because… Well… They’re the real thing, aren’t they? 🙂

What about dimensions? Well… I am not sure. However, because we are – arguably – talking about some pointlike charge moving around in those oscillating fields, I would suspect the dimension of the real and imaginary component of the wavefunction will be the same as that of the electric and magnetic field vectors E and B. We may want to recall these:

  1. E is measured in newton per coulomb (N/C).
  2. B is measured in newton per coulomb divided by m/s, so that’s (N/C)/(m/s).

The weird dimension of B is because of the weird force law for the magnetic force. It involves a vector cross product, as shown by Lorentz’ formula:

F = qE + q(v×B)

Of course, it is only one force (one and the same physical reality), as evidenced by the fact that we can write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). [Check it, because you may not have seen this expression before. Just take a piece of paper and think about the geometry of the situation.] Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90°. Hence, if we can agree on a suitable convention for the direction of rotation here, we may boldly write:

B = (1/c)∙ex×E = (1/c)∙iE
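A quick numerical check of the B = (1/c)∙ex×E relation for a linearly polarized plane wave traveling in the x-direction (a Python sketch; the amplitude and wavelength are arbitrary):

```python
import numpy as np

c = 299_792_458.0
E0, k, t = 1.0, 2 * np.pi, 0.0          # unit amplitude, 1 m wavelength, snapshot at t = 0
omega = k * c
x = np.linspace(0, 3, 7)
phase = k * x - omega * t

zeros = np.zeros_like(x)
E = np.stack([zeros, E0 * np.cos(phase), zeros], axis=-1)        # E oscillates along y
B = np.stack([zeros, zeros, (E0 / c) * np.cos(phase)], axis=-1)  # B oscillates along z

ex = np.array([1.0, 0.0, 0.0])
print(np.allclose(B, np.cross(ex, E) / c))   # True: B is E rotated by 90 degrees, scaled by 1/c
```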

This relation is, in fact, what triggered my geometric interpretation of Schrödinger’s equation about a year ago now. I have had little time to work on it, but think I am on the right track. Of course, you should note that, for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously (as shown below). So their phase is the same.

[Illustration: E and B oscillating in phase]

In contrast, the phase of the real and imaginary component of the wavefunction is not the same, as shown below.

In fact, because of the Stern-Gerlach experiment, I am actually more thinking of a motion like this:

But that shouldn’t distract you. 🙂 The question here is the following: could we possibly think of a new formulation of Schrödinger’s equation – using vectors (again, real vectors – not these weird state vectors) rather than complex algebra?

I think we can, but then I wonder why the inventors of the wavefunction – Heisenberg, Born, Dirac, and Schrödinger himself, of course – never thought of that. 🙂

Hmm… I need to do some research here. 🙂

Post scriptum: You will, of course, wonder how and why the matter-wave would be different from the electromagnetic wave if my suggestion that the dimension of the wavefunction component is the same is correct. The answer is: the difference lies in the phase difference and then, most probably, the different orientation of the angular momentum. Do we have any other possibilities? 🙂

P.S. 2: I also published this post on my new blog: https://readingeinstein.blog/. However, I thought the followers of this blog should get it first. 🙂

Wavefunctions and the twin paradox

My previous post was awfully long, so I must assume many of my readers may have started to read it, but… Well… Gave up halfway or even sooner. 🙂 I added a footnote, though, which is interesting to reflect upon. Also, I know many of my readers aren’t interested in the math—even if they understand one cannot really appreciate quantum theory without the math. But… Yes. I may have left some readers behind. Let me, therefore, pick up the most interesting bit of all of the stories in my last posts in as easy a language as I can find.

We have that weird 360/720° symmetry in quantum physics or—to be precise—we have it for elementary matter-particles (think of electrons, for example). In order to, hopefully, help you understand what it’s all about, I had to explain the often-confused but substantially different concepts of a reference frame and a representational base (or representation tout court). I won’t repeat that explanation, but think of the following.

If we just rotate the reference frame over 360°, we’re just using the same reference frame and so we see the same thing: some object which we, vaguely, describe by some e+i·θ function. Think of some spinning object. In its own reference frame, it will just spin around some center; seen from ours, it will be moving in some direction while it is spinning.

To be precise, I should say that we describe it by some Fourier sum of such functions. Now, if its spin direction is… Well… In the other direction, then we’ll describe it by some e−i·θ function (again, you should read: a Fourier sum of such functions). Now, the weird thing is the following: if we rotate the object itself, over the same 360°, we get a different object: our e+i·θ and e−i·θ function (again: think of a Fourier sum, so that’s a wave packet, really) becomes a −e±i·θ thing. We get a minus sign in front of it. So what happened here? What’s the difference, really?

Well… I don’t know. It’s very deep. Think of you and me as two electrons who are watching each other. If I do nothing, and you keep watching me while turning around me, for a full 360° (so that’s a rotation of your reference frame over 360°), then you’ll end up where you were when you started and, importantly, you’ll see the same thing: me. 🙂 I mean… You’ll see exactly the same thing: if I was an e+i·θ wave packet, I am still an e+i·θ wave packet now. Or if I was an e−i·θ wave packet, then I am still an e−i·θ wave packet now. Easy. Logical. Obvious, right?

But so now we try something different: turn around, over a full 360° turn, and you stay where you are and watch me while I am turning around. What happens? Classically, nothing should happen but… Well… This is the weird world of quantum mechanics: when I am back where I was—looking at you again, so to speak—then… Well… I am not quite the same any more. Or… Well… Perhaps I am but you see me differently. If I was an e+i·θ wave packet, then I’ve become a −e+i·θ wave packet now.

Not hugely different but… Well… That minus sign matters, right? Or if I was a wave packet built up from elementary a·e−i·θ waves, then I’ve become a −e−i·θ wave packet now. What happened?

It makes me think of the twin paradox in special relativity. We know it’s a paradox—so that’s an apparent contradiction only: we know which twin stayed on Earth and which one traveled because of the forces of acceleration and deceleration on the traveling twin. The one who stays on Earth does not experience any acceleration or deceleration. Is it the same here? I mean… The one who’s turning around must experience some force.

Can we relate this to the twin paradox? Maybe. Note that a minus sign in front of the e^(±i·θ) functions amounts to a minus sign in front of both the sine and cosine components. So… Well… The negative of a sine and cosine is the sine and cosine but with a phase shift of 180°: −cosθ = cos(θ ± π) and −sinθ = sin(θ ± π). Now, adding or subtracting a common phase factor to/from the argument of the wavefunction amounts to changing the origin of time. So… Well… I do think the twin paradox and this rather weird business of 360° and 720° symmetries are, effectively, related. 🙂
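For those who like to check such things on a computer rather than on paper: here is a minimal numerical sketch (in Python) of the point just made. The minus sign amounts to a 180° phase shift which, reading θ as ω·t, is the same as shifting the origin of time by half a period. The values of ω and t below are arbitrary, purely illustrative choices.

```python
import cmath
import math

# Check: a minus sign in front of e^(i·theta) equals adding pi to the argument,
# which (reading theta as omega*t) equals shifting the time origin by T/2.
# omega and the t values are arbitrary, illustrative choices.
omega = 2.0
T = 2 * math.pi / omega

for t in [0.0, 0.3, 1.1, 2.7]:
    theta = omega * t
    lhs = -cmath.exp(1j * theta)                     # -e^(i*theta)
    via_phase = cmath.exp(1j * (theta + math.pi))    # e^(i*(theta+pi))
    via_time = cmath.exp(1j * omega * (t + T / 2))   # e^(i*omega*(t+T/2))
    assert cmath.isclose(lhs, via_phase, abs_tol=1e-12)
    assert cmath.isclose(lhs, via_time, abs_tol=1e-12)

print("-e^(i*theta) = e^(i*(theta+pi)) = e^(i*omega*(t+T/2)): verified")
```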

Post scriptum: Google honors Max Born’s 135th birthday today. 🙂 I think that’s a great coincidence in light of the stuff I’ve been writing about lately (possible interpretations of the wavefunction). 🙂

Wavefunctions, perspectives, reference frames, representations and symmetries

Ouff ! This title is quite a mouthful, isn’t it? 🙂 So… What’s the topic of the day? Well… In our previous posts, we developed a few key ideas in regard to a possible physical interpretation of the (elementary) wavefunction. It’s been an interesting excursion, and I summarized it in another pre-publication paper on the open arXiv.org site.

In my humble view, one of the toughest issues to deal with when thinking about geometric (or physical) interpretations of the wavefunction is the fact that a wavefunction does not seem to obey the classical 360° symmetry in space. In this post, I want to muse a bit about this and show that… Well… It does and it doesn’t. It’s got to do with what happens when you change from one representational base (or representation, tout court) to another which is… Well… Like changing the reference frame but, at the same time, it is also more than just a change of the reference frame—and so that explains the weird stuff (like that 720° symmetry of the amplitudes for spin-1/2 particles, for example).

I should warn you before you start reading: I’ll basically just pick up some statements from my paper (and previous posts) and develop some more thoughts on them. As a result, this post may not be very well structured. Hence, you may want to read the mentioned paper first.

The reality of directions

Huh? The reality of directions? Yes. I warned you. This post may cause brain damage. 🙂 The whole argument revolves around a thought experiment—but one whose results have been verified in zillions of experiments in university student labs so… Well… We do not doubt the results and, therefore, we do not doubt the basic mathematical results: we just want to try to understand them better.

So what is the set-up? Well… In the illustration below (Feynman, III, 6-3), Feynman compares the physics of two situations involving rather special beam splitters. Feynman calls them modified or ‘improved’ Stern-Gerlach apparatuses. The apparatus basically splits and then re-combines the two new beams along the z-axis. It is also possible to block one of the beams, so we filter out only particles with their spin up or, alternatively, with their spin down. Spin (or angular momentum or the magnetic moment) as measured along the z-axis, of course—I should immediately add: we’re talking the z-axis of the apparatus here.

[Illustration: rotation about z (Feynman, III, 6-3)]

The two situations involve a different relative orientation of the apparatuses: in (a), the angle is 0°, while in (b) we have a (right-handed) rotation of 90° about the z-axis. He then proves—using geometry and logic only—that the probabilities and, therefore, the magnitudes of the amplitudes (denoted by C+ and C− and C′+ and C′− in the S and T representation respectively) must be the same, but the amplitudes must have different phases, noting—in his typical style, mixing academic and colloquial language—that “there must be some way for a particle to tell that it has turned a corner in (b).”

The various interpretations of what actually happens here may shed some light on the heated discussions on the reality of the wavefunction—and of quantum states. In fact, I should note that Feynman’s argument revolves around quantum states. To be precise, the analysis is focused on two-state systems only, and the wavefunction—which captures a continuum of possible states, so to speak—is introduced only later. However, we may look at the amplitude for a particle to be in the up– or down-state as a wavefunction and, therefore (but do note that’s my humble opinion once more), the analysis is actually not all that different.

We know, from theory and experiment, that the amplitudes are different. For example, for the given difference in the relative orientation of the two apparatuses (90°), we know that the amplitudes are given by C′+ = e^(+i·φ/2)·C+ = e^(+i·π/4)·C+ and C′− = e^(−i·φ/2)·C− = e^(−i·π/4)·C− respectively (the amplitude to go from the down to the up state, or vice versa, is zero). Hence, yes, we know (not the particle, Mr. Feynman!) that, in (b), the electron has, effectively, turned a corner.
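If you want to check the arithmetic, here is a quick numerical sketch. The C+ and C− values below are hypothetical stand-ins (any pair will do); the e^(±i·φ/2) coefficients are the ones quoted above.

```python
import cmath
import math

# The quoted transformation for a 90-degree relative rotation about z.
# C_plus and C_minus are hypothetical amplitudes (|C+|^2 + |C-|^2 = 1).
phi = math.pi / 2
C_plus, C_minus = 0.6, 0.8j

C_plus_new = cmath.exp(+1j * phi / 2) * C_plus     # C'+ = e^(+i*phi/2)*C+
C_minus_new = cmath.exp(-1j * phi / 2) * C_minus   # C'- = e^(-i*phi/2)*C-

# Same magnitudes, hence same probabilities:
print(abs(C_plus), abs(C_plus_new))      # 0.6  0.6
print(abs(C_minus), abs(C_minus_new))    # 0.8  0.8

# But the relative phase between the two amplitudes has shifted by phi (90 degrees):
rel_old = cmath.phase(C_plus / C_minus)
rel_new = cmath.phase(C_plus_new / C_minus_new)
print(math.degrees(rel_new - rel_old))   # 90.0
```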

The more subtle question here is the following: is the reality of the particle in the two setups the same? Feynman, of course, stays away from such philosophical questions. He just notes that, while “(a) and (b) are different”, “the probabilities are the same”. He refrains from making any statement on the particle itself: is it, or is it not, the same? The common sense answer is obvious: of course, it is! The particle is the same, right? In (b), it just took a turn—so it is just going in some other direction. That’s all.

However, common sense is seldom a good guide when thinking about quantum-mechanical realities. Also, from a more philosophical point of view, one may argue that the reality of the particle is not the same: something might—or must—have happened to the electron because, when everything is said and done, the particle did take a turn in (b). It did not in (a). [Note that the difference between ‘might’ and ‘must’ in the previous phrase may well sum up the difference between a deterministic and a non-deterministic world view but… Well… This discussion is going to be way too philosophical already, so let’s refrain from inserting new language here.]

Let us think this through. The (a) and (b) set-up are, obviously, different but… Wait a minute… Nothing is obvious in quantum mechanics, right? How can we experimentally confirm that they are different?

Huh? I must be joking, right? You can see they are different, right? No. I am not joking. In physics, two things are different if we get different measurement results. [That’s a bit of a simplified view of the ontological point of view of mainstream physicists, but you will have to admit I am not far off.] So… Well… We can’t see those amplitudes and so… Well… If we measure the same thing—same probabilities, remember?—why are they different? Think of this: if we look at the two beam splitters as one single tube (an ST tube, we might say), then all we did in (b) was bend the tube. Pursuing the logic that says our particle is still the same even when it takes a turn, we could say the tube is still the same, despite us having wrenched it over a 90° corner.

Now, I am sure you think I’ve just gone nuts, but just try to stick with me a little bit longer. Feynman actually acknowledges the same: we need to experimentally prove (a) and (b) are different. He does so by adding a third apparatus (U), as shown below, whose relative orientation to T is the same in both (a) and (b), so there is no difference there.

[Illustration: third apparatus]

Now, the axis of U is not the z-axis: it is the x-axis in (a), and the y-axis in (b). So what? Well… I will quote Feynman here—not (only) because his words are more important than mine but also because every word matters here:

“The two apparatuses in (a) and (b) are, in fact, different, as we can see in the following way. Suppose that we put an apparatus in front of S which produces a pure +x state. Such particles would be split into +z and −z beams in S, but the two beams would be recombined to give a +x state again at P1—the exit of S. The same thing happens again in T. If we follow T by a third apparatus U, whose axis is in the +x direction, as shown in (a), all the particles would go into the + beam of U. Now imagine what happens if T and U are swung around together by 90° to the positions shown in (b). Again, the T apparatus puts out just what it takes in, so the particles that enter U are in a +x state with respect to S. But U now analyzes for a +y state (with respect to S), which is different. By symmetry, we would now expect only one-half of the particles to get through.”

I should note that (b) shows the U apparatus wide open so… Well… I must assume that’s a mistake (and should alert the current editors of the Lectures to it): Feynman’s narrative tells us we should also imagine it with the minus channel shut. In that case, it should, effectively, filter approximately half of the particles out, while they all get through in (a). So that’s a measurement result which shows that the direction, as we see it, makes a difference.
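For those who want a number rather than a narrative: the ‘one-half’ can be checked with the standard spin-1/2 states in the Sz basis. The phase conventions below are the usual textbook ones and may differ from Feynman’s by overall phase factors, but the probability does not depend on that.

```python
import numpy as np

# Standard (Pauli-convention) spin-1/2 states in the S_z basis.
# These conventions may differ from Feynman's by overall phases,
# but the probability is convention-independent.
plus_x = np.array([1, 1]) / np.sqrt(2)    # +x state: (|+z> + |-z>)/sqrt(2)
plus_y = np.array([1, 1j]) / np.sqrt(2)   # +y state: (|+z> + i|-z>)/sqrt(2)

amplitude = np.vdot(plus_y, plus_x)       # <+y|+x> (vdot conjugates the first argument)
probability = abs(amplitude) ** 2
print(round(probability, 6))              # 0.5: only half get through in set-up (b)
```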

Now, Feynman would be very angry with me—because, as mentioned, he hates philosophers—but I’d say: this experiment proves that a direction is something real. Of course, the next philosophical question then is: what is a direction? I could answer this by pointing to the experiment above: a direction is something that alters the probabilities between the STU tube as set up in (a) versus the STU tube in (b). In fact—but, I admit, that would be pretty ridiculous—we could use the varying probabilities as we wrench this tube over varying angles to define an angle! But… Well… While that’s a perfectly logical argument, I agree it doesn’t sound very sensible.

OK. Next step. What follows may cause brain damage. 🙂 Please abandon all pre-conceived notions and definitions for a while and think through the following logic.

You know this stuff is about transformations of amplitudes (or wavefunctions), right? [And you also want to hear about that special 720° symmetry, right? No worries. We’ll get there.] So the questions all revolve around this: what happens to amplitudes (or the wavefunction) when we go from one reference frame—or representation, as it’s referred to in quantum mechanics—to another?

Well… I should immediately correct myself here: a reference frame and a representation are two different things. They are related but… Well… Different… Quite different. Not same-same but different. 🙂 I’ll explain why later. Let’s go for it.

Before talking representations, let us first think about what we really mean by changing the reference frame. To change it, we first need to answer the question: what is our reference frame? It is a mathematical notion, of course, but then it is also more than that: it is our reference frame. We use it to make measurements. That’s obvious, you’ll say, but let me make a more formal statement here:

The reference frame is given by (1) the geometry (or the shape, if that sounds easier to you) of the measurement apparatus (so that’s the experimental set-up here) and (2) our perspective of it.

If we would want to sound academic, we might refer to Kant and other philosophers here, who told us—230 years ago—that the mathematical idea of a three-dimensional reference frame is grounded in our intuitive notions of up and down, and left and right. [If you doubt this, think about the necessity of the various right-hand rules and conventions that we cannot do without in math, and in physics.] But so we do not want to sound academic. Let us be practical. Just think about the following. The apparatus gives us two directions:

(1) The up direction, which we associate with the positive direction of the z-axis, and

(2) the direction of travel of our particle, which we associate with the positive direction of the y-axis.

Now, if we have two axes, then the third axis (the x-axis) will be given by the right-hand rule, right? So we may say the apparatus gives us the reference frame. Full stop. So… Well… Everything is relative? Is this reference frame relative? Are directions relative? That’s what you’ve been told, but think about this: relative to what? Here is where the object meets the subject. What’s relative? What’s absolute? Frankly, I’ve started to think that, in this particular situation, we should, perhaps, not use these two terms. I am not saying that our observation of what physically happens here gives these two directions any absolute character but… Well… You will have to admit they are more than just some mathematical construct, because… Well… They’re part of the reality that we are observing, right? And the third one… Well… That’s given by our perspective—by our right-hand rule, which is… Well… Our right-hand rule.

Of course, now you’ll say: if you think that ‘relative’ and ‘absolute’ are ambiguous terms and that we, therefore, may want to avoid them a bit more, then ‘real’ and its opposite (unreal?) are ambiguous terms too, right? Well… Maybe. What language would you suggest? 🙂 Just stick to the story for a while. I am not done yet. So… Yes… What is their reality? Let’s think about that in the next section.

Perspectives, reference frames and symmetries

You’ve done some mental exercises already as you’ve been working your way through the previous section, but you’ll need to do plenty more. In fact, they may become physical exercise too: when I first thought about these things (symmetries and, more importantly, asymmetries in space), I found myself walking around the table with some asymmetrical everyday objects and papers with arrows and clocks and other stuff on it—effectively analyzing what right-hand screw, thumb or grip rules actually mean. 🙂

So… Well… I want you to distinguish—just for a while—between the notion of a reference frame (think of the xyz reference frame that comes with the apparatus) and your perspective on it. What’s our perspective on it? Well… You may be looking from the top, or from the side and, if from the side, from the left-hand side or the right-hand side—which, if you think about it, you can only define in terms of the various positive and negative directions of the various axes. 🙂 If you think this is getting ridiculous… Well… Don’t. Feynman himself doesn’t think this is ridiculous, because he starts his own “long and abstract side tour” on transformations with a very simple explanation of how the top and side view of the apparatus are related to the axes (i.e. the reference frame) that comes with it. You don’t believe me? This is the very first illustration of his Lecture on this:

[Illustration: modified Stern-Gerlach apparatus]

He uses it to explain the apparatus (which we don’t do here because you’re supposed to already know how these (modified or improved) Stern-Gerlach apparatuses work). So let’s continue this story. Suppose that we are looking in the positive y-direction—so that’s the direction in which our particle is moving—then we might imagine what it would look like when we made a 180° turn and looked at the situation from the other side, so to speak. We do not change the reference frame (i.e. the orientation) of the apparatus here: we just change our perspective on it. Instead of seeing particles going away from us, into the apparatus, we now see particles coming towards us, out of the apparatus.

What happens—but that’s not scientific language, of course—is that left becomes right, and right becomes left. Top still is top, and bottom is bottom. We are looking now in the negative y-direction, and the positive direction of the x-axis—which pointed right when we were looking in the positive y-direction—now points left. I see you nodding your head now—because you’ve heard about parity inversions, mirror symmetries and what have you—and I hear you say: “That’s the mirror world, right?”

No. It is not. I wrote about this in another post: the world in the mirror is the world in the mirror. We don’t get a mirror image of an object by going around it and looking at its back side. I can’t dwell too much on this (just check that post, and another one that talks about the same), but so don’t try to connect it to the discussions on symmetry-breaking and what have you. Just stick to this story, which is about transformations of amplitudes (or wavefunctions). [If you really want to know—but I know this sounds counterintuitive—the mirror world doesn’t really switch left for right. Your reflection doesn’t do a 180-degree turn: it is just reversed front to back, with no rotation at all. It’s only your brain which mentally adds (or subtracts) the 180-degree turn that you assume must have happened, based on the observed front-to-back reversal. So the left-to-right reversal is only apparent. It’s a common misconception, and… Well… I’ll let you figure this out yourself. I need to move on.] Just note the following:

  1. The xyz reference frame remains a valid right-handed reference frame. Of course it does: it comes with our beam splitter, and we can’t change its reality, right? We’re just looking at it from another angle. Our perspective on it has changed.
  2. However, if we think of the real and imaginary part of the wavefunction describing the electrons that are going through our apparatus as perpendicular oscillations (as shown below)—a cosine and sine function respectively—then our change in perspective might, effectively, mess up our convention for measuring angles.

I am not saying it does. Not now, at least. I am just saying it might. It depends on the plane of the oscillation, as I’ll explain in a few moments. Think of this: we measure angles counterclockwise, right? As shown below… But… Well… If the thing below were some funny clock going backwards—you’ve surely seen them in a bar or so, right?—then… Well… If it were transparent, and you went around it, you’d see it going… Yes… Clockwise. 🙂 [This should remind you of a discussion on real versus pseudo-vectors, or polar versus axial vectors, but… Well… We don’t want to complicate the story here.]

[Illustration: unit circle with cosine and sine components]

Now, if we would assume this clock represents something real—and, of course, I am thinking of the elementary wavefunction e^(iθ) = cosθ + i·sinθ now—then… Well… Then it will look different when we go around it. When going around our backwards clock above and looking at it from… Well… The back, we’d describe it, naively, as… Well… Think! What’s your answer? Give me the formula! 🙂

[…]

We’d see it as e^(−iθ) = cos(−θ) + i·sin(−θ) = cosθ − i·sinθ, right? The hand of our clock now goes clockwise, so that’s the opposite direction of our convention for measuring angles. Hence, instead of e^(+iθ), we write e^(−iθ), right? So that’s the complex conjugate. So we’ve got a different image of the same thing here. Not good. Not good at all. :-/

You’ll say: so what? We can fix this thing easily, right? You don’t need the convention for measuring angles or for the imaginary unit (i) here. This particle is moving, right? So if you’d want to look at the elementary wavefunction as some sort of circularly polarized beam (which, I admit, is very much what I would like to do, but its polarization is rather particular as I’ll explain in a minute), then you just need to define left- and right-handed angles as per the standard right-hand screw rule (illustrated below). To hell with the counterclockwise convention for measuring angles!

[Illustration: right-hand screw rule]

You are right. We could use the right-hand rule more consistently. We could, in fact, use it as an alternative convention for measuring angles: we could, effectively, measure them clockwise or counterclockwise depending on the direction of our particle. But… Well… The fact is: we don’t. We do not use that alternative convention when we talk about the wavefunction. Physicists do use the counterclockwise convention all of the time and just jot down these complex exponential functions and don’t realize that, if they are to represent something real, our perspective on the reference frame matters. To put it differently, the direction in which we are looking at things matters! Hence, the direction is not… Well… I am tempted to say… Not relative at all but then… Well… We wanted to avoid that term, right? 🙂

[…]

I guess that, by now, your brain may have suffered from various short-circuits. If not, stick with me a while longer. Let us analyze how our wavefunction model might be impacted by this symmetry—or asymmetry, I should say.

The flywheel model of an electron

In our previous posts, we offered a model that interprets the real and the imaginary part of the wavefunction as oscillations which each carry half of the total energy of the particle. These oscillations are perpendicular to each other, and the interplay between both is how energy propagates through spacetime. Let us recap the fundamental premises:

  1. The dimension of the matter-wave field vector is force per unit mass (N/kg), as opposed to the force per unit charge (N/C) dimension of the electric field vector. This dimension is an acceleration (m/s²), which is the dimension of the gravitational field.
  2. We assume this gravitational disturbance causes our electron (or a charged mass in general) to move about some center, combining linear and circular motion. This interpretation reconciles the wave-particle duality: fields interfere but if, at the same time, they do drive a pointlike particle, then we understand why, as Feynman puts it, “when you do find the electron some place, the entire charge is there.” Of course, we cannot prove anything here, but our elegant yet simple derivation of the Compton radius of an electron is… Well… Just nice. 🙂 [A quick numerical check of that radius follows right after this list.]
  3. Finally, and most importantly in the context of this discussion, we noted that, in light of the direction of the magnetic moment of an electron in an inhomogeneous magnetic field, the plane which circumscribes the circulatory motion of the electron should also comprise the direction of its linear motion. Hence, unlike an electromagnetic wave, the plane of the two-dimensional oscillation (so that’s the polarization plane, really) cannot be perpendicular to the direction of motion of our electron.
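As announced in point 2: a quick numerical sketch of the Compton radius, assuming it is the reduced Compton wavelength r = ħ/(m·c) of the electron (which is what these posts refer to, as far as I can see). The constants are the usual rounded values.

```python
# Compton radius of the electron, taken here to be the reduced Compton
# wavelength r = hbar/(m*c). Constants are rounded CODATA-style values.
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
c = 299792458.0          # m/s

r_compton = hbar / (m_e * c)
print(r_compton)         # ~3.86e-13 m, i.e. about 0.386 picometer
```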

Let’s say some more about the latter point here. The illustrations below (one from Feynman, and the other is just open-source) show what we’re thinking of. The direction of the angular momentum (and the magnetic moment) of an electron—or, to be precise, its component as measured in the direction of the (inhomogeneous) magnetic field through which our electron is traveling—cannot be parallel to the direction of motion. On the contrary, it must be perpendicular to the direction of motion. In other words, if we imagine our electron as spinning around some center (see the illustration on the left-hand side), then the disk it circumscribes (i.e. the plane of the polarization) has to comprise the direction of motion.

Of course, we need to add another detail here. As my readers will know, we do not really have a precise direction of angular momentum in quantum physics. While there is no fully satisfactory explanation of this, the classical explanation—combined with the quantization hypothesis—goes a long way in explaining this: an object with an angular momentum J and a magnetic moment μ that is not exactly parallel to some magnetic field B, will not line up: it will precess—and, as mentioned, the quantization of angular momentum may well explain the rest. [Well… Maybe… We have detailed our attempts in this regard in various posts on this (just search for spin or angular momentum on this blog, and you’ll get a dozen posts or so), but these attempts are, admittedly, not fully satisfactory. Having said that, they do go a long way in relating angles to spin numbers.]

The thing is: we do assume our electron is spinning around. If we look from the up-direction only, then it will be spinning clockwise if its angular momentum is down (so its magnetic moment is up). Conversely, it will be spinning counterclockwise if its angular momentum is up. Let us take the up-state. So we have a top view of the apparatus, and we see something like this:

[Illustration: electron wave]

I know you are laughing aloud now but think of your amusement as a nice reward for having stuck to the story so far. Thank you. 🙂 And, yes, do check it yourself by doing some drawings on your table or so, and then look at them from various directions as you walk around the table as—I am not ashamed to admit this—I did when thinking about this. So what do we get when we change the perspective? Let us walk around it, counterclockwise, let’s say, so we’re measuring our angle of rotation as some positive angle. Walking around it—in whatever direction, clockwise or counterclockwise—doesn’t change the counterclockwise direction of our… Well… That weird object that might—just might—represent an electron that has its spin up and that is traveling in the positive y-direction.

When we look in the direction of propagation (so that’s from left to right as you’re looking at this page), and we abstract away from its linear motion, then we could, vaguely, describe this by some wrenched e^(iθ) = cosθ + i·sinθ function, right? The x- and y-axes of the apparatus may be used to measure the cosine and sine components respectively.

Let us keep looking from the top but walk around it, rotating ourselves over a 180° angle so we’re looking in the negative y-direction now. As I explained in one of those posts on symmetries, our mind will want to switch to a new reference frame: we’ll keep the z-axis (up is up, and down is down), but we’ll want the positive direction of the x-axis to… Well… Point right. And we’ll want the y-axis to point away, rather than towards us. In short, we have a transformation of the reference frame here: z′ = z, y′ = −y, and x′ = −x. Mind you, this is still a regular right-handed reference frame. [That’s the difference with a mirror image: a mirrored right-handed reference frame is no longer right-handed.] So, in our new reference frame, which we choose to coincide with our perspective, we will now describe the same thing as some −cosθ − i·sinθ = −e^(iθ) function. Of course, −cosθ = cos(θ + π) and −sinθ = sin(θ + π), so we can write this as:

−cosθ − i·sinθ = cos(θ + π) + i·sin(θ + π) = e^(i·(θ + π)) = e^(iπ)·e^(iθ) = −e^(iθ).

Sweet ! But… Well… First note this is not the complex conjugate: e^(−iθ) = cosθ − i·sinθ ≠ −cosθ − i·sinθ = −e^(iθ). Why is that? Aren’t we looking at the same clock, but from the back? No. The plane of polarization is different. Our clock is more like those in Dali’s painting: it’s flat. 🙂 And, yes, let me lighten up the discussion with that painting here. 🙂 We need to have some fun while torturing our brain, right?

[Illustration: Dali’s The Persistence of Memory]

So, because we assume the plane of polarization is different, we get a −e^(iθ) function instead of an e^(−iθ) function (i.e. the complex conjugate).
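Just to make sure we are not fooling ourselves with all these signs, here is a small numerical check: −cosθ − i·sinθ is, indeed, equal to e^(i·(θ+π)) = −e^(iθ), and it is not the complex conjugate e^(−iθ). The θ values are arbitrary.

```python
import cmath
import math

# Check the identity above for a few arbitrary angles:
# -cos(theta) - i*sin(theta) = e^(i*(theta+pi)) = -e^(i*theta),
# and it differs from the complex conjugate e^(-i*theta).
for theta in [0.1, 0.8, 2.3]:
    z = -math.cos(theta) - 1j * math.sin(theta)
    assert cmath.isclose(z, cmath.exp(1j * (theta + math.pi)), abs_tol=1e-12)
    assert cmath.isclose(z, -cmath.exp(1j * theta), abs_tol=1e-12)
    print(z, cmath.exp(-1j * theta))   # different numbers, same absolute value (1)
```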

Let us now think about the e^(i·(θ + π)) function. It’s the same as −e^(iθ) but… Well… We walked around the z-axis taking a full 180° turn, right? So that’s π in radians. So that’s the phase shift here. Hey! Try the following now. Go back and walk around the apparatus once more, but let the reference frame rotate with us, as shown below. So we start left and look in the direction of propagation, and then we start moving about the z-axis (which points out of this page, toward you, as you are looking at this), let’s say by some small angle α. So we rotate the reference frame about the z-axis by α and… Well… Of course, our e^(i·θ) now becomes an e^(i·(θ + α)) function, right? We’ve just derived the transformation coefficient for a rotation about the z-axis, didn’t we? It’s equal to e^(i·α), right? We get the transformed wavefunction in the new reference frame by multiplying the old one by e^(i·α), right? It’s equal to e^(i·α)·e^(i·θ) = e^(i·(θ + α)), right?

[Illustration: electron wave, perspective change]

Well…

[…]

No. The answer is: no. The transformation coefficient is not e^(i·α) but e^(i·α/2). So we get an additional 1/2 factor in the phase shift.

Huh? Yes. That’s what it is: when we change the representation, by rotating our apparatus over some angle α about the z-axis, then we will, effectively, get a new wavefunction, which will differ from the old one by a phase shift that is equal to only half of the rotation angle.

Huh? Yes. It’s even weirder than that. For a spin down electron, the transformation coefficient is e^(−i·α/2), so we get an additional minus sign in the argument.

Huh? Yes.

I know you are terribly disappointed, but that’s how it is. That’s what hampers an easy geometric interpretation of the wavefunction. Paraphrasing Feynman, I’d say that, somehow, our electron not only knows whether or not it has taken a turn, but it also knows whether or not it is moving away from us or, conversely, towards us.

[…]

But… Hey! Wait a minute! That’s it, right? 

What? Well… That’s it! The electron doesn’t know whether it’s moving away or towards us. That’s nonsense. But… Well… It’s like this:

Our e^(i·α) coefficient describes a rotation of the reference frame. In contrast, the e^(i·α/2) and e^(−i·α/2) coefficients describe what happens when we rotate the T apparatus! Now that is a very different proposition. 

Right! You got it! Representations and reference frames are different things. Quite different, I’d say: representations are real, reference frames aren’t—but then you don’t like philosophical language, do you? 🙂 But think of it. When we just go about the z-axis, a full 180°, but we don’t touch that T-apparatus, we don’t change reality. When we were looking at the electron while standing left to the apparatus, we watched the electrons going in and moving away from us, and when we go about the z-axis, a full 180°, looking at it from the right-hand side, we see the electrons coming out, moving towards us. But it’s still the same reality. We simply change the reference frame—from xyz to x’y’z’ to be precise: we do not change the representation.

In contrast, when we rotate the apparatus over a full 180°, our electron now goes in the opposite direction. And whether that’s away or towards us, that doesn’t matter: it was going in one direction while traveling through S, and now it goes in the opposite direction—relative to the direction it was going in S, that is.

So what happens, really, when we change the representation, rather than the reference frame? Well… Let’s think about that. 🙂

Quantum-mechanical weirdness?

The transformation matrix for the amplitude of a system to be in an up or down state (and, hence, presumably, for a wavefunction) for a rotation about the z-axis is the following one:

[Image: the transformation matrix for a rotation about the z-axis, i.e. the diagonal matrix with coefficients e^(+iφ/2) and e^(−iφ/2)]

Feynman derives this matrix in a rather remarkable intellectual tour de force in the 6th of his Lectures on Quantum Mechanics. So that’s pretty early on. He’s actually worried about that himself, apparently, and warns his students that “This chapter is a rather long and abstract side tour, and it does not introduce any idea which we will not also come to by a different route in later chapters. You can, therefore, skip over it, and come back later if you are interested.”

Well… That’s how I approached it. I skipped it, and didn’t worry about those transformations for quite a while. But… Well… You can’t avoid them. In some weird way, they are at the heart of the weirdness of quantum mechanics itself. Let us re-visit his argument. Feynman immediately gets that the whole transformation issue here is just a matter of finding an easy formula for that phase shift. Why? He doesn’t tell us. Lesser mortals like us must just assume that’s how the instinct of a genius works, right? 🙂 So… Well… Because he knows—from experiment—that the coefficient is e^(i·α/2) instead of e^(i·α), he just says the phase shift—which he denotes by λ—must be proportional to the angle of rotation—which he denotes by φ rather than α (so as to avoid confusion with the Euler angle α). So he writes:

λ = m·φ

Initially, he also tries the obvious thing: m should be one, right? So λ = φ, right? Well… No. It can’t be. Feynman shows why that can’t be the case by adding a third apparatus once again, as shown below.

[Illustration: third apparatus]

Let me quote him here, as I can’t explain it any better:

“Suppose T is rotated by 360°; then, clearly, it is right back at zero degrees, and we should have C’+ = C+ and C’− = C− or, what is the same thing, e^(i·m·2π) = 1. We get m = 1. [But no!] This argument is wrong! To see that it is, consider that T is rotated by 180°. If m were equal to 1, we would have C’+ = e^(i·π)·C+ = −C+ and C’− = e^(−i·π)·C− = −C−. [Feynman works with states here, instead of the wavefunction of the particle as a whole. I’ll come back to this.] However, this is just the original state all over again. Both amplitudes are just multiplied by −1 which gives back the original physical system. (It is again a case of a common phase change.) This means that if the angle between T and S is increased to 180°, the system would be indistinguishable from the zero-degree situation, and the particles would again go through the (+) state of the T apparatus. At 180°, though, the (+) state of the T apparatus is the (−x) state of the original S apparatus. So a (+x) state would become a (−x) state. But we have done nothing to change the original state; the answer is wrong. We cannot have m = 1. We must have the situation that a rotation by 360°, and no smaller angle, reproduces the same physical state. This will happen if m = 1/2.”

The result, of course, is this weird 720° symmetry. While we get the same physics after a 360° rotation of the T apparatus, we do not get the same amplitudes. We get the opposite (complex) number: C’+ = e^(i·2π/2)·C+ = −C+ and C’− = e^(−i·2π/2)·C− = −C−. That’s OK, because… Well… It’s a common phase shift, so it’s just like changing the origin of time. Nothing more. Nothing less. Same physics. Same reality. But… Well… C’+ ≠ C+ and C’− ≠ C−, right? We only get our original amplitudes back if we rotate the T apparatus two times, so that’s by a full 720 degrees—as opposed to the 360° we’d expect.
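Because the image of the matrix is not reproduced here, let me re-write it, on my own account, as the diagonal matrix with the e^(+iφ/2) and e^(−iφ/2) coefficients we quoted, and check the 360°/720° business numerically. The C+ and C− values are, again, hypothetical stand-ins.

```python
import numpy as np

# The z-rotation transformation written as a diagonal matrix acting on the
# column (C+, C-). This is just the quoted e^(+i*phi/2) and e^(-i*phi/2)
# coefficients put into matrix form; the function name R_z is my own choice.
def R_z(phi):
    return np.diag([np.exp(+1j * phi / 2), np.exp(-1j * phi / 2)])

C = np.array([0.6, 0.8j])          # hypothetical amplitudes (C+, C-)

print(R_z(2 * np.pi) @ C)          # ~(-0.6, -0.8j): a 360-degree turn flips the sign
print(R_z(4 * np.pi) @ C)          # ~(0.6, 0.8j): only 720 degrees gives the amplitudes back
print(np.allclose(R_z(4 * np.pi), np.eye(2)))  # True (up to rounding)
```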

Now, space is isotropic, right? So this 720° business doesn’t make sense, right?

Well… It does and it doesn’t. We shouldn’t dramatize the situation. What’s the actual difference between a complex number and its opposite? It’s like x or −x, or t and −t. I’ve said this a couple of times already, and I’ll keep saying it many times more: Nature surely can’t be bothered by how we measure stuff, right? In the positive or the negative direction—that’s just our choice, right? Our convention. So… Well… It’s just like that −e^(iθ) function we got when looking at the same experimental set-up from the other side: our e^(iθ) and −e^(iθ) functions did not describe a different reality. We just changed our perspective. The reference frame. As such, the reference frame isn’t real. The experimental set-up is. And—I know I will anger mainstream physicists with this—the representation is. Yes. Let me say it loud and clear here:

A different representation describes a different reality.

In contrast, a different perspective—or a different reference frame—does not.

Conventions

While you might have had a lot of trouble going through all of the weird stuff above, the point is: it is not all that weird. We can understand quantum mechanics. And in a fairly intuitive way, really. It’s just that… Well… I think some of the conventions in physics hamper such understanding. Well… Let me be precise: one convention in particular, really. It’s that convention for measuring angles. Indeed, Mr. Leonhard Euler, back in the 18th century, might well be “the master of us all” (as Laplace is supposed to have said) but… Well… He couldn’t foresee how his omnipresent formula—e^(iθ) = cosθ + i·sinθ—would, one day, be used to represent something real: an electron, or any elementary particle, really. If he had known, I am sure he would have noted what I am noting here: Nature can’t be bothered by our conventions. Hence, if e^(iθ) represents something real, then e^(−iθ) must also represent something real. [Coz I admire this genius so much, I can’t resist the temptation. Here’s his portrait. He looks kinda funny here, doesn’t he? :-)]

[Portrait: Leonhard Euler]

Frankly, he would probably have understood quantum-mechanical theory as easily and instinctively as Dirac, I think, and I am pretty sure he would have noted—and, if he would have known about circularly polarized waves, probably agreed to—that alternative convention for measuring angles: we could, effectively, measure angles clockwise or counterclockwise depending on the direction of our particle—as opposed to Euler’s ‘one-size-fits-all’ counterclockwise convention. But so we did not adopt that alternative convention because… Well… We want to keep honoring Euler, I guess. 🙂

So… Well… If we’re going to keep honoring Euler by sticking to that ‘one-size-fits-all’ counterclockwise convention, then I do believe that e^(iθ) and e^(−iθ) represent two different realities: spin up versus spin down.

Yes. In our geometric interpretation of the wavefunction, these are, effectively, two different spin directions. And… Well… These are real directions: we see something different when they go through a Stern-Gerlach apparatus. So it’s not just some convention to count things like 0, 1, 2, etcetera versus 0, −1, −2 etcetera. It’s the same story again: different but related mathematical notions are (often) related to different but related physical possibilities. So… Well… I think that’s what we’ve got here. Think of it. Mainstream quantum math treats all wavefunctions as right-handed but… Well… A particle with up spin is a different particle than one with down spin, right? And, again, Nature surely cannot be bothered about our convention of measuring phase angles clockwise or counterclockwise, right? So… Well… Kinda obvious, right? 🙂

Let me spell out my conclusions here:

1. The angular momentum can be positive or, alternatively, negative: J = +ħ/2 or −ħ/2. [Let me note that this is not obvious. Or less obvious than it seems, at first. In classical theory, you would expect an electron, or an atomic magnet, to line up with the field. Well… The Stern-Gerlach experiment shows they don’t: they keep their original orientation. Well… If the field is weak enough.]

2. Therefore, we would probably like to think that an actual particle—think of an electron, or whatever other particle you’d think of—comes in two variants: right-handed and left-handed. They will, therefore, either consist of (elementary) right-handed waves or, else, (elementary) left-handed waves. An elementary right-handed wave would be written as: ψ(θi) = ai·e^(i·θi) = ai·(cosθi + i·sinθi). In contrast, an elementary left-handed wave would be written as: ψ(θi) = ai·e^(−i·θi) = ai·(cosθi − i·sinθi). So that’s the complex conjugate.

So… Well… Yes, I think complex conjugates are not just some mathematical notion: I believe they represent something real. It’s the usual thing: Nature has shown us that (most) mathematical possibilities correspond to real physical situations so… Well… Here you go. It is really just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! [As for the differences—different polarization plane and dimensions and what have you—I’ve already summed those up, so I won’t repeat myself here.] The point is: if we have two different physical situations, we’ll want to have two different functions to describe them. Think of it like this: why would we have two—yes, I admit, two related—amplitudes to describe the up or down state of the same system, but only one wavefunction for it? You tell me.
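Here is a tiny numerical illustration of that point: e^(iθ) and e^(−iθ) always have the same magnitude, so the same probabilities, but their phase winds in opposite directions as θ increases, which is the analog of right- versus left-handed circular polarization.

```python
import cmath
import math

# e^(i*theta) and e^(-i*theta): same magnitude everywhere, opposite sense of
# phase winding as theta increases. The theta values are arbitrary samples.
for theta in [0.0, 0.5, 1.0, 1.5]:
    right = cmath.exp(+1j * theta)   # right-handed: phase increases with theta
    left = cmath.exp(-1j * theta)    # left-handed: phase decreases with theta
    print(round(abs(right), 6), round(abs(left), 6),
          round(math.degrees(cmath.phase(right)), 1),
          round(math.degrees(cmath.phase(left)), 1))
# Magnitudes are all 1; the phase columns run 0, +28.6, +57.3, +85.9 degrees
# versus 0, -28.6, -57.3, -85.9 degrees.
```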

[…]

Authors like me are looked down upon by the so-called professional class of physicists. The few who bothered to react to my attempts to make sense of Einstein’s basic intuition in regard to the nature of the wavefunction all said pretty much the same thing: “Whatever your geometric (or physical) interpretation of the wavefunction might be, it won’t be compatible with the isotropy of space. You cannot imagine an object with a 720° symmetry. That’s geometrically impossible.”

Well… Almost three years ago, I wrote the following on this blog: “As strange as it sounds, a spin-1/2 particle needs two full rotations (2×360°=720°) until it is again in the same state. Now, in regard to that particularity, you’ll often read something like: “There is nothing in our macroscopic world which has a symmetry like that.” Or, worse, “Common sense tells us that something like that cannot exist, that it simply is impossible.” [I won’t quote the site from which I took these quotes, because it is, in fact, the site of a very respectable research center!] Bollocks! The Wikipedia article on spin has this wonderful animation: look at how the spirals flip between clockwise and counterclockwise orientations, and note that it’s only after spinning a full 720 degrees that this ‘point’ returns to its original configuration.”

[Animation: 720-degree symmetry (from the Wikipedia article on spin)]

So… Well… I am still pursuing my original dream which is… Well… Let me re-phrase what I wrote back in January 2015:

Yes, we can actually imagine spin-1/2 particles, and we actually do not need all that much imagination!

In fact, I am tempted to think that I’ve found a pretty good representation or… Well… A pretty good image, I should say, because… Well… A representation is something real, remember? 🙂

Post scriptum (10 December 2017): Our flywheel model of an electron makes sense, but also leaves many unanswered questions. The most obvious question, perhaps, is: why the up and down states only?

I am not so worried about that question, even if I can’t answer it right away because… Well… Our apparatus—the way we measure reality—is set up to measure the angular momentum (or the magnetic moment, to be precise) in one direction only. If our electron is captured by some harmonic (or non-harmonic?) oscillation in multiple dimensions, then it should not be all that difficult to show its magnetic moment is going to align, somehow, in the same or, alternatively, the opposite direction of the magnetic field it is forced to travel through.

Of course, the analysis for the spin up situation (magnetic moment down) is quite peculiar: if our electron is a mini-magnet, why would it not line up with the magnetic field? We understand the precession of a spinning top in a gravitational field, but… Hey… It’s actually not that different. Try to imagine some spinning top on the ceiling. 🙂 I am sure we can work out the math. 🙂 The electron must be some gyroscope, really: it won’t change direction. In other words, its magnetic moment won’t line up. It will precess, and it can do so in two directions, depending on its state. 🙂 […] At least, that’s what my instinct tells me. I admit I need to work out the math to convince you. 🙂

The second question is more important. If we just rotate the reference frame over 360°, we see the same thing: some rotating object which we, vaguely, describe by some e^(+i·θ) function—to be precise, I should say: by some Fourier sum of such functions—or, if the rotation is in the other direction, by some e^(−i·θ) function (again, you should read: a Fourier sum of such functions). Now, the weird thing, as I tried to explain above, is the following: if we rotate the object itself, over the same 360°, we get a different object: our e^(+i·θ) or e^(−i·θ) function (again: think of a Fourier sum, so that’s a wave packet, really) becomes a −e^(±i·θ) thing. We get a minus sign in front of it. So what happened here? What’s the difference, really?

Well… I don’t know. It’s very deep. If I do nothing, and you keep watching me while turning around me, for a full 360°, then you’ll end up where you were when you started and, importantly, you’ll see the same thing. Exactly the same thing: if I was an e^(+i·θ) wave packet, I am still an e^(+i·θ) wave packet now. Or if I was an e^(−i·θ) wave packet, then I am still an e^(−i·θ) wave packet now. Easy. Logical. Obvious, right?

But so now we try something different: I turn around, over a full 360°, and you stay where you are. When I am back where I was—looking at you again, so to speak—then… Well… I am not quite the same any more. Or… Well… Perhaps I am but you see me differently. If I was an e^(+i·θ) wave packet, then I’ve become a −e^(+i·θ) wave packet now. Not hugely different but… Well… That minus sign matters, right? Or if I was a wave packet built up from elementary a·e^(−i·θ) waves, then I’ve become a −e^(−i·θ) wave packet now. What happened?

It makes me think of the twin paradox in special relativity. We know it’s a paradox—so that’s an apparent contradiction only: we know which twin stayed on Earth and which one traveled, because the traveling twin is the one who experiences the forces of acceleration and deceleration. The one who stays on Earth does not experience any acceleration or deceleration. Is it the same here? I mean… The one who’s turning around must experience some force.

Can we relate this to the twin paradox? Maybe. Note that a minus sign in front of the e^(±i·θ) functions amounts to a minus sign in front of both the sine and cosine components. So… Well… The negative of a sine and cosine is the sine and cosine but with a phase shift of 180°: −cosθ = cos(θ ± π) and −sinθ = sin(θ ± π). Now, adding or subtracting a common phase factor to/from the argument of the wavefunction amounts to changing the origin of time. So… Well… I do think the twin paradox and this rather weird business of 360° and 720° symmetries are, effectively, related. 🙂

The reality of the wavefunction

If you haven’t read any of my previous posts on the geometry of the wavefunction (this link goes to the most recent one of them), then don’t attempt to read this one. It brings too much stuff together to be comprehensible. In fact, I am not even sure if I am going to understand what I write myself. 🙂 [OK. Poor joke. Acknowledged.]

Just to recap the essentials, I part ways with mainstream physicists in regard to the interpretation of the wavefunction. For mainstream physicists, the wavefunction is just some mathematical construct. Nothing real. Of course, I acknowledge mainstream physicists have very good reasons for that, but… Well… I believe that, if there is interference, or diffraction, then something must be interfering, or something must be diffracting. I won’t dwell on this because… Well… I have done that too many times already. My hypothesis is that the wavefunction is, in effect, a rotating field vector, so it’s just like the electric field vector of a (circularly polarized) electromagnetic wave (illustrated below).

Of course, it must be different, and it is. First, the (physical) dimension of the field vector of the matter-wave must be different. So what is it? Well… I am tempted to associate the real and imaginary component of the wavefunction with a force per unit mass (as opposed to the force per unit charge dimension of the electric field vector). Of course, the newton/kg dimension reduces to the dimension of acceleration (m/s²), so that’s the dimension of a gravitational field.

Second, I also am tempted to think that this gravitational disturbance causes an electron (or any matter-particle) to move about some center, and I believe it does so at the speed of light. In contrast, electromagnetic waves do not involve any mass: they’re just an oscillating field. Nothing more. Nothing less. Why would I believe there must still be some pointlike particle involved? Well… As Feynman puts it: “When you do find the electron some place, the entire charge is there.” (Feynman’s Lectures, III-21-4) So… Well… That’s why.

The third difference is one that I thought of only recently: the plane of the oscillation cannot be perpendicular to the direction of motion of our electron, because then we can’t explain the direction of its magnetic moment, which is either up or down when traveling through a Stern-Gerlach apparatus. I am more explicit on that in the mentioned post, so you may want to check there. 🙂

I wish I mastered the software to make animations such as the one above (for which I have to credit Wikipedia), but so I don’t. You’ll just have to imagine it. That’s great mental exercise, so… Well… Just try it. 🙂

Let’s now think about rotating reference frames and transformations. If the z-direction is the direction along which we measure the angular momentum (or the magnetic moment), then the up-direction will be the positive z-direction. We’ll also assume the y-direction is the direction of travel of our elementary particle—and let’s just consider an electron here so we’re more real. 🙂 So we’re in the reference frame that Feynman used to derive the transformation matrices for spin-1/2 particles (or for two-state systems in general). His ‘improved’ Stern-Gerlach apparatus—which I’ll refer to as a beam splitter—illustrates this geometry.

[Illustration: modified Stern-Gerlach apparatus]

So I think the magnetic moment—or the angular momentum, really—comes from an oscillatory motion in the x– and y-directions. One is the real component (the cosine function) and the other is the imaginary component (the sine function), as illustrated below.

[Illustration: unit circle with cosine and sine components]

So the crucial difference with the animations above (which illustrate left- and a right-handed polarization respectively) is that we, somehow, need to imagine the circular motion is not in the xz-plane, but in the yz-plane. Now what happens if we change the reference frame?

Well… That depends on what you mean by changing the reference frame. Suppose we’re looking in the positive y-direction—so that’s the direction in which our particle is moving—then we might imagine what it would look like when we make a 180° turn and look at the situation from the other side, so to speak. Now, I did a post on that earlier this year, which you may want to re-read. When we’re looking at the same thing from the other side (from the back side, so to speak), we will want to use our familiar reference frame. So we will want to keep the z-axis as it is (pointing upwards), and we will also want to define the x– and y-axis using the familiar right-hand rule for defining a coordinate frame. So our new x-axis and our new y-axis will be the same as the old x- and y-axes but with the sign reversed. In short, we’ll have the following mini-transformation: (1) z’ = z, (2) x’ = −x, and (3) y’ = −y.

So… Well… If we’re effectively looking at something real that was moving along the y-axis, then it will now still be moving along the y’-axis, but in the negative direction. Hence, our elementary wavefunction e^(iθ) = cosθ + i·sinθ will transform into cos(−θ) + i·sin(−θ) = cosθ − i·sinθ = e^(−iθ). It’s the same thing, really. We just… Well… We just changed our reference frame. We didn’t change reality.

Now you’ll cry wolf, of course, because we just went through all that transformational stuff in our last post. To be specific, we presented the following transformation matrix for a rotation about the z-axis:

[Image: the transformation matrix for a rotation about the z-axis]

Now, if φ is equal to 180° (so that’s π in radians), then these e^(+iφ/2) and e^(−iφ/2) factors are equal to e^(+iπ/2) = +i and e^(−iπ/2) = −i respectively. Hence, our e^(iθ) = cosθ + i·sinθ becomes…

Hey ! Wait a minute ! We’re talking about two very different things here, right? The e^(iθ) = cosθ + i·sinθ is an elementary wavefunction which, we presume, describes some real-life particle—we talked about an electron with its spin in the up-direction—while these transformation matrices are to be applied to amplitudes describing… Well… Either an up– or a down-state, right?

Right. But… Well… Is it so different, really? Suppose our e^(iθ) = cosθ + i·sinθ wavefunction describes an up-electron; then we still have to apply that e^(+iφ/2) = e^(+iπ/2) = +i factor, right? So we get a new wavefunction that will be equal to e^(+iφ/2)·e^(iθ) = e^(+iπ/2)·e^(iθ) = +i·e^(iθ) = i·cosθ + i²·sinθ = −sinθ + i·cosθ, right? So how can we reconcile that with the cosθ − i·sinθ function we thought we’d find?

We can’t. So… Well… Either my theory is wrong or… Well… Feynman can’t be wrong, can he? I mean… It’s not only Feynman here. We’re talking all mainstream physicists here, right?

Right. But think of it. Our electron in that thought experiment does, effectively, make a turn of 180°, so it is going in the other direction now ! That’s more than just… Well… Going around the apparatus and looking at stuff from the other side.

Hmm… Interesting. Let’s think about the difference between the −sinθ + i·cosθ and cosθ − i·sinθ functions. First, note that they will give us the same probabilities: the square of the absolute value of both complex numbers is the same. [It’s equal to 1 because we didn’t bother to put a coefficient in front.] Secondly, we should note that the sine and cosine functions are essentially the same. They just differ by a phase factor: cosθ = sin(θ + π/2) and −sinθ = cos(θ + π/2). Let’s see what we can do with that. We can write the following, for example:

−sinθ + i·cosθ = cos(θ + π/2) + i·sin(θ + π/2) = e^(i·(θ + π/2)) = i·e^(iθ)

Well… I guess that’s something at least ! The e^(i·θ) and e^(i·(θ + π/2)) functions differ by a 90° phase shift so… Well… That’s what it takes to reverse the direction of an electron. 🙂 Let us mull over that in the coming days. As I mentioned, these more philosophical topics are not easily exhausted. 🙂
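A quick numerical sanity check of that little computation (the θ values are arbitrary): applying the +i factor to e^(iθ) does give −sinθ + i·cosθ = e^(i·(θ+π/2)), which has the same magnitude as the cosθ − i·sinθ = e^(−iθ) we expected, but is a genuinely different function of θ.

```python
import cmath
import math

# Check: i*e^(i*theta) = -sin(theta) + i*cos(theta) = e^(i*(theta+pi/2)),
# and compare it with the expected cos(theta) - i*sin(theta) = e^(-i*theta).
for theta in [0.2, 1.0, 2.5]:
    rotated = 1j * cmath.exp(1j * theta)
    assert cmath.isclose(rotated, -math.sin(theta) + 1j * math.cos(theta), abs_tol=1e-12)
    assert cmath.isclose(rotated, cmath.exp(1j * (theta + math.pi / 2)), abs_tol=1e-12)
    expected = cmath.exp(-1j * theta)           # the cos(theta) - i*sin(theta) function
    print(abs(rotated), abs(expected))          # both 1: same probabilities
    print(cmath.isclose(rotated, expected))     # False: not the same function
```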

Transforming amplitudes for spin-1/2 particles

Some say it is not possible to fully understand quantum-mechanical spin. Now, I do agree it is difficult, but I do not believe it is impossible. That’s why I wrote so many posts on it. Most of these focused on elaborating how the classical view of how a rotating charge precesses in a magnetic field might translate into the weird world of quantum mechanics. Others were more focused on the corollary of the quantization of the angular momentum, which is that, in the quantum-mechanical world, the angular momentum is never quite all in one direction only—so that explains some of the seemingly inexplicable randomness in particle behavior.

Frankly, I think those explanations help us quite a bit already but… Well… We need to go the extra mile, right? In fact, that’s what drives my search for a geometric (or physical) interpretation of the wavefunction: the extra mile. 🙂

Now, in one of these many posts on spin and angular momentum, I advised my readers – you, that is – to try to work your way through Feynman’s 6th Lecture on quantum mechanics, which is highly abstract and, therefore, usually skipped. [Feynman himself told his students to skip it, so I am sure that’s what they did.] However, if we believe the physical (or geometric) interpretation of the wavefunction that we presented in previous posts is, somehow, true, then we need to relate it to the abstract math of these so-called transformations between representations. That’s what we’re going to try to do here. It’s going to be just a start, and I will probably end up doing several posts on this but… Well… We do have to start somewhere, right? So let’s see where we get today. 🙂

The thought experiment that Feynman uses throughout his Lecture makes use of what Feynman refers to as modified or improved Stern-Gerlach apparatuses. They allow us to prepare a pure state or, alternatively, as Feynman puts it, to analyze a state. In theory, that is. The illustration below presents a side and top view of such an apparatus. We may already note that the apparatus itself—or, to be precise, our perspective of it—gives us two directions: (1) the up direction, so that’s the positive direction of the z-axis, and (2) the direction of travel of our particle, which coincides with the positive direction of the y-axis. [This is obvious and, at the same time, not so obvious, but I’ll talk about that in my next post. In this one, we basically need to work ourselves through the math, so we don’t want to think too much about philosophical stuff.]

[Illustration: modified Stern-Gerlach apparatus (side and top view)]

The kind of questions we want to answer in this post are variants of the following basic one: if a spin-1/2 particle (let’s think of an electron here, even if the Stern-Gerlach experiment is usually done with an atomic beam) was prepared in a given condition by one apparatus S, say the +S state, what is the probability (or the amplitude) that it will get through a second apparatus T if that was set to filter out the +T state?

The result will, of course, depend on the angles between the two apparatuses S and T, as illustrated below. [Just to respect copyright, I should explicitly note here that all illustrations are taken from the mentioned Lecture, and that the line of reasoning sticks close to Feynman’s treatment of the matter too.]

[Illustration: basic set-up]

We should make a few remarks here. First, this thought experiment assumes our particle doesn’t get lost. That’s obvious but… Well… If you haven’t thought about this possibility, I suspect you will at some point in time. So we do assume that, somehow, this particle makes a turn. It’s an important point because… Well… Feynman—who, remember, represents mainstream physics—somehow assumes that doesn’t really matter. It’s the same particle, right? It just took a turn, so it’s going in some other direction. That’s all, right? Hmm… That’s where I part ways with mainstream physics: the transformation matrices for the amplitudes that we’ll find here describe something real, I think. It’s not just perspective: something happened to the electron. That something does not only change the amplitudes but… Well… It describes a different electron. It describes an electron that goes in a different direction now. But… Well… As said, these are reflections I will further develop in my next post. 🙂 Let’s focus on the math here. The philosophy will follow later. 🙂 Next remark.

Second, we assume the (a) and (b) illustrations above represent the same physical reality because the relative orientation between the two apparatuses, as measured by the angle α, is the same. Now that is obvious, you’ll say, but, as Feynman notes, we can only make that assumption because experiments effectively confirm that spacetime is isotropic. In other words, there is no aether allowing us to establish some sense of absolute direction. Directions are relative – relative to the observer, that is… But… Well… Again, in my next post, I’ll argue that it’s not because directions are relative that they are, somehow, not real. Indeed, in my humble opinion, it does matter whether an electron goes here or, alternatively, there. These two different directions are not just two different coordinate frames. But… Well… Again. The philosophy will follow later. We need to stay focused on the math here.

Third and final remark. This one is actually very tricky. In his argument, Feynman also assumes the two set-ups below are, somehow, equivalent.

equivalent set-up

You’ll say: Huh? If not, say it! Huh? 🙂 Yes. Good. Huh? Feynman writes equivalent – not the same – because… Well… They’re not the same, obviously:

  1. In the first set-up (a), T is wide open, so the apparatus is not supposed to do anything with the beam: it just splits and re-combines it.
  2. In set-up (b) the T apparatus is, quite simply, not there, so… Well… Again. Nothing is supposed to happen with our particles as they come out of S and travel to U.

The fundamental idea here is that our spin-1/2 particle (again, think of an electron here) enters apparatus U in the same state as it left apparatus S. In both set-ups, that is! Now that is a very tricky assumption, because… Well… While the net turn of our electron is the same, it is quite obvious it has to take two turns to get to U in (a), while it only takes one turn in (b). And so… Well… You can probably think of other differences too. So… Yes. And no. Same-same but different, right? 🙂

Right. That is why Feynman goes out of his way to explain the nitty-gritty behind this: he actually devotes a full page in small print on it, which I’ll try to summarize in just a few paragraphs here. [And, yes, you should check my summary against Feynman’s actual writing on this.] It’s like this. While traveling through apparatus T in set-up (a), time goes by and, therefore, the amplitude would be different by some phase factor δ. [Feynman doesn’t say anything about this, but… Well… In the particle’s own frame of reference, this phase factor depends on the energy, the momentum and the time and distance traveled. Think of the argument of the elementary wavefunction here: θ = (E∙t – p∙x)/ħ.] Now, if we believe that the amplitude is just some mathematical construct—so that’s what mainstream physicists (not me!) believe—then we could effectively say that the physics of (a) and (b) are the same, as Feynman does. In fact, let me quote him here:

“The physics of set-up (a) and (b) should be the same but the amplitudes could be different by some phase factor without changing the result of any calculation about the real world.”

Hmm… It’s one of those mysterious short passages where we’d all like geniuses like Feynman (or Einstein, or whomever) to be more explicit on their world view: if the amplitudes are different, can the physics really be the same? I mean… Exactly the same? It all boils down to that unfathomable belief that, somehow, the particle is real but the wavefunction that describes it, is not. Of course, I admit that it’s true that choosing another zero point for the time variable would also change all amplitudes by a common phase factor and… Well… That’s something that I consider to be not real. But… Well… The time and distance traveled in the apparatus is the time and distance traveled in the apparatus, right?

Bon… I have to stay away from these questions for now—we need to move on with the math here—but I will come back to it later. But… Well… Talking math, I should note a very interesting mathematical point here. We have these transformation matrices for amplitudes, right? Well… Not yet. In fact, the coefficients of these matrices are exactly what we’re going to try to derive in this post, but… Well… Let’s assume we know them already. 🙂 So we have a 2-by-2 matrix to go from S to T, from T to U, and then one to go from S to U without going through T, which we can write as RST, RTU, and RSU respectively. Adding the subscripts for the base states in each representation, the equivalence between the (a) and (b) situations can then be captured by the following formula:

phase factor

So we have that phase factor here: the left- and right-hand sides of this equation are, effectively, same-same but different, as they would say in Asia. 🙂 Now, Feynman develops a beautiful mathematical argument to show that the e^(iδ) factor effectively disappears if we convert our rotation matrices to some rather special form that is defined as follows:

normalization

I won’t copy his argument here, but I’d recommend you go over it because it is wonderfully easy to follow and very intriguing at the same time. [Yes. Simple things can be very intriguing.] Indeed, the calculation below shows that the determinant of these special rotation matrices will be equal to 1.

det is one

So… Well… So what? You’re right. I am being sidetracked here. The point is that, if we put all of our rotation matrices in this special form, the e^(iδ) factor vanishes and the formula above reduces to:

reduced formula

So… Yes. End of excursion. Let us remind ourselves of what it is that we are trying to do here. As mentioned above, the kind of questions we want to answer will be variants of the following basic one: if a spin-1/2 particle was prepared in a given condition by one apparatus (S), say the +S state, what is the probability (or the amplitude) that it will get through a second apparatus (T) if that was set to filter out the +T state?

We said the result would depend on the angles between the two apparatuses S and T. I wrote: angles—plural. Why? Because a rotation will generally be described by the three so-called Euler angles:  α, β and γ. Now, it is easy to make a mistake here, because there is a sequence to these so-called elemental rotations—and right-hand rules, of course—but I will let you figure that out. 🙂

The basic idea is the following: if we can work out the transformation matrices for each of these elemental rotations, then we can combine them and find the transformation matrix for any rotation. So… Well… That fills most of Feynman’s Lecture on this, so we don’t want to copy all that. We’ll limit ourselves to the logic for a rotation about the z-axis, and then… Well… You’ll see. 🙂

So… The z-axis… We take that to be the direction along which we are measuring the angular momentum of our electron, so that’s the direction of the (magnetic) field gradient, so that’s the up-axis of the apparatus. In the illustration below, that direction points out of the page, so to speak, because it is perpendicular to the direction of the x– and the y-axis that are shown. Note that the y-axis is the initial direction of our beam.

rotation about z

Now, because the (physical) orientation of the fields and the field gradients of S and T is the same, Feynman says that—despite the angle—the probability for a particle to be up or down with regard to S and T respectively should be the same. Well… Let’s be fair. He does not only say that: experiment shows it to be true. [Again, I am tempted to interject here that it is not because the probabilities for (a) and (b) are the same, that the reality of (a) and (b) is the same, but… Well… You get me. That’s for the next post. Let’s get back to the lesson here.] The probability is, of course, the square of the absolute value of the amplitude, which we will denote as C+, C−, C’+, and C’− respectively. Hence, we can write the following:

same probabilities

Now, the absolute values (or the magnitudes) are the same, but the amplitudes may differ. In fact, they must be different by some phase factor because, otherwise, we would not be able to distinguish the two situations, which are obviously different. As Feynman, finally, admits himself—jokingly or seriously: “There must be some way for a particle to know that it has turned the corner at P1.” [P1 is the midway point between S and T in the illustration, of course—not some probability.]

So… Well… We write:

C’+ = e^(iλ)·C+ and C’− = e^(iμ)·C−

Now, Feynman notes that an equal phase change in all amplitudes has no physical consequence (think of re-defining our t0 = 0 point), so we can add some arbitrary amount to both λ and μ without changing any of the physics. So then we can choose this amount as −(λ + μ)/2. We write:

subtracting a number

Now, it shouldn’t take you too long to figure out that λ’ is equal to λ’ = λ/2 − μ/2 = −μ’. So… Well… Then we can just adopt the convention that λ = −μ. So our C’+ = e^(iλ)·C+ and C’− = e^(iμ)·C− equations can now be written as:

C’+ = e^(iλ)·C+ and C’− = e^(−iλ)·C−

The absolute values are the same, but the phases are different. Right. OK. Good move. What’s next?

Well… The next assumption is that the phase shift λ is proportional to the angle (α) between the two apparatuses. Hence, λ is equal to λ = m·α, and we can re-write the above as:

C’+ = e^(i·m·α)·C+ and C’− = e^(−i·m·α)·C−

Now, this assumption may or may not seem reasonable. Feynman justifies it with a continuity argument, arguing any rotation can be built up as a sequence of infinitesimal rotations and… Well… Let’s not get into the nitty-gritty here. [If you want it, check Feynman’s Lecture itself.] Back to the main line of reasoning. So we’ll assume we can write λ as λ = m·α. The next question then is: what is the value for m? Now, we obviously do get exactly the same physics if we rotate by 360°, or 2π radians. So we might conclude that the amplitudes should be the same and, therefore, that e^(iλ) = e^(i·m·2π) has to be equal to one, so C’+ = C+ and C’− = C−. That’s the case if m is equal to 1. But… Well… No. It’s the same thing again: the probabilities (or the magnitudes) have to be the same, but the amplitudes may be different because of some phase factor. In fact, they should be different. If m = 1/2, then we also get the same physics, even if the amplitudes are not the same. They will be each other’s opposite:

same physical state

Huh? Yes. Think of it. The coefficient of proportionality (m) cannot be equal to 1. If it were equal to 1 and we’d rotate by 180° only, then we’d also get those C’+ = −C+ and C’− = −C− equations, and so these coefficients would, therefore, also describe the same physical situation. Now, you will understand, intuitively, that a rotation of the apparatus by 180° will not give us the same physical situation… So… Well… In case you’d want a more formal argument proving a rotation by 180° does not give us the same physical situation, Feynman has one for you. 🙂

I know that, by now, you’re totally tired and bored, and so you only want the grand conclusion at this point. Well… All of what I wrote above should, hopefully, help you to understand that conclusion, which – I quote Feynman here – is the following:

If we know the amplitudes C+ and C of spin one-half particles with respect to a reference frame S, and we then use new base states, defined with respect to a reference frame T which is obtained from S by a rotation α around the z-axis, the new amplitudes are given in terms of the old by the following formulas:

conclusion

[Feynman denotes our angle α by phi (φ) because… He uses the Euler angles a bit differently. But don’t worry: it’s the same angle.]

What about the amplitude to go from the C− to the C’+ state, and from the C+ to the C’− state? Well… That amplitude is zero. So the transformation matrix is this one:

rotation matrix

Let’s take a moment and think about this. Feynman notes the following, among other things: “It is very curious to say that if you turn the apparatus 360° you get new amplitudes. [They aren’t really new, though, because the common change of sign doesn’t give any different physics.] But if something has been rotated by a sequence of small rotations whose net result is to return it to the original orientation, then it is possible to define the idea that it has been rotated 360°—as distinct from zero net rotation—if you have kept track of the whole history.”

This is very deep. It connects space and time into one single geometric space, so to speak. But… Well… I’ll try to explain this rather sweeping statement later. Feynman also notes that a net rotation of 720° does give us the same amplitudes and, therefore, cannot be distinguished from the original orientation. Feynman finds that intriguing but… Well… I am not sure if it’s very significant. I do note some symmetries in quantum physics involve 720° rotations but… Well… I’ll let you think about this. 🙂

Note that the determinant of our matrix is equal to a·d − b·c = e^(+iφ/2)·e^(−iφ/2) − 0 = 1. So… Well… Our rotation matrix is, effectively, in that special form! How come? Well… When equating λ = −μ, we are effectively putting the transformation into that special form. Let us also, just for fun, quickly check the normalization condition. It requires that the probabilities, in any given representation, add up to one. So… Well… Do they? When they come out of S, our electrons are equally likely to be in the up or down state. So the amplitudes are 1/√2. [To be precise, they are ±1/√2 but… Well… It’s the phase factor story once again.] That’s normalized: |1/√2|² + |1/√2|² = 1. The amplitudes to come out of the apparatus in the up or down state are e^(+iφ/2)/√2 and e^(−iφ/2)/√2 respectively, so the probabilities add up to |e^(+iφ/2)/√2|² + |e^(−iφ/2)/√2|² = … Well… It’s 1. Check it. 🙂

Let me add an extra remark here. The normalization condition will result in matrices whose determinant will be equal to some exponential with a pure imaginary exponent, like e^(iα). So is that what we have here? Yes. We can re-write 1 as 1 = e^(i·0) = e^0, so α = 0. 🙂 Capito? Probably not, but… Well… Don’t worry about it. Just think about the grand results. As Feynman puts it, this Lecture is really “a sort of cultural excursion.” 🙂
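In case you’d like to check these little claims with actual numbers, here is a minimal Python sketch. I am assuming the diagonal form we’ve been discussing, with e^(+iφ/2) and e^(−iφ/2) on the diagonal – the sign convention may differ from Feynman’s tables, but neither the determinant nor the probabilities care about that.

```python
import numpy as np

def Rz(phi):
    """Transformation matrix for a rotation by phi about the z-axis.
    Assumed convention: C'+ = exp(+i*phi/2)*C+, C'- = exp(-i*phi/2)*C-."""
    return np.array([[np.exp(1j * phi / 2), 0],
                     [0, np.exp(-1j * phi / 2)]])

phi = 1.234                        # some arbitrary angle
C = np.array([1, 1]) / np.sqrt(2)  # amplitudes coming out of S: 1/sqrt(2) each

print(np.linalg.det(Rz(phi)))          # (1+0j): the 'special form' has determinant 1
print(np.sum(np.abs(Rz(phi) @ C)**2))  # 1.0: the probabilities still add up to one

# a net rotation of 360 degrees flips the sign of the amplitudes; 720 degrees does not
print(np.allclose(Rz(2 * np.pi), -np.eye(2)))  # True
print(np.allclose(Rz(4 * np.pi), np.eye(2)))   # True
```

The last two checks are just the 360° versus 720° business from a few paragraphs back: it is really the half-angle in the exponent that does all the work.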

Let’s do a practical calculation here. Let’s suppose the angle is, effectively, 180°. So the e^(+iφ/2) and e^(−iφ/2) factors are equal to e^(+iπ/2) = +i and e^(−iπ/2) = −i, so… Well… What does that mean—in terms of the geometry of the wavefunction? Hmm… We need to do some more thinking about the implications of all this transformation business for our geometric interpretation of the wavefunction, but so we’ll do that in our next post. Let us first work our way out of this rather hellish transformation logic. 🙂 [See? I do admit it is all quite difficult and abstruse, but… Well… We can do this, right?]

So what’s next? Well… Feynman develops a similar argument (I should say same-same but different once more) to derive the coefficients for a rotation of ±90° around the y-axis. Why 90° only? Well… Let me quote Feynman here, as I can’t sum it up more succinctly than he does: “With just two transformations—90° about the y-axis, and an arbitrary angle about the z-axis [which we described above]—we can generate any rotation at all.”

So how does that work? Check the illustration below. In Feynman’s words again: “Suppose that we want the angle α around x. We know how to deal with the angle α around z, but now we want it around x. How do we get it? First, we turn the axis z down onto x—which is a rotation of +90°. Then we turn through the angle α around z′. Then we rotate −90° about y″. The net result of the three rotations is the same as turning around x by the angle α. It is a property of space.”

full rotation

Besides helping us greatly to derive the transformation matrix for any rotation, the mentioned property of space is rather mysterious and deep. It sort of reduces the degrees of freedom, so to speak. Feynman writes the following about this:

“These facts of the combinations of rotations, and what they produce, are hard to grasp intuitively. It is rather strange, because we live in three dimensions, but it is hard for us to appreciate what happens if we turn this way and then that way. Perhaps, if we were fish or birds and had a real appreciation of what happens when we turn somersaults in space, we could more easily appreciate such things.”

In any case, I should limit the number of philosophical interjections. If you go through the motions, then you’ll find the following elemental rotation matrices:

full set of rotation matrices

What about the determinants of the Rx(φ) and Ry(φ) matrices? They’re also equal to one, so… Yes. An exponential with a pure imaginary exponent, right? 1 = e^(i·0) = e^0. 🙂
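If you want to see that ‘property of space’ at work with actual numbers, here is a small sketch. I use the standard SU(2) form U(θ) = cos(θ/2)·I − i·sin(θ/2)·(n·σ) for the elemental rotations, which may differ from Feynman’s tables by a sign or phase convention, and check (1) that all determinants are equal to one and (2) that sandwiching the z-rotation between a +90° and a −90° rotation about y turns it into the x-rotation.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U(sigma, theta):
    """Standard SU(2) rotation by theta about the axis of the given Pauli matrix."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma

alpha = 0.7  # some arbitrary rotation angle

# all three elemental rotation matrices have determinant 1 (the 'special form')
for s in (sx, sy, sz):
    print(round(abs(np.linalg.det(U(s, alpha))), 12))  # 1.0

# conjugating the z-rotation with a 90-degree y-rotation gives the x-rotation
lhs = U(sx, alpha)
rhs = U(sy, np.pi / 2) @ U(sz, alpha) @ U(sy, np.pi / 2).conj().T
print(np.allclose(lhs, rhs))  # True
```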

What’s next? Well… We’re done. We can now combine the elemental transformations above in a more general format, using the standardized Euler angles. Again, just go through the motions. The Grand Result is:

euler transformation

Does it give us normalized amplitudes? It should, but it looks like our determinant is going to be a much more complicated complex exponential. 🙂 Hmm… Let’s take some time to mull over this. As promised, I’ll be back with more reflections in my next post.

The geometry of the wavefunction, electron spin and the form factor

Our previous posts showed how a simple geometric interpretation of the elementary wavefunction yielded the (Compton scattering) radius of an elementary particle—for an electron, at least: for the proton, we only got the order of magnitude right—but then a proton is not an elementary particle. We got lots of other interesting equations as well… But… Well… When everything is said and done, it’s that equivalence between the E = m·a²·ω² and E = m·c² relations that we… Well… We need to be more specific about it.

Indeed, I’ve been ambiguous here and there—oscillating between various interpretations, so to speak. 🙂 In my own mind, I refer to my unanswered questions, or my ambiguous answers to them, as the form factor problem. So… Well… That explains the title of my post. But so… Well… I do want to be somewhat more conclusive in this post. So let’s go and see where we end up. 🙂

To help focus our mind, let us recall the metaphor of the V-2 perpetuum mobile, as illustrated below. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down. It provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring: it is described by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs. Of course, instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft, but then that’s not fancy enough for me. 🙂

V-2 engine

At first sight, the analogy between our flywheel model of an electron and the V-twin engine seems to be complete: the 90 degree angle of our V-2 engine makes it possible to perfectly balance the pistons and we may, therefore, think of the flywheel as a (symmetric) rotating mass, whose angular momentum is given by the product of the angular frequency and the moment of inertia: L = ω·I. Of course, the moment of inertia (aka the angular mass) will depend on the form (or shape) of our flywheel:

  1. I = m·a² for a rotating point mass m or, what amounts to the same, for a circular hoop of mass m and radius a.
  2. For a rotating (uniformly solid) disk, we must add a 1/2 factor: I = m·a²/2.

How can we relate those formulas to the E = m·a²·ω² formula? The kinetic energy that is being stored in a flywheel is equal to Ekinetic = I·ω²/2, so that is only half of the E = m·a²·ω² product if we substitute I for I = m·a². [For a disk, we get a factor 1/4, so that’s even worse!] However, our flywheel model of an electron incorporates potential energy too. In fact, the E = m·a²·ω² formula just adds the (kinetic and potential) energy of two oscillators: we do not really consider the energy in the flywheel itself because… Well… The essence of our flywheel model of an electron is not the flywheel: the flywheel just transfers energy from one oscillator to the other, but so… Well… We don’t include it in our energy calculations. The essence of our model is that two-dimensional oscillation which drives the electron, and which is reflected in Einstein’s E = m·c² formula. That two-dimensional oscillation—the a²·ω² = c² equation, really—tells us that the resonant (or natural) frequency of the fabric of spacetime is given by the speed of light—but measured in units of a. [If you don’t quite get this, re-write the a²·ω² = c² equation as ω = c/a: the radius of our electron appears as a natural distance unit here.]

Now, we were extremely happy with this interpretation not only because of the key results mentioned above, but also because it has lots of other nice consequences. Think of our probabilities as being proportional to energy densities, for example—and all of the other stuff I describe in my published paper on this. But there is even more on the horizon: a follower of this blog (a reader with an actual PhD in physics, for a change) sent me an article analyzing elementary particles as tiny black holes because… Well… If our electron is effectively spinning around, then its tangential velocity is equal to v = a·ω = c. Now, recent research suggests black holes are also spinning at (nearly) the speed of light. Interesting, right? However, in order to understand what she’s trying to tell me, I’ll first need to get a better grasp of general relativity, so I can relate what I’ve been writing here and in previous posts to the Schwarzschild radius and other stuff.

Let me get back to the lesson here. In the reference frame of our particle, the wavefunction really looks like the animation below: it has two components, and the amplitude of the two-dimensional oscillation is equal to a, which we calculated as a = ħ/(m·c) = 3.8616×10⁻¹³ m, so that’s the (reduced) Compton scattering radius of an electron.

Circle_cos_sin

In my original article on this, I used a more complicated argument involving the angular momentum formula, but I now prefer a more straightforward calculation:

c = a·ω = a·E/ħ = a·m·c²/ħ ⇔ a = ħ/(m·c)
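Just to check the numbers, here is a small Python calculation with the CODATA values hard-coded, so the sketch stays self-contained:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
c    = 2.99792458e8      # speed of light, m/s

a = hbar / (m_e * c)          # the (reduced) Compton radius
omega = m_e * c**2 / hbar     # E/hbar

print(a)          # ~3.8616e-13 m
print(a * omega)  # ~2.998e8 m/s: the tangential velocity a*omega comes out at c
```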

The question is: what is that rotating arrow? I’ve been vague and not so vague on this. The thing is: I can’t prove anything in this regard. But my hypothesis is that it is, in effect, a rotating field vector, so it’s just like the electric field vector of a (circularly polarized) electromagnetic wave (illustrated below).

There are a number of crucial differences though:

  1. The (physical) dimension of the field vector of the matter-wave is different: I associate the real and imaginary component of the wavefunction with a force per unit mass (as opposed to the force per unit charge dimension of the electric field vector). Of course, the newton/kg dimension reduces to the dimension of acceleration (m/s²), so that’s the dimension of a gravitational field.
  2. I do believe this gravitational disturbance, so to speak, does cause an electron to move about some center, and I believe it does so at the speed of light. In contrast, electromagnetic waves do not involve any mass: they’re just an oscillating field. Nothing more. Nothing less. In contrast, as Feynman puts it: “When you do find the electron some place, the entire charge is there.” (Feynman’s Lectures, III-21-4)
  3. The third difference is one that I thought of only recently: the plane of the oscillation cannot be perpendicular to the direction of motion of our electron, because then we can’t explain the direction of its magnetic moment, which is either up or down when traveling through a Stern-Gerlach apparatus.

I mentioned that in my previous post but, for your convenience, I’ll repeat what I wrote there. The basic idea here is illustrated below (credit for this illustration goes to another blogger on physics). As for the Stern-Gerlach experiment itself, let me refer you to a YouTube video from the Quantum Made Simple site.

Figure 1 Bohr

The point is: the direction of the angular momentum (and the magnetic moment) of an electron—or, to be precise, its component as measured in the direction of the (inhomogeneous) magnetic field through which our electron is traveling—cannot be parallel to the direction of motion. On the contrary, it is perpendicular to the direction of motion. In other words, if we imagine our electron as spinning around some center, then the disk it circumscribes will comprise the direction of motion.

However, we need to add an interesting detail here. As you know, we don’t really have a precise direction of angular momentum in quantum physics. [If you don’t know this… Well… Just look at one of my many posts on spin and angular momentum in quantum physics.] Now, we’ve explored a number of hypotheses but, when everything is said and done, a rather classical explanation turns out to be the best: an object with an angular momentum J and a magnetic moment μ (I used bold-face because these are vector quantities) that is placed in some magnetic field B will not line up, as you’d expect a tiny magnet to do in a magnetic field—or not completely, at least: it will precess. I explained that in another post on quantum-mechanical spin, which I advise you to re-read if you want to appreciate the point that I am trying to make here. That post integrates some interesting formulas, and so one of the things on my ‘to do’ list is to prove that these formulas are, effectively, compatible with the electron model we’ve presented in this and previous posts.

Indeed, when one advances a hypothesis like this, it’s not enough to just sort of show that the general geometry of the situation makes sense: we also need to show the numbers come out alright. So… Well… Whatever we think our electron—or its wavefunction—might be, it needs to be compatible with stuff like the observed precession frequency of an electron in a magnetic field.

Our model also needs to be compatible with the transformation formulas for amplitudes. I’ve been talking about this for quite a while now, and so it’s about time I get going on that.

Last but not least, those articles that relate matter-particles to (quantum) gravity—such as the one I mentioned above—are intriguing too and, hence, whatever hypotheses I advance here, I’d better check them against those more advanced theories too, right? 🙂 Unfortunately, that’s going to take me a few more years of studying… But… Well… I still have many years ahead—I hope. 🙂

Post scriptum: It’s funny how one’s brain keeps working when sleeping. When I woke up this morning, I thought: “But it is that flywheel that matters, right? That’s the energy storage mechanism and also explains how photons possibly interact with electrons. The oscillators drive the flywheel but, without the flywheel, nothing is happening. It is really the transfer of energy—through the flywheel—which explains why our flywheel goes round and round.”

It may or may not be useful to remind ourselves of the math in this regard. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time (as a function of the angle θ) is equal to: d(sin²θ)/dθ = 2∙sinθ∙d(sinθ)/dθ = 2∙sinθ∙cosθ. Now, the motion of the second oscillator (just look at that second piston going up and down in the V-2 engine) is given by the sinθ function, which is equal to cos(θ − π/2). Hence, its kinetic energy is equal to sin²(θ − π/2), and how it changes (as a function of θ again) is equal to 2∙sin(θ − π/2)∙cos(θ − π/2) = −2∙cosθ∙sinθ = −2∙sinθ∙cosθ. So here we have our energy transfer: the flywheel organizes the borrowing and returning of energy, so to speak. That’s the crux of the matter.
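If you’d rather have a machine do the differentiating, here is a small sympy check of that bookkeeping: the rate at which the first oscillator gains kinetic energy is, at every angle θ, exactly the rate at which the second one loses it.

```python
import sympy as sp

theta = sp.symbols('theta')

ke1 = sp.sin(theta)**2               # kinetic energy of the first oscillator (up to a constant factor)
ke2 = sp.sin(theta - sp.pi / 2)**2   # kinetic energy of the second oscillator

d1 = sp.simplify(sp.diff(ke1, theta))  # sin(2*theta), i.e. 2*sin(theta)*cos(theta)
d2 = sp.simplify(sp.diff(ke2, theta))  # -sin(2*theta)

print(d1, d2)
print(sp.simplify(d1 + d2))  # 0: what one oscillator borrows, the other returns
```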

So… Well… What if the relevant energy formula is E = m·a²·ω²/2 instead of E = m·a²·ω²? What are the implications? Well… We get a √2 factor in our formula for the radius a, as shown below.

square 2

Now that is not so nice. For the tangential velocity, we get a·ω = √2·c. This is also not so nice. How can we save our model? I am not sure, but here I am thinking of the mentioned precession—the wobbling of our flywheel in a magnetic field. Remember we may think of Jz—the angular momentum or, to be precise, its component in the z-direction (the direction in which we measure it)—as the projection of the real angular momentum J. Let me insert Feynman’s illustration here again (Feynman’s Lectures, II-34-3), so you get what I am talking about.

precession

Now, all depends on the angle (θ) between Jz and J, of course. We did a rather obscure post on these angles, but the formulas there come in handy now. Just click the link and review it if and when you’d want to understand the following formula for the magnitude of the presumed actual momentum: J = √(j·(j+1))·ħ. In this particular case (spin-1/2 particles), j is equal to 1/2 (in units of ħ, of course). Hence, J is equal to √0.75 ≈ 0.866. Elementary geometry then tells us cos(θ) = (1/2)/√(3/4) = 1/√3. Hence, θ ≈ 54.73561°. That’s a big angle—larger than the 45° angle we had secretly expected because… Well… The 45° angle has that √2 factor in it: cos(45°) = sin(45°) = 1/√2.
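The numbers in this paragraph are easy to reproduce. Here is a three-line check, using the J = √(j·(j+1))·ħ magnitude formula and working in units of ħ:

```python
import math

j = 0.5                     # spin-1/2
Jz = j                      # the measured component, in units of hbar
J = math.sqrt(j * (j + 1))  # the magnitude: sqrt(0.75) = 0.866...

theta = math.degrees(math.acos(Jz / J))
print(J)      # 0.8660...
print(theta)  # 54.7356...: the angle between J and its z-component
```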

Hmm… As you can see, there is no easy fix here. Those damn 1/2 factors! They pop up everywhere, don’t they? 🙂 We’ll solve the puzzle. One day… But not today, I am afraid. I’ll call it the form factor problem… Because… Well… It sounds better than the 1/2 or √2 problem, right? 🙂

Note: If you’re into quantum math, you’ll note ħ/(m·c) is the reduced Compton scattering radius. The standard Compton scattering radius is equal to (2π·ħ)/(m·c) = h/(m·c). It doesn’t solve the √2 problem. Sorry. The form factor problem. 🙂

To be honest, I finished my published paper on all of this with a suggestion that, perhaps, we should think of two circular oscillations, as opposed to linear ones. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation – around any axis – will be some combination of a rotation around the two other axes. Hence, we may want to think of our two-dimensional oscillation as an oscillation of a polar and azimuthal angle. It’s just a thought but… Well… I am sure it’s going to keep me busy for a while. 🙂

polar_coords

They are oscillations, still, so I am not thinking of two flywheels that keep going around in the same direction. No. More like a wobbling object on a spring. Something like the movement of a bobblehead on a spring perhaps. 🙂

bobblehead

Electron spin and the geometry of the wavefunction

In our previous posts, we interpreted the elementary wavefunction ψ = a·e^(−i∙θ) = a·cosθ − i·a·sinθ as a two-dimensional oscillation in spacetime. In addition to assuming the two directions of the oscillation were perpendicular to each other, we also assumed they were perpendicular to the direction of motion. While the first assumption is essential in our interpretation, the second assumption is solely based on an analogy with a circularly polarized electromagnetic wave. We also assumed the matter wave could be right-handed as well as left-handed (as illustrated below), and that these two physical possibilities corresponded to the angular momentum being equal to plus or minus ħ/2 respectively.

 

This allowed us to derive the Compton scattering radius of an elementary particle. Indeed, we interpreted the rotating vector as a resultant vector, which we get by adding the sine and cosine motions, which represent the real and imaginary components of our wavefunction. The energy of this two-dimensional oscillation is twice the energy of a one-dimensional oscillator and, therefore, equal to E = m·a²·ω². Now, the angular frequency is given by ω = E/ħ and E must, obviously, also be equal to E = m·c². Substitution and re-arranging the terms gives us the Compton scattering radius:

Compton radius

The value given above is the (reduced) Compton scattering radius for an electron. For a proton, we get a value of about 2.1×10⁻¹⁶ m, which is about 1/4 of the radius of a proton as measured in scattering experiments. Hence, for a proton, our formula does not give us the exact (i.e. experimentally verified) value but it does give us the correct order of magnitude—which is fine because we know a proton is not an elementary particle and, hence, the motion of its constituent parts (quarks) is… Well… It complicates the picture hugely.

If we’d presume the electron charge would, effectively, be rotating about the center, then its tangential velocity is given by v = a·ω = [ħ/(m·c)]·(E/ħ) = c, which is yet another wonderful implication of our hypothesis. Finally, the a·ω formula allowed us to interpret the speed of light as the resonant frequency of the fabric of space itself, as illustrated when re-writing this equality as follows:

Einstein

This gave us a natural and forceful interpretation of Einstein’s mass-energy equivalence formula: the energy in the E = m·c² equation is, effectively, a two-dimensional oscillation of mass.

However, while toying with this and other results (for example, we may derive a Poynting vector and show probabilities are, effectively, proportional to energy densities), I realize the plane of our two-dimensional oscillation cannot be perpendicular to the direction of motion of our particle. In fact, the direction of motion must lie in the same plane. This is a direct consequence of the direction of the angular momentum as measured by, for example, the Stern-Gerlach experiment. The basic idea here is illustrated below (credit for this illustration goes to another blogger on physics). As for the Stern-Gerlach experiment itself, let me refer you to a YouTube video from the Quantum Made Simple site.

Figure 1 Bohr

The point is: the direction of the angular momentum (and the magnetic moment) of an electron—or, to be precise, its component as measured in the direction of the (inhomogeneous) magnetic field through which our electron is traveling—cannot be parallel to the direction of motion. On the contrary, it is perpendicular to the direction of motion. In other words, if we imagine our electron as some rotating disk or a flywheel, then it will actually comprise the direction of motion.

What are the implications? I am not sure. I will definitely need to review whatever I wrote about the de Broglie wavelength in previous posts. We will also need to look at those transformations of amplitudes once again. Finally, we will also need to relate this to the quantum-mechanical formulas for the angular momentum and the magnetic moment.

Post scriptum: As in previous posts, I need to mention one particularity of our model. When playing with those formulas, we contemplated two different formulas for the angular mass: one is the formula for a rotating disk (I = m·r²/2), and the other is the one for a rotating point mass (I = m·r²). The only difference between the two is a 1/2 factor, but it turns out we need it to get a sensical result. For a rotating point mass, the angular momentum is equal to the radius times the momentum, so that’s the radius times the mass times the velocity: L = m·v·r. [See also Feynman, Vol. II-34-2, in this regard.] Hence, for our model, we get L = m·v·r = m·c·a = m·c·ħ/(m·c) = ħ. Now, we know it’s equal to ±ħ/2, so we need that 1/2 factor in the formula.
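To see the numbers: with the point-mass formula, L = m·c·a comes out at exactly ħ, and the 1/2 factor of the disk formula is what brings us to the measured ħ/2. A quick sketch, with the CODATA values hard-coded:

```python
hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
c    = 2.99792458e8      # m/s

a = hbar / (m_e * c)     # the Compton radius

L_point = m_e * c * a    # point mass on a circle (I = m*a^2): L = m*c*a
L_disk  = L_point / 2    # uniform disk (I = m*a^2/2)

print(L_point / hbar)    # 1.0
print(L_disk / hbar)     # 0.5, i.e. the measured hbar/2
```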

Can we relate this 1/2 factor to the g-factor for the electron’s magnetic moment, which is (approximately) equal to 2? Maybe. We’d need to look at the formula for a rotating charged disk. That’s for a later post, however. It’s been enough for today, right? 🙂

I would just like to signal another interesting consequence of our model. If we would interpret the radius of our disk (a)—so that’s the Compton radius of our electron, as opposed to the Thomson radius—as the uncertainty in the position of our electron, then our L = m·v·r = m·c·a = p·a = ħ/2 formula appears as a very particular expression of the Uncertainty Principle: p·Δx = ħ/2. Isn’t that just plain nice? 🙂

Re-visiting the Complementarity Principle: the field versus the flywheel model of the matter-wave

This post is a continuation of the previous one: it is just going to elaborate the questions I raised in the post scriptum of that post. Let’s first review the basics once more.

The geometry of the elementary wavefunction

In the reference frame of the particle itself, the geometry of the wavefunction simplifies to what is illustrated below: an oscillation in two dimensions which, viewed together, form a plane that would be perpendicular to the direction of motion—but then our particle doesn’t move in its own reference frame, obviously. Hence, we could be looking at our particle from any direction and we should, presumably, see a similar two-dimensional oscillation. That is interesting because… Well… If we rotate this circle around its center (in whatever direction we’d choose), we get a sphere, right? It’s only when it starts moving, that it loses its symmetry. Now, that is very intriguing, but let’s think about that later.

Circle_cos_sin

Let’s assume we’re looking at it from some specific direction. Then we presumably have some charge (the green dot) moving about some center, and its movement can be analyzed as the sum of two oscillations (the sine and cosine) which represent the real and imaginary component of the wavefunction respectively—as we observe it, so to speak. [Of course, you’ve been told you can’t observe wavefunctions so… Well… You should probably stop reading this. :-)] We write:

ψ = a·e^(−i∙θ) = a·e^(−i∙E·t/ħ) = a·cos(−E∙t/ħ) + i·a·sin(−E∙t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ)

So that’s the wavefunction in the reference frame of the particle itself. When we think of it as moving in some direction (so relativity kicks in), we need to add the p·x term to the argument (θ = (E·t − p·x)/ħ). It is easy to show this term doesn’t change the argument (θ), because we also get a different value for the energy in the new reference frame: Ev = γ·E0 and so… Well… I’ll refer you to my post on this, in which I show the argument of the wavefunction is invariant under a Lorentz transformation: the way Ev and pv and, importantly, the coordinates x and t relativistically transform ensures the invariance.
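Here is a quick numerical illustration of that invariance – not a proof, just a sketch with arbitrary numbers: take an event on the particle’s worldline, boost to a frame in which the particle moves at velocity v, and compare E·t − p·x with the rest-frame value E0·t0.

```python
import math

c  = 3.0        # arbitrary units: only the ratios matter here
m  = 2.0
v  = 0.6 * c    # speed of the particle in the 'lab' frame
t0 = 1.7        # some proper time; the particle sits at x0 = 0 in its own frame

gamma = 1 / math.sqrt(1 - v**2 / c**2)

E0 = m * c**2        # rest energy
E  = gamma * E0      # energy in the lab frame
p  = gamma * m * v   # momentum in the lab frame

# Lorentz-transform the event (t0, x0 = 0) to the lab frame
t = gamma * t0
x = gamma * v * t0

print(E0 * t0)        # the phase (times hbar) in the rest frame
print(E * t - p * x)  # the same number in the lab frame: the argument is invariant
```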

In fact, I’ve always wanted to read de Broglie‘s original thesis because I strongly suspect he saw that immediately. If you click this link, you’ll find an author who suggests the same. Having said that, I should immediately add this does not imply there is no need for a relativistic wave equation: the wavefunction is a solution for the wave equation and, yes, I am the first to note the Schrödinger equation has some obvious issues, which I briefly touch upon in one of my other posts—and which is why Schrödinger himself and other contemporaries came up with a relativistic wave equation (Oskar Klein and Walter Gordon got the credit but others (including Louis de Broglie) also suggested a relativistic wave equation when Schrödinger published his). In my humble opinion, the key issue is not that Schrödinger’s equation is non-relativistic. It’s that 1/2 factor again but… Well… I won’t dwell on that here. We need to move on. So let’s leave the wave equation for what it is and go back to our wavefunction.

You’ll note the argument (or phase) of our wavefunction moves clockwise—or counterclockwise, depending on whether you’re standing in front of or behind the clock. Of course, Nature doesn’t care about where we stand or—to put it differently—whether we measure time clockwise, counterclockwise, in the positive, the negative or whatever direction. Hence, I’ve argued we can have both left- as well as right-handed wavefunctions, as illustrated below (for p ≠ 0). Our hypothesis is that these two physical possibilities correspond to the angular momentum of our electron being either positive or negative: Jz = +ħ/2 or, else, Jz = −ħ/2. [If you’ve read a thing or two about neutrinos, then… Well… They’re kinda special in this regard: they have no charge, and neutrinos and antineutrinos are actually defined by their helicity. But… Well… Let’s stick to trying to describe electrons for a while.]

The line of reasoning that we followed allowed us to calculate the amplitude a. We got a result that tentatively confirms we’re on the right track with our interpretation: we found that a = ħ/(me·c), so that’s the Compton scattering radius of our electron. All good! But we were still a bit stuck—or ambiguous, I should say—on what the components of our wavefunction actually are. Are we really imagining the tip of that rotating arrow is a pointlike electric charge spinning around the center? [Pointlike or… Well… Perhaps we should think of the Thomson radius of the electron here, i.e. the so-called classical electron radius, which is equal to the Compton radius times the fine-structure constant: rThomson = α·rCompton ≈ (3.86×10⁻¹³ m)/137 ≈ 2.82×10⁻¹⁵ m.]

So that would be the flywheel model.

In contrast, we may also think the whole arrow is some rotating field vector—something like the electric field vector, with the same or some other physical dimension, like newton per charge unit, or newton per mass unit? So that’s the field model. Now, these interpretations may or may not be compatible—or complementary, I should say. I sure hope they are but… Well… What can we reasonably say about it?

Let us first note that the flywheel interpretation has a very obvious advantage, because it allows us to explain the interaction between a photon and an electron, as I demonstrated in my previous post: the electromagnetic energy of the photon will drive the circulatory motion of our electron… So… Well… That’s a nice physical explanation for the transfer of energy. However, when we think about interference or diffraction, we’re stuck: flywheels don’t interfere or diffract. Only waves do. So… Well… What to say?

I am not sure, but here I want to think some more by pushing the flywheel metaphor to its logical limits. Let me remind you of what triggered it all: it was the mathematical equivalence of the energy equation for an oscillator (E = m·a²·ω²) and Einstein’s formula (E = m·c²), which tells us energy and mass are equivalent but… Well… They’re not the same. So what are they then? What is energy, and what is mass—in the context of these matter-waves that we’re looking at. To be precise, the E = m·a²·ω² formula gives us the energy of two oscillators, so we need a two-spring model which—because I love motorbikes—I referred to as my V-twin engine model, but it’s not an engine, really: it’s two frictionless pistons (or springs) whose direction of motion is perpendicular to each other, so they are at a 90° angle and, therefore, their motion is, effectively, independent. In other words: they will not interfere with each other. It’s probably worth showing the illustration just one more time. And… Well… Yes. I’ll also briefly review the math one more time.

V-2 engine

If the magnitude of the oscillation is equal to a, then the motion of these pistons (or the mass on a spring) will be described by x = a·cos(ω·t + Δ). Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t–π/2) = a·sin(ω·t). The kinetic and potential energy of one oscillator – think of one piston or one spring only – can then be calculated as:

  1. K.E. = T = m·v²/2 = (1/2)·m·ω²·a²·sin²(ω·t + Δ)
  2. P.E. = U = k·x²/2 = (1/2)·k·a²·cos²(ω·t + Δ)

The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω². Hence, the total energy—for one piston, or one spring—is equal to:

E = T + U = (1/2)·m·ω²·a²·[sin²(ω·t + Δ) + cos²(ω·t + Δ)] = m·a²·ω²/2

Hence, adding the energy of the two oscillators, we have a perpetuum mobile storing an energy that is equal to twice this amount: E = m·a²·ω². It is a great metaphor. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. However, we still have to prove this engine is, effectively, a perpetuum mobile: we need to prove the energy that is being borrowed or returned by one piston is the energy that is being returned or borrowed by the other. That is easy to do, but I won’t bother you with that proof here: you can double-check it in the referenced post or – more formally – in an article I posted on viXra.org.
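For those who prefer numbers over proofs, here is a short numerical sketch, with arbitrary values for m, a and ω: each piston’s energy works out to m·a²·ω²/2 at every instant, so the two of them together store m·a²·ω².

```python
import numpy as np

m, a, omega = 1.5, 0.4, 3.0   # arbitrary mass, amplitude and angular frequency
k = m * omega**2              # the stiffness implied by the dynamics
t = np.linspace(0, 5, 1000)

x1 = a * np.cos(omega * t)            # first piston
x2 = a * np.sin(omega * t)            # second piston, 90 degrees out of phase
v1 = -a * omega * np.sin(omega * t)
v2 =  a * omega * np.cos(omega * t)

E1 = 0.5 * m * v1**2 + 0.5 * k * x1**2   # kinetic plus potential energy of piston 1
E2 = 0.5 * m * v2**2 + 0.5 * k * x2**2   # same for piston 2

print(np.allclose(E1 + E2, m * a**2 * omega**2))  # True, at every point in time
```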

It is all beautiful, and the key question is obvious: if we want to relate the E = m·a²·ω² and E = m·c² formulas, we need to explain why we could, potentially, write c as c = a·ω = a·√(k/m). We’ve done that already—to some extent at least. The tangential velocity of a pointlike particle spinning around some axis is given by v = r·ω. Now, the radius is given by a = ħ/(m·c), and ω = E/ħ = m·c²/ħ, so v is equal to v = [ħ/(m·c)]·[m·c²/ħ] = c. Another beautiful result, but what does it mean? We need to think about the meaning of the ω = √(k/m) formula here. In the mentioned article, we boldly wrote that the speed of light is to be interpreted as the resonant frequency of spacetime, but so… Well… What do we really mean by that? Think of the following.

Einstein’s E = m·c² equation implies the ratio between the energy and the mass of any particle is always the same:

F3

This effectively reminds us of the ω² = C⁻¹/L or ω² = k/m formula for harmonic oscillators. The key difference is that the ω² = C⁻¹/L and ω² = k/m formulas introduce two (or more) degrees of freedom. In contrast, c² = E/m for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light (c) emerges here as the defining property of spacetime: the resonant frequency, so to speak. We have no further degrees of freedom here.

Let’s think about k. [I am not trying to avoid the ω² = 1/LC formula here. It’s basically the same concept: the ω² = 1/LC formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor, an inductor, and a capacitor. Writing the formula as ω² = C⁻¹/L introduces the concept of elastance, which is the equivalent of the mechanical stiffness (k) of a spring, so… Well… You get it, right? The ω² = C⁻¹/L and ω² = k/m formulas sort of describe the same thing: harmonic oscillation. It’s just… Well… Unlike the ω² = C⁻¹/L formula, the ω² = k/m formula is directly compatible with our V-twin engine metaphor, because it also involves physical distances, as I’ll show you here.] The k in the ω² = k/m formula is, effectively, the stiffness of the spring. It is defined by Hooke’s Law, which states that the force that is needed to extend or compress a spring by some distance x is linearly proportional to that distance, so we write: F = k·x.

Now that is interesting, isn’t it? We’re talking exactly the same thing here: spacetime is, presumably, isotropic, so it should oscillate the same in any direction—I am talking those sine and cosine oscillations now, but in physical space—so there is nothing imaginary here: all is real or… Well… As real as we can imagine it to be. 🙂

We can elaborate the point as follows. The F = k·x equation implies k is a force per unit distance: k = F/x. Hence, its physical dimension is newton per meter (N/m). Now, the x in this equation may be equated to the maximum extension of our spring, or the amplitude of the oscillation, so that’s the radius a in the metaphor we’re analyzing here. Now look at how we can re-write the a·ω = a·√(k/m) equation:

Einstein

In case you wonder about the E = F·a substitution: just remember that energy is force times distance. [Just do a dimensional analysis: you’ll see it works out.] So we have a spectacular result here, for several reasons. The first, and perhaps most obvious reason, is that we can actually derive Einstein’s E = m·c² formula from our flywheel model. Now, that is truly glorious, I think. However, even more importantly, this equation suggests we do not necessarily need to think of some actual mass oscillating up and down and sideways at the same time: the energy in the oscillation can be thought of as a force acting over some distance, regardless of whether or not it is actually acting on a particle. Now, that energy will have an equivalent mass which is—or should be, I’d say… Well… The mass of our electron or, generalizing, the mass of the particle we’re looking at.

Huh? Yes. In case you wonder what I am trying to get at, I am trying to convey the idea that the two interpretations—the field versus the flywheel model—are actually fully equivalent, or compatible, if you prefer that term. In Asia, they would say: they are the “same-same but different” 🙂 but, using the language that’s used when discussing the Copenhagen interpretation of quantum physics, we should actually say the two models are complementary.

You may shrug your shoulders but… Well… It is a very deep philosophical point, really. 🙂 As far as I am concerned, I’ve never seen a better illustration of the (in)famous Complementarity Principle in quantum physics because… Well… It goes much beyond complementarity. This is about equivalence. 🙂 So it’s just like Einstein’s equation. 🙂

Post scriptum: If you read my posts carefully, you’ll remember I struggle with those 1/2 factors here and there. Textbooks don’t care about them. For example, when deriving the size of an atom, or the Rydberg energy, even Feynman casually writes that “we need not trust our answer [to questions like this] within factors like 2, π, etcetera.” Frankly, that’s disappointing. Factors like 2, 1/2, π or 2π are pretty fundamental numbers, and so they need an explanation. So… Well… I do lose sleep over them. :-/ Let me advance some possible explanation here.

As for Feynman’s model, and the derivation of electron orbitals in general, I think it’s got to do with the fact that electrons do want to pair up when thermal motion does not come into play: think of the Cooper pairs we use to explain superconductivity (so that’s the BCS theory). The 1/2 factor in Schrödinger’s equation also has weird consequences (when you plug in the elementary wavefunction and do the derivatives, you get a weird energy concept: E = m·v², to be precise). This problem may also be solved when assuming we’re actually calculating orbitals for a pair of electrons, rather than orbitals for just one electron only. [We’d get twice the mass (and, presumably, twice the charge), so… Well… It might work—but I haven’t done it yet. It’s on my agenda—as so many other things, but I’ll get there… One day. :-)]

So… Well… Let’s get back to the lesson here. In this particular context (i.e. in the context of trying to find some reasonable physical interpretation of the wavefunction), you may or may not remember (if not, check my post on it) that I had to use the I = m·r²/2 formula for the angular mass, as opposed to the I = m·r² formula. I = m·r²/2 (with the 1/2 factor) gives us the angular mass of a disk with radius r, as opposed to that of a point mass going around some circle with radius r. I noted that “the addition of this 1/2 factor may seem arbitrary”—and it totally is, of course—but so it gave us the result we wanted: the exact (Compton scattering) radius of our electron.

Now, the arbitrary 1/2 factor may or may not be explained as follows. In the field model of our electron, the force is linearly proportional to the extension or compression. Hence, to calculate the energy involved in stretching it from x = 0 to x = a, we need to calculate it as the following integral:

half factor
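In case you want to double-check the integral itself – the work done against the F = k·x force in stretching from x = 0 to x = a is k·a²/2, so the 1/2 factor just reflects the linearity of the force – here is a one-line sympy check:

```python
import sympy as sp

x, k, a = sp.symbols('x k a', positive=True)

energy = sp.integrate(k * x, (x, 0, a))  # work done against the restoring force F = k*x
print(energy)                            # a**2*k/2
```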

So… Well… That will give you some food for thought, I’d guess. 🙂 If it racks your brain too much—or if you’re too exhausted by this point (which is OK, because it racks my brain too!)—just note we’ve also shown that the energy is proportional to the square of the amplitude here, so that’s a nice result as well… 🙂

Talking food for thought, let me make one final point here. The c² = a²·k/m relation implies a value for k which is equal to k = m·c²/a² = E/a². What does this tell us? In one of our previous posts, we wrote that the radius of our electron appeared as a natural distance unit. We wrote that because of another reason: the remark was triggered by the fact that we can write the c/ω ratio as c/ω = a·ω/ω = a. This implies the tangential and angular velocity in our flywheel model of an electron would be the same if we’d measure distance in units of a. Now, the E = k·a² = F·a relation (just re-writing, with F = k·a the force at maximum extension) implies that the force is proportional to the energy—F = (x/a)·(E/a)—and the proportionality coefficient is… Well… x/a. So that’s the distance measured in units of a. So… Well… Isn’t that great? The radius of our electron appearing as a natural distance unit does fit in nicely with our geometric interpretation of the wavefunction, doesn’t it? I mean… Do I need to say more?

I hope not because… Well… I can’t explain any better for the time being. I hope I sort of managed to convey the message. Just to make sure, in case you wonder what I was trying to do here, it’s the following: I told you c appears as a resonant frequency of spacetime and, in this post, I tried to explain what that really means. I’d appreciate it if you could let me know if you got it. If not, I’ll try again. 🙂 When everything is said and done, one only truly understands stuff when one is able to explain it to someone else, right? 🙂 Please do think of more innovative or creative ways if you can! 🙂

OK. That’s it but… Well… I should, perhaps, talk about one other thing here. It’s what I mentioned in the beginning of this post: this analysis assumes we’re looking at our particle from some specific direction. It could be any direction but… Well… It’s some direction. We have no depth in our line of sight, so to speak. That’s really interesting, and I should do some more thinking about it. Because the direction could be any direction, our analysis is valid for any direction. Hence, if our interpretation would happen to be somehow true—and that’s a big if, of course—then our particle has to be spherical, right? Why? Well… Because we see this circular thing from any direction, so it has to be a sphere, right?

Well… Yes. But then… Well… While that logic seems to be incontournable, as they say in French, I am somewhat reluctant to accept it at face value. Why? I am not sure. Something inside of me says I should look at the symmetries involved… I mean the transformation formulas for wavefunction when doing rotations and stuff. So… Well… I’ll be busy with that for a while, I guess. 😦

Post scriptum 2: You may wonder whether this line of reasoning would also work for a proton. Well… Let’s try it. Because its mass is so much larger than that of an electron (about 1836 times), the a = ħ/(m·c) formula gives a much smaller radius: 1836 times smaller, to be precise, so that’s around 2.1×10⁻¹⁶ m, which is about 1/4 of the so-called charge radius of a proton, as measured by scattering experiments. So… Well… We’re not that far off, but… Well… We clearly need some more theory here. Having said that, a proton is not an elementary particle, so its mass incorporates other factors than what we’re considering here (two-dimensional oscillations).

The flywheel model of an electron

One of my readers sent me the following question on the geometric (or even physical) interpretation of the wavefunction that I’ve been offering in recent posts:

Does this mean that the wave function is merely describing excitations in a matter field; or is this unsupported?

My reply was very short: “Yes. In fact, we can think of a matter-particle as a tiny flywheel that stores energy.”

However, I realize this answer answers the question only partially. Moreover, I now feel I’ve been quite ambiguous in my description. When looking at the geometry of the elementary wavefunction (see the animation below, which shows us a left- and right-handed wave respectively), two obvious but somewhat conflicting interpretations readily come to mind:

(1) One is that the components of the elementary wavefunction represent an oscillation (in two dimensions) of a field. We may call it a matter field (yes, think of the scalar Higgs field here), but we could also think of it as an oscillation of the spacetime fabric itself: a tiny gravitational wave, in effect. All we need to do here is to associate the sine and cosine component with a physical dimension. The analogy here is the electromagnetic field vector, whose dimension is force per unit charge (newton/coulomb). So we may associate the sine and cosine components of the wavefunction with, say, the force per unit mass dimension (newton/kg) which, using Newton’s Law (F = m·a) reduces to the dimension of acceleration (m/s²), which is the dimension of gravitational fields. I’ll refer to this interpretation as the field interpretation of the matter wave (or wavefunction).

(2) The other interpretation is what I refer to as the flywheel interpretation of the electron. If you google this, you won’t find anything. However, you will probably stumble upon the so-called Zitterbewegung interpretation of quantum mechanics, which is a more elaborate theory based on the same basic intuition. The Zitterbewegung (a term which was coined by Erwin Schrödinger himself, and which you’ll see abbreviated as zbw) is, effectively, a local circulatory motion of the electron, which is presumed to be the basis of the electron’s spin and magnetic moment. All that I am doing, is… Well… I think I do push the envelope of this interpretation quite a bit. 🙂

The first interpretation implies our rotating arrow is, effectively, some field vector. In contrast, the second interpretation implies it’s only the tip of the rotating arrow that, literally, matters: we should look at it as a pointlike charge moving around a central axis, which is the direction of propagation. Let’s look at both.

The flywheel interpretation

The flywheel interpretation has an advantage over the field interpretation, because it also gives us a wonderfully simple physical interpretation of the interaction between electrons and photons—or, further speculating, between matter-particles (fermions) and force-carrier particles (bosons) in general. In fact, Feynman shows how this might work—but in a rather theoretical Lecture on symmetries and conservation principles, and he doesn’t elaborate much, so let me do that for him. The argument goes as follows.

A light beam—an electromagnetic wave—consists of a large number of photons. These photons are thought of as being circularly polarized: look at those animations above again. The Planck-Einstein equation tells us the energy of each photon is equal to E = ħ·ω = h·f. [I should, perhaps, quickly note that the frequency is, obviously, the frequency of the electromagnetic wave. It, therefore, is not to be associated with a matter wave: the de Broglie wavelength and the wavelength of light are very different concepts, even if the Planck-Einstein equation looks the same for both.]

Now, if our beam consists of photons, the total energy of our beam will be equal to W = N·E = N·ħ·ω. It is crucially important to note that this energy is to be interpreted as the energy that is carried by the beam in a certain time: we should think of the beam as being finite, somehow, in time and in space. Otherwise, our reasoning doesn’t make sense.

The photons carry angular momentum. Just look at those animations (above) once more. It doesn’t matter much whether or not we think of light as particles or as a wave: you can see there is angular momentum there. Photons are spin-1 particles, so the angular momentum will be equal to ± ħ. Hence, the total angular momentum Jz (the direction of propagation is supposed to be the z-axis here) will be equal to Jz = N·ħ. [This, of course, assumes all photons are polarized in the same way, which may or may not be the case. You should just go along with the argument right now.] Combining the W = N·ħ·ω and Jz = N·ħ equations, we get:

Jz = N·ħ = W/ω
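
A quick numerical illustration may help here. It is just a sketch: the photon count and the frequency are made-up numbers, chosen to represent a short pulse of visible light.

```python
# Toy check of Jz = N·ħ = W/ω for N identically (circularly) polarized photons.
import math

hbar = 1.054571817e-34    # J·s
f = 5e14                  # Hz, a visible-light frequency (assumed)
omega = 2 * math.pi * f   # rad/s
N = 1e20                  # number of photons in the pulse (assumed)

W = N * hbar * omega      # total energy carried by the pulse
Jz = W / omega            # total angular momentum
print(f"W  = {W:.3e} J")
print(f"Jz = {Jz:.3e} J·s  vs  N·ħ = {N * hbar:.3e} J·s")
```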

For a photon, we do accept the field interpretation, as illustrated below. As mentioned above, the z-axis here is the direction of propagation (so that’s the line of sight when looking at the diagram). So we have an electric field vector, which we write as ε (epsilon) so as to not cause any confusion with the Ε we used for the energy. [You may wonder if we shouldn’t also consider the magnetic field vector, but then we know the magnetic field vector is, basically, a relativistic effect which vanishes in the reference frame of the charge itself.] The phase of the electric field vector is φ = ω·t.

[Illustration: a right-handed (RHC) photon]

Now, a charge (so that’s our electron now) will experience a force which is equal to F = q·ε. We use bold letters here because F and ε are vectors. We now need to look at our electron which, in our interpretation of the elementary wavefunction, we think of as rotating about some axis. So that’s what’s represented below. [Both illustrations are Feynman’s, not mine. As for the animations above, I borrowed them from Wikipedia.]

[Illustration: the electron]

Now, in previous posts, we calculated the radius based on a similar argument as the one Feynman used to get that Jz = N·ħ = W/ω equation. I’ll refer you to those posts and just mention the result here: r is the Compton scattering radius for an electron, which is equal to:

r = ħ/(m·c)

An equally spectacular implication of our flywheel model of the electron was the following: we found that the tangential velocity v was equal to v = r·ω = [ħ/(m·c)]·(E/ħ) = c. Hence, in our flywheel model of an electron, it is effectively spinning around at the speed of light. Note that the angular frequency (ω) in the v = r·ω equation is not the angular frequency of our photon: it’s the frequency of our electron. So we use the same Planck-Einstein equation (ω = E/ħ) but the energy E is the (rest) energy of our electron, so that’s about 0.511 MeV (an order of magnitude which is 100,000 to 300,000 times that of photons in the visible spectrum). Hence, the angular frequencies of our electron and our photon are very different. Feynman casually reflects this difference by noting the phases of our electron and our photon will differ by a phase factor, which he writes as φ0.

Just to be clear: at this point, our analysis diverges from Feynman’s. Feynman had no intention whatsoever to talk about Schrödinger’s Zitterbewegung hypothesis when he wrote what he wrote back in the 1960s. In fact, Feynman is very reluctant to venture into physical interpretations of the wavefunction in all of his Lectures on quantum mechanics—which is surprising, because he comes tantalizingly close on many occasions—as he does here: he describes the motion of the electron as that of a harmonic oscillator which can be driven by an external electric field. Now that is a physical interpretation, and it is totally consistent with the one I’ve advanced in my recent posts. Indeed, Feynman also describes it as an oscillation in two dimensions—perpendicular to each other and to the direction of motion, as we do—in both the flywheel as well as the field interpretation of the wavefunction!

This point is important enough to quote Feynman himself in this regard:

“We have often described the motion of the electron in the atom as a harmonic oscillator which can be driven into oscillation by an external electric field. We’ll suppose that the atom is isotropic, so that it can oscillate equally well in the x- or y-directions. Then in the circularly polarized light, the x displacement and the y displacement are the same, but one is 90° behind the other. The net result is that the electron moves in a circle.”

Right on! But so what happens really? As our light beam—the photons, really—are being absorbed by our electron (or our atom), it absorbs angular momentum. In other words, there is a torque about the central axis. Let me remind you of the formulas for the angular momentum and for torque respectively: L = r×p and τ = r×F. Needless to say, we have two vector cross-products here. Hence, if we use the τ = r×F formula, we need to find the tangential component of the force (Ft), whose magnitude will be equal to Ft = q·εt. Now, energy is force acting over some distance so… Well… You may need to think about it for a while but, if you’ve understood all of the above, you should also be able to understand the following formula:

dW/dt = q·εt·v

[If you have trouble, remember v is equal to ds/dt = Δs/Δt for Δt → 0, and re-write the equation above as dW = q·εt·v·dt = q·εt·ds = Ft·ds. Capito?]

Now, you may or may not remember that the time rate of change of angular momentum must be equal to the torque that is being applied. Now, the torque is equal to τ = Ft·r = q·εt·r, so we get:

dJz/dt = q·εt·r

The ratio of dW/dt and dJz/dt is equal to v/r = ω, which gives us the following interesting equation:

dJz/dt = (1/ω)·dW/dt ⇔ dJz = dW/ω

Now, Feynman tries to relate this to the Jz = N·ħ = W/ω formula but… Well… We should remind ourselves that the angular frequency of these photons is not the angular frequency of our electron. So… Well… What can we say about this equation? Feynman suggests integrating dJz and dW over some time interval, which makes sense: as mentioned, we interpreted W as the energy that is carried by the beam in a certain time. So if we integrate dW over this time interval, we get W. Likewise, if we integrate dJz over the same time interval, we should get the total angular momentum that our electron is absorbing from the light beam. Now, because dJz = dW/ω, we do concur with Feynman’s conclusion: the total angular momentum which is being absorbed by the electron is proportional to the total energy of the beam, and the constant of proportionality is equal to 1/ω.
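
Because dW/dt and dJz/dt share the same q·εt factor, that conclusion does not depend on the shape of the beam pulse. The little sketch below makes that explicit: all the numbers (the radius, the angular frequency, the Gaussian pulse envelope) are arbitrary illustration values, and the ratio W/Jz still comes out as ω.

```python
# Integrating dW/dt = q·εt·v and dJz/dt = q·εt·r over an arbitrary pulse:
# the ratio W/Jz equals v/r = ω, whatever the envelope looks like.
import numpy as np

q = 1.602e-19          # C, elementary charge
r = 3.86e-13           # m, assumed radius of the circular motion
omega = 7.76e20        # rad/s, assumed angular frequency
v = r * omega          # tangential velocity

t = np.linspace(0, 1e-15, 100_000)                  # s
dt = t[1] - t[0]
eps_t = 1e6 * np.exp(-((t - 5e-16) / 1e-16) ** 2)   # V/m, arbitrary Gaussian pulse

W = np.sum(q * eps_t * v) * dt    # ∫ (dW/dt) dt
Jz = np.sum(q * eps_t * r) * dt   # ∫ (dJz/dt) dt
print(f"W/Jz = {W / Jz:.3e}  vs  ω = {omega:.3e}")
```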

It’s just… Well… The ω here is the angular frequency of the electron. It’s not the angular frequency of the beam. Not in our flywheel model of the electron which, admittedly, is not the model which Feynman used in his analysis. Feynman’s analysis is simpler: he assumes an electron at rest, so to speak, and then the beam drives it so it goes around in a circle with a velocity that is, effectively, given by the angular frequency of the beam itself. So… Well… Fine. Makes sense. As said, I just pushed the analysis a bit further along here. Both analyses raise an interesting question: how and where is the absorbed energy being stored? What is the mechanism here?

In Feynman’s analysis, the answer is quite simple: the electron did not have any motion before but does spin around after the beam hit it. So it has more energy now: it wasn’t a tiny flywheel before, but it is now!

In contrast, in my interpretation of the matter wave, the electron was spinning around already, so where does the extra energy go now? As its energy increases, ω = E/ħ must increase, right? Right. At the same time, the velocity v = r·ω must still be equal to v = r·ω = [ħ/(m·c)]·(E/ħ) = c, right? Right. So… If ω increases, but r·ω must equal the speed of light, then r must actually decrease somewhat, right?

Right. It’s a weird but inevitable conclusion, it seems. I’ll let you think about it. 🙂

To conclude this post—which, I hope, the reader who triggered it will find interesting—I would like to quote Feynman on an issue on which most textbooks remain silent: the two-state nature of photons. I will just quote him without trying to comment or alter what he writes, because what he writes is clear enough, I think:

“Now let’s ask the following question: If light is linearly polarized in the x-direction, what is its angular momentum? Light polarized in the x-direction can be represented as the superposition of RHC and LHC polarized light. […] The interference of these two amplitudes produces the linear polarization, but it has equal probabilities to appear with plus or minus one unit of angular momentum. [Macroscopic measurements made on a beam of linearly polarized light will show that it carries zero angular momentum, because in a large number of photons there are nearly equal numbers of RHC and LHC photons contributing opposite amounts of angular momentum—the average angular momentum is zero.]

Now, we have said that any spin-one particle can have three values of Jz, namely +1, 0, −1 (the three states we saw in the Stern-Gerlach experiment). But light is screwy; it has only two states. It does not have the zero case. This strange lack is related to the fact that light cannot stand still. For a particle of spin j which is standing still, there must be the 2j+1 possible states with values of Jz going in steps of 1 from −j to +j. But it turns out that for something of spin j with zero mass only the states with the components +j and −j along the direction of motion exist. For example, light does not have three states, but only two—although a photon is still an object of spin one.”

In his typical style and frankness—for which he is revered by some (like me) but disliked by others—he admits this is very puzzling, and not obvious at all! Let me quote him once more:

“How is this consistent with our earlier proofs—based on what happens under rotations in space—that for spin-one particles three states are necessary? For a particle at rest, rotations can be made about any axis without changing the momentum state. Particles with zero rest mass (like photons and neutrinos) cannot be at rest; only rotations about the axis along the direction of motion do not change the momentum state. Arguments about rotations around one axis only are insufficient to prove that three states are required. We have tried to find at least a proof that the component of angular momentum along the direction of motion must for a zero mass particle be an integral multiple of ħ/2—and not something like ħ/3. Even using all sorts of properties of the Lorentz transformation and what not, we failed. Maybe it’s not true. We’ll have to talk about it with Prof. Wigner, who knows all about such things.”

The reference to Eugene Wigner is historically interesting. Feynman probably knew him very well—if only because they had both worked together on the Manhattan Project—and it’s true Wigner was not only a great physicist but a mathematical genius as well. However, Feynman probably quotes him here for the 1963 Nobel Prize he got for… Well… Wigner’s “contributions to the theory of the atomic nucleus and elementary particles, particularly through the discovery and application of fundamental symmetry principles.” 🙂 I’ll let you figure out how what I write about in this post, and symmetry arguments, might be related. 🙂

That’s it for today, folks! I hope you enjoyed this. 🙂

Post scriptum: The main disadvantage of the flywheel interpretation is that it doesn’t explain interference: waves interfere—some rotating mass doesn’t. Ultimately, the wave and flywheel interpretation must, somehow, be compatible. One way to think about it is that the electron can only move as it does—in a “local circulatory motion”—if there is a force on it that makes it move the way it does. That force must be gravitational because… Well… There is no other candidate, is there? [We’re not talking some electron orbital here—some negative charge orbiting around a positive nucleus. We’re just considering the electron itself.] So we just need to prove that our rotating arrow will also represent a force, whose components will make our electron move the way it does. That should not be difficult. The analogy of the V-twin engine should do the trick. I’ll deal with that in my next post. If we’re able to provide such proof (which, as mentioned, should not be difficult), it will be a wonderful illustration of the complementarity principle. 🙂

However, just thinking about it does raise some questions already. Circular motion like this can be explained in two equivalent ways. The most obvious way to think about it is to assume some central field. It’s the planetary model (illustrated below). However, that doesn’t suit our purposes because it’s hard – if at all possible – to relate it to the wavefunction oscillation.

[Illustration: the planetary model]

The second model is our two-spring or V-twin engine model (illustrated below), but then what is the mass here? One hypothesis that comes to mind is that we’re constantly accelerating and decelerating an electric charge (the electron charge)—against all other charges in the Universe, so to speak. So that’s a force over a distance—energy. And energy has an equivalent mass.

[Illustration: V-twin engine]

The question which remains open, then, is the following: what is the nature of this force? In previous posts, I suggested it might be gravitational, but so here we’re back to the drawing board: we’re talking an electrical force, but applied to some mass which acquires mass because of… Well… Because of the force—because of the oscillation (the moving charge) itself. Hmm…. I need to think about this.

The speed of light as an angular velocity (2)

My previous post on the speed of light as an angular velocity was rather cryptic. This post will be a bit more elaborate. Not all that much, however: this stuff is and remains quite dense, unfortunately. 😦 But I’ll do my best to try to explain what I am thinking of. Remember the formula (or definition) of the elementary wavefunction:

ψ = a·e−i[E·t − p∙x]/ħ = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

How should we interpret this? We know an actual particle will be represented by a wave packet: a sum of wavefunctions, each with its own amplitude ak and its own argument θk = (Ek∙t − pkx)/ħ. But… Well… Let’s see how far we get when analyzing the elementary wavefunction itself only.

According to mathematical convention, the imaginary unit (i) is a 90° angle in the counterclockwise direction. However, Nature surely cannot be bothered about our convention of measuring phase angles – or time itself – clockwise or counterclockwise. Therefore, both right- as well as left-handed polarization may be possible, as illustrated below.

The left-handed elementary wavefunction would be written as:

ψ = a·e+i[E·t − p∙x]/ħ = a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ)

In my previous posts, I hypothesized that the two physical possibilities correspond to the angular momentum of our particle – say, an electron – being either positive or negative: J = +ħ/2 or, else, J = −ħ/2. I will come back to this in a moment. Let us first further examine the functional form of the wavefunction.

We should note that both the direction as well as the magnitude of the (linear) momentum (p) are relative: they depend on the orientation and relative velocity of our reference frame – which are, in effect, relative to the reference frame of our object. As such, the wavefunction itself is relative: another observer will obtain a different value for both the momentum (p) as well as for the energy (E). Of course, this makes us think of the relativity of the electric and magnetic field vectors (E and B) but… Well… It’s not quite the same because – as I will explain in a moment – the argument of the wavefunction, considered as a whole, is actually invariant under a Lorentz transformation.

Let me elaborate this point. If we consider the reference frame of the particle itself, then the idea of direction and momentum sort of vanishes, as the momentum vector shrinks to the origin itself: p = 0. Let us now look at how the argument of the wavefunction transforms. The E and p in the argument of the wavefunction (θ = ω∙t – kx = (E/ħ)∙t – (p/ħ)∙x = (E∙t – px)/ħ) are, of course, the energy and momentum as measured in our frame of reference. Hence, we will want to write these quantities as E = Ev and p = pv = mv·v. If we then use natural time and distance units (so the numerical value of c is equal to 1 and the (relative) velocity is measured as a fraction of c, with a value between 0 and 1), we can relate the energy and momentum of a moving object to its energy and momentum when at rest using the following relativistic formulas:

E = γ·E0 and p = γ·m0·v = γ·E0·v/c2

The argument of the wavefunction can then be re-written as:

θ = [γ·E0/ħ]∙t – [(γ·E0v/c2)/ħ]∙x = (E0/ħ)·(t − v∙x/c2)·γ = (E0/ħ)∙t’

The γ in these formulas is, of course, the Lorentz factor, and t’ is the proper time: t’ = (t − v∙x/c2)/√(1−v2/c2). Two essential points should be noted here:

1. The argument of the wavefunction is invariant. There is a primed time (t’) but there is no primed θ (θ’): θ = (Ev/ħ)·t – (pv/ħ)·x = (E0/ħ)∙t’.

2. The E0/ħ coefficient pops up as an angular frequency: E0/ħ = ω0. We may refer to it as the frequency of the elementary wavefunction.

Now, if you don’t like the concept of angular frequency, we can also write: f0 = ω0/2π = (E0/ħ)/2π = E0/h. Alternatively, and perhaps more elucidating, we get the following formula for the period of the oscillation:

T0 = 1/f0 = h/E0

This is interesting, because we can look at the period as a natural unit of time for our particle. This period is inversely proportional to the (rest) energy of the particle, and the constant of proportionality is h. Substituting E0 for m0·c2, we may also say it’s inversely proportional to the (rest) mass of the particle, with the constant of proportionality equal to h/c2. The period of an electron, for example, would be equal to about 8×10−21 s. That’s very small, and it only gets smaller for larger objects! But what does all of this really tell us? What does it actually mean?
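
For the record, here is a one-minute numerical check of that natural unit of time, for an electron and – just for comparison – for a proton:

```python
# The 'natural unit of time' T0 = h/E0 scales inversely with the rest energy.
h = 6.62607015e-34        # J·s
c = 2.99792458e8          # m/s
m_e = 9.1093837015e-31    # kg, electron mass
m_p = 1.67262192369e-27   # kg, proton mass

for name, m in [("electron", m_e), ("proton", m_p)]:
    E0 = m * c**2         # rest energy
    T0 = h / E0           # period of the elementary oscillation
    print(f"{name}: E0 = {E0:.4e} J, T0 = {T0:.4e} s")
# electron: T0 ≈ 8.09e-21 s; the proton's period is about 1836 times smaller still
```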

We can look at the sine and cosine components of the wavefunction as an oscillation in two dimensions, as illustrated below.

[Animation: circular motion as the combination of a sine and a cosine oscillation]

Look at the little green dot going around. Imagine it is some mass going around and around. Its circular motion is equivalent to the two-dimensional oscillation. Indeed, instead of saying it moves along a circle, we may also say it moves simultaneously (1) left and right and back again (the cosine) while also moving (2) up and down and back again (the sine).

Now, a mass that rotates about a fixed axis has angular momentum, which we can write as the vector cross-product L = r×p or, alternatively, as the product of an angular velocity (ω) and the rotational inertia (I), a.k.a. the moment of inertia or the angular mass: L = I·ω. [Note we write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface).]

We can now do some calculations. We already know the angular velocity (ω) is equal to E0/ħ. Now, the magnitude of r in the L = r×p vector cross-product should equal the magnitude of ψ = a·ei∙E·t/ħ, so we write: r = a. What’s next? Well… The momentum (p) is the product of a linear velocity (v) – in this case, the tangential velocity – and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω.

So now we only need to think about what formula we should use for the angular mass. If we’re thinking, as we are doing here, of some point mass going around some center, then the formula to use is I = m·r2. However, we may also want to think that the two-dimensional oscillation of our point mass actually describes the surface of a disk, in which case the formula for I becomes I = m·r2/2. Of course, the addition of this 1/2 factor may seem arbitrary but, as you will see, it will give us a more intuitive result. This is what we get:

L = I·ω = (m·r2/2)·(E/ħ) = (1/2)·a2·(E/c2)·(E/ħ) = a2·E2/(2·ħ·c2)

Note that our frame of reference is that of the particle itself, so we should actually write ω0, m0 and E0 instead of ω, m and E. The value of the rest energy of an electron is about 0.511 MeV, or 8.1871×10−14 N∙m. Now, this angular momentum should equal J = ±ħ/2. We can, therefore, derive the (Compton scattering) radius of an electron:

a2·E2/(2·ħ·c2) = ħ/2 ⇔ a = ħ·c/E = ħ/(m·c)

Substituting the various constants with their numerical values, we find that a is equal to 3.8616×10−13 m, which is the (reduced) Compton scattering radius of an electron. The (tangential) velocity (v) can now be calculated as being equal to v = r·ω = a·ω = [ħ/(m·c)]·(E/ħ) = c. This is an amazing result. Let us think about it.
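
If you want to see the numbers roll out, here is a small sketch that just re-does the calculation above:

```python
# Solve a²·E²/(2·ħ·c²) = ħ/2 for a, and check the tangential velocity a·ω.
hbar = 1.054571817e-34   # J·s
c = 2.99792458e8         # m/s
m = 9.1093837015e-31     # kg, electron mass

E = m * c**2             # rest energy
omega = E / hbar         # angular frequency of the oscillation
a = ((hbar / 2) * 2 * hbar * c**2 / E**2) ** 0.5   # a² = ħ²·c²/E²
print(f"a      = {a:.4e} m")               # ≈ 3.8616e-13 m
print(f"ħ/(mc) = {hbar / (m * c):.4e} m")  # same thing: the reduced Compton radius
print(f"a·ω    = {a * omega:.4e} m/s")     # ≈ 2.998e8 m/s = c
```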

In our previous posts, we introduced the metaphor of two springs or oscillators, whose combined energy was equal to E = m·a2·ω2. Is this compatible with Einstein’s E = m·c2 mass-energy equivalence relation? It is. The E = m·c2 formula implies E/m = c2. We, therefore, can write the following:

E/m = a2·ω2 = c2 ⇔ c = a·ω ⇔ ω = c/a

Hence, we should actually have titled this and the previous post somewhat differently: the speed of light appears as a tangential velocity. Think of the following: the ratio of c and ω is equal to c/ω = a·ω/ω = a. Hence, the tangential and angular velocity would be the same if we’d measure distance in units of a. In other words, the radius of an electron appears as a natural distance unit here: if we’d measure ω in units of a per second, rather than in radians (which are expressed in the SI unit of distance, i.e. the meter) per second, the two concepts would coincide.

More fundamentally, we may want to look at the radius of an electron as a natural unit of velocity. Huh? Yes. Just re-write c/ω = a as ω = c/a. What does it say? Exactly what I said, right? As such, the radius of an electron is not only a norm for measuring distance but also for time. 🙂

If you don’t quite get this, think of the following. For an electron, we get an angular frequency that is equal to ω = E/ħ = (8.19×10−14 N·m)/(1.05×10−34 N·m·s) ≈ 7.76×1020 radians per second. That’s an incredible velocity, because radians are expressed in distance units—so that’s in meter. However, our mass is not moving along the unit circle, but along a much tinier orbit. The ratio of the radius of the unit circle and a is equal to 1/a ≈ (1 m)/(3.86×10−13 m) ≈ 2.59×1012. Now, if we divide the above-mentioned velocity of 7.76×1020 radians per second by this factor, we get… Right! The speed of light: 2.998×108 m/s. 🙂

Post scriptum: I have no clear answer to the question as to why we should use the I = m·r2/2 formula, as opposed to the I = m·r2 formula. It ensures we get the result we want, but this 1/2 factor is actually rather enigmatic. It makes me think of the 1/2 factor in Schrödinger’s equation, which is also quite enigmatic. In my view, the 1/2 factor should not be there in Schrödinger’s equation. Electron orbitals tend to be occupied by two electrons with opposite spin. That’s why their energy levels should be twice as much. And so I’d get rid of the 1/2 factor, solve for the energy levels, and then divide them by two again. Or something like that. 🙂 But then that’s just my personal opinion or… Well… I’ve always been intrigued by the difference between the original printed edition of the Feynman Lectures and the online version, which has been edited on this point. My printed edition is the third printing, which is dated July 1966, and – on this point – it says the following:

“Don’t forget that meff has nothing to do with the real mass of an electron. It may be quite different—although in commonly used metals and semiconductors it often happens to turn out to be the same general order of magnitude, about 2 to 20 times the free-space mass of the electron.”

Two to twenty times. Not 1 or 0.5 to 20 times. No. Two times. As I’ve explained a couple of times, if we’d define a new effective mass which would be twice the old concept – so meffNEW = 2∙meffOLD – then such re-definition would not only solve a number of paradoxes and inconsistencies, but it would also justify my interpretation of energy as a two-dimensional oscillation of mass.

However, the online edition has been edited here to reflect the current knowledge about the behavior of an electron in a medium. Hence, if you click on the link above, you will read that the effective mass can be “about 0.1 to 30 times” the free-space mass of the electron. Well… This is another topic altogether, and so I’ll sign off here and let you think about it all. 🙂

The speed of light as an angular velocity

Over the weekend, I worked on a revised version of my paper on a physical interpretation of the wavefunction. However, I forgot to add the final remarks on the speed of light as an angular velocity. I know… This post is for my faithful followers only. It is dense, but let me add the missing bits here:

[Images: the final formulas from the paper]

Post scriptum (29 October): Einstein’s view on aether theories probably still holds true: “We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity, space without aether is unthinkable – for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.”

The above quote is taken from the Wikipedia article on aether theories. The same article also quotes Robert Laughlin, the 1998 Nobel Laureate in Physics, who said this about aether in 2005: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed. […] The word ‘aether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. […] The modern concept of the vacuum of space, confirmed every day by experiment, is a relativistic aether. But we do not call it this because it is taboo.”

I really love this: a relativistic aether. My interpretation of the wavefunction is very consistent with that.

A physical explanation for relativistic length contraction?

My last posts were all about a possible physical interpretation of the quantum-mechanical wavefunction. To be precise, we have been interpreting the wavefunction as a gravitational wave. In this interpretation, the real and imaginary component of the wavefunction get a physical dimension: force per unit mass (newton per kg). The inspiration here was the structural similarity between Coulomb’s and Newton’s force laws. They both look alike: it’s just that one gives us a force per unit charge (newton per coulomb), while the other gives us a force per unit mass.

So… Well… Many nice things came out of this – and I wrote about that at length – but last night I was thinking this interpretation may also offer an explanation of relativistic length contraction. Before we get there, let us re-visit our hypothesis.

The geometry of the wavefunction

The elementary wavefunction is written as:

ψ = a·e−i(E·t − p∙x)/ħ = a·cos(p∙x/ħ – E∙t/ħ) + i·a·sin(p∙x/ħ – E∙t/ħ)

Nature should not care about our conventions for measuring the phase angle clockwise or counterclockwise and, therefore, the ψ = a·e+i[E·t − p∙x]/ħ function may also be permitted. We know that cos(θ) = cos(−θ) and sin(−θ) = −sin(θ), so we can write:

ψ = a·e+i(E·t − p∙x)/ħ = a·cos(E∙t/ħ – p∙x/ħ) + i·a·sin(E∙t/ħ – p∙x/ħ)

= a·cos(p∙x/ħ – E∙t/ħ) − i·a·sin(p∙x/ħ – E∙t/ħ)

The vectors p and x are the momentum and position vector respectively: p = (px, py, pz) and x = (x, y, z). However, if we assume there is no uncertainty about p – not about the direction nor the magnitude – then we may choose an x-axis which reflects the direction of p. As such, x = (x, y, z) reduces to (x, 0, 0), and px/ħ reduces to p∙x/ħ. This amounts to saying our particle is traveling along the x-axis or, if p = 0, that our particle is located somewhere on the x-axis. Hence, the analysis is one-dimensional only.

The geometry of the elementary wavefunction is illustrated below. The x-axis is the direction of propagation, and the y- and z-axes represent the real and imaginary part of the wavefunction respectively.

Note that, when applying the right-hand rule for the axes, the vertical axis is the y-axis, not the z-axis. Hence, we may associate the vertical axis with the cosine component, and the horizontal axis with the sine component. You can check this as follows: if the origin is the (x, t) = (0, 0) point, then cos(θ) = cos(0) = 1 and sin(θ) = sin(0) = 0. This is reflected in both illustrations, which show a left- and a right-handed wave respectively. We speculated this should correspond to the two possible values for the quantum-mechanical spin of the wave: +ħ/2 or −ħ/2. The cosine and sine components for the left-handed wave are shown below. Needless to say, the cosine and sine function are the same, except for a phase difference of π/2: sin(θ) = cos(θ − π/2).

[Illustration: circular polarization with its sine and cosine components]

As for the wave velocity, and its direction of propagation, we know that the (phase) velocity of any wave F(kx – ωt) is given by vp = ω/k = (E/ħ)/(p/ħ) = E/p. Of course, the momentum might also be in the negative x-direction, in which case k would be equal to −p/ħ and, therefore, we would get a negative phase velocity: vp = ω/k = −E/p.

The de Broglie relations

E/ħ = ω gives the frequency in time (expressed in radians per second), while p/ħ = k gives us the wavenumber, or the frequency in space (expressed in radians per meter). Of course, we may write: f = ω/2π  and λ = 2π/k, which gives us the two de Broglie relations:

  1. E = ħ∙ω = h∙f
  2. p = ħ∙k = h/λ

The frequency in time is easy to interpret. The wavefunction of a particle with more energy, or more mass, will have a higher density in time than a particle with less energy.

In contrast, the second de Broglie relation is somewhat harder to interpret. According to the p = h/λ relation, the wavelength is inversely proportional to the momentum: λ = h/p. The velocity of a photon, or a (theoretical) particle with zero rest mass (m0 = 0), is c and, therefore, we find that p = mv·v = mc·c = m∙c (all of the energy is kinetic). Hence, we can write: p∙c = m∙c2 = E, which we may also write as: E/p = c. Hence, for a particle with zero rest mass, the wavelength can be written as:

λ = h/p = hc/E = h/mc
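
Just to put a number on this: a visible-light photon of, say, 2 eV has a wavelength of about 620 nm. The snippet below is only there to show the arithmetic; the 2 eV value is an arbitrary example.

```python
# λ = h·c/E for a photon: a 2 eV photon (red light) has λ ≈ 620 nm.
h = 6.62607015e-34     # J·s
c = 2.99792458e8       # m/s
eV = 1.602176634e-19   # J per electronvolt

E = 2.0 * eV
lam = h * c / E
print(f"λ = {lam * 1e9:.0f} nm")   # ≈ 620 nm
```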

However, this is a limiting situation – applicable to photons only. Real-life matter-particles should have some mass[1] and, therefore, their velocity will never be c.[2]

Hence, if p goes to zero, then the wavelength becomes infinitely long: if p → 0 then λ → ∞. How should we interpret this inverse proportionality between λ and p? To answer this question, let us first see what this wavelength λ actually represents.

If we look at the ψ = a·cos(p∙x/ħ – E∙t/ħ) – i·a·sin(p∙x/ħ – E∙t/ħ) once more, and if we write p∙x/ħ as Δ, then we can look at p∙x/ħ as a phase factor, and so we will be interested to know for what x this phase factor Δ = p∙x/ħ will be equal to 2π. So we write:

Δ = p∙x/ħ = 2π ⇔ x = 2π∙ħ/p = h/p = λ

So now we get a meaningful interpretation for that wavelength. It is the distance between the crests (or the troughs) of the wave, so to speak, as illustrated below. Of course, this two-dimensional wave has no real crests or troughs: we measure crests and troughs against the y-axis here. Hence, our definition depends on the frame of reference.

[Illustration: the wavelength as the distance between two crests]

Now we know what λ actually represents for our one-dimensional elementary wavefunction. Now, the time that is needed for one cycle is equal to T = 1/f = 2π·(ħ/E). Hence, we can now calculate the wave velocity:

v = λ/T = (h/p)/[2π·(ħ/E)] = E/p

Unsurprisingly, we just get the phase velocity that we had calculated already: v = vp = E/p. The question remains: what if p is zero? What if we are looking at some particle at rest? It is an intriguing question: we get an infinitely long wavelength, and an infinite wave velocity.

Now, re-writing v = E/p as v = (m∙c2)/(m∙vg) = c/βg – in which βg is the relative classical velocity[3] of our particle (βg = vg/c) – tells us that the phase velocities will effectively be superluminal (βg < 1, so 1/βg > 1), but what if βg approaches zero? The conclusion seems unavoidable: for a particle at rest, we only have a frequency in time, as the wavefunction reduces to:

ψ = a·e−i·E·t/ħ = a·cos(E∙t/ħ) – i·a·sin(E∙t/ħ)

How should we interpret this?

A physical interpretation of relativistic length contraction?

In my previous posts, we argued that the oscillations of the wavefunction pack energy. Because the energy of our particle is finite, the wave train cannot be infinitely long. If we assume some definite number of oscillations, then the string of oscillations will be shorter as λ decreases. Hence, the physical interpretation of the wavefunction that is offered here may explain relativistic length contraction.

🙂

Yep. Think about it. 🙂

[1] Even neutrinos have some (rest) mass. This was first confirmed by the US-Japan Super-Kamiokande collaboration in 1998. Neutrinos oscillate between three so-called flavors: electron neutrinos, muon neutrinos and tau neutrinos. Recent data suggests that the sum of their masses is less than a millionth of the rest mass of an electron. Hence, they propagate at speeds that are very near to the speed of light.

[2] Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as KE = E − E0 = mvc2 − m0c2 = m0γc2 − m0c2 = m0c2(γ − 1). As v approaches c, γ approaches infinity and, therefore, the kinetic energy would become infinite as well.

[3] Because our particle will be represented by a wave packet, i.e. a superimposition of elementary waves with different E and p, the classical velocity of the particle becomes the group velocity of the wave, which is why we denote it by vg.

The geometry of the wavefunction (2)

This post further builds on the rather remarkable results we got in our previous posts. Let us start with the basics once again. The elementary wavefunction is written as:

ψ = a·e−i[E·t − p∙x]/ħ = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

Of course, Nature (or God, as Einstein would put it) does not care about our conventions for measuring an angle (i.e. the phase of our wavefunction) clockwise or counterclockwise and, therefore, the ψ = a·e+i[E·t − p∙x]/ħ function is also permitted. We know that cos(θ) = cos(−θ) and sin(−θ) = −sin(θ), so we can write:

ψ = a·e+i[E·t − p∙x]/ħ = a·cos(E∙t/ħ − p∙x/ħ) + i·a·sin(E∙t/ħ − p∙x/ħ)

= a·cos(px/ħ − E∙t/ħ) − i·a·sin(px/ħ − E∙t/ħ)

The vectors p and x are the momentum and position vector respectively: p = (px, py, pz) and x = (x, y, z). However, if we assume there is no uncertainty about p – not about the direction, and not about the magnitude – then the direction of p can be our x-axis. In this reference frame, x = (x, y, z) reduces to (x, 0, 0), and px/ħ reduces to p∙x/ħ. This amounts to saying our particle is traveling along the x-axis or, if p = 0, that our particle is located somewhere on the x-axis. So we have an analysis in one dimension only then, which facilitates our calculations. The geometry of the wavefunction is then as illustrated below. The x-axis is the direction of propagation, and the y- and z-axes represent the real and imaginary part of the wavefunction respectively.

Note that, when applying the right-hand rule for the axes, the vertical axis is the y-axis, not the z-axis. Hence, we may associate the vertical axis with the cosine component, and the horizontal axis with the sine component. [You can check this as follows: if the origin is the (x, t) = (0, 0) point, then cos(θ) = cos(0) = 1 and sin(θ) = sin(0) = 0. This is reflected in both illustrations, which show a left- and a right-handed wave respectively.]

Now, you will remember that we speculated the two polarizations (left- versus right-handed) should correspond to the two possible values for the quantum-mechanical spin of the wave (+ħ/2 or −ħ/2). We will come back to this at the end of this post. Just focus on the essentials first: the cosine and sine components for the left-handed wave are shown below. Look at it carefully and try to understand. Needless to say, the cosine and sine function are the same, except for a phase difference of π/2: sin(θ) = cos(θ − π/2).

[Illustration: circular polarization with its sine and cosine components]

As for the wave velocity, and its direction of propagation, we know that the (phase) velocity of any waveform F(kx − ωt) is given by vp = ω/k. In our case, we find that vp = ω/k = (E/ħ)/(p/ħ) = E/p. Of course, the momentum might also be in the negative x-direction, in which case k would be equal to −p/ħ and, therefore, we would get a negative phase velocity: vp = ω/k = (E/ħ)/(−p/ħ) = −E/p.

As you know, E/ħ = ω gives the frequency in time (expressed in radians per second), while p/ħ = k gives us the wavenumber, or the frequency in space (expressed in radians per meter). [If in doubt, check my post on essential wave math.] Now, you also know that f = ω/2π  and λ = 2π/k, which gives us the two de Broglie relations:

  1. E = ħ∙ω = h∙f
  2. p = ħ∙k = h/λ

The frequency in time (oscillations or radians per second) is easy to interpret. A particle will always have some mass and, therefore, some energy, and it is easy to appreciate the fact that the wavefunction of a particle with more energy (or more mass) will have a higher density in time than a particle with less energy.

However, the second de Broglie relation is somewhat harder to interpret. Note that the wavelength is inversely proportional to the momentum: λ = h/p. Hence, if p goes to zero, then the wavelength becomes infinitely long, so we write:

If p → 0 then λ → ∞.

For the limit situation, a particle with zero rest mass (m0 = 0), the velocity may be c and, therefore, we find that p = mvv = mcc = m∙c (all of the energy is kinetic) and, therefore, p∙c = m∙c2 = E, which we may also write as: E/p = c. Hence, for a particle with zero rest mass (m0 = 0), the wavelength can be written as:

λ = h/p = hc/E = h/mc

Of course, we are talking a photon here. We get the zero rest mass for a photon. In contrast, all matter-particles should have some mass[1] and, therefore, their velocity will never equal c.[2] The question remains: how should we interpret the inverse proportionality between λ and p?

Let us first see what this wavelength λ actually represents. If we look at the ψ = a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ – E∙t/ħ) once more, and if we write p∙x/ħ as Δ, then we can look at p∙x/ħ as a phase factor, and so we will be interested to know for what x this phase factor Δ = p∙x/ħ will be equal to 2π. So we write:

Δ = p∙x/ħ = 2π ⇔ x = 2π∙ħ/p = h/p = λ

So now we get a meaningful interpretation for that wavelength. It is the distance between the crests (or the troughs) of the wave, so to speak, as illustrated below. Of course, this two-dimensional wave has no real crests or troughs: they depend on your frame of reference.

[Illustration: the wavelength as the distance between two crests]

So now we know what λ actually represents for our one-dimensional elementary wavefunction. Now, the time that is needed for one cycle is equal to T = 1/f = 2π·(ħ/E). Hence, we can now calculate the wave velocity:

v = λ/T = (h/p)/[2π·(ħ/E)] = E/p

Unsurprisingly, we just get the phase velocity that we had calculated already: v = vp = E/p. It does not answer the question: what if p is zero? What if we are looking at some particle at rest? It is an intriguing question: we get an infinitely long wavelength, and an infinite phase velocity. Now, we know phase velocities can be superluminal, but they should not be infinite. So what does the mathematical inconsistency tell us? Do these infinitely long wavelengths and infinite wave velocities tell us that our particle has to move? Do they tell us our notion of a particle at rest is mathematically inconsistent?

Maybe. But maybe not. Perhaps the inconsistency just tells us our elementary wavefunction – or the concept of a precise energy, and a precise momentum – does not make sense. This is where the Uncertainty Principle comes in: stating that p = 0 implies zero uncertainty. Hence, the σp factor in the σp∙σx ≥ ħ/2 relation would be zero and, therefore, σp∙σx would be zero which, according to the Uncertainty Principle, it cannot be: it can be very small, but it cannot be zero.

It is interesting to note here that σp refers to the standard deviation from the mean, as illustrated below. Of course, the distribution may be or may not be normal – we don’t know – but a normal distribution makes intuitive sense, of course. Also, if we assume the mean is zero, then the uncertainty is basically about the direction in which our particle is moving, as the momentum might then be positive or negative.

[Illustration: a normal distribution with its standard deviations]

The question of natural units may pop up. The Uncertainty Principle suggests a numerical value of the natural unit for momentum and distance that is equal to the square root of ħ/2, so that’s about 0.726×10−17 m for the distance unit and 0.726×10−17 N∙s for the momentum unit, as the product of both gives us ħ/2. To make this somewhat more real, we may note that 0.726×10−17 m is the attometer scale (1 am = 1×10−18 m), so that is very small but not unreasonably small.[3]
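
A one-line check of the number quoted above:

```python
# The square root of ħ/2, as a 'natural' numerical value for the distance
# and momentum units (in m and N·s respectively).
hbar = 1.054571817e-34   # J·s
print(f"sqrt(ħ/2) ≈ {(hbar / 2) ** 0.5:.3e}")   # ≈ 7.26e-18, i.e. 0.726×10⁻¹⁷
```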

Hence, we need to superimpose a potentially infinite number of waves with energies and momenta centered on some mean value. It is only then that we get meaningful results. For example, the idea of a group velocity – which should correspond to the classical idea of the velocity of our particle – only makes sense in the context of a wave packet. Indeed, the group velocity of a wave packet (vg) is calculated as follows:

vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂(Ei)/∂(pi)

This assumes the existence of a dispersion relation which gives us ωi as a function of ki or – what amounts to the same – Ei as a function of pi. How do we get that? Well… There are a few ways to go about it but one interesting way of doing it is to re-write Schrödinger’s equation as the following pair of equations[4]:

  1. Re(∂ψ/∂t) = −[ħ/(2meff)]·Im(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2meff)]·cos(kx − ωt)
  2. Im(∂ψ/∂t) = [ħ/(2meff)]·Re(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2meff)]·sin(kx − ωt)

These equations imply the following dispersion relation:

ω = ħ·k2/(2m)
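
Before we play with the subscripts, it may be useful to see what this dispersion relation does numerically. The sketch below checks that the group velocity dω/dk (estimated with a finite difference) comes out as ħ·k/m = p/m, i.e. the classical velocity, while the phase velocity ω/k comes out as half of that. The 1% of c is just an assumed, non-relativistic example velocity.

```python
# Group and phase velocity for the dispersion relation ω = ħ·k²/(2m).
hbar = 1.054571817e-34   # J·s
m = 9.1093837015e-31     # kg, electron mass
v = 0.01 * 2.99792458e8  # m/s, assumed classical velocity

def omega(k):
    """The dispersion relation ω = ħ·k²/(2m)."""
    return hbar * k**2 / (2 * m)

k = m * v / hbar                                       # k = p/ħ
dk = k * 1e-6
v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)   # numerical dω/dk
v_phase = omega(k) / k                                 # ω/k
print(f"v_group = {v_group:.4e} m/s (classical velocity: {v:.4e})")
print(f"v_phase = {v_phase:.4e} m/s (half of it, because of that 1/2 factor)")
```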

Of course, we need to think about the subscripts now: we have ωi, ki, but… What about meff or, dropping the subscript, about m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c2. It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein’s mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too, and the two will, obviously, be related as follows: σm = σE/c2. We are tempted to do a few substitutions here. Let’s first check what we get when doing the mi = Ei/c2 substitution:

ωi = ħ·ki2/(2mi) = (1/2)∙ħ·ki2c2/Ei = (1/2)∙ħ·ki2c2/(ωi∙ħ) = (1/2)∙ki2c2/ωi

⇔ ωi2/ki2 = c2/2 ⇔ ωi/ki = vp = c/√2 !?

We get a very interesting but nonsensical condition for the dispersion relation here. I wonder what mistake I made. 😦

Let us try another substitution. The group velocity is what it is, right? It is the velocity of the group, so we can write: ki = pi/ħ = mi·vg/ħ. This gives us the following result:

ωi = ħ·(mi·vg/ħ)2/(2mi) = mi·vg2/(2ħ)

It is yet another interesting condition for the dispersion relation. Does it make any more sense? I am not so sure. That factor 1/2 troubles us. It only makes sense when we drop it. Now you will object that Schrödinger’s equation gives us the electron orbitals – and many other correct descriptions of quantum-mechanical stuff – so, surely, Schrödinger’s equation cannot be wrong. You’re right. It’s just that… Well… When we are splitting it up in two equations, as we are doing here, then we are looking at one of the two dimensions of the oscillation only and, therefore, it’s only half of the mass that counts. Complicated explanation but… Well… It should make sense, because the results that come out make sense. Think of it. So we write this:

  • Re(∂ψ/∂t) = −(ħ/meff)·Im(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·(ħ/meff)·cos(kx − ωt)
  • Im(∂ψ/∂t) = (ħ/meff)·Re(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·(ħ/meff)·sin(kx − ωt)

We then get the dispersion relation without that 1/2 factor:

ωi = ħ·ki2/mi

The mi = Ei/c2 substitution then gives us the result we sort of expected to see:

ωi = ħ·ki2/mi = ħ·ki2c2/Ei = ħ·ki2c2/(ωi∙ħ) ⇔ ωi/ki = vp = c

Likewise, the other calculation also looks more meaningful now:

ωi = ħ·(mi·vg/ħ)2/mi = mi·vg2/ħ

Sweet ! 🙂

Let us put this aside for the moment and focus on something else. If you look at the illustrations above, you see we can sort of distinguish (1) a linear velocity – the speed with which those wave crests (or troughs) move – and (2) some kind of circular or tangential velocity – the velocity along the red contour line above. We’ll need the formula for a tangential velocity: vt = a∙ω.

Now, if p is zero – so there is no linear motion, and the forward displacement per cycle is zero – then vt = a∙ω = a∙E/ħ is just all there is. We may double-check this as follows: the distance traveled in one period will be equal to 2πa, and the period of the oscillation is T = 2π·(ħ/E). Therefore, vt will, effectively, be equal to vt = 2πa/(2πħ/E) = a∙E/ħ. However, if p – and, therefore, λ = h/p – is non-zero, then the distance traveled in one period will be equal to 2πa + λ. The period remains the same: T = 2π·(ħ/E). Hence, we can write:

vt = (2π·a + λ)/T = (2π·a + λ)·E/h

For an electron, we did this weird calculation. We had an angular momentum formula (for an electron) which we equated with the real-life +ħ/2 or −ħ/2 values of its spin, and we got a numerical value for a. It was the Compton radius: the scattering radius for an electron. Let us write it out:

L = I·ω = a2·E2/(2·ħ·c2) = ħ/2 ⇔ a = ħ·c/E = ħ/(m·c)

Using the right numbers, you’ll find the numerical value for a: 3.8616×10−13 m. But let us just substitute the formula itself here:

vt = (2π·a + λ)/T = [2π·ħ/(m·c) + h/p]·(E/h) = E/(m·c) + E/p = c + vp

This is fascinating ! And we just calculated that vp is equal to c. For the elementary wavefunction, that is. Hence, we get this amazing result:

vt = 2c

This tangential velocity is twice the linear velocity !
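
For those who like to see this confirmed numerically, here is a short sketch. It assumes – as the text does – that vp = E/p = c for the elementary wavefunction, so p = E/c, and it uses the electron values from above.

```python
# Check that vt = (2π·a + λ)/T comes out as 2c when vp = E/p = c.
import math

hbar = 1.054571817e-34   # J·s
h = 2 * math.pi * hbar   # J·s
c = 2.99792458e8         # m/s
m = 9.1093837015e-31     # kg, electron mass

E = m * c**2             # rest energy
a = hbar / (m * c)       # reduced Compton radius
p = E / c                # assumption: E/p = c
lam = h / p              # de Broglie wavelength
T = h / E                # period of one cycle

vt = (2 * math.pi * a + lam) / T
print(f"vt = {vt:.4e} m/s = {vt / c:.3f}·c")   # ≈ 2c
```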

Of course, the question is: what is the physical significance of this? I need to further look at this. Wave velocities are, essentially, mathematical concepts only: the wave propagates through space, but nothing else is really moving. However, the geometric implications are obviously quite interesting and, hence, need further exploration.

One conclusion stands out: all these results reinforce our interpretation of the speed of light as a property of the vacuum – or of the fabric of spacetime itself. 🙂

[1] Even neutrinos should have some (rest) mass. In fact, the mass of the known neutrino flavors was estimated to be smaller than 0.12 eV/c2. This mass combines the three known neutrino flavors.

[2] Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as KE = E − E0 = mvc2 − m0c2 = m0γc2 − m0c2 = m0c2(γ − 1). As v approaches c, γ approaches infinity and, therefore, the kinetic energy would become infinite as well.

[3] It is, of course, extremely small, but 1 am is the current sensitivity of the LIGO detector for gravitational waves. It is also thought of as the upper limit for the length of an electron, for quarks, and for fundamental strings in string theory. It is, in any case, about 6×1016 times larger than the Planck length (1.616229(38)×10−35 m).

[4] The meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m. As for the equations, they are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙[ħ/(2meff)]∙∇2ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i2 = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i2∙d = − d + i∙c.

The geometry of the wavefunction

My posts and article on the wavefunction as a gravitational wave are rather short on the exact geometry of the wavefunction, so let us explore that a bit here. By now, you know the formula for the elementary wavefunction by heart:

ψ = a·e−i[E·t − p∙x]/ħ = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and px/ħ reduces to p∙x/ħ. This amounts to saying our particle is traveling along the x-axis. The geometry of the wavefunction is illustrated below. The x-axis is the direction of propagation, and the y- and z-axes represent the real and imaginary part of the wavefunction respectively.

Note that, when applying the right-hand rule for the axes, the vertical axis is the y-axis, not the z-axis. Hence, we may associate the vertical axis with the cosine component, and the horizontal axis with the sine component. If the origin is the (x, t) = (0, 0) point, then cos(θ) = cos(0) = 1 and sin(θ) = sin(0) = 0. This is reflected in both illustrations, which show a left- and a right-handed wave respectively. I am convinced these correspond to the two possible values for the quantum-mechanical spin of the wave: +ħ/2 or −ħ/2. But… Well… Who am I? The cosine and sine components are shown below. Needless to say, the cosine and sine function are the same, except for a phase difference of π/2: sin(θ) = cos(θ − π/2).

[Illustration: circular polarization with its sine and cosine components]

Surely, Nature doesn’t care a hoot about our conventions for measuring the phase angle clockwise or counterclockwise and, therefore, the ψ = a·e+i[E·t − p∙x]/ħ function should, effectively, also be permitted. We know that cos(θ) = cos(−θ) and sin(−θ) = −sin(θ), so we can write:

ψ = a·e+i[E·t − p∙x]/ħ = a·cos(E∙t/ħ − p∙x/ħ) + i·a·sin(E∙t/ħ − p∙x/ħ)

= a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ)

E/ħ = ω gives the frequency in time (expressed in radians per second), while p/ħ = k gives us the wavenumber, or the frequency in space (expressed in radians per meter). Of course, we may write: f = ω/2π  and λ = 2π/k, which gives us the two de Broglie relations:

  1. E = ħ∙ω = h∙f
  2. p = ħ∙k = h/λ

The frequency in time is easy to interpret (a particle will always have some mass and, therefore, some energy), but the wavelength is inversely proportional to the momentum: λ = h/p. Hence, if p goes to zero, then the wavelength becomes infinitely long: if p → 0, then λ → ∞. For the limit situation, a particle with zero rest mass (m0 = 0), the velocity may be c and, therefore, we find that p = mv·v = m∙c and, therefore, p∙c = m∙c2 = E, which we may also write as: E/p = c. Hence, for a particle with zero rest mass, the wavelength can be written as:

λ = h/p = hc/E = h/mc

However, we argued that the physical dimension of the components of the wavefunction may be usefully expressed in N/kg units (force per unit mass), while the physical dimension of the components of the electromagnetic wave is expressed in N/C (force per unit charge). This, in fact, explains the dichotomy between bosons (photons) and fermions (spin-1/2 particles). Hence, all matter-particles should have some mass.[1] But how should we interpret the inverse proportionality between λ and p?

We should probably first ask ourselves what wavelength we are talking about. The wave only has a phase velocity here, which is equal to vp = ω/k = (E/ħ)/(p/ħ) = E/p. Of course, we know that, classically, the momentum will be equal to the group velocity times the mass: p = m·vg. However, when p is zero, we have a division by zero once more: if p → 0, then vp = E/p → ∞. Infinite wavelengths and infinite phase velocities probably tell us that our particle has to move: our notion of a particle at rest is mathematically inconsistent. If we associate this elementary wavefunction with some particle, and if we then imagine it to move, somehow, then we get an interesting relation between the group and the phase velocity:

vp = ω/k = E/p = E/(m·vg) = (m·c2)/(m·vg) = c2/vg

We can re-write this as vp·vg = c2, which reminds us of the relationship between the electric and magnetic constant (1/ε0)·(1/μ0) = c2. But what is the group velocity of the elementary wavefunction? Is it a meaningful concept?

The phase velocity is just the ratio of ω/k. In contrast, the group velocity is the derivative of ω with respect to k. So we need to write ω as a function of k. Can we do that even if we have only one wave? We do not have a wave packet here, right? Just some hypothetical building block of a real-life wavefunction, right? Right. So we should introduce uncertainty about E and p and build up the wave packet, right? Well… Yes. But let’s wait with that, and see how far we can go in our interpretation of this elementary wavefunction. Let’s first get that ω = ω(k) relation. You’ll remember we can write Schrödinger’s equation – the equation that describes the propagation mechanism for matter-waves – as the following pair of equations:

  1. Re(∂ψ/∂t) = −[ħ/(2m)]·Im(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2m)]·cos(kx − ωt)
  2. Im(∂ψ/∂t) = [ħ/(2m)]·Re(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2m)]·sin(kx − ωt)

This tells us that ω = ħ·k2/(2m). Therefore, we can calculate ∂ω/∂k as:

∂ω/∂k = ħ·k/m = p/m = vg

We learn nothing new. We are going round and round in circles here, and we always end up with a tautology: as soon as we have a non-zero momentum, we have a mathematical formula for the group velocity – but we don’t know what it represents – and a finite wavelength. In fact, using the p = ħ∙k = h/λ relation, we can write one as a function of the other:

λ = h/p = h/mvg ⇔ vg = h/mλ

What does this mean? It resembles the c = h/mλ relation we had for a particle with zero rest mass. Of course, it does: the λ = h/mc relation is, once again, a limit for vg going to c. By the way, it is interesting to note that the vp·vg = c2 relation implies that the phase velocity is always superluminal. That’s easy to see when you re-write the equation in terms of relative velocities: (vp/c)·(vg/c) = βphase·βgroup = 1. Hence, if βgroup < 1, then βphase > 1.
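
To make that tangible, here is a tiny sketch for an electron moving at 1% of c (the velocity is just an arbitrary example):

```python
# The vp·vg = c² relation: a slow particle has a very superluminal phase velocity.
c = 2.99792458e8         # m/s
v_group = 0.01 * c       # assumed classical (group) velocity
v_phase = c**2 / v_group
print(f"β_group = {v_group / c:.2f}, β_phase = {v_phase / c:.0f}")   # 0.01 and 100
print(f"β_group·β_phase = {(v_group / c) * (v_phase / c):.1f}")      # 1.0
```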

So what is the geometry, really? Let's look at the ψ = a·cos(p∙x/ħ – E∙t/ħ) + i·a·sin(p∙x/ħ – E∙t/ħ) formula once more. If we write the spatial part of the phase, p∙x/ħ, as Δ, then we want to know for what x this Δ will be equal to 2π. So we write:

Δ = p∙x/ħ = 2π ⇔ x = 2π∙ħ/p = h/p = λ

So now we get a meaningful interpretation for that wavelength: it's the distance between the crests of the wave, so to speak, as illustrated below.

[Illustration: the wavelength λ as the distance between successive crests of the wavefunction (the red contour line traces the wavefunction itself)]

Can we now find a meaningful (i.e. geometric) interpretation for the group and phase velocity? If you look at the illustration above, you see we can sort of distinguish (1) a linear velocity (the speed with which those wave crests move) and (2) some kind of circular or tangential velocity (the velocity along the red contour line above). We'll probably need the formula for the tangential velocity: v = a∙ω. If p = 0 (so we have that weird infinitely long wavelength), then we have two velocities:

  1. The tangential velocity around the a·exp[−i∙E∙t/ħ] circle, so to speak, and that will just be equal to v = a∙ω = a∙E/ħ.
  2. The red contour line gets stretched out – infinitely long, so to speak – and the velocity along it becomes… What does it do? Does it go to ∞, or to c?

Let’s think about this. For a particle at rest, we had this weird calculation. We had an angular momentum formula (for an electron) which we equated with the real-life +ħ/2 or −ħ/2 values of its spin. And so we got a numerical value for a. It was the Compton radius: the scattering radius for an electron. Let me copy it once again:

a = ħ/(m∙c) ≈ 3.8616×10⁻¹³ m
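
Here is a minimal numerical check of that number – and of the tangential velocity v = a∙ω we mentioned above (standard values for ħ, c and the electron mass are assumed):

    # Check the Compton radius a = ħ/(m·c) and the tangential velocity a·ω = a·E/ħ for an electron.
    hbar = 1.054571817e-34   # reduced Planck constant, J·s
    c    = 2.99792458e8      # speed of light, m/s
    me   = 9.1093837015e-31  # electron mass, kg

    a = hbar/(me*c)          # about 3.8616e-13 m
    E = me*c**2              # electron rest energy, about 8.19e-14 J
    print(a, E, a*(E/hbar)/c)   # the last ratio is 1: the tangential velocity a·ω works out to c

The ħ and m factors cancel, so a∙ω comes out as c exactly.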

Just to bring this story a bit back to Earth, you should note the calculated value: a ≈ 3.8616×10⁻¹³ m. We then did another weird calculation. We said all of the energy of the electron had to be packed in this cylinder that might or might not be there. The point was: the energy is finite, so that elementary wavefunction cannot have an infinite length in space. Indeed, assuming that the energy was distributed uniformly, we jotted down this formula, which reflects the formula for the volume of a cylinder:

E = π·a²·l ⇔ l = E/(π·a²)

Using the value we got for the Compton scattering radius (a ≈ 3.8616×10⁻¹³ m), we got an astronomical value for l. Let me write it out:

l = (8.19×10⁻¹⁴)/(π·14.9×10⁻²⁶) ≈ 0.175×10¹² m
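
Here is the same arithmetic as a short check – the value for the astronomical unit is only there for the comparison that follows:

    # Check l = E/(π·a²) for the electron, and compare with the Sun-Earth distance.
    import math
    E  = 8.19e-14            # electron rest energy, J
    a  = 3.8616e-13          # Compton radius, m
    AU = 1.496e11            # astronomical unit, m (for the comparison below)

    l = E/(math.pi*a**2)     # about 0.175e12, as above (the E = π·a²·l formula treats the numerical value of E as a volume)
    print(l, l/AU)           # l is of the order of the Sun-Earth distance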

It is, literally, an astronomical value: 0.175×10¹² m is 175 million kilometers, so that's like the distance between the Sun and the Earth. We wrote, jokingly, that such a space is far too large to look for an electron in and, hence, that we should really build a proper wave packet by making use of the Uncertainty Principle: allowing for uncertainty in the energy should, effectively, reduce the uncertainty in position.

But… Well… What if we use that value as the value for λ? We'd get that linear velocity, right? Let's try it. The period is equal to T = 2π·(ħ/E) = h/E and λ = l = E/(π·a²), so we write:

v = λ/T = [E/(π·a²)]·(E/h) = E²/(π·a²·h)

We can write this as a function of m and the c and ħ constants only, substituting E = m·c², a = ħ/(m·c) and h = 2π·ħ:

v = m⁴·c⁶/(2π²·ħ³)
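
Because that last substitution is easy to get wrong, here is a small symbolic check of it (a sympy sketch, nothing more):

    # Substitute E = m·c², a = ħ/(m·c) and h = 2π·ħ into v = E²/(π·a²·h).
    import sympy as sp

    m, c, hbar = sp.symbols('m c hbar', positive=True)
    E = m*c**2
    a = hbar/(m*c)
    h = 2*sp.pi*hbar

    v = E**2/(sp.pi*a**2*h)
    print(sp.simplify(v))    # c**6*m**4/(2*pi**2*hbar**3), i.e. m⁴·c⁶/(2π²·ħ³)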

A weird formula, but not necessarily a nonsensical one: we get a finite speed. Now, if the wavelength becomes somewhat less astronomical, we'll get different values, of course. I have a strange feeling that, with these formulas, we should, somehow, be able to explain relativistic length contraction. But I will let you think about that for now. Here I just wanted to show the geometry of the wavefunction in a bit more detail.

[1] The discussions on the mass of neutrinos are interesting in this regard. Scientists all felt the neutrino had to have some (rest) mass, so my instinct on this matches theirs. In fact, experimental confirmation came in only recently, with the mass of the known neutrino flavors estimated to be something like 0.12 eV/c². This estimate combines the three known neutrino flavors. To understand this number, note that it is of the same order of magnitude as the equivalent mass of low-energy photons, such as infrared or microwave radiation.