The speed of light as an angular velocity

Over the weekend, I worked on a revised version of my paper on a physical interpretation of the wavefunction. However, I forgot to add the final remarks on the speed of light as an angular velocity. I know… This post is for my faithful followers only. It is dense, but let me add the missing bits here:

A physical explanation for relativistic length contraction?

My last posts were all about a possible physical interpretation of the quantum-mechanical wavefunction. To be precise, we have been interpreting the wavefunction as a gravitational wave. In this interpretation, the real and imaginary component of the wavefunction get a physical dimension: force per unit mass (newton per kg). The inspiration here was the structural similarity between Coulomb’s and Newton’s force laws. They both look alike: it’s just that one gives us a force per unit charge (newton per coulomb), while the other gives us a force per unit mass.

So… Well… Many nice things came out of this – and I wrote about that at length – but last night I was thinking this interpretation may also offer an explanation of relativistic length contraction. Before we get there, let us re-visit our hypothesis.

The geometry of the wavefunction

The elementary wavefunction is written as:

ψ = a·e−i(E·t − px)/ħ = a·cos(px/ħ − E∙t/ħ) + i·a·sin(px/ħ − E∙t/ħ)

Nature should not care about our conventions for measuring the phase angle clockwise or counterclockwise and, therefore, the ψ = a·e+i(E·t − px)/ħ function may also be permitted. We know that cos(θ) = cos(−θ) and sin(−θ) = −sin(θ), so we can write:

ψ = a·e+i(E·t − p∙x)/ħ = a·cos(E∙t/ħ − px/ħ) + i·a·sin(E∙t/ħ − px/ħ)

= a·cos(px/ħ − E∙t/ħ) − i·a·sin(px/ħ − E∙t/ħ)

The vectors p and x are the momentum and position vector respectively: p = (px, py, pz) and x = (x, y, z). However, if we assume there is no uncertainty about p – not about the direction nor the magnitude – then we may choose an x-axis which reflects the direction of p. As such, x = (x, y, z) reduces to (x, 0, 0), and px/ħ reduces to p∙x/ħ. This amounts to saying our particle is traveling along the x-axis or, if p = 0, that our particle is located somewhere on the x-axis. Hence, the analysis is one-dimensional only.

The geometry of the elementary wavefunction is illustrated below. The x-axis is the direction of propagation, and the y- and z-axes represent the real and imaginary part of the wavefunction respectively.

Note that, when applying the right-hand rule for the axes, the vertical axis is the y-axis, not the z-axis. Hence, we may associate the vertical axis with the cosine component, and the horizontal axis with the sine component. You can check this as follows: if the origin is the (x, t) = (0, 0) point, then cos(θ) = cos(0) = 1 and sin(θ) = sin(0) = 0. This is reflected in both illustrations, which show a left- and a right-handed wave respectively. We speculated this should correspond to the two possible values for the quantum-mechanical spin of the wave: +ħ/2 or −ħ/2. The cosine and sine components for the left-handed wave are shown below. Needless to say, the cosine and sine function are the same, except for a phase difference of π/2: sin(θ) = cos(θ − π/2).

[Figure: circular polarization with components]

As for the wave velocity, and its direction of propagation, we know that the (phase) velocity of any wave F(kx – ωt) is given by vp = ω/k = (E/ħ)/(p/ħ) = E/p. Of course, the momentum might also be in the negative x-direction, in which case k would be equal to −p/ħ and, therefore, we would get a negative phase velocity: vp = ω/k = −E/p.

The de Broglie relations

E/ħ = ω gives the frequency in time (expressed in radians per second), while p/ħ = k gives us the wavenumber, or the frequency in space (expressed in radians per meter). Of course, we may write: f = ω/2π  and λ = 2π/k, which gives us the two de Broglie relations:

  1. E = ħ∙ω = h∙f
  2. p = ħ∙k = h/λ
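
As a quick numerical illustration of these two relations, here is a minimal sketch (Python, CODATA constants; the electron velocity v = 0.01·c and the non-relativistic p = m·v are arbitrary choices for the example):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant (J·s)
h    = 6.62607015e-34    # Planck constant (J·s)
c    = 299792458.0       # speed of light (m/s)
m_e  = 9.1093837015e-31  # electron rest mass (kg)

E = m_e * c**2           # rest energy (J)
p = m_e * 0.01 * c       # momentum for v = 0.01c (kg·m/s), non-relativistic

omega = E / hbar               # frequency in time (rad/s)
k     = p / hbar               # wavenumber (rad/m)
f     = omega / (2 * math.pi)  # cycles per second (Hz)
lam   = h / p                  # de Broglie wavelength (m)

print(f"omega = {omega:.3e} rad/s, f = {f:.3e} Hz")
print(f"k = {k:.3e} rad/m, lambda = {lam:.3e} m")
```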

The frequency in time is easy to interpret. The wavefunction of a particle with more energy, or more mass, will have a higher density in time than a particle with less energy.

In contrast, the second de Broglie relation is somewhat harder to interpret. According to the p = h/λ relation, the wavelength is inversely proportional to the momentum: λ = h/p. The velocity of a photon, or a (theoretical) particle with zero rest mass (m0 = 0), is c and, therefore, we find that p = mv∙v = mc∙c = m∙c (all of the energy is kinetic). Hence, we can write: p∙c = m∙c2 = E, which we may also write as: E/p = c. Hence, for a particle with zero rest mass, the wavelength can be written as:

λ = h/p = hc/E = h/mc

However, this is a limiting situation – applicable to photons only. Real-life matter-particles should have some mass[1] and, therefore, their velocity will never be c.[2]

Hence, if p goes to zero, then the wavelength becomes infinitely long: if p → 0 then λ → ∞. How should we interpret this inverse proportionality between λ and p? To answer this question, let us first see what this wavelength λ actually represents.

If we look at ψ = a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ) once more, and if we write p∙x/ħ as Δ, then we can look at Δ as a phase factor, and so we will be interested to know for what x this phase factor Δ = p∙x/ħ will be equal to 2π. So we write:

Δ = p∙x/ħ = 2π ⇔ x = 2π∙ħ/p = h/p = λ

So now we get a meaningful interpretation for that wavelength. It is the distance between the crests (or the troughs) of the wave, so to speak, as illustrated below. Of course, this two-dimensional wave has no real crests or troughs: we measure crests and troughs against the y-axis here. Hence, our definition depends on the frame of reference.

[Figure: wavelength]

So now we know what λ actually represents for our one-dimensional elementary wavefunction. The time that is needed for one cycle is equal to T = 1/f = 2π·(ħ/E). Hence, we can now calculate the wave velocity:

v = λ/T = (h/p)/[2π·(ħ/E)] = E/p

Unsurprisingly, we just get the phase velocity that we had calculated already: v = vp = E/p. The question remains: what if p is zero? What if we are looking at some particle at rest? It is an intriguing question: we get an infinitely long wavelength, and an infinite wave velocity.

Now, re-writing v = E/p as v = m∙c2/(m∙vg) = c/βg – in which βg is the relative classical velocity[3] of our particle (βg = vg/c) – tells us that the phase velocities will effectively be superluminal (βg < 1, so 1/βg > 1, and hence c/βg > c), but what if βg approaches zero? The conclusion seems unavoidable: for a particle at rest, we only have a frequency in time, as the wavefunction reduces to:

ψ = a·e−i·E·t/ħ = a·cos(E∙t/ħ) – i·a·sin(E∙t/ħ)

How should we interpret this?

A physical interpretation of relativistic length contraction?

In my previous posts, we argued that the oscillations of the wavefunction pack energy. Because the energy of our particle is finite, the wave train cannot be infinitely long. If we assume some definite number of oscillations, then the string of oscillations will be shorter as λ decreases. Hence, the physical interpretation of the wavefunction that is offered here may explain relativistic length contraction.

🙂

Yep. Think about it. 🙂

[1] Even neutrinos have some (rest) mass. This was first confirmed by the US-Japan Super-Kamiokande collaboration in 1998. Neutrinos oscillate between three so-called flavors: electron neutrinos, muon neutrinos and tau neutrinos. Recent data suggests that the sum of their masses is less than a millionth of the rest mass of an electron. Hence, they propagate at speeds that are very near to the speed of light.

[2] Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as KE = E − E0 = mvc2 − m0c2 = m0γc2 − m0c2 = m0c2(γ − 1). As v approaches c, γ approaches infinity and, therefore, the kinetic energy would become infinite as well.

[3] Because our particle will be represented by a wave packet, i.e. a superposition of elementary waves with different E and p, the classical velocity of the particle becomes the group velocity of the wave, which is why we denote it by vg.

The geometry of the wavefunction (2)

This post further builds on the rather remarkable results we got in our previous posts. Let us start with the basics once again. The elementary wavefunction is written as:

ψ = a·e−i[E·t − px]/ħ = a·cos(px/ħ − E∙t/ħ) + i·a·sin(px/ħ − E∙t/ħ)

Of course, Nature (or God, as Einstein would put it) does not care about our conventions for measuring an angle (i.e. the phase of our wavefunction) clockwise or counterclockwise and, therefore, the ψ = a·e+i[E·t − px]/ħ function is also permitted. We know that cos(θ) = cos(−θ) and sin(−θ) = −sin(θ), so we can write:

ψ = a·e+i[E·t − p∙x]/ħ = a·cos(E∙t/ħ − px/ħ) + i·a·sin(E∙t/ħ − px/ħ)

= a·cos(px/ħ − E∙t/ħ) − i·a·sin(px/ħ − E∙t/ħ)

The vectors p and x are the momentum and position vector respectively: p = (px, py, pz) and x = (x, y, z). However, if we assume there is no uncertainty about p – not about the direction, and not about the magnitude – then the direction of p can be our x-axis. In this reference frame, x = (x, y, z) reduces to (x, 0, 0), and px/ħ reduces to p∙x/ħ. This amounts to saying our particle is traveling along the x-axis or, if p = 0, that our particle is located somewhere on the x-axis. So we have an analysis in one dimension only then, which facilitates our calculations. The geometry of the wavefunction is then as illustrated below. The x-axis is the direction of propagation, and the y- and z-axes represent the real and imaginary part of the wavefunction respectively.

Note that, when applying the right-hand rule for the axes, the vertical axis is the y-axis, not the z-axis. Hence, we may associate the vertical axis with the cosine component, and the horizontal axis with the sine component. [You can check this as follows: if the origin is the (x, t) = (0, 0) point, then cos(θ) = cos(0) = 1 and sin(θ) = sin(0) = 0. This is reflected in both illustrations, which show a left- and a right-handed wave respectively.]

Now, you will remember that we speculated the two polarizations (left- versus right-handed) should correspond to the two possible values for the quantum-mechanical spin of the wave (+ħ/2 or −ħ/2). We will come back to this at the end of this post. Just focus on the essentials first: the cosine and sine components for the left-handed wave are shown below. Look at it carefully and try to understand. Needless to say, the cosine and sine function are the same, except for a phase difference of π/2: sin(θ) = cos(θ − π/2).

[Figure: circular polarization with components]

As for the wave velocity, and its direction of propagation, we know that the (phase) velocity of any waveform F(kx − ωt) is given by vp = ω/k. In our case, we find that vp = ω/k = (E/ħ)/(p/ħ) = E/p. Of course, the momentum might also be in the negative x-direction, in which case k would be equal to −p/ħ and, therefore, we would get a negative phase velocity: vp = ω/k = (E/ħ)/(−p/ħ) = −E/p.

As you know, E/ħ = ω gives the frequency in time (expressed in radians per second), while p/ħ = k gives us the wavenumber, or the frequency in space (expressed in radians per meter). [If in doubt, check my post on essential wave math.] Now, you also know that f = ω/2π  and λ = 2π/k, which gives us the two de Broglie relations:

  1. E = ħ∙ω = h∙f
  2. p = ħ∙k = h/λ

The frequency in time (oscillations or radians per second) is easy to interpret. A particle will always have some mass and, therefore, some energy, and it is easy to appreciate the fact that the wavefunction of a particle with more energy (or more mass) will have a higher density in time than a particle with less energy.

However, the second de Broglie relation is somewhat harder to interpret. Note that the wavelength is inversely proportional to the momentum: λ = h/p. Hence, if p goes to zero, then the wavelength becomes infinitely long, so we write:

If p → 0 then λ → ∞.

For the limit situation, a particle with zero rest mass (m0 = 0), the velocity is c and, therefore, we find that p = mv∙v = mc∙c = m∙c (all of the energy is kinetic) and, therefore, p∙c = m∙c2 = E, which we may also write as: E/p = c. Hence, for a particle with zero rest mass, the wavelength can be written as:

λ = h/p = hc/E = h/mc

Of course, we are talking about a photon here: the zero rest mass applies to photons only. In contrast, all matter-particles should have some mass[1] and, therefore, their velocity will never equal c.[2] The question remains: how should we interpret the inverse proportionality between λ and p?

Let us first see what this wavelength λ actually represents. If we look at ψ = a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ) once more, and if we write p∙x/ħ as Δ, then we can look at Δ as a phase factor, and so we will be interested to know for what x this phase factor Δ = p∙x/ħ will be equal to 2π. So we write:

Δ = p∙x/ħ = 2π ⇔ x = 2π∙ħ/p = h/p = λ

So now we get a meaningful interpretation for that wavelength. It is the distance between the crests (or the troughs) of the wave, so to speak, as illustrated below. Of course, this two-dimensional wave has no real crests or troughs: they depend on your frame of reference.

[Figure: wavelength]

So now we know what λ actually represents for our one-dimensional elementary wavefunction. Now, the time that is needed for one cycle is equal to T = 1/f = 2π·(ħ/E). Hence, we can now calculate the wave velocity:

v = λ/T = (h/p)/[2π·(ħ/E)] = E/p

Unsurprisingly, we just get the phase velocity that we had calculated already: v = vp = E/p. It does not answer the question: what if p is zero? What if we are looking at some particle at rest? It is an intriguing question: we get an infinitely long wavelength, and an infinite phase velocity. Now, we know phase velocities can be superluminal, but they should not be infinite. So what does the mathematical inconsistency tell us? Do these infinitely long wavelengths and infinite wave velocities tell us that our particle has to move? Do they tell us our notion of a particle at rest is mathematically inconsistent?

Maybe. But maybe not. Perhaps the inconsistency just tells us our elementary wavefunction – or the concept of a precise energy, and a precise momentum – does not make sense. This is where the Uncertainty Principle comes in: stating that p = 0 implies zero uncertainty. Hence, the σp factor in the σp∙σx ≥ ħ/2 relation would be zero and, therefore, σp∙σx would be zero which, according to the Uncertainty Principle, it cannot be: it can be very small, but it cannot be zero.

It is interesting to note here that σp refers to the standard deviation from the mean, as illustrated below. Of course, the distribution may or may not be normal – we don't know – but a normal distribution makes intuitive sense, of course. Also, if we assume the mean is zero, then the uncertainty is basically about the direction in which our particle is moving, as the momentum might then be positive or negative.

[Figure: standard deviation diagram]

The question of natural units may pop up. The Uncertainty Principle suggests a numerical value of the natural unit for momentum and distance that is equal to the square root of ħ/2, so that’s about 0.726×10−17 m for the distance unit and 0.726×10−17 N∙s for the momentum unit, as the product of both gives us ħ/2. To make this somewhat more real, we may note that 0.726×10−17 m is the attometer scale (1 am = 1×10−18 m), so that is very small but not unreasonably small.[3]
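
These numbers are easy to check. A minimal sketch (the reading of √(ħ/2) as a pair of natural units is, of course, the speculation above):

```python
import math

hbar = 1.054571817e-34        # J·s = N·m·s
unit = math.sqrt(hbar / 2)    # same number for the distance unit (m) and the momentum unit (N·s)
print(f"sqrt(hbar/2) = {unit:.3e}")                        # ~0.726e-17
print(f"product = {unit**2:.3e} = hbar/2 = {hbar/2:.3e}")  # the product gives hbar/2
```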

Hence, we need to superimpose a potentially infinite number of waves with energies and momenta centered on some mean value. It is only then that we get meaningful results. For example, the idea of a group velocity – which should correspond to the classical idea of the velocity of our particle – only makes sense in the context of a wave packet. Indeed, the group velocity of a wave packet (vg) is calculated as follows:

vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂(Ei)/∂(pi)

This assumes the existence of a dispersion relation which gives us ωi as a function of ki or – what amounts to the same – Ei as a function of pi. How do we get that? Well… There are a few ways to go about it but one interesting way of doing it is to re-write Schrödinger's equation as the following pair of equations[4]:

  1. Re(∂ψ/∂t) = −[ħ/(2meff)]·Im(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2meff)]·sin(kx − ωt)
  2. Im(∂ψ/∂t) = [ħ/(2meff)]·Re(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2meff)]·cos(kx − ωt)

These equations imply the following dispersion relation:

ω = ħ·k2/(2m)
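
As a quick check on this dispersion relation: the group velocity ∂ω/∂k should come out as ħ·k/m, i.e. the classical p/m. A minimal numerical sketch (the k value is an arbitrary choice):

```python
hbar = 1.054571817e-34   # J·s
m    = 9.1093837015e-31  # electron mass (kg)

def omega(k):
    return hbar * k**2 / (2 * m)   # the dispersion relation above

k  = 1.0e10                        # arbitrary wavenumber (rad/m)
dk = 1.0e3                         # step for the numerical derivative
v_group     = (omega(k + dk) - omega(k - dk)) / (2 * dk)
v_classical = hbar * k / m         # p/m, with p = hbar*k
print(f"group velocity: {v_group:.6e} m/s")
print(f"p/m:            {v_classical:.6e} m/s")  # the two agree
```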

Of course, we need to think about the subscripts now: we have ωi, ki, but… What about meff or, dropping the subscript, about m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c2. It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein’s mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too, and the two will, obviously, be related as follows: σm = σE/c2. We are tempted to do a few substitutions here. Let’s first check what we get when doing the mi = Ei/c2 substitution:

ωi = ħ·ki2/(2mi) = (1/2)∙ħ·ki2c2/Ei = (1/2)∙ħ·ki2c2/(ωi∙ħ) = (1/2)∙ki2c2/ωi

⇔ ωi2/ki2 = c2/2 ⇔ ωi/ki = vp = c/√2 !?

We get a very interesting but nonsensical condition for the dispersion relation here. I wonder what mistake I made. 😦

Let us try another substitution. The group velocity is what it is, right? It is the velocity of the group, so we can write: ki = pi/ħ = mi·vg/ħ. This gives us the following result:

ωi = ħ·(mi·vg/ħ)2/(2mi) = mi·vg2/(2ħ)

It is yet another interesting condition for the dispersion relation. Does it make any more sense? I am not so sure. That factor 1/2 troubles us. It only makes sense when we drop it. Now you will object that Schrödinger's equation gives us the electron orbitals – and many other correct descriptions of quantum-mechanical stuff – so, surely, Schrödinger's equation cannot be wrong. You're right. It's just that… Well… When we are splitting it up in two equations, as we are doing here, then we are looking at one of the two dimensions of the oscillation only and, therefore, it's only half of the mass that counts. Complicated explanation but… Well… It should make sense, because the results that come out make sense. Think of it. So we write this:

  • Re(∂ψ/∂t) = −(ħ/meff)·Im(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·(ħ/meff)·sin(kx − ωt)
  • Im(∂ψ/∂t) = (ħ/meff)·Re(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·(ħ/meff)·cos(kx − ωt)

We then get the dispersion relation without that 1/2 factor:

ωi = ħ·ki2/mi

The mi = Ei/c2 substitution then gives us the result we sort of expected to see:

ωi = ħ·ki2/mi = ħ·ki2c2/Ei = ħ·ki2c2/(ωi∙ħ) ⇔ ωi/ki = vp = c

Likewise, the other calculation also looks more meaningful now:

ωi = ħ·(mi·vg/ħ)2/mi = mi·vg2/ħ

Sweet ! 🙂

Let us put this aside for the moment and focus on something else. If you look at the illustrations above, you see we can sort of distinguish (1) a linear velocity – the speed with which those wave crests (or troughs) move – and (2) some kind of circular or tangential velocity – the velocity along the red contour line above. We’ll need the formula for a tangential velocity: vt = a∙ω.

Now, if λ is zero, then vt = a∙ω = a∙E/ħ is just all there is. We may double-check this as follows: the distance traveled in one period will be equal to 2πa, and the period of the oscillation is T = 2π·(ħ/E). Therefore, vt will, effectively, be equal to vt = 2πa/(2πħ/E) = a∙E/ħ. However, if λ is non-zero, then the distance traveled in one period will be equal to 2πa + λ. The period remains the same: T = 2π·(ħ/E). Hence, we can write:

vt = (2πa + λ)/T = (2πa + λ)·E/(2π·ħ) = a·E/ħ + λ·E/h = a·E/ħ + vp

For an electron, we did this weird calculation. We had an angular momentum formula (for an electron) which we equated with the real-life +ħ/2 or −ħ/2 values of its spin, and we got a numerical value for a. It was the Compton radius: the scattering radius for an electron. Let us write it out:

L = I·ω = (m·a2/2)·(E/ħ) = (1/2)·a2·m2·c2/ħ = ħ/2 ⇔ a = ħ/(m·c)

Using the right numbers, you'll find the numerical value for a: 3.8616×10−13 m. But let us just substitute the formula itself here:

vt = a·E/ħ + vp = [ħ/(m·c)]·(m·c2/ħ) + vp = c + vp

This is fascinating ! And we just calculated that vp is equal to c. For the elementary wavefunction, that is. Hence, we get this amazing result:

vt = 2c

This tangential velocity is twice the linear velocity !
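
The arithmetic is easy to verify. A minimal sketch, using a = ħ/(m·c) and vp = c for the elementary wavefunction, as derived above:

```python
hbar = 1.054571817e-34   # J·s
c    = 299792458.0       # m/s
m_e  = 9.1093837015e-31  # kg

a   = hbar / (m_e * c)    # Compton radius (m), ~3.8616e-13
E   = m_e * c**2          # rest energy (J)
v_t = a * (E / hbar) + c  # tangential velocity: a·omega plus the phase velocity c

print(f"a = {a:.4e} m")
print(f"v_t = {v_t:.4e} m/s = {v_t / c:.1f}·c")   # comes out at 2c
```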

Of course, the question is: what is the physical significance of this? I need to further look at this. Wave velocities are, essentially, mathematical concepts only: the wave propagates through space, but nothing else is really moving. However, the geometric implications are obviously quite interesting and, hence, need further exploration.

One conclusion stands out: all these results reinforce our interpretation of the speed of light as a property of the vacuum – or of the fabric of spacetime itself. 🙂

[1] Even neutrinos should have some (rest) mass. In fact, the mass of the known neutrino flavors was estimated to be smaller than 0.12 eV/c2. This mass combines the three known neutrino flavors.

[2] Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as KE = E − E0 = mvc2 − m0c2 = m0γc2 − m0c2 = m0c2(γ − 1). As v approaches c, γ approaches infinity and, therefore, the kinetic energy would become infinite as well.

[3] It is, of course, extremely small, but 1 am is the current sensitivity of the LIGO detector for gravitational waves. It is also thought of as the upper limit for the length of an electron, for quarks, and for fundamental strings in string theory. It is, in any case, some 1017 times larger than the order of magnitude of the Planck length (1.616229(38)×10−35 m).

[4] The meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m. As for the equations, they are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙(ħ/meff)∙∇2ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i2 = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i2∙d = − d + i∙c.

This year’s Nobel Prize for Physics…

One of my beloved brothers just sent me the news on this year’s Nobel Prize for Physics. Of course, it went to the MIT/Caltech LIGO scientists – who confirmed the reality of gravitational waves. That’s exactly the topic that I am exploring when trying to digest all this quantum math and stuff. Brilliant !

I actually sent the physicists a congratulatory message – and my paper ! I can’t believe I actually did that.

In the best case, I just made a fool of myself. In the worst case… Well… I just made a fool of myself. 🙂

Electron and photon strings

In my previous posts, I’ve been playing with… Well… At the very least, a new didactic approach to understanding the quantum-mechanical wavefunction. I just boldly assumed the matter-wave is a gravitational wave. I did so by associating its components with the dimension of gravitational field strength: newton per kg, which is the dimension of acceleration (N/kg = m/s2). Why? When you remember the physical dimension of the electromagnetic field is N/C (force per unit charge), then that’s kinda logical, right? 🙂 The math is beautiful. Key consequences include the following:

  1. Schrodinger’s equation becomes an energy diffusion equation.
  2. Energy densities give us probabilities.
  3. The elementary wavefunction for the electron gives us the electron radius.
  4. Spin angular momentum can be interpreted as reflecting the right- or left-handedness of the wavefunction.
  5. Finally, the mysterious boson-fermion dichotomy is no longer “deep down in relativistic quantum mechanics”, as Feynman famously put it.

It’s all great. Every day brings something new. 🙂 Today I want to focus on our weird electron model and how we get God’s number (aka the fine-structure constant) out of it. Let’s recall the basics of it. We had the elementary wavefunction:

ψ = a·e−i[E·t − px]/ħ = a·e+i[px − E·t]/ħ = a·cos(px/ħ − E∙t/ħ) + i·a·sin(px/ħ − E∙t/ħ)

In one-dimensional space (think of a particle traveling along some line), the vectors (p and x) become scalars, and so we simply write:

ψ = a·e−i[E·t − p∙x]/ħ = a·e+i[p∙x − E·t]/ħ = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

This wavefunction comes with constant probabilities |ψ|2 = a2, so we need to define a space outside of which ψ = 0. Think of the particle-in-a-box model. This is obvious: oscillations pack energy, and the energy of our particle is finite. Hence, each particle – be it a photon or an electron – will pack a finite number of oscillations. It will, therefore, occupy a finite amount of space. Mathematically, this corresponds to the normalization condition: all probabilities have to add up to one, as illustrated below.

[Figure: probability in a box]

Now, all oscillations of the elementary wavefunction have the same amplitude: a. [Terminology is a bit confusing here because we use the term amplitude to refer to two very different things: we may say a is the amplitude of the (probability) amplitude ψ.] So how many oscillations do we have? What is the size of our box? Let us assume our particle is an electron, and we will reduce its motion to a one-dimensional motion only: we're thinking of it as traveling along the x-axis. We can then use the y- and z-axes as mathematical axes only: they will show us the magnitude and direction of the real and imaginary component of ψ. The animation below (for which I have to credit Wikipedia) shows what that looks like.

[Figure: wavicle animation]

Of course, we can have right- as well as left-handed particle waves because, while time physically goes by in one direction only (we can't reverse time), we can count it in two directions: 1, 2, 3, etcetera or −1, −2, −3, etcetera. In the latter case, think of time ticking away. 🙂 Of course, in our physical interpretation of the wavefunction, this should explain the (spin) angular momentum of the electron, which is – for some mysterious reason that we now understand 🙂 – always equal to ± ħ/2.

Now, because a is some constant here, we may think of our box as a cylinder along the x-axis. Now, the rest energy of an electron is about 0.511 MeV, so that's around 8.19×10−14 N∙m, so it will pack some 1.24×1020 oscillations per second. So how long is our cylinder here? To answer that question, we need to calculate the phase velocity of our wave. We'll come back to that in a moment. Just note how this compares to a photon: the energy of a photon will typically be a few electronvolt only (1 eV ≈ 1.6×10−19 N·m) and, therefore, it will pack like 1015 oscillations per second, so that's a density (in time) that is about 100,000 times less.

Back to the angular momentum. The classical formula for it is L = I·ω, so that’s angular frequency times angular mass. What’s the angular velocity here? That’s easy: ω = E/ħ. What’s the angular mass? If we think of our particle as a tiny cylinder, we may use the formula for its angular mass: I = m·r2/2. We have m: that’s the electron mass, right? Right? So what is r? That should be the magnitude of the rotating vector, right? So that’s a. Of course, the mass-energy equivalence relation tells us that E = mc2, so we can write:

L = I·ω = (m·r2/2)·(E/ħ) = (1/2)·a2·m·(mc2/ħ) = (1/2)·a2·m2·c2/ħ

Does it make sense? Maybe. Maybe not. You can check the physical dimensions on both sides of the equation, and that works out: we do get something that is expressed in N·m·s, so that's action or angular momentum units. Now, we know L must be equal to ± ħ/2. [As mentioned above, the plus or minus sign depends on the left- or right-handedness of our wavefunction, so don't worry about that.] How do we know that? Because of the Stern-Gerlach experiment, which has been repeated a zillion times, if not more. Now, if L = ħ/2, then we get the following equation for a:

a = ħ/(m·c)

This is the formula for the radius of an electron. To be precise, it is the Compton scattering radius, so that's the effective radius of an electron as determined by scattering experiments. You can calculate it: it is about 3.8616×10−13 m, so that's the picometer scale, as we would expect.

This is a rather spectacular result. As far as I am concerned, it is spectacular enough for me to actually believe my interpretation of the wavefunction makes sense.
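
For the record, a minimal sketch of that calculation (CODATA constants; the check simply solves (1/2)·a2·m2·c2/ħ = ħ/2 for a):

```python
import math

hbar = 1.054571817e-34   # J·s
c    = 299792458.0       # m/s
m_e  = 9.1093837015e-31  # kg

# Solve (1/2)·a²·m²·c²/hbar = hbar/2 for a:
a = math.sqrt(hbar**2 / (m_e * c)**2)
print(f"a = {a:.4e} m")  # 3.8616e-13 m: the Compton scattering radius

# Check that the angular momentum indeed comes out at hbar/2:
L = 0.5 * a**2 * m_e**2 * c**2 / hbar
print(f"L = {L:.4e} J·s vs hbar/2 = {hbar / 2:.4e} J·s")
```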

Let us now try to think about the length of our cylinder once again. The period of our wave is equal to T = 1/f = 2π/ω = 2π·(ħ/E) = h/E. Now, the phase velocity (vp) will be given by:

vp = λ·f = (2π/k)·(ω/2π) = ω/k = (E/ħ)/(p/ħ) = E/p = E/(m·vg) = (m·c2)/(m·vg) = c2/vg

This is very interesting, because it establishes an inverse proportionality between the group and the phase velocity of our wave, with c2 as the coefficient of inverse proportionality. In fact, this equation looks better if we write it as vp·vg = c2. Of course, the group velocity (vg) is the classical velocity of our electron. This equation shows us the idea of an electron at rest doesn't make sense: if vg = 0, then vp times zero must equal c2, which cannot be the case: electrons must move in space. More generally speaking, matter-particles must move in space, with the photon as our limiting case: it moves at the speed of light. Hence, for a photon, we find that vp = vg = E/p = c.

How can we calculate the length of a photon or an electron? It is an interesting question. The mentioned orders of magnitude of the frequency (1015 or 1020) give us the number of oscillations per second. But how many oscillations do we have in one photon, or in one electron?

Let’s first think about photons, because we have more clues here. Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. We know how to calculate to calculate the Q of these atomic oscillators (see, for example, Feynman I-32-3): it is of the order of 108, which means the wave train will last about 10–8 seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). Now, the frequency of sodium light, for example, is 0.5×1015 oscillations per second, and the decay time is about 3.2×10–8 seconds, so that makes for (0.5×1015)·(3.2×10–8) = 16 million oscillations. Now, the wavelength is 600 nanometer (600×10–9) m), so that gives us a wavetrain with a length of (600×10–9)·(16×106) = 9.6 m.

These oscillations may or may not have the same amplitude and, hence, each of these oscillations may pack a different amount of energy. However, if the total energy of our sodium light photon (i.e. about 2 eV ≈ 3.3×10–19 J) is to be packed in those oscillations, then each oscillation would pack about 2×10–26 J, on average, that is. We speculated in other posts on how we might imagine the actual wave pulse that atoms emit when going from one energy state to another, so we don't do that again here. However, the following illustration of how a transient signal dies out may be useful.

[Figure: decay time of a transient signal]

This calculation is interesting. It also gives us an interesting paradox: if a photon is a pointlike particle, how can we say its length is like 10 meter or more? Relativity theory saves us here. We need to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom. Now, because the photon travels at the speed of light, relativistic length contraction will make it look like a pointlike particle.

What about the electron? Can we use similar assumptions? For the photon, we can use the decay time to calculate the effective number of oscillations. What can we use for an electron? We will need to make some assumption about the phase velocity or – what amounts to the same – the group velocity of the particle. What formulas can we use? The p = m·v formula is the relativistically correct formula for the momentum of an object if m = mv, so that's the same m we use in the E = mc2 formula. Of course, v here is, obviously, the group velocity (vg), so that's the classical velocity of our particle. Hence, we can write:

p = m·vg = (E/c2vg ⇔ vg = p/m =  p·c2/E

This is just another way of writing that vg = c2/vp or vp = c2/vg so it doesn’t help, does it? Maybe. Maybe not. Let us substitute in our formula for the wavelength:

λ = vp/f = vp·T = vp⋅(h/E) = (c2/vg)·(h/E) = h/(m·vg) = h/p 

This gives us the other de Broglie relation: λ = h/p. This doesn't help us much, although it is interesting to think about it. The f = E/h relation is somewhat intuitive: higher energy, higher frequency. In contrast, what the λ = h/p relation tells us is that we get an infinite wavelength if the momentum becomes really small. What does this tell us? I am not sure. Frankly, I've looked at the second de Broglie relation like a zillion times now, and I think it's rubbish. It's meant to be used for the group velocity, I feel. I am saying that because we get a nonsensical energy formula out of it. Look at this:

  1. E = h·f and p = h/λ. Therefore, f = E/h and λ = h/p.
  2. v = λ·f = (h/p)∙(E/h) = E/p
  3. p = m·v. Therefore, E = v·p = m·v2

E = m·v2? This formula is only correct if v = c, in which case it becomes the E = mc2 equation. So it then describes a photon, or a massless matter-particle which… Well… That's a contradictio in terminis. 🙂 In all other cases, we get nonsense.

Let’s try something differently.  If our particle is at rest, then p = 0 and the p·x/ħ term in our wavefunction vanishes, so it’s just:

ψ = a·e−i·E·t/ħ = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ)

Hence, our wave doesn’t travel. It has the same amplitude at every point in space at any point in time. Both the phase and group velocity become meaningless concepts. The amplitude varies – because of the sine and cosine – but the probability remains the same: |ψ|2  = a2. Hmm… So we need to find another way to define the size of our box. One of the formulas I jotted down in my paper in which I analyze the wavefunction as a gravitational wave was this one:F1

It was a physical normalization condition: the energy contributions of the waves that make up a wave packet need to add up to the total energy of our wave. Of course, for our elementary wavefunction here, the subscripts vanish and so the formula reduces to E = (E/c2)·a2·(E2/ħ2), out of which we get our formula for the scattering radius: a = c·ħ/E = ħ/(m·c). Now how do we pack that energy in our cylinder? Assuming that energy is distributed uniformly, we're tempted to write something like E = a2·l or, looking at the geometry of the situation:

E = π·a2·l ⇔ l = E/(π·a2)

It’s just the formula for the volume of a cylinder. Using the value we got for the Compton scattering radius (= 3.8616×10−13 m), we find an l that’s equal to (8.19×10−14)/(π·14.9×10−26) =≈ 0.175×1012Meter? Yes. We get the following formula:

l = E/(π·a2) = m3·c4/(π·ħ2)

0.175×1012 m is 175 million kilometer. That’s – literally – astronomic. It corresponds to 583 light-seconds, or 9.7 light-minutes. So that’s about 1.17 times the (average) distance between the Sun and the Earth. You can see that we do need to build a wave packet: that space is a bit too large to look for an electron, right? 🙂
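
For what it's worth, these numbers can be reproduced in a few lines (a sketch of the speculative E = π·a2·l condition above, which – as noted – is dimensionally loose; the AU value is the standard one):

```python
import math

hbar = 1.054571817e-34    # J·s
c    = 299792458.0        # m/s
m_e  = 9.1093837015e-31   # kg
AU   = 1.495978707e11     # astronomical unit (m)

a = hbar / (m_e * c)      # Compton radius (m)
E = m_e * c**2            # rest energy (J)
l = E / (math.pi * a**2)  # 'length' from the speculative E = pi·a²·l condition

print(f"l = {l:.3e} m = {l / c:.0f} light-seconds = {l / AU:.2f} AU")
```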

Could we possibly get some less astronomic proportions? What if we impose that l should equal a? We get the following condition:

l = a ⇔ m = (π·ħ3/c5)1/4

We find that m would have to be equal to m ≈ 1.11×10−36 kg. That's tiny. In fact, it's equivalent to an energy of about 0.623 eV (which you'll see written as 623 milli-eV). This corresponds to light with a wavelength of about 2 micrometer (μm), so that's in the infrared spectrum. It's a funny formula: we find, basically, that the l/a ratio is proportional to m4. Hmm… What should we think of this? If you have any ideas, let me know !
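
The l = a condition can be checked numerically too (again, just a sketch of the speculative formula above):

```python
import math

hbar = 1.054571817e-34   # J·s
h    = 6.62607015e-34    # J·s
c    = 299792458.0       # m/s
eV   = 1.602176634e-19   # J

m = (math.pi * hbar**3 / c**5) ** 0.25   # mass for which l = a
E = m * c**2                             # equivalent energy
print(f"m = {m:.3e} kg, E = {E / eV:.3f} eV")        # ~1.11e-36 kg, ~0.623 eV
print(f"lambda = {h * c / E * 1e6:.2f} micrometer")  # ~2 micrometer (infrared)
```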

Post scriptum (3 October 2017): The paper is going well. Getting lots of downloads, and the views on my blog are picking up too. But I have been vicious. Substituting B for (1/c)∙iE or for −(1/c)∙iE implies a very specific choice of reference frame. The imaginary unit is a two-dimensional concept: it only makes sense when giving it a plane view. Literally. Indeed, my formulas assume the i (or −i) plane is perpendicular to the direction of propagation of the elementary quantum-mechanical wavefunction. So… Yes. The need for rotation matrices is obvious. But my physical interpretation of the wavefunction stands. 🙂

Wavefunctions as gravitational waves

This is the paper I always wanted to write. It is there now, and I think it is good – and that‘s an understatement. 🙂 It is probably best to download it as a pdf-file from the viXra.org site because this was a rather fast ‘copy and paste’ job from the Word version of the paper, so there may be issues with boldface notation (vector notation), italics and, most importantly, with formulas – which I, sadly, have to ‘snip’ into this WordPress blog, as they don’t have an easy copy function for mathematical formulas.

It’s great stuff. If you have been following my blog – and many of you have – you will want to digest this. 🙂

Abstract: This paper explores the implications of associating the components of the wavefunction with a physical dimension: force per unit mass – which is, of course, the dimension of acceleration (m/s2) and gravitational fields. The classical electromagnetic field equations for energy densities, the Poynting vector and spin angular momentum are then re-derived by substituting the electromagnetic N/C unit of field strength (force per unit charge) by the new N/kg = m/s2 dimension.

The results are elegant and insightful. For example, the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities, which establishes a physical normalization condition. Also, Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy, and the wavefunction itself can be interpreted as a propagating gravitational wave. Finally, as an added bonus, concepts such as the Compton scattering radius for a particle, spin angular momentum, and the boson-fermion dichotomy, can also be explained more intuitively.

While the approach offers a physical interpretation of the wavefunction, the author argues that the core of the Copenhagen interpretation revolves around the complementarity principle, which remains unchallenged because the interpretation of amplitude waves as traveling fields does not explain the particle nature of matter.

Introduction

This is not another introduction to quantum mechanics. We assume the reader is already familiar with the key principles and, importantly, with the basic math. We offer an interpretation of wave mechanics. As such, we do not challenge the complementarity principle: the physical interpretation of the wavefunction that is offered here explains the wave nature of matter only. It explains diffraction and interference of amplitudes but it does not explain why a particle will hit the detector not as a wave but as a particle. Hence, the Copenhagen interpretation of the wavefunction remains relevant: we just push its boundaries.

The basic ideas in this paper stem from a simple observation: the geometry of the quantum-mechanical wavefunction and that of an electromagnetic wave are remarkably similar. The components of both waves are orthogonal to the direction of propagation and to each other. Only the relative phase differs: the electric and magnetic field vectors (E and B) have the same phase. In contrast, the phase of the real and imaginary part of the (elementary) wavefunction (ψ = a·e−i∙θ = a∙cosθ − i·a∙sinθ) differ by 90 degrees (π/2).[1] Pursuing the analogy, we explore the following question: if the oscillating electric and magnetic field vectors of an electromagnetic wave carry the energy that one associates with the wave, can we analyze the real and imaginary part of the wavefunction in a similar way?

We show the answer is positive and remarkably straightforward.  If the physical dimension of the electromagnetic field is expressed in newton per coulomb (force per unit charge), then the physical dimension of the components of the wavefunction may be associated with force per unit mass (newton per kg).[2] Of course, force over some distance is energy. The question then becomes: what is the energy concept here? Kinetic? Potential? Both?

The similarity between the energy of a (one-dimensional) linear oscillator (E = m·a2·ω2/2) and Einstein’s relativistic energy equation E = m∙c2 inspires us to interpret the energy as a two-dimensional oscillation of mass. To assist the reader, we construct a two-piston engine metaphor.[3] We then adapt the formula for the electromagnetic energy density to calculate the energy densities for the wave function. The results are elegant and intuitive: the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities. Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy itself.

As an added bonus, concepts such as the Compton scattering radius for a particle and spin angular momentum, as well as the boson-fermion dichotomy, can be explained in a fully intuitive way.[4]

Of course, such interpretation is also an interpretation of the wavefunction itself, and the immediate reaction of the reader is predictable: the electric and magnetic field vectors are, somehow, to be looked at as real vectors. In contrast, the real and imaginary components of the wavefunction are not. However, this objection needs to be phrased more carefully. First, it may be noted that, in a classical analysis, the magnetic field vector is a pseudovector itself.[5] Second, a suitable choice of coordinates may make quantum-mechanical rotation matrices irrelevant.[6]

Therefore, the author is of the opinion that this little paper may provide some fresh perspective on the question, thereby further exploring Einstein’s basic sentiment in regard to quantum mechanics, which may be summarized as follows: there must be some physical explanation for the calculated probabilities.[7]

We will, therefore, start with Einstein’s relativistic energy equation (E = mc2) and wonder what it could possibly tell us. 

I. Energy as a two-dimensional oscillation of mass

The structural similarity between the relativistic energy formula, the formula for the total energy of an oscillator, and the kinetic energy of a moving body, is striking:

  1. E = mc2
  2. E = m·a2·ω2/2
  3. E = mv2/2

In these formulas, ω, v and c all describe some velocity.[8] Of course, there is the 1/2 factor in the E = m·a2·ω2/2 formula[9], but that is exactly the point we are going to explore here: can we think of an oscillation in two dimensions, so it stores an amount of energy that is equal to E = 2·m·a2·ω2/2 = m·a2·ω2?

That is easy enough. Think, for example, of a V-2 engine with the pistons at a 90-degree angle, as illustrated below. The 90° angle makes it possible to perfectly balance the counterweight and the pistons, thereby ensuring smooth travel at all times. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down and provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring. Hence, we can describe it by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs.

Figure 1: Oscillations in two dimensions [V-2 engine]

If we assume there is no friction, we have a perpetuum mobile here. The compressed air and the rotating counterweight (which, combined with the crankshaft, acts as a flywheel[10]) store the potential energy. The moving masses of the pistons store the kinetic energy of the system.[11]

At this point, it is probably good to quickly review the relevant math. If the magnitude of the oscillation is equal to a, then the motion of the piston (or the mass on a spring) will be described by x = a·cos(ω·t + Δ).[12] Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t–π/2) = a·sin(ω·t).

The kinetic and potential energy of one oscillator (think of one piston or one spring only) can then be calculated as:

  1. K.E. = T = m·v2/2 = (1/2)·m·ω2·a2·sin2(ω·t + Δ)
  2. P.E. = U = k·x2/2 = (1/2)·k·a2·cos2(ω·t + Δ)

The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω2. Hence, the total energy is equal to:

E = T + U = (1/2)·m·ω2·a2·[sin2(ω·t + Δ) + cos2(ω·t + Δ)] = m·a2·ω2/2

To facilitate the calculations, we will briefly assume k = m·ω2 and a are equal to 1. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin2θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be equal to:

d(sin2θ)/dθ = 2∙sinθ∙d(sinθ)/dθ = 2∙sinθ∙cosθ

Let us look at the second oscillator now. Just think of the second piston going up and down in the V-2 engine. Its motion is given by the sinθ function, which is equal to cos(θ−π/2). Hence, its kinetic energy is equal to sin2(θ−π/2), and how it changes – as a function of θ – will be equal to:

2∙sin(θ−π/2)∙cos(θ−π/2) = −2∙cosθ∙sinθ = −2∙sinθ∙cosθ

We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the crankshaft will rotate with a constant angular velocity: linear motion becomes circular motion, and vice versa, and the total energy that is stored in the system is T + U = m·a2·ω2.
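
The energy bookkeeping of the metaphor is easy to verify numerically: at any instant, the kinetic and potential energies of the two pistons add up to the same total m·a2·ω2. A minimal sketch, in arbitrary units:

```python
import math

m, a, omega = 1.0, 1.0, 1.0          # arbitrary units
k = m * omega**2                     # restoring-force coefficient

for t in [0.0, 0.3, 0.7, 1.1]:
    theta = omega * t
    # Piston 1 moves as a·cos(theta), piston 2 as a·sin(theta):
    T1 = 0.5 * m * (a * omega * math.sin(theta))**2   # kinetic energy, piston 1
    U1 = 0.5 * k * (a * math.cos(theta))**2           # potential energy, piston 1
    T2 = 0.5 * m * (a * omega * math.cos(theta))**2   # kinetic energy, piston 2
    U2 = 0.5 * k * (a * math.sin(theta))**2           # potential energy, piston 2
    print(f"theta = {theta:.1f}: T + U = {T1 + U1 + T2 + U2:.6f}")  # always m·a²·ω² = 1
```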

We have a great metaphor here. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. We know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? Should we think of the c in our E = mc2 formula as an angular velocity?

These are sensible questions. Let us explore them. 

II. The wavefunction as a two-dimensional oscillation

The elementary wavefunction is written as:

ψ = a·e−i[E·t − px]/ħ = a·e+i[px − E·t]/ħ = a·cos(px/ħ − E∙t/ħ) + i·a·sin(px/ħ − E∙t/ħ)

When considering a particle at rest (p = 0) this reduces to:

ψ = a·e−i∙E·t/ħ = a·cos(−E∙t/ħ) + i·a·sin(−E∙t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ)

Let us remind ourselves of the geometry involved, which is illustrated below. Note that the argument of the wavefunction rotates clockwise with time, while the mathematical convention for measuring the phase angle (ϕ) is counter-clockwise.

Figure 2: Euler's formula

If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and px/ħ reduces to p∙x/ħ. Most illustrations – such as the one below – will either freeze x or, else, t. Alternatively, one can google web animations varying both. The point is: we also have a two-dimensional oscillation here. These two dimensions are perpendicular to the direction of propagation of the wavefunction. For example, if the wavefunction propagates in the x-direction, then the oscillations are along the y- and z-axis, which we may refer to as the real and imaginary axis. Note how the phase difference between the cosine and the sine – the real and imaginary part of our wavefunction – appears to give some spin to the whole. I will come back to this.

Figure 3: Geometric representation of the wavefunction

Hence, if we would say these oscillations carry half of the total energy of the particle, then we may refer to the real and imaginary energy of the particle respectively, and the interplay between the real and the imaginary part of the wavefunction may then describe how energy propagates through space over time.

Let us consider, once again, a particle at rest. Hence, p = 0 and the (elementary) wavefunction reduces to ψ = a·ei∙E·t/ħ. Hence, the angular velocity of both oscillations, at some point x, is given by ω = -E/ħ. Now, the energy of our particle includes all of the energy – kinetic, potential and rest energy – and is, therefore, equal to E = mc2.

Can we, somehow, relate this to the m·a2·ω2 energy formula for our V-2 perpetuum mobile? Our wavefunction has an amplitude too. Now, if the oscillations of the real and imaginary wavefunction store the energy of our particle, then their amplitude will surely matter. In fact, the energy of an oscillation is, in general, proportional to the square of the amplitude: E ∝ a2. We may, therefore, think that the a2 factor in the E = m·a2·ω2 energy formula will surely be relevant as well.

However, here is a complication: an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. To calculate the contribution of each wave to the total, both ai as well as Ei will matter.

What is Ei? Ei varies around some average E, which we can associate with some average mass m: m = E/c2. The Uncertainty Principle kicks in here. The analysis becomes more complicated, but a formula such as the one below might make sense:

E = Σi (Ei/c2)·ai2·(Ei/ħ)2 = Σi ai2·Ei3/(c2·ħ2)

We can re-write this as:

c2·ħ2·E = Σi ai2·Ei3

What is the meaning of this equation? We may look at it as some sort of physical normalization condition when building up the Fourier sum. Of course, we should relate this to the mathematical normalization condition for the wavefunction. Our intuition tells us that the probabilities must be related to the energy densities, but how exactly? We will come back to this question in a moment. Let us first think some more about the enigma: what is mass?

Before we do so, let us quickly calculate the value of c2ħ2: it is about 1×10−51 N2∙m4. Let us also do a dimensional analysis: the physical dimensions of the E = m·a2·ω2 equation make sense if we express m in kg, a in m, and ω in rad/s. We then get: [E] = kg∙m2/s2 = (N∙s2/m)∙m2/s2 = N∙m = J. The dimensions of the left- and right-hand side of the physical normalization condition are N3∙m5.

III. What is mass?

We came up, playfully, with a meaningful interpretation for energy: it is a two-dimensional oscillation of mass. But what is mass? A new aether theory is, of course, not an option, but then what is it that is oscillating? To understand the physics behind equations, it is always good to do an analysis of the physical dimensions in the equation. Let us start with Einstein’s energy equation once again. If we want to look at mass, we should re-write it as m = E/c2:

[m] = [E/c2] = J/(m/s)2 = N·m∙s2/m2 = N·s2/m = kg

This is not very helpful. It only reminds us of Newton's definition of a mass: mass is that what gets accelerated by a force. At this point, we may want to think of the physical significance of the absolute nature of the speed of light. Einstein's E = mc2 equation implies that the ratio between the energy and the mass of any particle is always the same, so we can write, for example:

c2 = E/m

This reminds us of the ω2 = 1/(L·C) or ω2 = k/m formulas for harmonic oscillators once again.[13] The key difference is that the ω2 = 1/(L·C) and ω2 = k/m formulas introduce two or more degrees of freedom.[14] In contrast, c2 = E/m is the same for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light c emerges here as the defining property of spacetime – the resonant frequency, so to speak. We have no further degrees of freedom here.

 

The Planck-Einstein relation (for photons) and the de Broglie equation (for matter-particles) have an interesting feature: both imply that the energy of the oscillation is proportional to the frequency, with Planck’s constant as the constant of proportionality. Now, for one-dimensional oscillations – think of a guitar string, for example – we know the energy will be proportional to the square of the frequency. It is a remarkable observation: the two-dimensional matter-wave, or the electromagnetic wave, gives us two waves for the price of one, so to speak, each carrying half of the total energy of the oscillation but, as a result, we get a proportionality between E and f instead of between E and f2.

However, such reflections do not answer the fundamental question we started out with: what is mass? At this point, it is hard to go beyond the circular definition that is implied by Einstein's formula: energy is a two-dimensional oscillation of mass, and mass packs energy, and c emerges as the property of spacetime that defines how exactly.

When everything is said and done, this does not go beyond stating that mass is some scalar field. Now, a scalar field is, quite simply, some real number that we associate with a position in spacetime. The Higgs field is a scalar field but, of course, the theory behind it goes much beyond stating that we should think of mass as some scalar field. The fundamental question is: why and how does energy, or matter, condense into elementary particles? That is what the Higgs mechanism is about but, as this paper is exploratory only, we cannot even start explaining the basics of it.

What we can do, however, is look at the wave equation again (Schrödinger’s equation), as we can now analyze it as an energy diffusion equation. 

IV. Schrödinger’s equation as an energy diffusion equation

The interpretation of Schrödinger’s equation as a diffusion equation is straightforward. Feynman (Lectures, III-16-1) briefly summarizes it as follows:

“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”[17]

Let us review the basic math. For a particle moving in free space – with no external force fields acting on it – there is no potential (U = 0) and, therefore, the Uψ term disappears. Hence, Schrödinger's equation reduces to:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇2ψ(x, t)

The ubiquitous diffusion equation in physics is:

∂φ(x, t)/∂t = D·∇2φ(x, t)

The structural similarity is obvious. The key difference between both equations is that the wave equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations[18]:

  1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇2ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇2ψ)

These equations make us think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = c2·∇×B

The above equations effectively describe a propagation mechanism in spacetime, as illustrated below.

Figure 4: Propagation mechanisms
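
Before moving on, we can verify numerically that the elementary wavefunction indeed satisfies the pair of equations above, provided ω and k respect the ω = ħ·k2/(2meff) dispersion relation. A minimal sketch (free space, so meff = m; the k value and the sample point are arbitrary):

```python
import math

hbar = 1.054571817e-34   # J·s
m    = 9.1093837015e-31  # electron mass (kg), playing the role of m_eff in free space
k    = 1.0e10            # arbitrary wavenumber (rad/m)
omega = hbar * k**2 / (2 * m)   # dispersion relation

x, t  = 1.0e-10, 1.0e-16        # arbitrary sample point
phase = k * x - omega * t

# psi = cos(phase) + i·sin(phase); the derivatives are written out analytically:
re_dpsi_dt = omega * math.sin(phase)    # Re(dpsi/dt)
im_dpsi_dt = -omega * math.cos(phase)   # Im(dpsi/dt)
re_lap     = -k**2 * math.cos(phase)    # Re of the Laplacian of psi
im_lap     = -k**2 * math.sin(phase)    # Im of the Laplacian of psi

print(re_dpsi_dt, -0.5 * (hbar / m) * im_lap)  # equation 1: both sides equal
print(im_dpsi_dt, 0.5 * (hbar / m) * re_lap)   # equation 2: both sides equal
```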

The Laplacian operator (∇2), when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m2). In this case, it is operating on ψ(x, t), so what is the dimension of our wavefunction ψ(x, t)? To answer that question, we should analyze the diffusion constant in Schrödinger’s equation, i.e. the (1/2)·(ħ/meff) factor:

  1. As a mathematical constant of proportionality, it will quantify the relationship between both derivatives (i.e. the time derivative and the Laplacian);
  2. As a physical constant, it will ensure the physical dimensions on both sides of the equation are compatible.

Now, the ħ/meff factor is expressed in (N·m·s)/(N·s2/m) = m2/s. Hence, it does ensure the dimensions on both sides of the equation are, effectively, the same: ∂ψ/∂t is a time derivative and, therefore, its dimension is s−1 while, as mentioned above, the ∇2 operator gives us something per square meter (1/m2). However, this does not solve our basic question: what is the dimension of the real and imaginary part of our wavefunction?

At this point, mainstream physicists will say: it does not have a physical dimension, and there is no geometric interpretation of Schrödinger’s equation. One may argue, effectively, that its argument, (px – E∙t)/ħ, is just a number and, therefore, that the real and imaginary part of ψ is also just some number.

To this, we may object that ħ may be looked at as a mathematical scaling constant only. If we do that, then the argument of ψ will, effectively, be expressed in action units, i.e. in N·m·s. It then does make sense to also associate a physical dimension with the real and imaginary part of ψ. What could it be?

We may have a closer look at Maxwell’s equations for inspiration here. The electric field vector is expressed in newton (the unit of force) per unit of charge (coulomb). Now, there is something interesting here. The physical dimension of the magnetic field is N/C divided by m/s.[19] We may write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, we may boldly write: B = (1/c)∙ex×E = (1/c)∙i∙E. This allows us to also geometrically interpret Schrödinger’s equation in the way we interpreted it above (see Figure 3).[20]

Still, we have not answered the question as to what the physical dimension of the real and imaginary part of our wavefunction should be. At this point, we may be inspired by the structural similarity between Newton’s and Coulomb’s force laws:

F = G·M·m/r2 and F = (1/4πε0)·q1·q2/r2

Hence, if the electric field vector E is expressed in force per unit charge (N/C), then we may want to think of associating the real part of our wavefunction with a force per unit mass (N/kg). We can, of course, do a substitution here, because the mass unit (1 kg) is equivalent to 1 N·s2/m. Hence, our N/kg dimension becomes:

N/kg = N/(N·s2/m)= m/s2

What is this: m/s2? Is that the dimension of the a·cosθ term in the a·e−i·θ = a·cosθ − i·a·sinθ wavefunction?

My answer is: why not? Think of it: m/s2 is the physical dimension of acceleration: the increase or decrease in velocity (m/s) per second. It ensures the wavefunction for any particle – matter-particles or particles with zero rest mass (photons) – and the associated wave equation (which has to be the same for all, as the spacetime we live in is one) are mutually consistent.

In this regard, we should think of how we would model a gravitational wave. The physical dimension would surely be the same: force per mass unit. It all makes sense: wavefunctions may, perhaps, be interpreted as traveling distortions of spacetime, i.e. as tiny gravitational waves.

V. Energy densities and flows

Pursuing the geometric equivalence between the equations for an electromagnetic wave and Schrödinger’s equation, we can now, perhaps, see if there is an equivalent for the energy density. For an electromagnetic wave, we know that the energy density is given by the following formula:

u = (ε0/2)·(E2 + c2·B2)

E and B are the electric and magnetic field vector respectively. The Poynting vector will give us the directional energy flux, i.e. the energy flow per unit area per unit time. We write:

−∂u/∂t = ∇∙S, with S = ε0·c2·E×B

Needless to say, the ∇∙ operator is the divergence and, therefore, gives us the magnitude of a (vector) field’s source or sink at a given point. To be precise, the divergence gives us the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. In this case, it gives us the volume density of the flux of S.

We can analyze the dimensions of the equation for the energy density as follows:

  1. E is measured in newton per coulomb, so [EE] = [E2] = N2/C2.
  2. B is measured in (N/C)/(m/s), so we get [BB] = [B2] = (N2/C2)·(s2/m2). However, the dimension of our c2 factor is (m2/s2) and so we’re also left with N2/C2.
  3. The ϵ0 is the electric constant, aka the vacuum permittivity. As a physical constant, it should ensure the dimensions on both sides of the equation work out, and they do: [ε0] = C2/(N·m2) and, therefore, if we multiply that with N2/C2, we find that u is expressed in J/m3.[21]

Replacing the newton per coulomb unit (N/C) by the newton per kg unit (N/kg) in the formulas above should give us the equivalent of the energy density for the wavefunction. We just need to substitute ϵ0 for an equivalent constant. We may want to give it a try. If the energy densities can be calculated – and they are also mass densities, obviously – then the probabilities should be proportional to them.

Let us first see what we get for a photon, assuming the electromagnetic wave represents its wavefunction. Substituting B for (1/c)∙i∙E or for −(1/c)∙i∙E gives us the following result:

u = (ε0/2)·[E2 + c2·((1/c)·i·E)2] = (ε0/2)·(E2 − E2) = 0

Zero!? An unexpected result! Or not? We have no stationary charges and no currents: only an electromagnetic wave in free space. Hence, the local energy conservation principle needs to be respected at all points in space and in time. The geometry makes sense of the result: for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously, as shown below.[22] This is because their phase is the same.

Figure 5: Electromagnetic wave: E and B
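For readers who prefer to see the cancellation done symbolically, here is a minimal sketch (Python, using sympy). Note that it mimics the bold B = (1/c)∙i∙E move literally: it uses the plain square, as in the energy density formula above, not the absolute square:

```python
import sympy as sp

a, theta, c = sp.symbols('a theta c', positive=True)
E = a * sp.exp(sp.I * theta)   # represent E as a rotating arrow of magnitude a
B = (sp.I / c) * E             # the bold substitution B = (1/c)·i·E

# The energy density, up to the ε0/2 factor, using plain squares
u = E**2 + c**2 * B**2
print(sp.simplify(u))          # prints 0: the E² and c²·B² terms cancel exactly
```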

Should we expect a similar result for the energy densities that we would associate with the real and imaginary part of the matter-wave? For the matter-wave, we have a phase difference between a·cosθ and a·sinθ, which gives a different picture of the propagation of the wave (see Figure 3).[23] In fact, the geometry suggests some inherent spin, which is interesting. I will come back to this. Let us first guess those densities. Making abstraction of any scaling constants, we may write:

u = a2·cos2θ + a2·sin2θ = a2

We get what we hoped to get: the absolute square of our amplitude is, effectively, an energy density!

|ψ|2 = |a·e−i∙E·t/ħ|2 = a2 = u

This is very deep. A photon has no rest mass, so it borrows and returns energy from empty space as it travels through it. In contrast, a matter-wave carries energy and, therefore, has some (rest) mass. It is therefore associated with an energy density, and this energy density gives us the probabilities. Of course, we need to fine-tune the analysis to account for the fact that we have a wave packet rather than a single wave, but that should be feasible.

As mentioned, the phase difference between the real and imaginary part of our wavefunction (a cosine and a sine function) appears to give some spin to our particle. We do not have this particularity for a photon. Of course, photons are bosons, i.e. integer-spin particles, while elementary matter-particles are fermions, with half-integral spin. Hence, our geometric interpretation of the wavefunction suggests that, after all, there may be some more intuitive explanation of the fundamental dichotomy between bosons and fermions, which puzzled even Feynman:

“Why is it that particles with half-integral spin are Fermi particles, whereas particles with integral spin are Bose particles? We apologize for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved.” (Feynman, Lectures, III-4-1)

The physical interpretation of the wavefunction, as presented here, may provide some better understanding of ‘the fundamental principle involved’: the physical dimension of the oscillation is just very different. That is all: it is force per unit charge for photons, and force per unit mass for matter-particles. We will examine the question of spin somewhat more carefully in section VII. Let us first examine the matter-wave some more. 

VI. Group and phase velocity of the matter-wave

The geometric representation of the matter-wave (see Figure 3) suggests a traveling wave and, yes, of course: the matter-wave effectively travels through space and time. But what is traveling, exactly? It is the pulse – or the signal – only: the phase velocity of the wave is just a mathematical concept and, even in our physical interpretation of the wavefunction, the same is true for the group velocity of our wave packet. The oscillation is two-dimensional, but perpendicular to the direction of travel of the wave. Hence, nothing actually moves with our particle.

Here, we should also reiterate that we did not answer the question as to what is oscillating up and down and/or sideways: we only associated a physical dimension with the components of the wavefunction – newton per kg (force per unit mass), to be precise. We were inspired to do so because of the physical dimension of the electric and magnetic field vectors (newton per coulomb, i.e. force per unit charge) we associate with electromagnetic waves which, for all practical purposes, we currently treat as the wavefunction for a photon. This made it possible to calculate the associated energy densities and a Poynting vector for energy dissipation. In addition, we showed that Schrödinger’s equation itself then becomes a diffusion equation for energy. However, let us now focus some more on the asymmetry which is introduced by the phase difference between the real and the imaginary part of the wavefunction. Look at the mathematical shape of the elementary wavefunction once again:

ψ = a·e−i[E·t − p∙x]/ħ = a·ei[p∙x − E·t]/ħ = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

The minus sign in the argument of our sine and cosine function defines the direction of travel: an F(x−v∙t) wavefunction will always describe some wave that is traveling in the positive x-direction (with the wave velocity), while an F(x+v∙t) wavefunction will travel in the negative x-direction. For a geometric interpretation of the wavefunction in three dimensions, we need to agree on how to define i or, what amounts to the same, a convention on how to define clockwise and counterclockwise directions: if we look at a clock from the back, then its hand will be moving counterclockwise. So we need to establish the equivalent of the right-hand rule. However, let us not worry about that now. Let us focus on the interpretation. To ease the analysis, we’ll assume we’re looking at a particle at rest. Hence, p = 0, and the wavefunction reduces to:

ψ = a·e−i∙E0·t/ħ = a·cos(−E0∙t/ħ) + i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)

E0 is, of course, the rest energy of our particle and, now that we are here, we should probably wonder whose time we are talking about: is it our time, or is it the proper time of our particle? Well… In this situation, we are both at rest so it does not matter: t is, effectively, the proper time so perhaps we should write it as t0. It does not matter. You can see what we expect to see: E0/ħ pops up as the natural frequency of our matter-particle: (E0/ħ)∙t = ω∙t. Remembering the ω = 2π·f = 2π/T and T = 1/f formulas, we can associate a period and a frequency with this wave. Noting that ħ = h/2π, we find the following:

T = 2π·(ħ/E0) = h/E0 ⇔ f = E0/h = m0c2/h
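As an aside, the numbers involved are easily calculated. A sketch for the electron, using the standard scipy.constants values:

```python
from scipy.constants import h, m_e, c

E0 = m_e * c**2   # rest energy of the electron: ≈ 8.19e-14 J
f = E0 / h        # natural frequency f = E0/h: ≈ 1.24e20 oscillations per second
T = 1 / f         # natural unit of time T = h/E0: ≈ 8.1e-21 s
print(f, T)
```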

This is interesting, because we can look at the period as a natural unit of time for our particle. What about the wavelength? That is tricky because we need to distinguish between group and phase velocity here. The group velocity (vg) should be zero here, because we assume our particle does not move. In contrast, the phase velocity is given by vp = λ·f = (2π/k)·(ω/2π) = ω/k. In fact, we’ve got something funny here: the wavenumber k = p/ħ is zero, because we assume the particle is at rest, so p = 0. So we have a division by zero here, which is rather strange. What do we get assuming the particle is not at rest? We write:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = E/(m·vg) = (m·c2)/(m·vg) = c2/vg

This is interesting: it establishes a reciprocal relation between the phase and the group velocity, with c as a simple scaling constant. Indeed, the graph below shows the shape of the function does not change with the value of c, and we may also re-write the relation above as:

vp/c = βp = 1/βg = c/vg

Figure 6: Reciprocal relation between phase and group velocity

We can also write the mentioned relationship as vp·vg = c2, which reminds us of the relationship between the electric and magnetic constants: 1/(ε0·μ0) = c2. This is interesting in light of the fact that we can re-write this as (c·ε0)·(c·μ0) = 1, which shows electricity and magnetism are just two sides of the same coin, so to speak.[24]
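The graph in Figure 6 is easily reproduced. A sketch (Python with matplotlib), working in units of c, so the shape is all that matters:

```python
import numpy as np
import matplotlib.pyplot as plt

beta_g = np.linspace(0.05, 1.0, 200)  # group velocity as a fraction of c
beta_p = 1 / beta_g                   # phase velocity from (v_p/c)·(v_g/c) = 1

plt.plot(beta_g, beta_p)
plt.xlabel('v_g/c')
plt.ylabel('v_p/c')
plt.title('Reciprocal relation between phase and group velocity')
plt.show()
```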

Interesting, but how do we interpret the math? What about the implications of the zero value for the wavenumber k = p/ħ? We would probably like to think it implies the elementary wavefunction should always be associated with some momentum, because the concept of zero momentum clearly leads to weird math: vp·vg = c2 cannot hold if vg is zero! Such an interpretation is also consistent with the Uncertainty Principle: if Δx·Δp ≥ ħ, then neither Δx nor Δp can be zero. In other words, the Uncertainty Principle tells us that the idea of a pointlike particle actually being at some specific point in time and in space does not make sense: it has to move. It tells us that our concepts of dimensionless points in time and space are mathematical notions only. Actual particles – including photons – are always a bit spread out, so to speak, and – importantly – they have to move.

For a photon, this is self-evident. It has no rest mass, no rest energy, and, therefore, it is going to move at the speed of light itself. We write: p = m·c = m·c2/c = E/c. Using the relationship above, we get:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = c ⇒ vg = c2/vp = c2/c = c

This is good: we started out with some reflections on the matter-wave, but here we get an interpretation of the electromagnetic wave as a wavefunction for the photon. But let us get back to our matter-wave. In regard to our interpretation of a particle having to move, we should remind ourselves, once again, of the fact that an actual particle is always localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e−i[E·t − p∙x]/ħ or, for a particle at rest, the ψ = a·e−i∙E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Indeed, in section II, we showed that each of these wavefunctions will contribute some energy to the total energy of the wave packet and that, to calculate the contribution of each wave to the total, both ai as well as Ei matter. This may or may not resolve the apparent paradox. Let us look at the group velocity.

To calculate a meaningful group velocity, we must assume that the derivative vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂Ei/∂pi exists. So we must have some dispersion relation. How do we calculate it? We need to calculate ωi as a function of ki here, or Ei as a function of pi. How do we do that? Well… There are a few ways to go about it but one interesting way of doing it is to re-write Schrödinger’s equation as we did, i.e. by distinguishing the real and imaginary parts of the ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ wave equation and, hence, re-write it as the following pair of two equations:

  1. Re(∂ψ/∂t) = −[ħ/(2meff)]·Im(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2meff)]·sin(kx − ωt)
  2. Im(∂ψ/∂t) = [ħ/(2meff)]·Re(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2meff)]·cos(kx − ωt)

Both equations imply the following dispersion relation:

ω = ħ·k2/(2meff)
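We can let a computer algebra system do the same work. The sketch below (Python with sympy) plugs the elementary wavefunction into the free-space equation and solves for ω; the single complex equation packs the two real equations above, so one solve suffices:

```python
import sympy as sp

x, t, k, w = sp.symbols('x t k omega', real=True)
hbar, m = sp.symbols('hbar m_eff', positive=True)

psi = sp.exp(sp.I * (k * x - w * t))  # elementary wavefunction, with a = 1

# Free-space Schrödinger equation in one dimension: ∂ψ/∂t = i·(ħ/2m)·∂²ψ/∂x²
residual = sp.diff(psi, t) - sp.I * (hbar / (2 * m)) * sp.diff(psi, x, 2)
print(sp.solve(sp.simplify(residual / psi), w))  # [hbar*k**2/(2*m_eff)]
```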

Of course, we need to think about the subscripts now: we have ωi, ki, but… What about meff or, dropping the subscript, m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c2. It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein’s mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too. Here, I should refer back to Section II: Ei varies around some average energy E and, therefore, the Uncertainty Principle kicks in. 

VII. Explaining spin

The elementary wavefunction vector – i.e. the vector sum of the real and imaginary component – rotates around the x-axis, which gives us the direction of propagation of the wave (see Figure 3). Its magnitude remains constant. In contrast, the magnitude of the electromagnetic vector – defined as the vector sum of the electric and magnetic field vectors – oscillates between zero and some maximum (see Figure 5).

We already mentioned that the rotation of the wavefunction vector appears to give some spin to the particle. Of course, a circularly polarized wave would also appear to have spin (think of the E and B vectors rotating around the direction of propagation – as opposed to oscillating up and down or sideways only). In fact, circularly polarized light does carry angular momentum, as the equivalent mass of its energy may be thought of as rotating as well. But so here we are looking at a matter-wave.

The basic idea is the following: if we look at ψ = a·e−i∙E·t/ħ as some real vector – as a two-dimensional oscillation of mass, to be precise – then we may associate its rotation around the direction of propagation with some torque. The illustration below reminds us of the math here.

Figure 7: Torque and angular momentum vectors

A torque on some mass about a fixed axis gives it angular momentum, which we can write as the vector cross-product L = r×p or, perhaps easier for our purposes here, as the product of an angular velocity (ω) and rotational inertia (I), aka the moment of inertia or the angular mass. We write:

L = I·ω

Note we can write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface). We can now do some calculations. Let us start with the angular velocity. In our previous posts, we showed that the period of the matter-wave is equal to T = 2π·(ħ/E0). Hence, the angular velocity must be equal to:

ω = 2π/[2π·(ħ/E0)] = E0/ħ

We also know the distance r: its magnitude in the L = r×p vector cross-product is just a, i.e. the magnitude of ψ = a·e−i∙E·t/ħ. Now, the momentum (p) is the product of a linear velocity (v) – in this case, the tangential velocity – and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω. So now we only need to think about what we should use for m or, if we want to work with the angular velocity (ω), the angular mass (I). Here we need to make some assumption about the mass (or energy) distribution. Now, it may or may not make sense to assume the energy in the oscillation – and, therefore, the mass – is distributed uniformly. In that case, we may use the formula for the angular mass of a solid cylinder: I = m·r2/2. If we keep the analysis non-relativistic, then m = m0. Of course, the energy-mass equivalence tells us that m0 = E0/c2. Hence, this is what we get:

L = I·ω = (m0·r2/2)·(E0/ħ) = (1/2)·a2·(E0/c2)·(E0/ħ) = a2·E02/(2·ħ·c2)

Does it make sense? Maybe. Maybe not. Let us do a dimensional analysis: that won’t check our logic, but it makes sure we made no mistakes when mapping mathematical and physical spaces. We have m2·J2 = m2·N2·m2 in the numerator and N·m·s·m2/s2 in the denominator. Hence, the dimensions work out: we get N·m·s as the dimension for L, which is, effectively, the physical dimension of angular momentum. It is also the action dimension, of course, and that cannot be a coincidence. Also note that the E = mc2 equation allows us to re-write it as:

L = (1/2)·a2·m02·c2/ħ

Of course, in quantum mechanics, we associate spin with the magnetic moment of a charged particle, not with its mass as such. Is there a way to link the formula above to the one we have for the quantum-mechanical angular momentum, which is also measured in N·m·s units, and which can only take on one of two possible values: J = +ħ/2 or −ħ/2? It looks like a long shot, right? How do we go from (1/2)·a2·m02·c2/ħ to ±(1/2)∙ħ? Let us do a numerical example. The rest energy of an electron is about 0.511 MeV ≈ 8.1871×10−14 N∙m. And a… Well… What value should we take for a?

We have an obvious trio of candidates here: the Bohr radius, the classical electron radius (aka the Thomson scattering length), and the Compton scattering radius.

Let us start with the Bohr radius, so that is about 0.529×10−10 m. We get L = a2·E02/(2·ħ·c2) = 9.9×10−31 N∙m∙s. Now that is about 1.88×104 times ħ/2. That is a huge factor. The Bohr radius cannot be right: we are not looking at an electron in an orbital here. To show it does not make sense, we may want to double-check the analysis by doing the calculation in another way. We said each oscillation will always pack 6.626070040(81)×10−34 joule in energy. So our electron should pack about 1.24×1020 oscillations. The angular momentum (L) we get when using the Bohr radius for a and the value of 6.626×10−34 joule for E0 is equal to 6.49×10−71 N∙m∙s. So that is the angular momentum per oscillation. When we multiply this with the number of oscillations (1.24×1020), we get about 8.01×10−51 N∙m∙s, so that is a totally different number.

The classical electron radius is about 2.818×10−15 m. We get an L that is equal to about 2.81×10−39 N∙m∙s, so now it is a tiny fraction of ħ/2! Hence, this leads us nowhere. Let us go for our last chance to get a meaningful result! Let us use the Compton scattering length, so that is about 2.42631×10−12 m.

This gives us an L of 2.08×10−33 N∙m∙s, which is about 20 times ħ. This is not so bad, but is it good enough? Let us calculate it the other way around: what value should we take for a so as to ensure L = a2·E02/(2·ħ·c2) = ħ/2? Let us write it out:

a = √(ħ2·c2/E02) = ħ·c/E0 = ħ/(m0·c)

In fact, this is the formula for the so-called reduced Compton wavelength. This is perfect. We found what we wanted to find. Substituting this value for a (you can calculate it: it is about 3.8616×10−13 m), we get what we should find:

L = a2·E02/(2·ħ·c2) = [ħ2/(m02·c2)]·[m02·c4/(2·ħ·c2)] = ħ/2
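The three candidate radii – and the reduced Compton wavelength – are easily compared numerically. A sketch, using scipy.constants (so the CODATA values, rather than the rounded numbers above):

```python
from scipy.constants import hbar, m_e, c, physical_constants

E0 = m_e * c**2                                 # electron rest energy (J)
L = lambda a: a**2 * E0**2 / (2 * hbar * c**2)  # L = a²·E0²/(2·ħ·c²)

candidates = {
    'Bohr radius':               physical_constants['Bohr radius'][0],
    'classical electron radius': physical_constants['classical electron radius'][0],
    'Compton wavelength':        physical_constants['Compton wavelength'][0],
    'reduced Compton':           hbar / (m_e * c),   # a = ħ/(m0·c)
}
for name, a in candidates.items():
    print(f"{name}: L/(ħ/2) = {L(a) / (hbar / 2):.4g}")
# Only a = ħ/(m0·c) gives a ratio of exactly 1, i.e. L = ħ/2.
```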

This is a rather spectacular result, and one that would – a priori – support the interpretation of the wavefunction that is being suggested in this paper. 

VIII. The boson-fermion dichotomy

Let us do some more thinking on the boson-fermion dichotomy. Again, we should remind ourselves that an actual particle is localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e−i[E·t − p∙x]/ħ or, for a particle at rest, the ψ = a·e−i∙E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. Now, we can have another wild but logical theory about this.

Think of the apparent right-handedness of the elementary wavefunction: surely, Nature can’t be bothered about our convention of measuring phase angles clockwise or counterclockwise. Also, the angular momentum can be positive or negative: J = +ħ/2 or −ħ/2. Hence, we would probably like to think that an actual particle – think of an electron, or whatever other particle you’d think of – may consist of right-handed as well as left-handed elementary waves. To be precise, we may think they either consist of (elementary) right-handed waves or, else, of (elementary) left-handed waves. An elementary right-handed wave would be written as:

ψ(θi) = ai·(cosθi + i·sinθi)

In contrast, an elementary left-handed wave would be written as:

ψ(θi) = ai·(cosθi − i·sinθi)

How does that work out with the E0·t argument of our wavefunction? Position is position, and direction is direction, but time? Time has only one direction, but Nature surely does not care how we count time: counting like 1, 2, 3, etcetera or like −1, −2, −3, etcetera is just the same. If we count like 1, 2, 3, etcetera, then we write our wavefunction like:

ψ = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)

If we count time like −1, −2, −3, etcetera then we write it as:

ψ = a·cos[E0∙(−t)/ħ] − i·a·sin[E0∙(−t)/ħ] = a·cos(E0∙t/ħ) + i·a·sin(E0∙t/ħ)

Hence, it is just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! This, then, should explain why we can have either positive or negative quantum-mechanical spin (+ħ/2 or −ħ/2). It is the usual thing: we have two mathematical possibilities here, and so we must have two physical situations that correspond to it.

It is only natural. If we have left- and right-handed photons – or, generalizing, left- and right-handed bosons – then we should also have left- and right-handed fermions (electrons, protons, etcetera). Back to the dichotomy. The textbook analysis of the dichotomy between bosons and fermions may be epitomized by Richard Feynman’s Lecture on it (Feynman, III-4), which is confusing and – I would dare to say – even inconsistent: how are photons or electrons supposed to know that they need to interfere with a positive or a negative sign? They are not supposed to know anything: knowledge is part of our interpretation of whatever it is that is going on there.

Hence, it is probably best to keep it simple, and think of the dichotomy in terms of the different physical dimensions of the oscillation: newton per kg versus newton per coulomb. And then, of course, we should also note that matter-particles have a rest mass and, therefore, actually carry charge. Photons do not. But both are two-dimensional oscillations, and the point is: the so-called vacuum – and the rest mass of our particle (which is zero for the photon and non-zero for everything else) – give us the natural frequency for both oscillations, which is beautifully summed up in that remarkable equation for the group and phase velocity of the wavefunction, which applies to photons as well as matter-particles:

(vphase/c)·(vgroup/c) = 1 ⇔ vp·vg = c2

The final question then is: why are photons spin-zero particles? Well… We should first remind ourselves of the fact that they do have spin when circularly polarized.[25] Here we may think of the rotation of the equivalent mass of their energy. However, if they are linearly polarized, then there is no spin. Even for circularly polarized waves, the spin angular momentum of photons is a weird concept. If photons have no (rest) mass, then they cannot carry any charge. They should, therefore, not have any magnetic moment. Indeed, what I wrote above shows that an explanation of quantum-mechanical spin requires both mass as well as charge.[26]

IX. Concluding remarks

There are, of course, other ways to look at the matter – literally. For example, we can imagine two-dimensional oscillations as circular rather than linear oscillations. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation – around any axis – will be some combination of a rotation around the two other axes. Hence, we may want to think of a two-dimensional oscillation as an oscillation of a polar and azimuthal angle.

Figure 8: Two-dimensional circular movement

The point of this paper is not to make any definite statements. That would be foolish. Its objective is just to challenge the simplistic mainstream viewpoint on the reality of the wavefunction. Stating that it is a mathematical construct only, without physical significance, amounts to saying it has no meaning at all. That is, clearly, an unsustainable proposition.

The interpretation that is offered here looks at amplitude waves as traveling fields. Their physical dimension may be expressed in force per mass unit, as opposed to electromagnetic waves, whose amplitudes are expressed in force per (electric) charge unit. Also, the amplitudes of matter-waves incorporate a phase factor, but this may actually explain the rather enigmatic dichotomy between fermions and bosons and is, therefore, an added bonus.

The interpretation that is offered here has some advantages over other explanations, as it explains the how of diffraction and interference. However, while it offers a great explanation of the wave nature of matter, it does not explain its particle nature: while we think of the energy as being spread out, we will still observe electrons and photons as pointlike particles once they hit the detector. Why is it that a detector can sort of ‘hook’ the whole blob of energy, so to speak?

The interpretation of the wavefunction that is offered here does not explain this. Hence, the complementarity principle of the Copenhagen interpretation of the wavefunction surely remains relevant.

Appendix 1: The de Broglie relations and energy

The 1/2 factor in Schrödinger’s equation is related to the concept of the effective mass (meff). It is easy to make the wrong calculations. For example, when playing with the famous de Broglie relations – aka the matter-wave equations – one may be tempted to derive the following energy concept:

  1. E = h·f and p = h/λ. Therefore, f = E/h and λ = h/p.
  2. v = f·λ = (E/h)∙(h/p) = E/p
  3. p = m·v. Therefore, E = v·p = m·v2

E = m·v2? This resembles the E = mc2 equation and, therefore, one may be enthused by the discovery, especially because the m·v2 factor also pops up when working with the Least Action Principle in classical mechanics, which states that the path that is followed by a particle will minimize the following integral:

S = ∫ (KE − PE)·dt

Now, we can choose any reference point for the potential energy but, to reflect the energy conservation law, we can select a reference point that ensures the sum of the kinetic and the potential energy is zero throughout the time interval. If the force field is uniform, then the integrand will, effectively, be equal to KE − PE = m·v2.[27]

However, that is classical mechanics and, therefore, not so relevant in the context of the de Broglie equations, and the apparent paradox should be solved by distinguishing between the group and the phase velocity of the matter wave.

Appendix 2: The concept of the effective mass

The effective mass – as used in Schrödinger’s equation – is a rather enigmatic concept. To make sure we are making the right analysis here, I should start by noting you will usually see Schrödinger’s equation written as:

i·ħ·∂ψ/∂t = −[ħ2/(2meff)]·∇2ψ + U·ψ

This formulation includes a term with the potential energy (U). In free space (no potential), this term disappears, and the equation can be re-written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇2ψ(x, t)

We just moved the i·ħ coefficient to the other side, noting that 1/i = –i. Now, in one-dimensional space, and assuming ψ is just the elementary wavefunction (so we substitute a·ei∙[E·t − p∙x]/ħ for ψ), this implies the following:

−i·a·(E/ħ)·e−i∙[E·t − p∙x]/ħ = −i·(ħ/2meff)·a·(p2/ħ2)·e−i∙[E·t − p∙x]/ħ

⇔ E = p2/(2meff) ⇔ meff = m∙(v/c)2/2 = m∙β2/2

It is an ugly formula: it resembles the kinetic energy formula (K.E. = m∙v2/2) but it is, in fact, something completely different. The β2/2 factor ensures the effective mass is always a fraction of the mass itself. To get rid of the ugly 1/2 factor, we may re-define meff as two times the old meff (hence, meffNEW = 2∙meffOLD), as a result of which the formula will look somewhat better:

meff = m∙(v/c)2 = m∙β2

We know β varies between 0 and 1 and, therefore, meff will vary between 0 and m. Feynman drops the subscript, and just writes meff as m in his textbook (see Feynman, III-19). On the other hand, the electron mass that is used here is also the electron mass that is used to calculate the size of an atom (see Feynman, III-2-4). As such, the two mass concepts are, effectively, mutually compatible. It is confusing because the same mass is often defined as the mass of a stationary electron (see, for example, the article on it in the online Wikipedia encyclopedia[28]).

In the context of the derivation of the electron orbitals, we do have the potential energy term – which is the equivalent of a source term in a diffusion equation – and that may explain why the above-mentioned meff = m∙(v/c)2 = m∙β2 formula does not apply.

References

This paper discusses general principles in physics only. Hence, references can be limited to references to physics textbooks only. For ease of reading, any reference to additional material has been limited to a more popular undergrad textbook that can be consulted online: Feynman’s Lectures on Physics (http://www.feynmanlectures.caltech.edu). References are per volume, per chapter and per section. For example, Feynman III-19-3 refers to Volume III, Chapter 19, Section 3.

Notes

[1] Of course, an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction ψ = a·e−i∙θ = a·e−i[E·t − p∙x]/ħ = a·cosθ − i·a·sinθ. We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude ak and its own argument θk = (Ek∙t – pk∙x)/ħ. This is dealt with in this paper as part of the discussion on the mathematical and physical interpretation of the normalization condition.

[2] The N/kg dimension immediately, and naturally, reduces to the dimension of acceleration (m/s2), thereby facilitating a direct interpretation in terms of Newton’s force law.

[3] In physics, a two-spring metaphor is more common. Hence, the pistons in the author’s perpetuum mobile may be replaced by springs.

[4] The author re-derives the equation for the Compton scattering radius in section VII of the paper.

[5] The magnetic force can be analyzed as a relativistic effect (see Feynman II-13-6). The dichotomy between the electric force as a polar vector and the magnetic force as an axial vector disappears in the relativistic four-vector representation of electromagnetism.

[6] For example, when using Schrödinger’s equation in a central field (think of the electron around a proton), the use of polar coordinates is recommended, as it ensures the symmetry of the Hamiltonian under all rotations (see Feynman III-19-3)

[7] This sentiment is usually summed up in the apocryphal quote: “God does not play dice.” The actual quote comes out of one of Einstein’s private letters to Cornelius Lanczos, another scientist who had also emigrated to the US. The full quote is as follows: “You are the only person I know who has the same attitude towards physics as I have: belief in the comprehension of reality through something basically simple and unified… It seems hard to sneak a look at God’s cards. But that He plays dice and uses ‘telepathic’ methods… is something that I cannot believe for a single moment.” (Helen Dukas and Banesh Hoffman, Albert Einstein, the Human Side: New Glimpses from His Archives, 1979)

[8] Of course, both are different velocities: ω is an angular velocity, while v is a linear velocity: ω is measured in radians per second, while v is measured in meter per second. However, the definition of a radian implies radians are measured in distance units. Hence, the physical dimensions are, effectively, the same. As for the formula for the total energy of an oscillator, we should actually write: E = m·a2∙ω2/2. The additional factor (a) is the (maximum) amplitude of the oscillator.

[9] We also have a 1/2 factor in the E = mv2/2 formula. Two remarks may be made here. First, it may be noted this is a non-relativistic formula and, more importantly, incorporates kinetic energy only. Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as K.E. = E − E0 = mvc2 − m0c2 = m0γc2 − m0c2 = m0c2(γ − 1). As for the exclusion of the potential energy, we may note that we may choose our reference point for the potential energy such that the kinetic and potential energy mirror each other. The energy concept that then emerges is the one that is used in the context of the Principle of Least Action: it equals E = mv2. Appendix 1 provides some notes on that.

[10] Instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft.

[11] It is interesting to note that we may look at the energy in the rotating flywheel as potential energy because it is energy that is associated with motion, albeit circular motion. In physics, one may associate a rotating object with kinetic energy using the rotational equivalent of mass and linear velocity, i.e. rotational inertia (I) and angular velocity ω. The kinetic energy of a rotating object is then given by K.E. = (1/2)·I·ω2.

[12] Because of the sideways motion of the connecting rods, the sinusoidal function will describe the linear motion only approximately, but you can easily imagine the idealized limit situation.

[13] The ω2 = 1/LC formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor (R), an inductor (L), and a capacitor (C). Writing the formula as ω2 = (1/C)/L introduces the concept of elastance, which is the equivalent of the mechanical stiffness (k) of a spring.

[14] The resistance in an electric circuit introduces a damping factor. When analyzing a mechanical spring, one may also want to introduce a drag coefficient. Both are usually defined as a fraction of the inertia, which is the mass for a spring and the inductance for an electric circuit. Hence, we would write the drag coefficient for a spring as γ·m and the resistance for a circuit as R = γ·L respectively.

[15] Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. Feynman (Lectures, I-33-3) shows us how to calculate the Q of these atomic oscillators: it is of the order of 108, which means the wave train will last about 10–8 seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). For example, for sodium light, the radiation will last about 3.2×10–8 seconds (this is the so-called decay time τ). Now, because the frequency of sodium light is some 500 THz (500×1012 oscillations per second), this makes for some 16 million oscillations. There is an interesting paradox here: the speed of light tells us that such a wave train will have a length of about 9.6 m! How is that to be reconciled with the pointlike nature of a photon? The paradox can only be explained by relativistic length contraction: in an analysis like this, one needs to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom.

[16] This is a general result and is reflected in the K.E. = T = (1/2)·m·ω2·a2·sin2(ω·t + Δ) and the P.E. = U = k·x2/2 = (1/2)· m·ω2·a2·cos2(ω·t + Δ) formulas for the linear oscillator.

[17] Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. The analysis is centered on the local conservation of energy, which confirms the interpretation of Schrödinger’s equation as an energy diffusion equation.

[18] The meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m. Appendix 2 provides some additional notes on the concept. As for the equations, they are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙[ħ/(2meff)]∙∇2ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i2 = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i2∙d = −d + i∙c.

[19] The dimension of B is usually written as N/(m∙A), using the SI unit for current, i.e. the ampere (A). However, 1 C = 1 A∙s and, hence, 1 N/(m∙A) = 1 (N/C)/(m/s).     

[20] Of course, multiplication with i amounts to a counterclockwise rotation. Hence, multiplication by –i also amounts to a rotation by 90 degrees, but clockwise. Now, to uniquely identify the clockwise and counterclockwise directions, we need to establish the equivalent of the right-hand rule for a proper geometric interpretation of Schrödinger’s equation in three-dimensional space: if we look at a clock from the back, then its hand will be moving counterclockwise. When writing B = (1/c)∙iE, we assume we are looking in the negative x-direction. If we are looking in the positive x-direction, we should write: B = -(1/c)∙iE. Of course, Nature does not care about our conventions. Hence, both should give the same results in calculations. We will show in a moment they do.

[21] In fact, when multiplying C2/(N·m2) with N2/C2, we get N/m2, but we can multiply this with 1 = m/m to get the desired result. It is significant that an energy density (joule per unit volume) can also be measured in newton per square meter (force per unit area).

[22] The illustration shows a linearly polarized wave, but the obtained result is general.

[23] The sine and cosine are essentially the same functions, except for the difference in the phase: sinθ = cos(θ − π/2).

[24] I must thank a physics blogger for re-writing the 1/(ε0·μ0) = c2 equation like this. See: http://reciprocal.systems/phpBB3/viewtopic.php?t=236 (retrieved on 29 September 2017).

[25] A circularly polarized electromagnetic wave may be analyzed as consisting of two perpendicular electromagnetic plane waves of equal amplitude and 90° difference in phase.

[26] Of course, the reader will now wonder: what about neutrons? How to explain neutron spin? Neutrons are neutral. That is correct, but neutrons are not elementary: they consist of (charged) quarks. Hence, neutron spin can (or should) be explained by the spin of the underlying quarks.

[27] We detailed the mathematical framework and detailed calculations in the following online article: https://readingfeynman.org/2017/09/15/the-principle-of-least-action-re-visited.

[28] https://en.wikipedia.org/wiki/Electron_rest_mass (retrieved on 29 September 2017).

Math, physics and reality

This blog has been nice. It doesn’t get an awful lot of traffic (about a thousand visitors a week) but, from time to time, I do get a response or a question that fires me up, if only because it tells me someone is actually reading what I write.

Looking at the site now, I feel like I need to reorganize it completely. It’s just chaos, right? But then that’s what gets me the positive feedback: my readers are in the same boat. We’re trying to make sense of what physicists tell us is reality. The interference model I presented in my previous post is really nice. It has all the ingredients of quantum mechanics, which I would group under two broad categories: uncertainty and duality. Both are related, obviously. I will not talk about the reality of the wavefunction here, because I am biased: I firmly believe the wavefunction represents something real. Why? Because Einstein’s E = m·c2 formula tells us so: energy is a two-dimensional oscillation of mass. Two-dimensional, because it’s got twice the energy of the classroom oscillator (think of a mass on a spring). More importantly, the real and imaginary dimension of the oscillation are both real: they’re perpendicular to the direction of motion of the wave-particle. Photon or electron. It doesn’t matter. Of course, we have all of the transformation formulas, but… Well… These are not real: they are only there to accommodate our perspective: the state of the observer.

The distinction between the group and phase velocity of a wave packet is probably the best example of the failure of ordinary words to describe reality: particles are not waves, and waves are not particles. They are both… Well… Both at the same time. To calculate the action along some path, we assume there is some path, and we assume there is some particle following some path. The path and the particle are just figments of our mind. Useful figments of the mind, but… Well… There is no such thing as an infinitesimally small particle, and the concept of some one-dimensional line in spacetime does not make sense either. Or… Well… They do. Because they help us to make sense of the world. Of what is, whatever it is. 🙂

The mainstream views on the physical significance of the wavefunction are probably best summed up in the Encyclopædia Britannica, which says the wavefunction has no physical significance. Let me quote the relevant extract here:

“The wave function, in quantum mechanics, is a variable quantity that mathematically describes the wave characteristics of a particle. The value of the wave function of a particle at a given point of space and time is related to the likelihood of the particle’s being there at the time. By analogy with waves such as those of sound, a wave function, designated by the Greek letter psi, Ψ, may be thought of as an expression for the amplitude of the particle wave (or de Broglie wave), although for such waves amplitude has no physical significance. The square of the wave function, Ψ2, however, does have physical significance: the probability of finding the particle described by a specific wave function Ψ at a given point and time is proportional to the value of Ψ2.”

Really? First, this is factually wrong: the probability is given by the square of the absolute value of the wave function. These are two very different things:

  1. The square of a complex number is just another complex number: (a + i·b)2 = a2 + (i·b)2 + 2·i·a·b = a2 − b2 + 2·i·a·b.
  2. In contrast, the square of the absolute value always gives us a real number, to which we assign the mentioned physical interpretation: |a + i·b|2 = [√(a2 + b2)]2 = a2 + b2. (The two-line check below spells it out.)
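A two-line check in Python drives the point home:

```python
z = 3 + 4j
print(z**2)       # (-7+24j): the square of a complex number is another complex number
print(abs(z)**2)  # 25.0: the square of the absolute value is a real number
```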

But it’s not only position: using the right operators, we can also get probabilities on momentum, energy and other physical variables. Hence, the wavefunction is so much more than what the Encyclopædia Britannica suggests.

More fundamentally, what is written there is philosophically inconsistent. Squaring something – the number itself or its norm – is a mathematical operation. How can a mathematical operation suddenly yield something that has physical significance, if none of the elements it operates on has any? One cannot just go from the mathematical to the physical space. The mathematical space describes the physical space. Always. In physics, at least. 🙂

So… Well… There is too much nonsense around. Disgusting. And the Encyclopædia Britannica should not just present the mainstream view. The truth is: the jury is still out, and there are many guys like me. We think the majority view is plain wrong. In this case, at least. 🙂

Playing with amplitudes

Let’s play a bit with the stuff we found in our previous post. This is going to be unconventional, or experimental, if you want. The idea is to give you… Well… Some ideas. So you can play yourself. 🙂 Let’s go.

Let’s first look at Feynman’s (simplified) formula for the amplitude of a photon to go from point a to point b. If we identify point a by the position vector r1 and point b by the position vector r2, and using Dirac’s fancy bra-ket notation, then it’s written as:

〈r2|r1〉 = ei·p∙r12/ħ/r12, with r12 = r2 − r1

So we have a vector dot product here: p∙r12 = |p|∙|r12|·cosα = p∙r12·cosα, where α is the angle between the p and r12 vectors. All good. Well… No. We’ve got a problem. When it comes to calculating probabilities, the α angle doesn’t matter: |ei·θ/r|2 = 1/r2. Hence, for the probability, we get: P = |〈r2|r1〉|2 = 1/r122. Always! Now that’s strange. The θ = p∙r12/ħ argument gives us a different phase depending on the angle (α) between p and r12. But… Well… Think of it: cosα goes from 1 to 0 when α goes from 0 to ±90° and, of course, is negative when p and r12 have opposite directions but… Well… According to this formula, the probabilities do not depend on the direction of the momentum. That’s just weird, I think. Did Feynman, in his iconic Lectures, give us a meaningless formula?

Maybe. We may also note this function looks like the elementary wavefunction for any particle, which we wrote as:

ψ(x, t) = a·e−i∙θ = a·e−i(E∙t − p∙x)/ħ = a·e−i(E∙t)/ħ·ei(p∙x)/ħ

The only difference is that the 〈r2|r1〉 sort of abstracts away from time, so… Well… Let’s get a feel for the quantities. Let’s think of a photon carrying some typical amount of energy. Hence, let’s talk visible light and, therefore, photons of a few eV only – say 5.625 eV = 5.625×1.6×10−19 J = 9×10−19 J. Hence, their momentum is equal to p = E/c = (9×10−19 N·m)/(3×108 m/s) = 3×10−27 N·s. That’s tiny but that’s only because newtons and seconds are enormous units at the (sub-)atomic scale. As for the distance, we may want to use the thickness of a playing card as a starter, as that’s what Young used when establishing the experimental fact of light interfering with itself. Now, playing cards in Young’s time were obviously rougher than those today, but let’s take the smaller distance: modern cards are as thin as 0.3 mm. Still, that distance is associated with a value of θ of the order of 104, i.e. thousands of radians. Hence, the density of our wavefunction is enormous at this scale, and it’s a bit of a miracle that Young could see any interference at all! As the quick calculation below shows, we only get meaningful values (remember: θ is a phase angle) when we go down to the nanometer scale (10−9 m) or, even better, the angstrom scale (10−10 m).
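Here is the quick calculation (a Python sketch, using scipy.constants; the 5.625 eV photon energy is just the convenient example above):

```python
from scipy.constants import hbar, e, c

E = 5.625 * e   # photon energy: 5.625 eV expressed in joule (= 9e-19 J)
p = E / c       # photon momentum p = E/c ≈ 3e-27 N·s

# The phase θ = p·r/ħ at various distance scales
for label, r in [('playing card (0.3 mm)', 0.3e-3), ('micron', 1e-6),
                 ('nanometer', 1e-9), ('angstrom', 1e-10)]:
    print(f"{label}: theta = {p * r / hbar:.3g} rad")
```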

So… Well… Again: what can we do with Feynman’s formula? Perhaps he didn’t give us a propagator function but something that is more general (read: more meaningful) at our (limited) level of knowledge. As I’ve been reading Feynman for quite a while now – like three or four years 🙂 – I think… Well… Yes. That’s it. Feynman wants us to think about it. 🙂 Are you joking again, Mr. Feynman? 🙂 So let’s assume the reasonable thing: let’s assume it gives us the amplitude to go from point a to point b along some path r. So, then, in line with what we wrote in our previous post, let’s say p·r (momentum over a distance) is the action (S) we’d associate with this particular path (r) and then see where we get. So let’s write the formula like this:

ψ = a·ei·θ = (1/r)·ei·S/ħ = ei·p∙r/ħ/r

We’ll use an index to denote the various paths: r0 is the straight-line path and ri is any (other) path. Now, quantum mechanics tells us we should calculate this amplitude for every possible path. The illustration below shows the straight-line path and two nearby paths. So each of these paths is associated with some amount of action, which we measure in Planck units: θ = S/ħ.

[Illustration: the straight-line path and two nearby paths]

The time interval is given by t = r0/c, for all paths. Why is the time interval the same for all paths? Because we think of a photon going from some specific point in space and in time to some other specific point in space and in time. Indeed, when everything is said and done, we do think of light as traveling from point a to point b at the speed of light (c). In fact, all of the weird stuff here is all about trying to explain how it does that. 🙂

Now, if we would think of the photon actually traveling along this or that path, then this implies its velocity along any of the nonlinear paths will be larger than c, which is OK. That’s just the weirdness of quantum mechanics, and you should not think of the photon as actually traveling along one of these paths anyway, although we’ll often put it that way. Think of something fuzzier, whatever that may be. 🙂

So the action is energy times time, or momentum times distance. Hence, the difference in action between two paths i and j is given by:

δS = p·rj − p·ri = p·(rj − ri) = p·Δr

I’ll explain the δS < 2π·ħ/3 thing in a moment. Let’s first pause and think about the uncertainty and how we’re modeling it. We can effectively think of the variation in S as some uncertainty in the action: δS = ΔS = p·Δr. However, if S is also equal to energy times time (S = E·t), and we insist t is the same for all paths, then we must have some uncertainty in the energy, right? Hence, we can write δS as ΔS = ΔE·t. But, of course, E = m·c2 = p·c, so we will have an uncertainty in the momentum as well. Hence, the variation in S should be written as:

δS = ΔS = Δp·Δr

That’s just logical thinking: if we, somehow, entertain the idea of a photon going from some specific point in spacetime to some other specific point in spacetime along various paths, then the variation, or uncertainty, in the action will effectively combine some uncertainty in the momentum and the distance. We can calculate Δp as ΔE/c, so we get the following:

δS = ΔS = Δp·Δr = ΔE·Δr/c = ΔE·Δt, with Δt = Δr/c

So we have the two expressions for the Uncertainty Principle here: ΔS = Δp·Δr = ΔE·Δt. Just be careful with the interpretation of Δt: it’s just the equivalent of Δr. We just express the uncertainty in distance in seconds using the (absolute) speed of light. We are not changing our spacetime interval: we’re still looking at a photon going from a to b in t seconds, exactly. Let’s now look at the δS < 2π·ħ/3 thing. If we’re adding two amplitudes (two arrows or vectors, so to speak) and we want the magnitude of the result to be larger than the magnitude of the two contributions, then the angle between them should be smaller than 120 degrees, so that’s 2π/3 rad. You can figure that out geometrically. Hence, if S0 is the action for r0, then S1 = S0 + ħ and S2 = S0 + 2·ħ are still good, but S3 = S0 + 3·ħ is not good. Why? Because the difference in the phase angles is Δθ = S1/ħ − S0/ħ = (S0 + ħ)/ħ − S0/ħ = 1 and Δθ = S2/ħ − S0/ħ = (S0 + 2·ħ)/ħ − S0/ħ = 2 respectively, so that’s 57.3° and 114.6° respectively and that’s, effectively, less than 120°. In contrast, for the next path, we find that Δθ = S3/ħ − S0/ħ = (S0 + 3·ħ)/ħ − S0/ħ = 3, so that’s 171.9°. So that amplitude gives us a negative contribution.
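You can also verify the 120° rule by just adding the arrows. A minimal sketch:

```python
import numpy as np

# Unit arrows whose actions differ by 0, 1, 2 and 3 units of ħ,
# so their phase angles are simply n = 0, 1, 2, 3 radians.
total = 0j
for n in range(4):
    total += np.exp(1j * n)
    print(f"n = {n}: phase = {np.degrees(n):6.1f} deg, |sum| = {abs(total):.3f}")
# The magnitude grows up to n = 2 (114.6° < 120°) but shrinks at n = 3 (171.9° > 120°).
```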

Let’s do some calculations using a spreadsheet. To simplify things, we will assume we measure everything (time, distance, force, mass, energy, action,…) in Planck units. Hence, we can simply write: Sn = S0 + n. Of course, n = 1, 2,… etcetera, right? Well… Maybe not. We are measuring action in units of ħ, but do we actually think action comes in units of ħ? I am not sure. It would make sense, intuitively, but… Well… There’s uncertainty on the energy (E) and the momentum (p) of our photon, right? And how accurately can we measure the distance? So there’s some randomness everywhere. 😦 So let’s leave that question open as for now.

We will also assume that the phase angle for S0 is equal to 0 (or some multiple of 2π, if you want). That’s just a matter of choosing the origin of time. This makes it really easy: ΔSn = Sn − S0 = n, and the associated phase angle θn = Δθn is the same. In short, the amplitude for each path reduces to ψn = ei·n/r0. So we need to add these first and then calculate the magnitude, which we can then square to get a probability. Of course, there is also the issue of normalization (probabilities have to add up to one) but let’s tackle that later. For the calculations, we use Euler’s r·ei·θ = r·(cosθ + i·sinθ) = r·cosθ + i·r·sinθ formula. Needless to say, |r·ei·θ|2 = |r|2·|ei·θ|2 = |r|2·(cos2θ + sin2θ) = r2. Finally, when adding complex numbers, we add the real and imaginary parts respectively, and we’ll denote the ψ0 + ψ1 + ψ2 + … sum as Ψ.

Now, we also need to see how our ΔS = Δp·Δr works out. We may want to assume that the uncertainty in p and in r will both be proportional to the overall uncertainty in the action. For example, we could try writing the following: ΔSn = Δpn·Δrn = n·Δp1·Δr1. It also makes sense that you may want Δpn and Δrn to be proportional to Δp1 and Δr1 respectively. Combining both, the assumption would be this:

Δpn = √n·Δp1 and Δrn = √n·Δr1

So now we just need to decide how we will distribute ΔS1 = ħ = 1 over Δp1 and Δr1 respectively. For example, if we’d assume Δp1 = 1, then Δr1 = ħ/Δp1 = 1/1 = 1. These are the calculations. I will let you analyze them. 🙂 Well… We get a weird result. It reminds me of Feynman’s explanation of the partial reflection of light, shown below, but… Well… That doesn’t make much sense, does it?

[Illustration: Feynman’s graph of the partial reflection of light]

Hmm… Maybe it does. 🙂 Look at the graph more carefully. The peaks sort of oscillate out so… Well… That might make sense… 🙂
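If you do not feel like building the spreadsheet, here is a minimal sketch of the same calculation in Python (the ψn = ei·n/r0 model, with r0 set to one):

```python
import numpy as np

r0 = 1.0                    # straight-line distance (arbitrary units)
n = np.arange(100)          # paths whose action steps up by one unit of ħ each
psi = np.exp(1j * n) / r0   # amplitude for each path: ψn = e^(i·n)/r0
Psi = np.cumsum(psi)        # running sum over the first 1, 2, 3, ... paths
P = np.abs(Psi)**2          # un-normalized probability

print(np.round(P[:12], 3))  # the probability swings up and down without settling
```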

Does it? Are we doing something wrong here? These amplitudes should reflect the ones that are shown in those nice animations (like this one, for example, which is part of the Wikipedia article on Feynman’s path integral formulation of quantum mechanics). So what’s wrong, if anything? Well… Our paths differ by some fixed amount of action, which doesn’t quite reflect the geometric approach that’s used in those animations. The graph below shows how the distance varies as a function of n.

[Graph: the distance as a function of n]

If we’d use a model in which the distance would increase linearly or, preferably, exponentially, then we’d get the result we want to get, right?

Well… Maybe. Let’s try it. Hmm… We need to think about the geometry here. Look at the triangle below.

triangle side

If b is the straight-line path (r0), then ac could be one of the crooked paths (rn). To simplify, we’ll assume isosceles triangles, so a equals c and, hence, rn = 2·a = 2·c. We will also assume the successive paths are separated by the same vertical distance (h = h1) right in the middle, so hb = hn = n·h1. It is then easy to show the following:

r formula

This gives the following graph for r0 = 10 and h1 = 0.01.

r graph

Is this the right step increase? Not sure. We can vary the values in our spreadsheet. Let’s first build it. The photon will have to travel faster in order to cover the extra distance in the same time, so its momentum will be higher. Let’s think about the velocity. Let’s start with the first path (n = 1). In order to cover the extra distance Δr1, the velocity c1 must be equal to (r0 + Δr1)/t = r0/t + Δr1/t = c0 + Δr1/t. We can write c1 as c1 = c0 + Δc1, so Δc1 = Δr1/t. Now, the ratio of p1 and p0 will be equal to the ratio of c1 and c0 because p1/p0 = (m·c1)/(m·c0) = c1/c0. Hence, we have the following formula for p1:

p1 = p0·c1/c0 = p0·(c0 + Δc1)/c0 = p0·[1 + Δr1/(c0·t)] = p0·(1 + Δr1/r0)

For pn, the logic is the same, so we write:

pn = p0·cn/c0 = p0·(c0 + Δcn)/c0 = p0·[1 + Δrn/(c0·t)] = p0·(1 + Δrn/r0)
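Here is a sketch of the whole model in Python, under my reading of the above – in particular, I assume the action along each path is Sn = pn·rn, so the phase is θn = Sn/ħ. The parameter values (600 nm light, r0 = 10 nm) are illustrative:

```python
import numpy as np

# Geometric model: path n has length r_n = 2*sqrt((r0/2)^2 + (n*h1)^2) and
# momentum p_n = p0*(1 + dr_n/r0). I assume the action is S_n = p_n*r_n.
h = 6.626070040e-34; hbar = 1.054571800e-34
p0 = h / 600e-9          # photon momentum for 600 nm light (illustrative)
r0 = 10e-9               # straight-line distance: 10 nm (illustrative)
h1 = r0 / 100            # the h1/r0 = 1/100 ratio used in the first graphs

n = np.arange(0, 5001)
rn = 2.0 * np.sqrt((r0 / 2)**2 + (n * h1)**2)   # crooked-path lengths
pn = p0 * (1.0 + (rn - r0) / r0)                # higher momentum on longer paths
theta = pn * rn / hbar                          # phase angles S_n/hbar
prob = np.abs(np.cumsum(np.exp(1j * theta) / rn))**2   # un-normalized |Psi|^2

print(prob[[10, 100, 1000, 5000]])   # swings around first, then settles down
```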

Let’s do the calculations, and let’s use meaningful values, so the nanometer scale and actual values for Planck’s constant and the photon momentum. The results are shown below.

original

Pretty interesting. In fact, this looks really good. The probability first swings around wildly, because of these zones of constructive and destructive interference, but then stabilizes. [Of course, I would need to normalize the probabilities, but you get the idea, right?] So… Well… I think we get a very meaningful result with this model. Sweet! 🙂 I’m lovin’ it! 🙂 And, here you go, this is (part of) the calculation table, so you can see what I am doing. 🙂

newnew

The graphs below look even better: I just changed the h1/r0 ratio from 1/100 to 1/10. The probability stabilizes almost immediately. 🙂 So… Well… It’s not as fancy as the referenced animation, but I think the educational value of this thing here is at least as good! 🙂

great

🙂 This is good stuff… 🙂

Post scriptum (19 September 2017): There is an obvious inconsistency in the model above, and in the calculations. We assume there is a path r1, r2, r3, etcetera, and then we calculate the action for it, and the amplitude, and then we add the amplitude to the sum. But, surely, we should count these paths twice – in two-dimensional space, that is. Think of the graph: we have positive and negative interference zones that are sort of layered around the straight-line path, as shown below.

zones

In three-dimensional space, these lines become surfaces. Hence, rather than adding one arrow for every δ – having one contribution only – we may want to add… Well… In three-dimensional space, the formula for the surface around the straight-line path would probably look like π·hn·r1, right? Hmm… Interesting idea. I changed my spreadsheet to incorporate that idea, and I got the graph below. It’s a nonsensical result, because the probability does swing around, but it gradually spins out of control: it never stabilizes.

revised

That’s because we increase the weight of the paths that are further removed from the center. So… Well… We shouldn’t be doing that, I guess. 🙂 I’ll let you look for the right formula, OK? Let me know when you find it. 🙂
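For what it’s worth, you can reproduce that runaway behaviour in the sketch above by weighting each amplitude with the π·hn·r1 surface factor (again, my reading of the post scriptum – the factor is hypothetical):

```python
import numpy as np

# Same geometric model as before, but each path is now weighted by the
# (hypothetical) surface factor pi*h_n*r1, so outer paths count for more.
h = 6.626070040e-34; hbar = 1.054571800e-34
p0 = h / 600e-9; r0 = 10e-9; h1 = r0 / 100

n = np.arange(0, 5001)
rn = 2.0 * np.sqrt((r0 / 2)**2 + (n * h1)**2)
pn = p0 * rn / r0
theta = pn * rn / hbar
weight = np.pi * (n * h1) * rn[1]               # surface-like weight per path
prob = np.abs(np.cumsum(weight * np.exp(1j * theta) / rn))**2

print(prob[[100, 1000, 2500, 5000]])   # watch whether it still settles down
```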

Some thoughts on the nature of reality

Some other comment on an article on my other blog inspired me to structure some thoughts that are spread over various blog posts. What follows below is probably the first draft of an article or a paper I plan to write. Or, who knows, I might re-write my two introductory books on quantum physics and publish a new edition soon. 🙂

Physical dimensions and Uncertainty

The physical dimension of the quantum of action (h or ħ = h/2π) is force (expressed in newton) times distance (expressed in meter) times time (expressed in seconds): N·m·s. Now, you may think this N·m·s dimension is kinda hard to imagine. We can imagine its individual components, right? Force, distance and time. We know what they are. But the product of all three? What is it, really?

It shouldn’t be all that hard to imagine what it might be, right? The N·m·s unit is also the unit in which angular momentum is expressed – and you can sort of imagine what that is, right? Think of a spinning top, or a gyroscope. We may also think of the following:

  1. [h] = N·m·s = (N·m)·s = [E]·[t]
  2. [h] = N·m·s = (N·s)·m = [p]·[x]

Hence, the physical dimension of action is that of energy (E) multiplied by time (t) or, alternatively, that of momentum (p) times distance (x). To be precise, the second dimensional equation should be written as [h] = [p]·[x], because both the momentum and the distance traveled will be associated with some direction. It’s a moot point for the discussion at the moment, though. Let’s think about the first equation first: [h] = [E]·[t]. What does it mean?

Energy… Hmm… In real life, we are usually not interested in the energy of a system as such, but in the energy it can deliver, or absorb, per second. This is referred to as the power of a system, and it’s expressed in J/s, or watt. Power is also defined as the (time) rate at which work is done. Hmm… But here we’re multiplying energy and time. So what’s that? After Hiroshima and Nagasaki, we can sort of imagine the energy of an atomic bomb. We can also sort of imagine the power that’s being released by the Sun in light and other forms of radiation, which is about 385×10²⁴ joule per second. But energy times time? What’s that?

I am not sure. If we think of the Sun as a huge reservoir of energy, then the physical dimension of action is just like having that reservoir of energy guaranteed for some time, regardless of how fast or how slow we use it. So, in short, it’s just like the Sun – or the Earth, or the Moon, or whatever object – just being there, for some definite amount of time. So, yes: some definite amount of mass or energy (E) for some definite amount of time (t).

Let’s bring the mass-energy equivalence formula in here: E = m·c². Hence, the physical dimension of action can also be written as [h] = [E]·[t] = [m·c²]·[t] = (kg·m²/s²)·s = kg·m²/s. What does that say? Not all that much – for the time being, at least. We can get this [h] = kg·m²/s through some other substitution as well. A force of one newton will give a mass of 1 kg an acceleration of 1 m/s per second. Therefore, 1 N = 1 kg·m/s² and, hence, the physical dimension of h, or the unit of angular momentum, may also be written as 1 N·m·s = 1 (kg·m/s²)·m·s = 1 kg·m²/s, i.e. the product of mass, velocity and distance.
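If you don’t trust pen-and-paper dimensional analysis, you can let a computer algebra system check it. A minimal sketch using sympy’s units module (the variable names are mine):

```python
from sympy.physics.units import kilogram, meter, second, newton, joule, convert_to

base = [kilogram, meter, second]

action = newton * meter * second                          # [h] = N·m·s
energy_time = joule * second                              # [E]·[t]
momentum_distance = (kilogram * meter / second) * meter   # [p]·[x]

# All three should reduce to kg·m²/s:
print(convert_to(action, base))
print(convert_to(energy_time, base))
print(convert_to(momentum_distance, base))
```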

Hmm… What can we do with that? Nothing much for the moment: our first reading of it is just that it reminds us of the definition of angular momentum – some mass with some velocity rotating around an axis. What about the distance? Oh… The distance here is just the distance from the axis, right? Right. But… Well… It’s like having some amount of linear momentum available over some distance – or in some space, right? That’s sufficiently significant as an interpretation for the moment, I’d think…

Fundamental units

This makes one think about what units would be fundamental – and what units we’d consider as being derived. Formally, the newton is a derived unit in the metric system, as opposed to the units of mass, length and time (kg, m, s). Nevertheless, I personally like to think of force as being fundamental:  a force is what causes an object to deviate from its straight trajectory in spacetime. Hence, we may want to think of the quantum of action as representing three fundamental physical dimensions: (1) force, (2) time and (3) distance – or space. We may then look at energy and (linear) momentum as physical quantities combining (1) force and distance and (2) force and time respectively.

Let me write this out:

  1. Force times length (think of a force that is acting on some object over some distance) is energy: 1 joule (J) = 1 newton·meter (N·m). Hence, we may think of the concept of energy as a projection of action in space only: we make abstraction of time. The physical dimension of the quantum of action should then be written as [h] = [E]·[t]. [Note that the square brackets tell us we are looking at a dimensional equation only, so [t] is just the physical dimension of the time variable. It’s a bit confusing because I also use square brackets as parentheses.]
  2. Conversely, the magnitude of linear momentum (p = m·v) is expressed in newton·seconds: 1 kg·m/s = 1 (kg·m/s2)·s = 1 N·s. Hence, we may think of (linear) momentum as a projection of action in time only: we make abstraction of its spatial dimension. Think of a force that is acting on some object during some time. The physical dimension of the quantum of action should then be written as [h] = [p]·[x]

Of course, a force that is acting on some object during some time, will usually also act on the same object over some distance but… Well… Just try, for once, to make abstraction of one of the two dimensions here: time or distance.

It is a difficult thing to do because, when everything is said and done, we don’t live in space or in time alone, but in spacetime and, hence, such abstractions are not easy. [Of course, now you’ll say that it’s easy to think of something that moves in time only: an object that is standing still does just that – but then we know movement is relative, so there is no such thing as an object that is standing still in space in an absolute sense: objects never stand still in spacetime.] In any case, we should try such abstractions, if only because the principle of least action is so essential and deep in physics:

  1. In classical physics, the path of some object in a force field will minimize the total action (which is usually written as S) along that path.
  2. In quantum mechanics, the same action integral will give us various values S – each corresponding to a particular path – and each path (and, therefore, each value of S, really) will be associated with a probability amplitude that will be proportional to some constant times e^(i·θ) = e^(i·S/ħ). Because ħ is so tiny, even a small change in S will give a completely different phase angle θ. Therefore, most amplitudes will cancel each other out as we take the sum of the amplitudes over all possible paths: only the paths that nearly give the same phase matter. In practice, these are the paths that are associated with a variation in S of an order of magnitude that is equal to ħ.

The paragraph above summarizes, in essence, Feynman’s path integral formulation of quantum mechanics. We may, therefore, think of the quantum of action expressing itself (1) in time only, (2) in space only, or – much more likely – (3) expressing itself in both dimensions at the same time. Hence, if the quantum of action gives us the order of magnitude of the uncertainty – think of writing something like S ± ħ – we may re-write our dimensional [ħ] = [E]·[t] and [ħ] = [p]·[x] equations as the uncertainty equations:

  • ΔE·Δt = ħ 
  • Δp·Δx = ħ

You should note here that it is best to think of the uncertainty relations as a pair of equations, if only because you should also think of the concept of energy and momentum as representing different aspects of the same reality, as evidenced by the (relativistic) energy-momentum relation (E² = p²·c² + m0²·c⁴). Also, as illustrated below, the actual path – or, to be more precise, what we might associate with the concept of the actual path – is likely to be some mix of Δx and Δt. If Δt is very small, then Δx will be very large. In order to move over such distance, our particle will require a larger energy, so ΔE will be large. Likewise, if Δt is very large, then Δx will be very small and, therefore, ΔE will be very small. You can also reason in terms of Δx, and talk about momentum rather than energy. You will arrive at the same conclusions: the ΔE·Δt = h and Δp·Δx = h relations represent two aspects of the same reality – or, at the very least, what we might think of as reality.

Uncertainty

Also think of the following: if ΔE·Δt = h and Δp·Δx = h, then ΔE·Δt = Δp·Δx and, therefore, ΔE/Δp must be equal to Δx/Δt. Hence, the ratio of the uncertainty about x (the distance) and the uncertainty about t (the time) equals the ratio of the uncertainty about E (the energy) and the uncertainty about p (the momentum).

Of course, you will note that the actual uncertainty relations have a factor 1/2 in them. This may be explained by thinking of both negative as well as positive variations in space and in time.

We will obviously want to do some more thinking about those physical dimensions. The idea of a force implies the idea of some object – of some mass on which the force is acting. Hence, let’s think about the concept of mass now. But… Well… Mass and energy are supposed to be equivalent, right? So let’s look at the concept of energy too.

Action, energy and mass

What is energy, really? In real life, we are usually not interested in the energy of a system as such, but in the energy it can deliver, or absorb, per second. This is referred to as the power of a system, and it’s expressed in J/s. However, in physics, we always talk energy – not power – so… Well… What is the energy of a system?

According to de Broglie and Einstein – and so many other eminent physicists, of course – we should not only think of the kinetic energy of its parts, but also of their potential energy, and their rest energy, and – for an atomic system – we may add some internal energy, which may be binding energy, or excitation energy (think of a hydrogen atom in an excited state, for example). A lot of stuff. 🙂 But, obviously, Einstein’s mass-energy equivalence formula comes to mind here, and summarizes it all:

E = m·c2

The m in this formula refers to mass – not to meter, obviously. Stupid remark, of course… But… Well… What is energy, really? What is mass, really? What’s that equivalence between mass and energy, really?

I don’t have the definite answer to that question (otherwise I’d be famous), but… Well… I do think physicists and mathematicians should invest more in exploring some basic intuitions here. As I explained in several posts, it is very tempting to think of energy as some kind of two-dimensional oscillation of mass. A force over some distance will cause a mass to accelerate. This is reflected in the dimensional analysis:

[E] = [m]·[c²] = 1 kg·m²/s² = 1 (kg·m/s²)·m = 1 N·m

The kg and m/s² factors make this abundantly clear: m/s² is the physical dimension of acceleration: (the change in) velocity per time unit.

Other formulas now come to mind, such as the Planck-Einstein relation: E = h·f = ω·ħ. We could also write: E = h/T. Needless to say, T = 1/f is the period of the oscillation. So we could say, for example, that the energy of some particle times the period of the oscillation gives us Planck’s constant again. What does that mean? Perhaps it’s easier to think of it the other way around: E/f = h = 6.626070040(81)×10⁻³⁴ J·s. Now, f is the number of oscillations per second. Let’s write it as f = n/s, so we get:

E/f = E/(n/s) = E·s/n = 6.626070040(81)×10⁻³⁴ J·s ⇔ E/n = 6.626070040(81)×10⁻³⁴ J

What an amazing result! Our wavicle – be it a photon or a matter-particle – will always pack 6.626070040(81)×10⁻³⁴ joule in one oscillation, so that’s the numerical value of Planck’s constant which, of course, depends on our fundamental units (i.e. kg, meter, second, etcetera in the SI system).

Of course, the obvious question is: what’s one oscillation? If it’s a wave packet, the oscillations may not have the same amplitude, and we may also not be able to define an exact period. In fact, we should expect the amplitude and duration of each oscillation to be slightly different, shouldn’t we? And then…

Well… What’s an oscillation? We’re used to counting them: oscillations per second, so that’s per time unit. How many do we have in total? We wrote about that in our posts on the shape and size of a photon. We know photons are emitted by atomic oscillators – or, to put it simply, just atoms going from one energy level to another. Feynman calculated the Q of these atomic oscillators: it’s of the order of 10⁸ (see his Lectures, I-33-3: it’s a wonderfully simple exercise, and one that really shows his greatness as a physics teacher), so… Well… This wave train will last about 10⁻⁸ seconds (that’s the time it takes for the radiation to die out by a factor 1/e). To give a somewhat more precise example: for sodium light, which has a frequency of 500 THz (500×10¹² oscillations per second) and a wavelength of 600 nm (600×10⁻⁹ meter), the radiation will last about 3.2×10⁻⁸ seconds. [To be precise, that’s the time it takes for the radiation’s energy to die out by a factor 1/e – the so-called decay time τ – so the wavetrain will actually last longer, but its amplitude becomes quite small after that time.] So… Well… That’s a very short time but, still, taking into account the rather spectacular frequency (500 THz) of sodium light, that makes for some 16 million oscillations and, taking into account the rather spectacular speed of light (3×10⁸ m/s), that makes for a wave train with a length of, roughly, 9.6 meter. Huh? 9.6 meter!? But a photon is supposed to be pointlike, isn’t it? It has no length, does it?
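Those sodium numbers are easy to check. A quick sketch – the 3.2×10⁻⁸ s decay time is the value quoted above; everything else follows from it:

```python
# Quick check of the sodium wave-train numbers quoted above.
f = 500e12        # frequency of sodium light: 500 THz
tau = 3.2e-8      # decay time in seconds (value quoted above)
c = 3e8           # speed of light, m/s

oscillations = f * tau      # ≈ 1.6e7, i.e. some 16 million oscillations
length = c * tau            # ≈ 9.6 m wave-train length

print(f"{oscillations:.3g} oscillations, {length:.3g} m long")
```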

That’s where relativity helps us out: as I wrote in one of my posts, relativistic length contraction may explain the apparent paradox. Using the reference frame of the photon – so if we’d be traveling at speed c, ‘riding’ with the photon, so to say, as it’s being emitted – then we’d ‘see’ the electromagnetic transient as it’s being radiated into space.

However, while we can associate some mass with the energy of the photon, none of what I wrote above explains what the (rest) mass of a matter-particle could possibly be. There is no real answer to that, I guess. You’ll think of the Higgs field now but… Then… Well… The Higgs field is a scalar field. Very simple: some number that’s associated with some position in spacetime. That doesn’t explain very much, does it? 😦 When everything is said and done, the scientists who, in 2013, got the Nobel Prize for their theory on the Higgs mechanism simply tell us mass is some number. That’s something we knew already, right? 🙂

The reality of the wavefunction

The wavefunction is, obviously, a mathematical construct: a description of reality using a very specific language. What language? Mathematics, of course! Math may not be universal (aliens might not be able to decipher our mathematical models) but it’s pretty good as a global tool of communication, at least.

The real question is: is the description accurate? Does it match reality and, if it does, how good is the match? For example, the wavefunction for an electron in a hydrogen atom looks as follows:

ψ(r, t) = e^(i·(E/ħ)·t)·f(r)

As I explained in previous posts (see, for example, my recent post on reality and perception), the f(r) function basically provides some envelope for the two-dimensional e^(i·θ) = e^(i·(E/ħ)·t) = cosθ + i·sinθ oscillation, with r = (x, y, z), θ = (E/ħ)·t = ω·t and ω = E/ħ. So it presumes the duration of each oscillation is some constant. Why? Well… Look at the formula: this thing has a constant frequency in time. It’s only the amplitude that is varying as a function of the r = (x, y, z) coordinates. 🙂 So… Well… If each oscillation is to always pack 6.626070040(81)×10⁻³⁴ joule, but the amplitude of the oscillation varies from point to point, then… Well… We’ve got a problem. The wavefunction above is likely to be an approximation of reality only. 🙂 The associated energy is the same, but… Well… Reality is probably not the nice geometrical shape we associate with those wavefunctions.

In addition, we should think of the Uncertainty Principle: there must be some uncertainty in the energy of the photons when our hydrogen atom makes a transition from one energy level to another. But then… Well… If our photon packs something like 16 million oscillations, and the order of magnitude of the uncertainty is only of the order of h (or ħ = h/2π) which, as mentioned above, is the (average) energy of one oscillation only, then we don’t have much of a problem here, do we? 🙂

Post scriptum: In previous posts, we offered some analogies – or metaphors – to a two-dimensional oscillation (remember the V-2 engine?). Perhaps it’s all relatively simple. If we have some tiny little ball of mass – and its center of mass has to stay where it is – then any rotation – around any axis – will be some combination of a rotation around our x- and z-axis – as shown below. Two axes only. So we may want to think of a two-dimensional oscillation as an oscillation of the polar and azimuthal angle. 🙂

oscillation of a ball

Thinking again…

One of the comments on my other blog made me think I should, perhaps, write something on waves again. The animation below shows the elementary wavefunction ψ = a·e^(i·θ) = a·e^(i·(ω·t−k·x)) = a·e^((i/ħ)·(E·t−p·x)).

Animation

We know this elementary wavefunction cannot represent a real-life particle. Indeed, the a·e^(i·θ) function implies the probability of finding the particle – an electron, a photon, or whatever – would be equal to P(x, t) = |ψ(x, t)|² = |a·e^((i/ħ)·(E·t−p·x))|² = |a|²·|e^((i/ħ)·(E·t−p·x))|² = |a|²·1² = a² everywhere. Hence, the particle would be everywhere – and, therefore, nowhere really. We need to localize the wave – or build a wave packet. We can do so by introducing uncertainty: we then add a potentially infinite number of these elementary wavefunctions with slightly different values for E and p, and various amplitudes a. Each of these amplitudes will then reflect the contribution to the composite wave, which – in three-dimensional space – we can write as:

ψ(r, t) = e^(i·(E/ħ)·t)·f(r)

As I explained in previous posts (see, for example, my recent post on reality and perception), the f(r) function basically provides some envelope for the two-dimensional e^(i·θ) = e^(i·(E/ħ)·t) = cosθ + i·sinθ oscillation, with r = (x, y, z), θ = (E/ħ)·t = ω·t and ω = E/ħ.
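Going back to the wave-packet idea for a moment, here is a one-dimensional sketch of that construction – a single elementary wavefunction has a flat |ψ|², while summing components with a small spread in p localizes the particle. All parameter values are illustrative, and I use ħ = 1 units:

```python
import numpy as np

# Contrast a single elementary wavefunction with a superposition of many.
hbar = 1.0
x = np.linspace(-20, 20, 1000)

# One elementary wavefunction a*exp(i*p*x/hbar) at t = 0: |psi|^2 is flat.
psi_1 = np.exp(1j * 5.0 * x / hbar)
print("single component: |psi|^2 everywhere =", np.abs(psi_1[0])**2)

# Many components with a Gaussian spread of momenta around p0 = 5: a packet.
p = np.linspace(3.0, 7.0, 400)
a = np.exp(-(p - 5.0)**2 / 0.5)                   # amplitude of each contribution
psi = (a[:, None] * np.exp(1j * p[:, None] * x / hbar)).sum(axis=0)
print("packet: |psi|^2 peaks at x =", x[np.argmax(np.abs(psi)**2)])  # near 0
```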

Note that it looks like the wave propagates from left to right – in the positive direction of an axis which we may refer to as the x-axis. Also note this perception results from the fact that, naturally, we’d associate time with the rotation of that arrow at the center – i.e. with the motion in the illustration, while the spatial dimensions are just what they are: linear spatial dimensions. [This point is, perhaps, somewhat less self-evident than you may think at first.]

Now, the axis which points upwards is usually referred to as the z-axis, and the third and final axis – which points towards us – would then be the y-axis, obviously. Unfortunately, this definition would violate the so-called right-hand rule for defining a proper reference frame: the figure below shows the two possibilities – a left-handed and a right-handed reference frame – and it’s the right-handed reference frame (i.e. the illustration on the right) which we have to use in order to correctly define all directions, including the direction of rotation of the argument of the wavefunction.

400px-Cartesian_coordinate_system_handedness

Hence, if we don’t change the direction of the y- and z-axes – so we keep defining the z-axis as the axis pointing upwards, and the y-axis as the axis pointing towards us – then the positive direction of the x-axis would actually be the direction from right to left, and we should say that the elementary wavefunction in the animation above seems to propagate in the negative x-direction. [Note that this left- or right-hand rule is quite astonishing: simply swapping the direction of one axis of a left-handed frame makes it right-handed, and vice versa.]

Note my language when I talk about the direction of propagation of our wave. I wrote: it looks like, or it seems to go in this or that direction. And I mean that: there is no real traveling here. At this point, you may want to review a post I wrote for my son, which explains the basic math behind waves, and in which I also explained the animation below.

wave_opposite-group-phase-velocity

Note how the peaks and troughs of this pulse seem to move leftwards, but the wave packet (or the group or the envelope of the wave—whatever you want to call it) moves to the right. The point is: the pulse itself doesn’t travel left or right. Think of the horizontal axis in the illustration above as an oscillating guitar string: each point on the string just moves up and down. Likewise, if our repeated pulse would represent a physical wave in water, for example, then the water just stays where it is: it just moves up and down. Likewise, if we shake up some rope, the rope is not going anywhere: we just started some motion that is traveling down the rope. In other words, the phase velocity is just a mathematical concept. The peaks and troughs that seem to be traveling are just mathematical points that are ‘traveling’ left or right. That’s why there’s no limit on the phase velocity: it can – and, according to quantum mechanics, actually will – exceed the speed of light. In contrast, the group velocity – which is the actual speed of the particle that is being represented by the wavefunction – may approach – or, in the case of a massless photon, will actually equal – the speed of light, but will never exceed it, and its direction will, obviously, have a physical significance as it is, effectively, the direction of travel of our particle – be it an electron, a photon (electromagnetic radiation), or whatever.
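To see the difference between phase and group velocity at work numerically, here is a minimal sketch – all values are made up for illustration, chosen so the crests drift one way while the envelope moves the other:

```python
import numpy as np

# A wave packet built from components cos(k*x - w(k)*t): Gaussian spread of k
# around k0, linear dispersion w(k) = w0 + vg*(k - k0). Values are illustrative:
# the phase velocity w0/k0 is negative, the group velocity vg is positive.
k0, w0, vg = 10.0, -2.0, 1.0
k = np.linspace(k0 - 2, k0 + 2, 200)
a = np.exp(-(k - k0)**2)            # Gaussian weights for the components
w = w0 + vg * (k - k0)

print("phase velocity:", w0 / k0)   # -0.2: the crests seem to drift leftwards
print("group velocity:", vg)        # +1.0: the envelope moves to the right

x = np.linspace(-10, 10, 4000)
for t in (0.0, 3.0):
    y = (a[:, None] * np.cos(k[:, None] * x - w[:, None] * t)).sum(axis=0)
    print(f"t = {t}: envelope peak near x = {x[np.argmax(np.abs(y))]:.2f}")
```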

Hence, you should not think the spin of a particle – integer or half-integer – is somehow related to the direction of rotation of the argument of the elementary wavefunction. It isn’t: Nature doesn’t give a damn about our mathematical conventions, and that’s what the direction of rotation of the argument of that wavefunction is: just some mathematical convention. That’s why we write a·e^(i·(ω·t−k·x)) rather than a·e^(i·(ω·t+k·x)) or a·e^(−i·(ω·t−k·x)): it’s just because of the right-hand rule for coordinate frames, and also because Euler defined the counter-clockwise direction as the positive direction of an angle. There’s nothing more to it.

OK. That’s obvious. Let me now return to my interpretation of Einstein’s E = m·c² formula (see my previous posts on this). I noted that, in the reference frame of the particle itself (see my basics page), the elementary wavefunction a·e^((i/ħ)·(E·t−p·x)) reduces to a·e^((i/ħ)·(E’·t’)): the origin of the reference frame then coincides with (the center of) our particle itself, and the wavefunction only varies with the time in the inertial reference frame (i.e. the proper time t’), with the rest energy of the object (E’) as the time scale factor. How should we interpret this?

Well… Energy is force times distance, and force is defined as that what causes some mass to accelerate. To be precise, the newton – as the unit of force – is defined as the magnitude of a force which would cause a mass of one kg to accelerate with one meter per second per second. Per second per second. This is not a typo: 1 N corresponds to 1 kg times 1 m/s per second, i.e. 1 kg·m/s2. So… Because energy is force times distance, the unit of energy may be expressed in units of kg·m/s2·m, or kg·m2/s2, i.e. the unit of mass times the unit of velocity squared. To sum it all up:

1 J = 1 N·m = 1 kg·(m/s)²

This reflects the physical dimensions on both sides of the E = m·c2 formula again but… Well… How should we interpret this? Look at the animation below once more, and imagine the green dot is some tiny mass moving around the origin, in an equally tiny circle. We’ve got two oscillations here: each packing half of the total energy of… Well… Whatever it is that our elementary wavefunction might represent in reality – which we don’t know, of course.

circle_cos_sin

Now, the blue and the red dot – i.e. the horizontal and vertical projection of the green dot – accelerate up and down. If we look carefully, we see these dots accelerate towards the zero point and, once they’ve crossed it, they decelerate, so as to allow for a reversal of direction: the blue dot goes up, and then down. Likewise, the red dot does the same. The interplay between the two oscillations, because of the 90° phase difference, is interesting: if the blue dot is at maximum speed (near or at the origin), the red dot reverses speed (its speed is, therefore, (almost) nil), and vice versa. The metaphor of our frictionless V-2 engine, our perpetuum mobile, comes to mind once more.

The question is: what’s going on, really?

My answer is: I don’t know. I do think that, somehow, energy should be thought of as some two-dimensional oscillation of something – something which we refer to as mass, but we didn’t define mass very clearly either. It also, somehow, combines linear and rotational motion. Each of the two dimensions packs half of the energy of the particle that is being represented by our wavefunction. It is, therefore, only logical that the physical unit of both is to be expressed as a force over some distance – which is, effectively, the physical dimension of energy – or the rotational equivalent of them: torque over some angle. Indeed, the analogy between linear and angular movement is obvious: the kinetic energy of a rotating object is equal to K.E. = (1/2)·I·ω2. In this formula, I is the rotational inertia – i.e. the rotational equivalent of mass – and ω is the angular velocity – i.e. the rotational equivalent of linear velocity. Noting that the (average) kinetic energy in any system must be equal to the (average) potential energy in the system, we can add both, so we get a formula which is structurally similar to the E = m·c2 formula. But is it the same? Is the effective mass of some object the sum of an almost infinite number of quanta that incorporate some kind of rotational motion? And – if we use the right units – is the angular velocity of these infinitesimally small rotations effectively equal to the speed of light?

I am not sure. Not at all, really. But, so far, I can’t think of any explanation of the wavefunction that would make more sense than this one. I just need to keep trying to find better ways to articulate or imagine what might be going on. 🙂 In this regard, I’d like to add a point – which may or may not be relevant. When I talked about that guitar string, or the water wave, and wrote that each point on the string – or each water drop – just moves up and down, we should think of the physicality of the situation: when the string oscillates, its length increases. So it’s only because our string is flexible that it can vibrate between the fixed points at its ends. For a rope that’s not flexible, the end points would need to move in and out with the oscillation. Look at the illustration below, for example: the two kids who are holding the rope must come closer to each other, so as to provide the necessary space inside of the oscillation for the other kid. 🙂

kid in a rope

The next illustration – of how water waves actually propagate – is, perhaps, more relevant. Just think of a two-dimensional equivalent – and of the two oscillations as being transverse waves, as opposed to longitudinal. See how string theory starts making sense? 🙂

rayleighwave

The most fundamental question remains the same: what is it, exactly, that is oscillating here? What is the field? It’s always some force on some charge – but what charge, exactly? Mass? What is it? Well… I don’t have the answer to that. It’s the same as asking: what is electric charge, really? So the question is: what’s the reality of mass, of electric charge, or whatever other charge that causes a force to act on it?

If you know, please let me know. 🙂

Post scriptum: The fact that we’re talking some two-dimensional oscillation here – think of a surface now – explains the probability formula: we need to square the absolute value of the amplitude to get it. And normalize, of course. Also note that, when normalizing, we’d expect to get some factor involving π somewhere, because we’re talking some circular surface – as opposed to a rectangular one. But I’ll let you figure that out. 🙂

An introduction to virtual particles (2)

When reading quantum mechanics, it often feels like the more you know, the less you understand. My reading of the Yukawa theory of force, as an exchange of virtual particles (see my previous post), must have left you with many questions. Questions I can’t answer because… Well… I feel as much of a fool as you do when thinking about it all. Yukawa first talks about some potential – which we usually think of as being some scalar function – and then suddenly this potential becomes a wavefunction. Does that make sense? And think of the mass of that ‘virtual’ particle: the rest mass of a neutral pion is about 135 MeV. That’s an awful lot – at the (sub-)atomic scale, that is: it’s equivalent to the rest mass of some 265 electrons!

But… Well… Think of it: the use of a static potential when solving Schrödinger’s equation for the electron orbitals around a hydrogen nucleus (a proton, basically) also raises lots of questions: if we think of our electron as a point-like particle being first here and then there, then that’s also not very consistent with a static (scalar) potential either!

One of the weirdest aspects of the Yukawa theory is that these emissions and absorptions of virtual particles violate the energy conservation principle. Look at the animation once again (below): it sort of assumes a rather heavy particle – consisting of a d- or u-quark and its antiparticle – is emitted – out of nothing, it seems – to then vanish as the antiparticle is destroyed when absorbed. What about the energy balance here: are we talking six quarks (the proton and the neutron), or six plus two?

Nuclear_Force_anim_smaller

Now that we’re talking mass, note that a neutral pion (π⁰) may either be a uū or a dd̄ combination, and that the mass of a u-quark and a d-quark is only 2.4 and 4.8 MeV respectively – so the binding energy of the constituent parts of this π⁰ particle is enormous: it accounts for most of its mass.

The thing is… While we’ve presented the π⁰ particle as a virtual particle here, you should also note that we find π⁰ particles in cosmic rays. Cosmic rays are particle rays, really: beams of highly energetic particles. Quite a bunch of them are just protons that are being ejected by our Sun. [The Sun also ejects electrons – as you might imagine – but let’s think about the protons here first.] When these protons hit an atom or a molecule in our atmosphere, they usually break up into various particles, including our π⁰ particle, as shown below.

850px-Atmospheric_Collision

 

So… Well… How can we relate these things? What is going on, really, inside of that nucleus?

Well… I am not sure. Aitchison and Hey do their utmost to try to explain the pion – as a virtual particle, that is – in terms of energy fluctuations that obey the Uncertainty Principle for energy and time: ΔE·Δt ≥ ħ/2. Now, I find such explanations difficult to follow. Such explanations usually assume any measurement instrument – measuring energy, time, momentum or distance – measures those variables on some discrete scale, which implies some uncertainty indeed. But that uncertainty is more like an imprecision, in my view. Not something fundamental. Let me quote Aitchison and Hey:

“Suppose a device is set up capable of checking to see whether energy is, in fact, conserved while the pion crosses over. The crossing time Δt must be at least r/c, where r is the distance apart of the nucleons. Hence, the device must be capable of operating on a time scale smaller than Δt to be able to detect the pion, but it need not be very much less than this. Thus the energy uncertainty in the reading by the device will be of the order ΔE ∼ ħ/Δt = ħ·(c/r).”

As said, I find such explanations really difficult, although I can sort of sense some of the implicit assumptions. As I mentioned a couple of times already, the E = m·c2 equation tells us energy is mass in motion, somehow: some weird two-dimensional oscillation in spacetime. So, yes, we can appreciate we need some time unit to count the oscillations – or, equally important, to measure their amplitude.

[…] But… Well… This falls short of a more fundamental explanation of what’s going on. I like to think of Uncertainty in terms of Planck’s constant itself: ħ or h or – as you’ll usually see it – half of that value: ħ/2. [The Stern-Gerlach experiment implies it’s ħ/2, rather than h/2 or ħ or h itself.] The physical dimension of Planck’s constant is action: newton times distance times time. I also like to think action can express itself in two ways: as (1) some amount of energy (ΔE: some force over some distance) over some time (Δt) or, else, as (2) some momentum (Δp: some force during some time) over some distance (Δs). Now, if we equate ΔE with the energy of the pion (135 MeV), then we may calculate the order of magnitude of Δt from ΔE·Δt ≥ ħ/2 as follows:

 Δt = (ħ/2)/(135 MeV) ≈ (3.291×10⁻¹⁶ eV·s)/(134.977×10⁶ eV) ≈ 0.02438×10⁻²² s

Now, that’s an unimaginably small time unit – but much, much larger than the Planck time (the Planck time unit is about 5.39×10⁻⁴⁴ s). The corresponding distance r is equal to r = Δt·c = (0.02438×10⁻²² s)·(2.998×10⁸ m/s) ≈ 0.0731×10⁻¹⁴ m = 0.731 fm. So… Well… Yes. We got the answer we wanted… So… Well… We should be happy about that but…

Well… I am not. I don’t like this indeterminacy. This randomness in the approach. For starters, I am very puzzled by the fact that the lifetime of the actual π⁰ particle we see in the debris of proton collisions with other particles – as cosmic rays enter the atmosphere – is like 8.4×10⁻¹⁷ seconds, so that’s like 35 million times longer than the Δt = 0.02438×10⁻²² s we calculated above.
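Here is the same back-of-the-envelope calculation as a Python snippet, if you want to play with the numbers – the pion rest energy is the only real input:

```python
# Back-of-the-envelope check of the pion 'crossing time', the corresponding
# distance, and the comparison with the measured neutral-pion lifetime.
hbar = 6.582119e-16       # reduced Planck constant, eV·s
c = 2.998e8               # speed of light, m/s
E_pion = 134.977e6        # neutral pion rest energy, eV

dt = (hbar / 2) / E_pion          # from dE*dt >= hbar/2, with dE = E_pion
r = dt * c                        # corresponding distance
lifetime = 8.4e-17                # measured pi0 lifetime, s

print(f"dt = {dt:.3e} s")                  # ≈ 2.44e-24 s
print(f"r = {r * 1e15:.3f} fm")            # ≈ 0.731 fm
print(f"lifetime/dt ≈ {lifetime/dt:.2e}")  # ≈ 3.4e7, i.e. some 35 million
```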

Something doesn’t feel right. I just can’t see the logic here. Sorry. I’ll be back. :-/

An introduction to virtual particles

We are going to venture beyond quantum mechanics as it is usually understood – covering electromagnetic interactions only. Indeed, all of my posts so far – a bit less than 200, I think 🙂 – were all centered around electromagnetic interactions – with the model of the hydrogen atom as our most precious gem, so to speak.

In this post, we’ll be talking the strong force – perhaps not for the first time but surely for the first time at this level of detail. It’s an entirely different world – as I mentioned in one of my very first posts in this blog. Let me quote what I wrote there:

“The math describing the ‘reality’ of electrons and photons (i.e. quantum mechanics and quantum electrodynamics), as complicated as it is, becomes even more complicated – and, important to note, also much less accurate – when it is used to try to describe the behavior of  quarks. Quantum chromodynamics (QCD) is a different world. […] Of course, that should not surprise us, because we’re talking very different order of magnitudes here: femtometers (10–15 m), in the case of electrons, as opposed to attometers (10–18 m) or even zeptometers (10–21 m) when we’re talking quarks.”

In fact, the femtometer scale is used to measure the radius of protons as well as electrons and, hence, is much smaller than the atomic scale, which is measured in nanometer (1 nm = 10⁻⁹ m). The so-called Bohr radius, for example, which is a measure for the size of an atom, is effectively measured in nanometer, so that’s a scale that is a million times larger than the femtometer scale. This gap in scale effectively separates entirely different worlds. In fact, the gap is probably as large as the gap between our macroscopic world and the strange reality of quantum mechanics. What happens at the femtometer scale, really?

The honest answer is: we don’t know, but we do have models to describe what happens. Moreover, for want of better models, physicists sort of believe these models are credible. To be precise, we assume there’s a force down there which we refer to as the strong force. In addition, there’s also a weak force. Now, you probably know these forces are modeled as interactions involving an exchange of virtual particles. This may be related to what Aitchison and Hey refer to as the physicist’s “distaste for action-at-a-distance.” To put it simply: if one particle – through some force – influences some other particle, then something must be going on between the two of them.

Of course, now you’ll say that something is effectively going on: there’s the electromagnetic field, right? Yes. But what’s the field? You’ll say: waves. But then you know electromagnetic waves also have a particle aspect. So we’re stuck with this weird theoretical framework: the conceptual distinction between particles and forces, or between particle and field, is not so clear. So that’s what the more advanced theories we’ll be looking at – like quantum field theory – try to bring together.

Note that we’ve been using a lot of confusing and/or ambiguous terms here: according to at least one leading physicist, for example, virtual particles should not be thought of as particles! But we’re putting the cart before the horse here. Let’s go step by step. To better understand the ‘mechanics’ of how the strong and weak interactions are being modeled in physics, most textbooks – including Aitchison and Hey, which we’ll follow here – start by explaining the original ideas as developed by the Japanese physicist Hideki Yukawa, who received a Nobel Prize for his work in 1949.

So what is it all about? As said, the ideas – or the model as such, so to speak – are more important than Yukawa’s original application, which was to model the force between a proton and a neutron. Indeed, we now explain such a force as a force between quarks, and the force carrier is the gluon, which carries the so-called color charge. To be precise, the force between protons and neutrons – i.e. the so-called nuclear force – is now considered to be a rather minor residual force: it’s just what’s left of the actual strong force that binds quarks together. The Wikipedia article on the nuclear force has some good text and a really nice animation on it. But… Well… Again, note that we are only interested in the model right now. So what does that look like?

First, we’ve got the equivalent of the electric charge: the nucleon is supposed to have some ‘strong’ charge, which we’ll write as gs. Now you know the formulas for the potential energy – because of the gravitational force – between two masses, or the potential energy between two charges – because of the electrostatic force. Let me jot them down once again:

  1. U(r) = –G·M·m/r
  2. U(r) = (1/4πε0)·q1·q2/r

The two formulas are exactly the same. They both assume U = 0 for r → ∞. Therefore, U(r) is always negative. [Just think of q1 and q2 as opposite charges, so the minus sign is not explicit – but it is also there!] We know the U(r) curve will look like the one below: some work (force times distance) is needed to move the two charges some distance away from each other – from point 1 to point 2, for example. [The distance r is x here – but you got that, right?]

potential energy

Now, physics textbooks – or other articles you might find, like on Wikipedia – will sometimes mention that the strong force is non-linear, but that’s very confusing because… Well… The electromagnetic force – or the gravitational force – isn’t linear either: their strength is inversely proportional to the square of the distance and – as you can see from the formulas for the potential energy – that 1/r factor isn’t linear either. So that isn’t very helpful. In order to further the discussion, I should now write down Yukawa’s hypothetical formula for the potential energy between a neutron and a proton, which we’ll refer to, logically, as the n-p potential:

n-p potential

The −gs² factor is, obviously, the equivalent of the q1·q2 product: think of the proton and the neutron having equal but opposite ‘strong’ charges. The 1/4π factor reminds us of the Coulomb constant: ke = 1/4πε0. Note that this constant ensures the physical dimensions of both sides of the equation make sense: the dimension of 1/4πε0 is N·m²/C², so U(r) is – as we’d expect – expressed in newton·meter, or joule. We’ll leave the question of the units for gs open – for the time being, that is. [As for the 1/4π factor, I am not sure why Yukawa put it there. My best guess is that he wanted to remind us some constant should be there to ensure the units come out alright.]

So, when everything is said and done, the big new thing is the e^(−r/a)/r factor, which replaces the usual 1/r dependency on distance. Needless to say, e is Euler’s number here – not the electric charge. The two green curves below show what the e^(−r/a) factor does to the classical 1/r function for a = 1 and a = 0.1 respectively: smaller values for a ensure the curve approaches zero more rapidly. In fact, for a = 1, e^(−r/a)/r is equal to 0.368 for r = 1, and still equal to 0.004579 (more or less, that is) for r = 4. For a = 0.1, in contrast, the factor is negligible at such distances already: it goes to zero very rapidly.

graph 1

graph 2

Aitchison and Hey call a, therefore, a range parameter: it effectively defines the range in which the n-p potential has a significant value: outside of the range, its value is, for all practical purposes, (close to) zero. Experimentally, this range was established as being of the order of a ≈ 2 fm. Needless to say, while this range factor may do its job, it’s obvious that Yukawa’s formula for the n-p potential comes across as being somewhat random: what’s the theory behind it? There’s none, really. It makes one think of the logistic function: the logistic function fits many statistical patterns, but it is (usually) not obvious why.
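If you want to reproduce values like the ones quoted above, a few lines of Python will do – the sample points are arbitrary:

```python
import numpy as np

# Compare the screened (Yukawa) factor exp(-r/a)/r with the bare 1/r dependency.
for a in (1.0, 0.1):                         # the two range parameters used above
    for r in (0.5, 1.0, 2.0, 4.0):
        print(f"a = {a}, r = {r}: exp(-r/a)/r = {np.exp(-r/a)/r:.6g}  (1/r = {1/r})")
```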

Next in Yukawa’s argument is the establishment of an equivalent, for the nuclear force, of the Poisson equation in electrostatics: using the E = –∇Φ formula, we can re-write Maxwell’s ∇•E = ρ/ε0 equation (aka Gauss’ Law) as ∇•E = –∇•∇Φ = –∇²Φ ⇔ ∇²Φ = –ρ/ε0 indeed. The divergence operator – the ∇• operator – gives us the volume density of the flux of E out of an infinitesimal volume around a given point. [You may want to check one of my posts on this. The formula becomes somewhat more obvious if we re-write it as ∇•E·dV = –(ρ·dV)/ε0: ∇•E·dV is then, quite simply, the flux of E out of the infinitesimally small volume dV, and the right-hand side of the equation says this is given by the product of the charge inside (ρ·dV) and 1/ε0, which accounts for the permittivity of the medium (which is the vacuum in this case).] Of course, you will also remember the ∇Φ notation: ∇Φ is just the gradient (or vector derivative) of the (scalar) potential Φ, i.e. the electric (or electrostatic) potential in a space around that infinitesimally small volume with charge density ρ. So… Well… The Poisson equation is probably not as obvious as it seems at first (again, check my post on it for more detail) and, yes, that ∇• operator – the divergence operator – is a pretty impressive mathematical beast. However, I must assume you master this topic and move on. So… Well… I must now give you the equivalent of Poisson’s equation for the nuclear force. It’s written like this:

Poisson nuclear

What the heck? Relax. To derive this equation, we’d need to take a pretty complicated détour, which we won’t do. [See Appendix G of Aitchison and Hey if you’d want the details.] Let me just point out the basics:

1. The Laplace operator (∇²) is replaced by one that’s nearly the same: ∇² − 1/a². And it operates on the same concept: a potential, which is a (scalar) function of the position r. Hence, U(r) is just the equivalent of Φ.

2. The right-hand side of the equation involves Dirac’s delta function. Now that’s a weird mathematical beast. Its definition seems to defy what I refer to as the ‘continuum assumption’ in math. I wrote a few things about it in one of my posts on Schrödinger’s equation – and I could give you its formula – but that won’t help you very much. It’s just a weird thing. As Aitchison and Hey write, you should just think of the whole expression as a finite-range analogue of Poisson’s equation in electrostatics. So it’s only for extremely small r that the whole equation makes sense. Outside of the range defined by our range parameter a, the whole equation just reduces to 0 = 0 – for all practical purposes, at least.

Now, of course, you know that the neutron and the proton are not supposed to just sit there. They’re also in this sort of intricate dance which – for the electron case – is described by some wavefunction, which we derive as a solution from Schrödinger’s equation. So U(r) is going to vary not only in space but also in time and we should, therefore, write it as U(r, t). Now, we will, of course, assume it’s going to vary in space and time as some wave and we may, therefore, suggest some wave equation for it. To appreciate this point, you should review some of the posts I did on waves. More in particular, you may want to review the post I did on traveling fields, in which I showed you the following: if we see an equation like:

f8

then the function ψ(x, t) must have the following general functional form:

solution

Any function ψ like that will work – so it will be a solution to the differential equation – and we’ll refer to it as a wavefunction. Now, the equation (and the function) is for a wave traveling in one dimension only (x) but the same post shows we can easily generalize to waves traveling in three dimensions. In addition, we may generalize the analysis to include complex-valued functions as well. Now, you will still be shocked by Yukawa’s field equation for U(r, t) but, hopefully, somewhat less so after the above reminder of what wave equations generally look like:

Yukawa wave equation

As said, you can look up the nitty-gritty in Aitchison and Hey (or in its appendices) but, up to this point, you should be able to sort of appreciate what’s going on without getting lost in it all. Yukawa’s next step – and all that follows – is much more baffling. We’d think U, the nuclear potential, is just some scalar-valued wave, right? It varies in space and in time, but… Well… That’s what classical waves, like water or sound waves, for example, do too. So far, so good. However, Yukawa’s next step is to associate a de Broglie-type wavefunction with it. Hence, Yukawa imposes solutions of the type:

potential as particle

What? Yes. It’s a big thing to swallow, and it doesn’t help that most physicists refer to U as a force field. A force and the potential that results from it are two different things. To put it simply: the force on an object is not the same as the work you need to move it from here to there. Force and potential are related but different concepts. Having said that, it sort of makes sense now, doesn’t it? If potential is energy, and if it behaves like some wave, then we must be able to associate it with a de Broglie-type particle. This U-quantum, as it is referred to, comes in two varieties, which are associated with the ongoing absorption-emission process that is supposed to take place inside of the nucleus (depicted below):

p + U⁻ → n and n + U⁺ → p

absorption emission

It’s easy to see that the U⁻ and U⁺ particles are just each other’s anti-particle. When thinking about this, I can’t help remembering Feynman, when he enigmatically wrote – somewhere in his Strange Theory of Light and Matter – that an anti-particle might just be the same particle traveling back in time. In fact, the exchange here is supposed to happen within a time window that is so short it allows for the brief violation of the energy conservation principle.

Let’s be more precise and try to find the properties of that mysterious U-quantum. You’ll need to refresh what you know about operators to understand how substituting Yukawa’s de Broglie wavefunction in the complicated-looking differential equation (the wave equation) gives us the following relation between the energy and the momentum of our new particle:

mass 1

Now, it doesn’t take too many gimmicks to compare this against the relativistically correct energy-momentum relation:

energy-momentum relation

Combining both gives us the associated (rest) mass of the U-quantum:

rest mass

For a ≈ 2 fm, mU is about 100 MeV. Of course, it’s always good to check the dimensions and calculate stuff yourself. Note that the physical dimension of ħ/(a·c) is N·s²/m = kg (just think of the F = m·a formula). Also note that N·s²/m = kg = (N·m)·s²/m² = J/(m²/s²), so that’s the [E]/[c²] dimension. The calculation – and interpretation – is somewhat tricky though: if you do it, you’ll find that:

ħ/(a·c) ≈ (1.0545718×10⁻³⁴ N·m·s)/[(2×10⁻¹⁵ m)·(2.997924583×10⁸ m/s)] ≈ 0.176×10⁻²⁷ kg

Now, most physics handbooks continue that terrible habit of writing particle masses in eV, rather than using the correct eV/c² unit. So when they write that mU is about 100 MeV, they actually mean to say that it’s 100 MeV/c². In addition, the eV is not an SI unit. Hence, to get that number, we should first write 0.176×10⁻²⁷ kg as some value expressed in J/c², and then convert the joule (J) into electronvolt (eV). Let’s do that. First, note that c² ≈ 9×10¹⁶ m²/s², so 0.176×10⁻²⁷ kg ≈ 1.584×10⁻¹¹ J/c². Now we do the conversion from joule to electronvolt. We get: (1.584×10⁻¹¹ J/c²)·(6.24215×10¹⁸ eV/J) ≈ 9.9×10⁷ eV/c² = 99 MeV/c². Bingo! So that was Yukawa’s prediction for the nuclear force quantum.
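The whole chain, as a quick sanity check in Python – the range a is the only assumption:

```python
# Yukawa's mass estimate m_U = hbar/(a*c), converted to MeV/c^2.
hbar = 1.0545718e-34      # J·s
c = 2.99792458e8          # m/s
a = 2e-15                 # assumed range of the nuclear force: 2 fm

m_kg = hbar / (a * c)                      # ≈ 1.76e-28 kg
E_joule = m_kg * c**2                      # rest energy in joule
E_MeV = E_joule / 1.602176634e-19 / 1e6    # ... in MeV

print(f"m_U ≈ {m_kg:.3e} kg ≈ {E_MeV:.1f} MeV/c^2")   # ≈ 98.7 MeV/c^2
```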

Of course, Yukawa was wrong but, as mentioned above, his ideas are now generally accepted. First note that the mass of the U-quantum is quite considerable: 100 MeV/c² is a bit more than 10% of the individual proton or neutron mass (about 938–939 MeV/c²). While the binding energy causes the mass of an atom to be less than the mass of its constituent parts (protons, neutrons and electrons), it’s quite remarkable that the deuterium atom – a hydrogen atom with an extra neutron – has a mass excess of about 13.1 MeV/c², and a binding energy with an equivalent mass of only 2.2 MeV/c². So… Well… There’s something there.

As said, this post only wanted to introduce some basic ideas. The current model of nuclear physics is represented by the animation below, which I took from the Wikipedia article on it. The U-quantum appears as the pion here – and it does not really turn the proton into a neutron and vice versa. Those particles are assumed to be stable. In contrast, it is the quarks that change color by exchanging gluons between each other. And we now look at the exchange particle – which we refer to as the pion – between the proton and the neutron as consisting of two quarks in its own right: a quark and an anti-quark. So… Yes… All weird. QCD is just a different world. We’ll explore it more in the coming days and/or weeks. 🙂

Nuclear_Force_anim_smaller

An alternative – and simpler – way of representing this exchange of a virtual particle (a neutral pion in this case) is obtained by drawing a so-called Feynman diagram:

Pn_scatter_pi0

OK. That’s it for today. More tomorrow. 🙂

Reality and perception

It’s quite easy to get lost in all of the math when talking quantum mechanics. In this post, I’d like to freewheel a bit. I’ll basically try to relate the wavefunction we’ve derived for the electron orbitals to the more speculative posts I wrote on how to interpret the wavefunction. So… Well… Let’s go. 🙂

If there is one thing you should remember from all of the stuff I wrote in my previous posts, then it’s that the wavefunction for an electron orbital – ψ(x, t), so that’s a complex-valued function in two variables (position and time) – can be written as the product of two functions in one variable:

ψ(x, t) = e^(i·(E/ħ)·t)·f(x)

In fact, we wrote f(x) as ψ(x), but I told you how confusing that is: the ψ(x) and ψ(x, t) functions are, obviously, very different. To be precise, the f(x) = ψ(x) function basically provides some envelope for the two-dimensional e^(i·θ) = cosθ + i·sinθ oscillation – as depicted below (θ = −(E/ħ)·t = ω·t with ω = −E/ħ).

Circle_cos_sin

When analyzing this animation – look at the movement of the green, red and blue dots respectively – one cannot miss the equivalence between this oscillation and the movement of a mass on a spring – as depicted below.

spiral_s

The e^(i·(E/ħ)·t) function just gives us two springs for the price of one. 🙂 Now, you may want to imagine some kind of elastic medium – Feynman’s famous drum-head, perhaps 🙂 – and you may also want to think of all of this in terms of superimposed waves but… Well… I’d need to review if that’s really relevant to what we’re discussing here, so I’d rather not make things too complicated and stick to basics.

First note that the amplitude of the two linear oscillations above is normalized: the maximum displacement of the object from equilibrium, in the positive or negative direction, which we may denote by x = ±A, is equal to one. Hence, the total energy – the sum of the kinetic and potential energy – is E = T + U = (1/2)·A2·m·ω2 = (1/2)·m·ω2. But so we have two springs and, therefore, the energy in this two-dimensional oscillation is equal to E = 2·(1/2)·m·ω2 = m·ω2.
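You can verify this numerically: take two perpendicular oscillators, 90° out of phase – the cosine and sine components – and add their kinetic and potential energies at every point in time. A minimal sketch (the values of m and ω are arbitrary illustrative choices, of course):

```python
import numpy as np

m, omega, A = 1.0, 2.0, 1.0                  # arbitrary mass/frequency, unit amplitude
t = np.linspace(0, 2 * np.pi / omega, 1000)  # one full cycle

# two perpendicular oscillations, 90 degrees out of phase
x, y = A * np.cos(omega * t), A * np.sin(omega * t)
vx, vy = -A * omega * np.sin(omega * t), A * omega * np.cos(omega * t)

# kinetic plus potential energy, summed over both 'springs'
E = 0.5 * m * (vx**2 + vy**2) + 0.5 * m * omega**2 * (x**2 + y**2)
print(np.allclose(E, m * omega**2 * A**2))   # True: constant, equal to m·ω2 for A = 1
```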

This formula is structurally similar to Einstein's E = m·c2 formula. Hence, one may want to assume that the energy of some particle (an electron, in our case, because we're discussing electron orbitals here) is just the energy of the two-dimensional motion of its mass. To put it differently, we might also want to think that the oscillating real and imaginary component of our wavefunction each store one half of the total energy of our particle.

However, the interpretation of this rather bold statement is not so straightforward. First, you should note that the ω in the E = m·ω2 formula is an angular velocity, as opposed to the c in the E = m·c2 formula, which is a linear velocity. Angular velocities are expressed in radians per second, while linear velocities are expressed in meter per second. However, while the radian measures an angle, we know it does so by measuring a length. Hence, if our distance unit is 1 m, an angle of 2π rad will correspond to a length of 2π meter, i.e. the circumference of the unit circle. So… Well… The two velocities may not be so different after all.

There are other questions here. In fact, the other questions are probably more relevant. First, we should note that the ω in the E = m·ω2 formula can take on any value. For a mechanical spring, ω will be a function of (1) the stiffness of the spring (which we usually denote by k, and which is typically measured in newton (N) per meter) and (2) the mass (m) on the spring. To be precise, we write: ω2 = k/m – or, what amounts to the same, ω = √(k/m). Both k and m are variables and, therefore, ω can really be anything. In contrast, we know that c is a constant: it equals 299,792,458 meter per second, to be precise. So we have this rather remarkable expression: c = √(E/m), and it is valid for any particle – our electron, or the proton at the center, or our hydrogen atom as a whole. It is also valid for more complicated atoms, of course. In fact, it is valid for any system.
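Unlike k/m for a spring, the E/m ratio is the same for everything. You can check that with independently tabulated (CODATA) values for the rest energy and the rest mass of, say, the electron and the proton – a quick sketch:

```python
import math

eV = 1.602176634e-19        # joule per electronvolt

# independently tabulated rest energies (J) and rest masses (kg)
E_electron = 0.51099895e6 * eV
m_electron = 9.1093837015e-31
E_proton = 938.27208816e6 * eV
m_proton = 1.67262192369e-27

for E, m in [(E_electron, m_electron), (E_proton, m_proton)]:
    print(math.sqrt(E / m))  # ~2.9979e8 m/s in both cases: c = √(E/m)
```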

Hence, we need to take another look at the energy concept that is used in our ψ(x, t) = e−i·(E/ħ)·t·f(x) wavefunction. You'll remember (if not, you should) that the E here is equal to En = −13.6 eV, −3.4 eV, −1.5 eV and so on, for n = 1, 2, 3, etc. Hence, this energy concept is rather particular. As Feynman puts it: "The energies are negative because we picked our zero point as the energy of an electron located far from the proton. When it is close to the proton, its energy is less, so somewhat below zero. The energy is lowest (most negative) for n = 1, and increases toward zero with increasing n."

Now, this is the one and only issue I have with the standard physics story. I mentioned it in one of my previous posts and, just for clarity, let me copy what I wrote at the time:

Feynman gives us a rather casual explanation [on choosing a zero point for measuring energy] in one of his very first Lectures on quantum mechanics, where he writes the following: "If we have a "condition" which is a mixture of two different states with different energies, then the amplitude for each of the two states will vary with time according to an equation like a·e−iωt, with ħ·ω = E = m·c2. Hence, we can write the amplitude for the two states, for example as:

e−i(E1/ħ)·t and e−i(E2/ħ)·t

And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn’t make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount—say, by the amount A—then the amplitudes in the two states would, from his point of view, be:

e−i(E1+A)·t/ħ and e−i(E2+A)·t/ħ

All of his amplitudes would be multiplied by the same factor e−i(A/ħ)·t, and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want. For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren't relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy Ms·c2, where Ms is the mass of all the separate pieces—the nucleus and the electrons—which is, of course, different from the mass of the atom. For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn't make any difference, provided we shift all the energies in a particular calculation by the same constant."

It’s a rather long quotation, but it’s important. The key phrase here is, obviously, the following: “For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom.” So that’s what he’s doing when solving Schrödinger’s equation. However, I should make the following point here: if we shift the origin of our energy scale, it does not make any difference in regard to the probabilities we calculate, but it obviously does make a difference in terms of our wavefunction itself. To be precise, its density in time will be very different. Hence, if we’d want to give the wavefunction some physical meaning – which is what I’ve been trying to do all along – it does make a huge difference. When we leave the rest mass of all of the pieces in our system out, we can no longer pretend we capture their energy.

So… Well… There you go. If we'd want to try to interpret our ψ(x, t) = e−i·(En/ħ)·t·f(x) function as a two-dimensional oscillation of the mass of our electron, the energy concept in it – so that's the E in it – should include all pieces. Most notably, it should also include the electron's rest energy, i.e. its energy when it is not in a bound state. This rest energy is equal to 0.511 MeV. […] Read this again: 0.511 mega-electronvolt (106 eV), so that's huge as compared to the tiny energy values we mentioned so far (−13.6 eV, −3.4 eV, −1.5 eV,…).

Of course, this gives us a rather phenomenal order of magnitude for the oscillation that we're looking at. Let's quickly calculate it. We need to convert to SI units, of course: 0.511 MeV is about 8.2×10−14 joule (J), and so the associated frequency is equal to ν = E/h = (8.2×10−14 J)/(6.626×10−34 J·s) ≈ 1.23559×1020 cycles per second. Now, I know such a number doesn't say all that much: just note it's the same order of magnitude as the frequency of gamma rays and… Well… No. I won't say more. You should try to think about this for yourself. [If you do, think – for starters – about the difference between bosons and fermions: matter-particles are fermions, and photons are bosons. Their nature is very different.]
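Here's the quick calculation, using somewhat more precise values for the constants; it also computes the angular frequency ω = 2π·ν, which we'll need in a moment:

```python
import math

h = 6.62607015e-34        # Planck's constant (J·s)
eV = 1.602176634e-19      # joule per electronvolt

E = 0.51099895e6 * eV     # electron rest energy, ~8.187e-14 J
nu = E / h                # ~1.23559e20 cycles per second
omega = 2 * math.pi * nu  # ~7.7634e20 rad per second
print(nu, omega)
```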

The corresponding angular frequency is just the same number but multiplied by 2π (one cycle corresponds to 2π radians and, hence, ω = 2π·ν = 7.76344×1020 rad per second). Now, if our green dot would be moving around the origin, along the circumference of our unit circle, then its horizontal and/or vertical velocity would, numerically, approach the same value. Think of it. We have this eiθ = ei·(E/ħ)·t = ei·ω·t = cos(ω·t) + i·sin(ω·t) function, with ω = E/ħ. So the cos(ω·t) captures the motion along the horizontal axis, while the sin(ω·t) function captures the motion along the vertical axis. Now, the velocity along the horizontal axis as a function of time is given by the following formula:

v(t) = d[x(t)]/dt = d[cos(ω·t)]/dt = −ω·sin(ω·t)

Likewise, the velocity along the vertical axis is given by v(t) = d[sin(ω·t)]/dt = ω·cos(ω·t). These are interesting formulas: they show the velocity (v) along one of the two axes is always less than the angular velocity (ω). To be precise, the velocity approaches – or, in the limit, is equal to – the angular velocity ω when ω·t is equal to 0, π/2, π or 3π/2. So… Well… 7.76344×1020 meter per second!? That's like 2.6 trillion times the speed of light. So that's not possible, of course!

That’s where the amplitude of our wavefunction comes in – our envelope function f(x): the green dot does not move along the unit circle. The circle is much tinier and, hence, the oscillation should not exceed the speed of light. In fact, I should probably try to prove it oscillates at the speed of light, thereby respecting Einstein’s universal formula:

c = √(E/m)

Written like this – rather than as you know it: E = m·c2 – this formula shows the speed of light is just a property of spacetime, just like the ω = √(k/m) formula (or the ω = √(1/LC) formula for a resonant AC circuit) shows that ω, the natural frequency of our oscillator, is a characteristic of the system.
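In fact, we can play with this a bit further – on an entirely speculative basis, of course. If the tangential velocity of our green dot is to be equal to c, then the radius of its circular motion must be a = c/ω = c·ħ/E = ħ/(m·c) – which happens to be the reduced Compton wavelength of the electron. Make of that what you will; here's the check:

```python
hbar = 1.054571817e-34   # reduced Planck constant (J·s)
c = 299792458.0          # speed of light (m/s)
m = 9.1093837015e-31     # electron rest mass (kg)

omega = m * c**2 / hbar  # E/ħ, ~7.76e20 rad/s
a = c / omega            # radius at which the tangential speed equals c
print(a, hbar / (m * c)) # both ~3.8616e-13 m: the reduced Compton wavelength
```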

Am I absolutely certain of what I am writing here? No. My level of understanding of physics is still that of an undergrad. But… Well… It all makes a lot of sense, doesn’t it? 🙂

Now, I said there were a few obvious questions, and so far I answered only one. The other obvious question is why energy would appear to us as mass in motion in two dimensions only. Why is it an oscillation in a plane? We might imagine a third spring, so to speak, moving in and out from us, right? Also, energy densities are measured per unit volume, right?

Now that‘s a clever question, and I must admit I can’t answer it right now. However, I do suspect it’s got to do with the fact that the wavefunction depends on the orientation of our reference frame. If we rotate it, it changes. So it’s like we’ve lost one degree of freedom already, so only two are left. Or think of the third direction as the direction of propagation of the wave. 🙂 Also, we should re-read what we wrote about the Poynting vector for the matter wave, or what Feynman wrote about probability currents. Let me give you some appetite for that by noting that we can re-write joule per cubic meter (J/m3) as newton per square meter: J/m3 = N·m/m3 = N/m2. [Remember: the unit of energy is force times distance. In fact, looking at Einstein’s formula, I’d say it’s kg·m2/s2 (mass times a squared velocity), but that simplifies to the same: kg·m2/s2 = [N/(m/s2)]·m2/s2.]
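If you don't trust my unit gymnastics, you can let sympy do the bookkeeping. A small sketch – assuming you have sympy installed, of course:

```python
from sympy.physics.units import joule, meter, newton, convert_to

# an energy density, re-expressed as a force per unit area
print(convert_to(joule / meter**3, newton / meter**2))  # newton/meter**2
```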

I should probably also remind you that there is no three-dimensional equivalent of Euler's formula, and the way the kinetic and potential energy of those two oscillations works together is rather unique. Remember I illustrated it with the image of a V-2 engine in previous posts. There is no such thing as a V-3 engine. [Well… There actually is – but not with the third cylinder being positioned sideways.]

two-timer-576-px-photo-369911-s-original

But… Then… Well… Perhaps we should think of some weird combination of two V-2 engines. The illustration below shows the superposition of two one-dimensional waves – I think – one traveling east-west and back, and the other one traveling north-south and back. So, yes, we may want to think of Feynman's drum-head again – but combining two-dimensional waves – two waves that both have an imaginary as well as a real dimension.

dippArticle-14

Hmm… Not sure. If we go down this path, we'd need to add a third dimension – so we'd have a super-weird V-6 engine! As mentioned above, the wavefunction does depend on our reference frame: we're looking at stuff from a certain direction and, therefore, we can only see what goes up and down, and what goes left or right. We can't see what comes near and what goes away from us. Also think of the particularities involved in measuring angular momentum – or the magnetic moment of some particle. We're measuring that along one direction only! Hence, it's probably no use to imagine we're looking at three waves simultaneously!

In any case… I’ll let you think about all of this. I do feel I am on to something. I am convinced that my interpretation of the wavefunction as an energy propagation mechanism, or as energy itself – as a two-dimensional oscillation of mass – makes sense. 🙂

Of course, I haven't answered one key question here: what is mass? What is that green dot – in reality, that is? At this point, we can only waffle – probably best to just give its standard definition: mass is a measure of inertia. A resistance to acceleration or deceleration, or to changing direction. But that doesn't say much. I hate to say that – in many ways – all that I've learned so far has deepened the mystery, rather than solved it. The more we understand, the less we understand? But… Well… That's all for today, folks! Have fun working through it for yourself. 🙂

Post scriptum: I've simplified the wavefunction a bit. As I noted in my post on it, the complex exponential is actually equal to e−i·[(E/ħ)·t − m·φ], so we've got a phase shift because of m, the quantum number which denotes the z-component of the angular momentum. But that's a minor detail that shouldn't trouble or worry you here.

Re-visiting electron orbitals (III)

In my previous post, I mentioned that it was not so obvious (both from a physical as well as from a mathematical point of view) to write the wavefunction for electron orbitals – which we denoted as ψ(x, t), i.e. a function of two variables (or four: one time coordinate and three space coordinates) – as the product of two other functions in one variable only.

[…] OK. The above sentence is difficult to read. Let me write in math. 🙂 It is not so obvious to write ψ(x, t) as:

ψ(x, t) = e−i·(E/ħ)·t·ψ(x)

As I mentioned before, the physicists’ use of the same symbol (ψ, psi) for both the ψ(x, t) and ψ(x) function is quite confusing – because the two functions are very different:

  • ψ(x, t) is a complex-valued function of two (real) variables: x and t. Or four, I should say, because x = (x, y, z) – but it’s probably easier to think of x as one vector variable – a vector-valued argument, so to speak. And then t is, of course, just a scalar variable. So… Well… A function of two variables: the position in space (x), and time (t).
  • In contrast, ψ(x) is a real-valued function of one (vector) variable only: x, so that’s the position in space only.

Now you should cry foul, of course: ψ(x) is not necessarily real-valued. It may be complex-valued. You're right. You know the formula:

wavefunction

Note that the derivation of this formula involved a switch from Cartesian to polar coordinates, so from x = (x, y, z) to r = (r, θ, φ), and that the function is also a function of the two quantum numbers l and m now, i.e. the orbital angular momentum (l) and its z-component (m) respectively. In my previous post(s), I gave you the formulas for Yl,m(θ, φ) and Fl,m(r) respectively. Fl,m(r) was a real-valued function alright, but the Yl,m(θ, φ) function had that ei·m·φ factor in it. So… Yes. You're right: the Yl,m(θ, φ) function is real-valued if – and only if – m = 0, in which case ei·m·φ = 1. Let me copy the table from Feynman's treatment of the topic once again:

spherical harmonics 2

The Plm(cosθ) functions are the so-called (associated) Legendre polynomials, and the formula for these functions is rather horrible:

Legendre polynomial

Don't worry about it too much: just note the Plm(cosθ) is a real-valued function. The point is the following: the ψ(x, t) function is complex-valued because – and only because – we multiply a real-valued envelope function – which depends on position only – with e−i·(E/ħ)·t·ei·m·φ = e−i·[(E/ħ)·t − m·φ].
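By the way, you don't have to take that table on faith: scipy can evaluate the spherical harmonics for you. A minimal sketch – note that scipy's sph_harm function calls the azimuthal angle theta and the polar angle phi, i.e. the reverse of the physics convention we're using here (newer scipy versions rename it sph_harm_y, with re-ordered arguments):

```python
import numpy as np
from scipy.special import sph_harm

theta, phi = 0.7, 1.1              # azimuthal and polar angle, scipy's convention
l = 2
for m in range(-l, l + 1):
    y = sph_harm(m, l, theta, phi) # Y_l,m at the given angles
    print(m, np.round(y, 4))       # the imaginary part vanishes only for m = 0
```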

[…]

Please read the above once again and – more importantly – think about it for a while. 🙂 You’ll have to agree with the following:

  • As mentioned in my previous post, the ei·m·φ factor just gives us a phase shift: just a re-set of our zero point for measuring time, so to speak, and the whole e−i·[(E/ħ)·t − m·φ] factor just disappears when we're calculating probabilities.
  • The envelope function gives us the basic amplitude – in the classical sense of the word: the maximum displacement from the zero value. And so it's that e−i·[(E/ħ)·t − m·φ] factor that ensures the whole expression somehow captures the energy of the oscillation.

Let's first look at the envelope function again. Let me copy the illustration for n = 5 and l = 2 from the Wikimedia Commons article. Note the symmetry planes:

  • Any plane containing the z-axis is a symmetry plane – like a mirror in which we can reflect one half of the shape to get the other half. [Note that I am talking the shape only here. Forget about the colors for a while – as these reflect the complex phase of the wavefunction.]
  • Likewise, the plane containing both the x– and the y-axis is a symmetry plane as well.

n = 5

The first symmetry plane – or symmetry line, really (i.e. the z-axis) – should not surprise us, because the azimuthal angle φ is conspicuously absent in the formula for our envelope function if, as we are doing in this article here, we merge the ei·m·φ factor with the e−i·(E/ħ)·t, so it's just part and parcel of what the author of the illustrations above refers to as the 'complex phase' of our wavefunction. OK. Clear enough – I hope. 🙂 But why is the xy-plane a symmetry plane too? We need to look at that monstrous formula for the Plm(cosθ) function here: just note the cosθ argument in it is being squared before it's used in all of the other manipulation. Now, we know that cosθ = sin(π/2 − θ). So we can define some new angle – let's just call it α – which is measured in the way we're used to measuring angles, which is not from the z-axis but from the xy-plane. So we write: cosθ = sin(π/2 − θ) = sinα. The illustration below may or may not help you to see what we're doing here.

angle 2

So… To make a long story short, we can substitute the cosθ argument in the Plm(cosθ) function for sinα = sin(π/2 − θ). Now, if the xy-plane is a symmetry plane, then we must find the same value for Plm(sinα) and Plm[sin(−α)]. Now, that's not obvious, because sin(−α) = −sinα ≠ sinα. However, because the argument in that Plm(x) function is being squared before any other operation (like subtracting 1 and exponentiating the result), it is OK: [−sinα]2 = [sinα]2 = sin2α. […] OK, I am sure the geeks amongst my readers will be able to explain this more rigorously. In fact, I hope they'll have a look at it, because there's also that dl+m/dxl+m operator, and so you should check what happens with the minus sign there. 🙂
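For the geeks amongst my readers: the rigorous statement behind all of this is the parity identity Plm(−x) = (−1)l+m·Plm(x) – the dl+m/dxl+m operator contributes the extra sign – and the possible sign flip doesn't affect the shape, because probabilities involve the absolute square anyway. If you'd rather check numerically than prove it, scipy's lpmv function will do; a quick sketch:

```python
import numpy as np
from scipy.special import lpmv

x = 0.42                          # a stand-in for cos(θ) = sin(α)
for l in range(4):
    for m in range(l + 1):
        lhs = lpmv(m, l, -x)      # P_l^m(-x)
        rhs = (-1)**(l + m) * lpmv(m, l, x)
        assert np.isclose(lhs, rhs)
print("P_l^m(-x) = (-1)^(l+m)·P_l^m(x) checks out")
```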

[…] Well… By now, you’re probably totally lost, but the fact of the matter is that we’ve got a beautiful result here. Let me highlight the most significant results:

  • A definite energy state of a hydrogen atom (or of an electron orbiting around some nucleus, I should say) appears to us as some beautifully shaped orbital – an envelope function in three dimensions, really – which has the z-axis – i.e. the vertical axis – as a symmetry line and the xy-plane as a symmetry plane.
  • The e−i·[(E/ħ)·t − m·φ] factor gives us the oscillation within the envelope function. As such, it's this factor that, somehow, captures the energy of the oscillation.

It’s worth thinking about this. Look at the geometry of the situation again – as depicted below. We’re looking at the situation along the x-axis, in the direction of the origin, which is the nucleus of our atom.

spherical

The ei·m·φ factor just gives us a phase shift: just a re-set of our zero point for measuring time, so to speak. Interesting, weird – but probably less relevant than the e−i·(E/ħ)·t factor, which gives us the two-dimensional oscillation that captures the energy of the state.

Circle_cos_sin

Now, the obvious question is: the oscillation of what, exactly? I am not quite sure but – as I explained in my Deep Blue page – the real and imaginary part of our wavefunction are really like the electric and magnetic field vector of an oscillating electromagnetic field (think of electromagnetic radiation – if that makes it easier). Hence, just like the electric and magnetic field vector represent some rapidly changing force on a unit charge, the real and imaginary part of our wavefunction must also represent some rapidly changing force on… Well… I am not quite sure on what though. The unit charge is usually defined as the charge of a proton – rather than an electron – but then forces act on some mass, right? And the mass of a proton is hugely different from the mass of an electron. The same electric (or magnetic) force will, therefore, give a hugely different acceleration to both.

So… Well… My gut instinct tells me the real and imaginary part of our wavefunction just represent, somehow, a rapidly changing force on some unit of mass, but then I am not sure how to define that unit right now (it's probably not the kilogram!).

Now, there is another thing we should note here: we're actually sort of de-constructing a rotation (look at the illustration above once again) into two linearly oscillating vectors – one along the z-axis and the other along the y-axis. Hence, in essence, we're actually talking about something that's spinning. In other words, we're actually talking some torque around the x-axis. In what direction? I think that shouldn't matter – that we can write E or −E, in other words, but… Well… I need to explore this further – as should you! 🙂

Let me just add one more note on the ei·m·φ factor. It sort of defines the geometry of the complex phase itself. Look at the illustration below. Click on it to enlarge it if necessary – or, better still, visit the magnificent Wikimedia Commons article from which I get these illustrations. These are the orbitals for n = 4 and l = 3. Look at the red hues in particular – or the blue – whatever: focus on one color only, and see how – for m = ±1, we've got one appearance of that color only. For m = ±2, the same color appears at two ends of the 'tubes' – or tori (plural of torus), I should say – just to sound more professional. 🙂 For m = ±3, the torus consists of three parts – or, in mathematical terms, we'd say the order of its rotational symmetry is equal to 3. Check that Wikimedia Commons article for higher values of n and l: the shapes become very convoluted, but the observation holds. 🙂

l = 3

Have fun thinking all of this through for yourself – and please do look at those symmetries in particular. 🙂

Post scriptum: You should do some thinking on whether or not these m = ±1, ±2,…, ±l orbitals are really different. As I mentioned above, a phase difference is just what it is: a re-set of the t = 0 point. Nothing more, nothing less. So… Well… As far as I am concerned, that's not a real difference, is it? 🙂 As with other stuff, I'll let you think about this for yourself.

Some more on symmetries…

In our previous post, we talked a lot about symmetries in space – in a rather playful way. Let’s try to take it further here by doing some more thinking on symmetries in spacetime. This post will pick up some older stuff – from my posts on states and the related quantum math in November 2015, for example – but that shouldn’t trouble you too much. On the contrary, I actually hope to tie up some loose ends here.

Let's first review some obvious ideas. Think about the direction of time. On a time axis, time goes from left to right. It will usually be measured from some zero point – like when we started our experiment or something 🙂 – to some +t point, but we may also think of some point in time before our zero point, so the minus (−t) points – the left side of the axis – make sense as well. So the direction of time is clear and intuitive. Now, what does it mean to reverse the direction of time? We need to distinguish two things here: the convention, and… Well… Reality. If we would suddenly decide to reverse the direction in which we measure time, then that's just another convention. We don't change reality: trees and kids would still grow the way they always did. 🙂 We would just have to change the numbers on our clocks or, alternatively, the direction of rotation of the hand(s) of our clock, as shown below. [I only showed the hour hand because… Well… I don't want to complicate things by introducing two time units. But adding the minute hand doesn't make any difference.]

clock problem

Now, imagine you're the dictator who decided to change our time measuring convention. How would you go about it? Would you change the numbers on the clock or the direction of rotation? Personally, I'd be in favor of changing the direction of rotation. Why? Well… First, we wouldn't have to change expressions such as: "If you are looking north right now, then west is in the 9 o'clock direction, so go there." 🙂 More importantly, it would align our clocks with the way we're measuring angles. On the other hand, it would not align our clocks with the way the argument (θ) of our elementary wavefunction ψ = a·eiθ = a·e−i·(E·t – p·x)/ħ is measured, because that's… Well… Clockwise.

So… What are the implications here? We would need to change t for −t in our wavefunction as well, right? Yep. Good point. So that's another convention that would change: we should write our elementary wavefunction now as ψ = a·ei·(p·x – E·t)/ħ. So we would have to re-define θ as θ = –E·t + p·x = p·x – E·t. So… Well… Done!

So… Well… What's next? Nothing. Note that we're not changing reality here. We're just adapting our formulas to a new dictatorial convention according to which we should count time from positive to negative – like 2, 1, 0, −1, −2 etcetera, as shown below. Fortunately, we can fix all of our laws and formulas in physics by swapping t for −t. So that's great. No sweat.

time reversal

Is that all? Yes. We don’t need to do anything else. We’ll still measure the argument of our wavefunction as an angle, so that’s… Well… After changing our convention, it’s now clockwise. 🙂 Whatever you want to call it: it’s still the same direction. Our dictator can’t change physical reality 🙂

Hmm… But so we are obviously interested in changing physical reality. I mean… Anyone can become a dictator, right? In contrast, we – enlightened scientists – want to really change the world, don’t we? 🙂 So what’s a time reversal in reality? Well… I don’t know… You tell me. 🙂 We may imagine some movie being played backwards, or trees and kids shrinking instead of growing, or some bird flying backwards – and I am not talking the hummingbird here. 🙂

Hey! The latter illustration – that bird flying backwards – is probably the better one: if we reverse the direction of time – in reality, that is – then we should also reverse all directions in space. But… Well… What does that mean, really? We need to think in terms of force fields here. A stone that'd be falling must now go back up. Two opposite charges that were going towards each other, should now move away from each other. But… My God! Such a world cannot exist, can it?

No. It cannot. And we don’t need to invoke the second law of thermodynamics for that. 🙂 None of what happens in a movie that’s played backwards makes sense: a heavy stone does not suddenly fly up and decelerate upwards. So it is not like the anti-matter world we described in our previous post. No. We can effectively imagine some world in which all charges have been replaced by their opposite: we’d have positive electrons (positrons) around negatively charged nuclei consisting of antiprotons and antineutrons and, somehow, negative masses. But Coulomb’s law would still tell us two opposite charges – q1 and –q2 , for example – don’t repel but attract each other, with a force that’s proportional to the product of their charges, i.e. q1·(-q2) = –q1·q2. Likewise, Newton’s law of gravitation would still tell us that two masses m1 and m2 – negative or positive – will attract each other with a force that’s proportional to the product of their masses, i.e. m1·m= (-m1)·(-m2). If you’d make a movie in the antimatter world, it would look just like any other movie. It would definitely not look like a movie being played backwards.

In fact, the latter formula – m1·m= (-m1)·(-m2) – tells us why: we’re not changing anything by putting a minus sign in front of all of our variables, which are time (t), position (x), mass (m) and charge (q). [Did I forget one? I don’t think so.] Hence, the famous CPT Theorem – which tells us that a world in which (1) time is reversed, (2) all charges have been conjugated (i.e. all particles have been replaced by their antiparticles), and (3) all spatial coordinates now have the opposite sign, is entirely possible (because it would obey the same Laws of Nature that we, in our world, have discovered over the past few hundred years) – is actually nothing but a tautology. Now, I mean that literally: a tautology is a statement that is true by necessity or by virtue of its logical form. Well… That’s the case here: if we flip the signs of all of our variables, we basically just agreed to count or measure everything from positive to negative. That’s it. Full stop. Such exotic convention is… Well… Exotic, but it cannot change the real world. Full stop.

Of course, this leaves the more intriguing questions entirely open. Partial symmetries. Like time reversal only. 🙂 Or charge conjugation only. 🙂 So let’s think about that.

We know that the world that we see in a mirror must be made of anti-matter but, apart from that particularity, that world makes sense: if we drop a stone in front of the mirror, the stone in the mirror will drop down too. Two like charges will be seen as repelling each other in the mirror too, and concepts such as kinetic or potential energy look just the same. So time just seems to tick away in both worlds – no time reversal here! – and… Well… We’ve got two CP-symmetrical worlds here, don’t we? We only flipped the sign of the coordinate frame and of the charges. Both are possible, right? And what’s possible must exist, right? Well… Maybe. That’s the next step. Let’s first see if both are possible. 🙂

Now, when you've read my previous post, you'll note that I did not flip the z-coordinate when reflecting my world in the mirror. That's true. But… Well… That's entirely beside the point. We could flip the z-axis too and so then we'd have a full parity inversion. [Or parity transformation – sounds more serious, doesn't it? But it's only a simple inversion, really.] It really doesn't matter. The point is: axial vectors have the opposite sign in the mirror world, and so it's not only about whether or not an antimatter world is possible (it should be, right?): it's about whether or not the sign reversal of all of those axial vectors makes sense in each and every situation. The illustration below, for example, shows how a left-handed neutrino should be a right-handed antineutrino in the mirror world.

right-handed antineutrino

I hope you understand the left- versus right-handed thing. Think, for example, of what the left-circularly polarized wavefunction below would look like in the mirror. Just apply the customary right-hand rule to determine the direction of the angular momentum vector. You'll agree it will be right-circularly polarized in the mirror, right? That's why we need the charge conjugation: think of the magnetic moment of a circulating charge! So… Well… I can't dwell on this too much but – if Maxwell's equations are to hold – then that world in the mirror must be made of antimatter.

animation

Now, we know that some processes – in our world – are not entirely CP-symmetrical. I wrote about this at length in previous posts, so I won’t dwell on these experiments here. The point is: these experiments – which are not easy to understand – lead physicists, philosophers, bloggers and what have you to solemnly state that the world in the mirror cannot really exist. And… Well… They’re right. However, I think their observations are beside the point. Literally.

So… Well… I would just like to make a very fundamental philosophical remark about all those discussions. My point is quite simple:

We should realize that the mirror world and our world are effectively separated by the mirror. So we should not be looking at stuff in the mirror from our perspective, because that perspective is well… Outside of the mirror. A different world. 🙂 In my humble opinion, the valid point of reference would be the observer in the mirror, like the photographer in the image below. Now note the following: if the real photographer, on this side of the mirror, would have a left-circularly polarized beam in front of him, then the imaginary photographer, on the other side of the mirror, would see the mirror image of this left-circularly polarized beam as a left-circularly polarized beam too. 🙂 I know that sounds complicated but re-read it a couple of times and – I hope – you’ll see the point. If you don’t… Well… Let me try to rephrase it: the point is that the observer in the mirror would be seeing our world – just the same laws and what have you, all makes sense! – but he would see our world in his world, so he’d see it in the mirror world. 🙂

Mirror

Capito? If you would actually be living in the mirror world, then all the things you would see in the mirror world would make perfectly sense. But you would be living in the mirror world. You would not look at it from outside, i.e. from the other side of the mirror. In short, I actually think the mirror world does exist – but in the mirror only. 🙂 […] I am, obviously, joking here. Let me be explicit: our world is our world, and I think those CP violations in Nature are telling us that it’s the only real world. The other worlds exist in our mind only – or in some mirror. 🙂

Post scriptum: I know the Die Hard philosophers among you will now have an immediate rapid-backfire question. [Hey – I just invented a new word, didn’t I? A rapid-backfire question. Neat.] How would the photographer in the mirror look at our world? The answer to that question is simple: symmetry! He (or she) would think it’s a mirror world only. His world and our world would be separated by the same mirror. So… What are the implications here?

Well… That mirror is only a piece of glass with a coating. We made it. Or… Well… Some man-made company made it. 🙂 So… Well… If you think that observer in the mirror – I am talking about that image of the photographer in that picture above now – would actually exist, then… Well… Then you need to be aware of the consequences: the corollary of his existence is that you do not exist. 🙂 And… Well… No. I won't say more. If you're reading stuff like this, then you're smart enough to figure it out for yourself. We live in one world. Quantum mechanics tells us the perspective on that world matters very much – amplitudes are different in different reference frames – but… Well… Quantum mechanics – or physics in general – does not give us many degrees of freedom. None, really. It basically tells us the world we live in is the only world that's possible, really. But… Then… Well… That's just because physics… Well… When everything is said and done, it's just mankind's drive to ensure our perception of the Universe lines up with… Well… What we perceive it to be. 😦 or 🙂 Whatever your appreciation of it. Those Great Minds did an incredible job. 🙂

Symmetries and transformations

In my previous post, I promised to do something on symmetries. Something simple but then… Well… You know how it goes: one question always triggers another one. 🙂

Look at the situation in the illustration on the left below. We suppose we have something real going on there: something is moving from left to right (so that’s in the 3 o’clock direction), and then something else is going around clockwise (so that’s not the direction in which we measure angles (which also include the argument θ of our wavefunction), because that’s always counter-clockwise, as I note at the bottom of the illustration). To be precise, we should note that the angular momentum here is all about the y-axis, so the angular momentum vector L points in the (positive) y-direction. We get that direction from the familiar right-hand rule, which is illustrated in the top right corner.

mirror

Now, suppose someone else is looking at this from the other side – or just think of yourself going around a full 180° to look at the same thing from the back side. You'll agree you'll see the same thing going from right to left (so that's in the 9 o'clock direction now – or, if our clock is transparent, the 3 o'clock direction of our reversed clock). Likewise, the thing that's turning around will now go counter-clockwise.

Note that both observers – so that’s me and that other person (or myself after my walk around this whole thing) – use a regular coordinate system, which implies the following:

  1. We've got regular 90° angles between our coordinate axes.
  2. Our x-axis goes from negative to positive from left to right, and our y-axis does the same going away from us.
  3. We also both define our z-axis using, once again, the ubiquitous right-hand rule, so our z-axis points upwards.

So we have two observers looking at the same reality – some linear as well as some angular momentum – but from opposite sides. And so we’ve got a reversal of both the linear as well as the angular momentum. Not in reality, of course, because we’re looking at the same thing. But we measure it differently. Indeed, if we use the subscripts 1 and 2 to denote the measurements in the two coordinate systems, we find that p2 = –p1. Likewise, we also find that L2 = –L1.

Now, when you see these two equations, you will probably not worry about that p2 = –p1 equation – although you should, because it’s actually only valid for this rather particular orientation of the linear momentum (I’ll come back to that in a moment). It’s the L2 = –L1 equation which should surprise you most. Why? Because you’ve always been told there is a big difference between (1) real vectors (aka polar vectors), like the momentum p, or the velocity v, or the force F, and (2) pseudo-vectors (aka axial vectors), like the angular momentum L. You may also remember how to distinguish between the two: if you change the direction of the axes of your reference frame, polar vectors will change sign too, as opposed to axial vectors: axial vectors do not swap sign if we swap the coordinate signs.

So… Well… How does that work here? In fact, what we should ask ourselves is: why does that not work here? Well… It's simple, really. We're not changing the direction of the axes here. Or… Well… Let me be more precise: we're only swapping the sign of the x– and y-axis. We did not flip the z-axis. So we turned things around, but we didn't turn them upside down. It makes a huge difference. Note, for example, that if all of the linear momentum would have been in the z-direction only (so our p vector would have been pointing in the z-direction, and in the z-direction only), it would not swap sign. The illustration below shows what really happens with the coordinates of some vector when we're doing a rotation. It's, effectively, only the x– and y-coordinates that flip sign.

reflection symmetry

It's easy to see that this rotation about the z-axis here preserves our deep sense of 'up' versus 'down', but that it swaps 'left' for 'right', and vice versa. Note that this is not a reflection. We are not looking at some mirror world here. The difference between a reflection (a mirror world) and a rotation (the real world seen from another angle) is illustrated below. It's quite confusing but, unlike what you might think, a reflection does not swap left for right. It does turn things inside out, but that's what a rotation does as well: near becomes far, and far becomes near.

difference between reflection and rotation

Before we move on, let me say a few things about the mirror world and, more in particular, about the obvious question: could it possibly exist? Well… What do you think? Your first reaction might well be: "Of course! What nonsense question! We just walk around whatever it is that we're seeing – or, what amounts to the same, we just turn it around – and there it is: that's the mirror world, right? So of course it exists!" Well… No. That's not the mirror world. That's just the real world seen from the opposite direction, and that world… Well… That's just the real world. 🙂 The mirror world is, literally, the world in the mirror – like the photographer in the illustration below. We don't swap left for right here: some object going from left to right in the real world is still going from left to right in the mirror world!

Mirror

Of course, you may now involve the photographer in the picture above and observe – note that you're now an observer of the observer of the mirror 🙂 – that, if he would move his left arm in the real world, the photographer in the mirror world would be moving his right arm. But… Well… No. You're saying that because you're now imagining that you're the photographer in the mirror world yourself now, who's looking at the real world from inside, so to speak. So you've rotated the perspective in your mind and you're saying it's his right arm because you imagine yourself to be the photographer in the mirror. We usually do that because… Well… Because we look in a mirror every day, right? So we're used to seeing ourselves that way and we always think it's us we're seeing. 🙂 However, the illustration above is correct: the mirror world only swaps near for far, and far for near, so it only swaps the sign of the y-axis.

So the question is relevant: could the mirror world actually exist? What we’re really asking here is the following: can we swap the sign of one coordinate axis only in all of our physical laws and equations and… Well… Do we then still get the same laws and equations? Do we get the same Universe – because that’s what those laws and equations describe? If so, our mirror world can exist. If not, then not.

Now, I've done a post on that, in which I explain that the mirror world can only exist if it would consist of anti-matter. So if our real world and the mirror world would actually meet, they would annihilate each other. 🙂 But that post is quite technical. Here I want to keep it very simple: I basically only want to show what the rotation operation implies for the wavefunction. There is no doubt whatsoever that the rotated world exists. In fact, the rotated world is just our world. We walk around some object, or we turn it around, but so we're still watching the same object. So we're not thinking about the mirror world here. We just want to know what things look like when adopting some other perspective.

So, back to the starting point: we just have two observers here, who look at the same thing but from opposite directions. Mathematically, this corresponds to a rotation of our reference frame about the z-axis of 180°. Let me spell out – somewhat more precisely – what happens to the linear and angular momentum here:

  1. The linear momentum in the xy-plane swaps direction.
  2. The angular momentum about the y-axis, as well as about the x-axis, swaps direction too.

Note that the illustration only shows angular momentum about the y-axis, but you can easily verify the statement about the angular momentum about the x-axis. In fact, the angular momentum about any line in the xy-plane will swap direction.

Of course, the x-, y-, z-axes in the other reference frame are different than mine, and so I should give them a subscript, right? Or, at the very least, write something like x’, y’, z’, so we have a primed reference frame here, right? Well… Maybe. Maybe not. Think about it. 🙂 A coordinate system is just a mathematical thing… Only the momentum is real… Linear or angular… Equally real… And then Nature doesn’t care about our position, does it? So… Well… No subscript needed, right? Or… Well… What do you think? 🙂

It's just funny, isn't it? It looks like we can't really separate reality and perception here. Indeed, note how our p2 = –p1 and L2 = –L1 equations already mix reality with how we perceive it. It's the same thing in reality but the coordinates of p1 and L1 are positive, while the coordinates of p2 and L2 are negative. To be precise, these coordinates will look like this:

  1. p1 = (p, 0, 0) and L1 = (0, L, 0)
  2. p2 = (−p, 0, 0) and L2 = (0, −L, 0)

So are they two different things or are they not? 🙂 Think about it. I’ll move on in the meanwhile. 🙂

Now, you probably know a thing or two about parity symmetry, or P-symmetry: if we flip the sign of all coordinates, then we'll still find the same physical laws, like F = m·a and what have you. [It works for all physical laws, including quantum-mechanical laws – except those involving the weak force (read: radioactive decay processes).] But so here we are talking rotational symmetry. That's not the same as P-symmetry. If we flip the signs of all coordinates, we're also swapping 'up' for 'down', so we're not only turning around, but we're also getting upside down. The difference between rotational symmetry and P-symmetry is shown below.

up and down swap

As mentioned, we’ve talked about P-symmetry at length in other posts, and you can easily google a lot more on that. The question we want to examine here – just as a fun exercise – is the following:

How does that rotational symmetry work for a wavefunction?

The very first illustration in this post gave you the functional form of the elementary wavefunction eiθ = e−i·(E·t − p·x)/ħ. We should actually use a bold type x = (x, y, z) in this formula but we'll assume we're talking something similar to that p vector: something moving in the x-direction only – or in the xy-plane only. The z-component doesn't change. Now, you know that we can reduce all actual wavefunctions to some linear combination of such elementary wavefunctions by doing a Fourier decomposition, so it's fine to look at the elementary wavefunction only – so we don't make it too complicated here. Now think of the following.

The energy E in the eiθ = e−i·(E·t – p·x)/ħ function is a scalar, so it doesn't have any direction and we'll measure it the same from both sides – as kinetic or potential energy or, more likely, by adding both. But… Well… Writing e−i·(E·t – p·x)/ħ or e−i·(E·t + p·x)/ħ is not the same, right? No, it's not. However, think of it as follows: we won't be changing the direction of time, right? So it's OK to not change the sign of E. In fact, we can re-write the two expressions as follows:

  1. e−i·(E·t – p·x)/ħ = e−i·(E/ħ)·t·ei·(p/ħ)·x
  2. e−i·(E·t + p·x)/ħ = e−i·(E/ħ)·t·e−i·(p/ħ)·x

The first wavefunction describes some particle going in the positive x-direction, while the second wavefunction describes some particle going in the negative x-direction, so… Well… That’s exactly what we see in those two reference frames, so there is no issue whatsoever. 🙂 It’s just… Well… I just wanted to show the wavefunction does look different too when looking at something from another angle.
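To make this tangible: the second wavefunction is just the first one with x replaced by −x – which is exactly what our 180° rotation about the z-axis does to the x-coordinate. A trivial numerical restatement, in natural units (ħ = 1) and with arbitrary illustrative values for E and p:

```python
import numpy as np

hbar = 1.0                    # natural units, for illustration only
E, p = 2.0, 0.5               # arbitrary energy and momentum values
x, t = np.linspace(-5, 5, 11), 0.3

psi_1 = np.exp(-1j * (E * t - p * x) / hbar)  # particle moving in +x
psi_2 = np.exp(-1j * (E * t + p * x) / hbar)  # particle moving in -x

# rotating the frame by 180° about the z-axis sends x to -x:
assert np.allclose(psi_2, np.exp(-1j * (E * t - p * (-x)) / hbar))
```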

So why am I writing about this? Why am I being fussy? Well… It's just to show you that those transformations are actually quite natural – just as natural as it is to see some particle go in one direction in one reference frame and see it go in the other in the other. 🙂 It also illustrates another point that I've been trying to make: the wavefunction is something real. It's not just a figment of our imagination. The real and imaginary part of our wavefunction have a precise geometrical meaning – and I explained what that might be in my more speculative posts, which I've brought together in the Deep Blue page of this blog. But… Well… I can't dwell on that here because… Well… You should read that page. 🙂

The point to note is the following: we do have different wavefunctions in different reference frames, but these wavefunctions describe the same physical reality, and they also do respect the symmetries we’d expect them to respect, except… Well… The laws describing the weak force don’t, but I wrote about that a very long time ago, and it was not in the context of trying to explain the relatively simple basic laws of quantum mechanics. 🙂 If you’re interested, you should check out my post(s) on that or, else, just google a bit. It’s really exciting stuff, but not something that will help you much to understand the basics, which is what we’re trying to do here. 🙂

The second point to note is that those transformations of the wavefunction – or of quantum-mechanical states – which we go through when rotating our reference frame, for example – are really quite natural. There’s nothing special about them. We had such transformations in classical mechanics too! But… Well… Yes, I admit they do look complicated. But then that’s why you’re so fascinated and why you’re reading this blog, isn’t it? 🙂

Post scriptum: It's probably useful to be somewhat more precise on all of this. You'll remember we visualized the wavefunction in some of our posts using the animation below. It uses a left-handed coordinate system, which is rather unusual but then it may have been made with software which uses a left-handed coordinate system (like RenderMan, for example). Now the rotating arrow at the center moves with time and gives us the polarization of our wave. Applying our customary right-hand rule, you can see this beam is left-circularly polarized. [I know… It's quite confusing, but just go through the motions here and be consistent.]

Animation

Now, you know that ei·(p/ħ)·x and e−i·(p/ħ)·x are each other's complex conjugate:

  1. ei·k·x = cos(k·x) + i·sin(k·x)
  2. e−i·k·x = cos(−k·x) + i·sin(−k·x) = cos(k·x) − i·sin(k·x)

Their real part – the cosine function – is the same, but the imaginary part – the sine function – has the opposite sign. So, assuming the direction of propagation is, effectively, the x-direction, then what’s the polarization of the mirror image? Well… The wave will now go from right to left, and its polarization… Hmm… Well… What? 

Well… If you can't figure it out, then just forget about those signs and just imagine you're effectively looking at the same thing from the backside. In fact, if you have a laptop, you can push the screen down and go around your computer. 🙂 There's no shame in that. In fact, I did that just to make sure I am not talking nonsense here. 🙂 If you look at this beam from the backside, you'll effectively see it go from right to left – instead of the left-to-right direction you see on this side. And as for its polarization… Well… The angular momentum vector swaps direction too but the beam is still left-circularly polarized. So… Well… That's consistent with what we wrote above. 🙂 The real world is real, and axial vectors are as real as polar vectors. This real beam will only appear to be right-circularly polarized in a mirror. Now, as mentioned above, that mirror world is not our world. If it would exist – in some other Universe – then it would be made up of anti-matter. 🙂

So… Well… Might it actually exist? Is there some other world made of anti-matter out there? I don't know. We need to think about that reversal of 'near' and 'far' too: as mentioned, a mirror turns things inside out, so to speak. So what's the implication of that? When we walk around something – or do a rotation – then the reversal between 'near' and 'far' is something physical: we go near to what was far, and we go away from what was near. But so how would we get into our mirror world, so to speak? We may say that this anti-matter world in the mirror is entirely possible, but then how would we get there? We'd need to turn ourselves, literally, inside out – like sort of shrink to the zero point and then come back out of it to do that parity inversion along our line of sight. So… Well… I don't see that happen, which is why I am a fan of the One World hypothesis. 🙂 So I think the mirror world is just what it is: the mirror world. Nothing real. But… Then… Well… What do you think? 🙂

Quantum-mechanical magnitudes

As I was writing about those rotations in my previous post (on electron orbitals), I suddenly felt I should do some more thinking on (1) symmetries and (2) the concept of quantum-mechanical magnitudes of vectors. I’ll write about the first topic (symmetries) in some other post. Let’s first tackle the latter concept. Oh… And for those I frightened with my last post… Well… This should really be an easy read. More of a short philosophical reflection about quantum mechanics. Not a technical thing. Something intuitive. At least I hope it will come out that way. 🙂

First, you should note that the fundamental idea that quantities like energy, or momentum, may be quantized is a very natural one. In fact, it's what the early Greek philosophers thought about Nature. Of course, while the idea of quantization comes naturally to us (I think it's easier to understand than, say, the idea of infinity), it is, perhaps, not so easy to deal with mathematically. Indeed, most mathematical ideas – like functions and derivatives – are based on what I'll loosely refer to as continuum theory. So… Yes, quantization does yield some surprising results, like that formula for the magnitude of some vector J:

Magnitude formulas

The J·J in the classical formula above is, of course, the equally classical vector dot product, and the formula itself is nothing but Pythagoras' Theorem in three dimensions. Easy. I just put a + sign in front of the square roots so as to remind you we actually always have two square roots and that we should take the positive one. 🙂

I will now show you how we get that quantum-mechanical formula. The logic behind it is fairly straightforward but, at the same time… Well… You'll see. 🙂 We know that a quantum-mechanical variable – like the spin of an electron, or the angular momentum of an atom – is not continuous but discrete: it will have some value m = j, j−1, j−2, …, −(j−2), −(j−1), −j. Our j here is the maximum value of the magnitude of the component of our vector (J) in the direction of measurement, which – as you know – is usually written as Jz. Why? Because we will usually choose our coordinate system such that our z-axis is aligned accordingly. 🙂 Those values j, j−1, j−2, …, −(j−2), −(j−1), −j are separated by one unit. That unit would be Planck's quantum of action ħ ≈ 1.0545718×10−34 N·m·s – by the way, isn't it amazing we can actually measure such tiny stuff in some experiment? 🙂 – if J would happen to be the angular momentum, but the approach here is more general – action can express itself in various ways 🙂 – so the unit doesn't matter: it's just the unit, so that's just one. 🙂 It's easy to see that this separation implies j must be some integer or half-integer. [Of course, now you might think the values of a series like 2.4, 1.4, 0.4, −0.6, −1.6 are also separated by one unit, but… Well… That would violate the most basic symmetry requirement so… Well… No. Our j has to be an integer or a half-integer. Please also note that the number of possible values for m is equal to 2j+1, as we'll use that in a moment.]

OK. You're familiar with this by now and so I should not repeat the obvious. To make things somewhat more real, let's assume j = 3/2, so m = 3/2, 1/2, −1/2 or −3/2. Now, we don't know anything about the system and, therefore, these four values are all equally likely. Now, you may not agree with this assumption but… Well… You'll have to agree that, at this point, you can't come up with anything else that would make sense, right? It's just like a classical situation: J might point in any direction, so we have to give all angles an equal probability. [In fact, I'll show you – in a minute or so – that you actually have a point here: we should think some more about this assumption – but so that's for later. I am asking you to just go along with this story as for now.]

So the expected value of Jz is equal to E[Jz] = (1/4)·(3/2)+(1/4)·(1/2)+(1/4)·(−1/2)+(1/4)·(−3/2) = 0. Nothing new here. We just multiply probabilities with all of the possible values to get an expected value. So we get zero here because our values are distributed symmetrically around the zero point. No surprise. Now, to calculate a magnitude, we don't need Jz but Jz2. In case you wonder, that's what this squaring business is all about: we're abstracting away from the direction and so we're going to square both positive as well as negative values to then add it all up and take a square root. Now, the expected value of Jz2 is equal to E[Jz2] = (1/4)·(3/2)2+(1/4)·(1/2)2+(1/4)·(−1/2)2+(1/4)·(−3/2)2 = 5/4 = 1.25. Some positive value.

You may note that it’s a bit larger than the average of the absolute value of our variable, which is equal to (|3/2| + |1/2| + |−1/2| + |−3/2|)/4 = 1, but that’s just because the squaring favors larger values. 🙂 Also note that, of course, we’d also get some positive value if Jz were a continuous variable over the [−3/2, +3/2] interval, but I’ll let you think about what positive value we’d get for E[Jz²] assuming Jz is uniformly distributed over the [−3/2, +3/2] interval, because that calculation is actually not as straightforward as it may seem at first. In any case, these considerations are not very relevant to our story here, so let’s move on.

Of course, our z-direction was random, and so we get the same thing for whatever direction. More in particular, we’ll also get it for the x- and y-directions: E[Jx²] = E[Jy²] = E[Jz²] = 5/4. Now, at this point it’s probably good to give you a more generalized formula for these quantities. I think you’ll easily agree to the following one:

magnitude squared formula

So now we can apply our classical J·J = Jx² + Jy² + Jz² formula to these quantities by calculating the expected value of J² = J·J, which is equal to:

E[J·J] = E[Jx²] + E[Jy²] + E[Jz²] = 3·E[Jx²] = 3·E[Jy²] = 3·E[Jz²]

You should note we’re making use of the E[X + Y] = E[X] + E[Y] property here: the expected value of the sum of two variables is equal to the sum of the expected values of the variables, and you should also note this is true even if the individual variables happen to be correlated – which might or might not be the case. [What do you think is the case here?]

For j = 3/2, it’s easy to see we get E[J·J] = 3·E[Jx²] = 3·5/4 = 15/4 = (3/2)·(3/2+1) = j·(j+1). We should now generalize this formula for other values of j, which is not so easy… Hmm… It obviously involves some formula for a series, and I am not good at that… So… Well… I just checked if it was true for j = 1/2 and j = 1 (please check that at least for yourself too!) and then I just believe the authorities on this for all other values of j. 🙂
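In case you don’t want to just believe the authorities either: the series formula we need here is the well-known identity Σm² = j·(j+1)·(2j+1)/3, with the sum running over the 2j+1 values m = −j, …, +j. Divide by 2j+1 and you get E[Jz²] = j·(j+1)/3 and, hence, E[J·J] = j·(j+1) indeed. Here’s a quick check in Python – just a sketch of mine, using exact fractions:

```python
from fractions import Fraction

def expected_Jz_squared(j):
    """E[Jz^2] over the 2j+1 equally likely values m = j, j-1, ..., -j."""
    two_j = int(2 * j)
    ms = [j - k for k in range(two_j + 1)]          # j, j-1, ..., -j
    return sum(m * m for m in ms) / (two_j + 1)

for two_j in range(1, 9):                           # j = 1/2, 1, 3/2, ..., 4
    j = Fraction(two_j, 2)
    print(j, 3 * expected_Jz_squared(j), j * (j + 1))   # the last two always match
```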

Now, in a classical situation, we know that the J·J product will be the same whatever direction J happens to have, and so its expected value will be equal to its constant value J·J. So we can write: E[J·J] = J·J. So… Well… That’s why we write what we wrote above:

Magnitude formulas

Makes sense, no? E[J·J] = E[Jx² + Jy² + Jz²] = E[Jx²] + E[Jy²] + E[Jz²] = j·(j+1) = J·J = J², so J = +√[j(j+1)], right?

Hold your horses, man! Think! What are we doing here, really? We didn’t calculate all that much above. We only found that E[Jx²] + E[Jy²] + E[Jz²] = E[Jx² + Jy² + Jz²] = j·(j+1). So what? Well… That’s not a proof that the J vector actually exists.

Huh? 

Yes. That J vector might just be some theoretical concept. When everything is said and done, all we’ve been doing – or at least, all we imagined we were doing – is those repeated measurements of Jx, Jy and Jz here – or whatever subscript you’d want to use, like Jθ,φ, for example (the example is not random, of course) – and so, of course, it’s only natural that we assume these things are the magnitude of the component (in the direction of measurement) of some real vector that is out there, but then… Well… Who knows? Think of what we wrote about the angular momentum in our previous post on electron orbitals. We imagine – or do like to think – that there’s some angular momentum vector J out there, which we think of as being “cocked” at some angle, so its projection onto the z-axis gives us those discrete values for m which, for j = 2, for example, are equal to 0, 1 or 2 (and −1 and −2, of course) – like in the illustration below. 🙂

cocked angle 2

But… Well… Note those weird angles: we get something close to 24.1° and then another value close to 54.7°. No symmetry here. 😦 The table below gives some more values for larger j. They’re easy to calculate – it’s, once again, just Pythagoras’ Theorem – but… Well… No symmetries here. Just weird values. [I am not saying the formula for these angles is not straightforward. That formula is easy enough: θ = sin⁻¹(m/√[j(j+1)]). It’s just… Well… No symmetry. You’ll see why that matters in a moment.]

Capture

I skipped the half-integer values for j in the table above, so you might think they might make it easier to come up with some kind of sensible explanation for the angles. Well… No. They don’t. For example, for j = 1/2 and m = ±1/2, the angles are ±35.2644° – more or less, that is. 🙂 As you can see, these angles do not nicely cut up our circle in equal pieces, which triggers the obvious question: are these angles really equally likely? Equal angles do not correspond to equal distances on the z-axis (in case you don’t appreciate the point, look at the illustration below).

angles distance
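If you want to reproduce those ‘weird’ angles yourself, the following few lines of Python – a sketch of mine – print θ = sin⁻¹(m/√[j(j+1)]) for a few values of j:

```python
import math

def angles(j):
    """theta = arcsin(m/|J|), with |J| = sqrt(j(j+1)), for m = -j, ..., +j."""
    magnitude = math.sqrt(j * (j + 1))
    ms = [-j + k for k in range(int(round(2 * j)) + 1)]
    return [(m, math.degrees(math.asin(m / magnitude))) for m in ms]

for j in (0.5, 1, 2, 3):
    print("j =", j, [(m, round(theta, 1)) for m, theta in angles(j)])
```

For j = 2, you should get the 24.1° and 54.7° values mentioned above and, for j = 1/2, the ±35.3° angles.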

So… Well… Let me summarize the issue at hand as follows: the idea of the angle of the vector being randomly distributed is not compatible with the idea of those Jz values being equally spaced and equally likely. The latter idea – equally spaced and equally likely Jz values – relates to different possible states of the system being equally likely, so… Well… It’s just a different idea. 😦

Now there is another thing we should mention here. The maximum value of the z-component of our J vector is always smaller than that quantum-mechanical magnitude, and quite significantly so for small j, as shown in the table below. It is only for larger values of j that the ratio of the two starts to converge to 1. For example, for j = 25, it is about 1.02, so that’s only 2% off.

convergence

That’s why physicists tell us that, in quantum mechanics, the angular momentum is never “completely along the z-direction.” It is obvious that this actually challenges the idea of a very precise direction in quantum mechanics, but then that shouldn’t surprise us, should it? After all, isn’t this what the Uncertainty Principle is all about?

Different states, rather than different directions… And then Uncertainty because… Well… Because of discrete variables that won’t split in the middle. Hmm… 😦

Perhaps. Perhaps I should just accept all of this and go along with it… But… Well… I am really not satisfied here, despite Feynman’s assurance that that’s OK: “Understanding of these matters comes very slowly, if at all. Of course, one does get better able to know what is going to happen in a quantum-mechanical situation—if that is what understanding means—but one never gets a comfortable feeling that these quantum-mechanical rules are ‘natural’.”

I do want to get that comfortable feeling – on some sunny day, at least. 🙂 And so I’ll keep playing with this, until… Well… Until I give up. 🙂 In the meanwhile, if you’d feel you’ve got some better or some more intuitive explanation for all of this, please do let me know. I’d be very grateful to you. 🙂

Post scriptum: Of course, we would all want to believe that J somehow exists because… Well… We want to explain those states somehow, right? I, for one, am not happy with being told to just accept things and shut up. So let me add some remarks here. First, you may think that the narrative above should distinguish between polar and axial vectors. You’ll remember polar vectors are the real vectors, like a radius vector r, or a force F, or velocity or (linear) momentum. Axial vectors (also known as pseudo-vectors) are vectors like the angular momentum vector: we sort of construct them from… Well… From real vectors. The angular momentum L, for example, is the vector cross product of the radius vector r and the linear momentum vector p: we write L = r×p. In that sense, they’re a figment of our imagination. But then… What’s real and unreal? The magnitude of L, for example, does correspond to something real, doesn’t it? And its direction does give us the direction of circulation, right? You’re right. Hence, I think polar and axial vectors are both real – in whatever sense you’d want to define real. Their reality is just different, and that’s reflected in their mathematical behavior: if you change the direction of the axes of your reference frame, polar vectors will change sign too, as opposed to axial vectors: they don’t swap sign. They do something else, which I’ll explain in my next post, where I’ll be talking about symmetries.

But let us, for the sake of argument, assume whatever I wrote about those angles applies to axial vectors only. Let’s be even more specific, and say it applies to the angular momentum vector only. If that’s the case, we may want to think of a classical equivalent for the mentioned lack of a precise direction: free nutation. It’s a complicated thing – even more complicated than the phenomenon of precession, which we should be familiar with by now. Look at the illustration below (which I took from an article of a physics professor from Saint Petersburg), which shows both precession as well as nutation. Think of the movement of a spinning top when you release it: its axis will, at first, nutate around the axis of precession, before it settles into a more steady precession.

nutation

The nutation is caused by the gravitational force field, and the nutation movement usually dies out quickly because of dampening forces (read: friction). Now, we don’t think of gravitational fields when analyzing angular momentum in quantum mechanics, and we shouldn’t. But there is something else we may want to think of. There is also a phenomenon referred to as free nutation, i.e. a nutation that is not caused by an external force field. The Earth, for example, nutates slowly because of the gravitational pull from the Sun and the other planets – so that’s not a free nutation – but, in addition to this, there’s an even smaller wobble – which is an example of free nutation – because the Earth is not exactly spherical. In fact, the Great Mathematician, Leonhard Euler, had already predicted this, back in 1765, but it took another 125 years or so before an astronomer, Seth Chandler, could finally experimentally confirm and measure it. So they named this wobble the Chandler wobble (Euler already has too many things named after him). 🙂

Now I don’t have much backup here – none, actually 🙂 – but why wouldn’t we imagine our electron also sort of nutates freely because of… Well… Some symmetric asymmetry – something like the slightly elliptical shape of our Earth. 🙂 We may then effectively imagine the angular momentum vector as continually changing direction between a minimum and a maximum angle – something like what’s shown below, perhaps, between 0 and 40 degrees. Think of it as a rotation within a rotation, or an oscillation within an oscillation – or a standing wave within a standing wave. 🙂

wobbling

I am not sure if this approach would solve the problem of our angles and distances – the issue of whether we should think in terms of equally likely angles or equally likely distances along the z-axis, really – but… Well… I’ll let you play with this. Please do send me some feedback if you think you’ve found something. 🙂

Whatever your solution is, it is likely to involve the equipartition theorem and harmonics, right? Perhaps we can, indeed, imagine standing waves within standing waves, and then standing waves within standing waves. How far can we go? 🙂

Post scriptum 2: When re-reading this post, I was thinking I should probably do something with the following idea. If we’ve got a sphere, and we’re thinking of some vector pointing to some point on the surface of that sphere, then we’re doing something referred to as point picking on the surface of a sphere, and the probability distributions – as a function of the polar and azimuthal angles θ and φ – are quite particular. See the article on the Wolfram site on this, for example. I am not sure if it’s going to lead to some easy explanation of the ‘angle problem’ we’ve laid out here but… Well… It’s surely an element in the explanation. The key idea is shown in the illustration below: if the direction of our momentum in three-dimensional space is really random, there may still be more of a chance of an orientation towards the equator, rather than towards the pole. So… Well… We need to study the math of this. 🙂 But that’s for later.

density
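In the meanwhile, here’s a quick way to see the point – a sketch of mine, in Python. For a truly random direction, the z-component turns out to be uniformly distributed (that’s Archimedes’ hat-box theorem) – which chimes with the idea of equally spaced and equally likely Jz values – whereas picking the polar angle θ uniformly would, wrongly, pile the directions up near the poles:

```python
import random, math

def sample_z(n, uniform_on_sphere=True):
    """z-components of n random directions."""
    if uniform_on_sphere:
        return [random.uniform(-1.0, 1.0) for _ in range(n)]           # u = cos(theta) uniform
    return [math.cos(random.uniform(0.0, math.pi)) for _ in range(n)]  # theta uniform (naive)

n = 100_000
for uniform in (True, False):
    zs = sample_z(n, uniform)
    equator_band = sum(1 for z in zs if abs(z) < 0.1) / n   # thin band around the equator
    pole_cap = sum(1 for z in zs if z > 0.9) / n            # equally thin cap at the pole
    print("uniform on sphere:" if uniform else "uniform in theta:", equator_band, pole_cap)
```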

The Aharonov-Bohm effect

This title sounds very exciting. It is – or was, I should say – one of these things I thought I would never ever understand, until I started studying physics, that is. 🙂

Having said that, there is – incidentally – nothing very special about the Aharonov-Bohm effect. As Feynman puts it: “The theory was known from the beginning of quantum mechanics in 1926. […] The implication was there all the time, but no one paid attention to it.”

To be fair, he also admits the experiment itself – proving the effect – is “very, very difficult”, which is why the first experiment that claimed to confirm the predicted effect was set up only in 1960. In fact, some claim the results of that experiment were ambiguous, and that it was only in 1986, with the experiment of Akira Tonomura, that the Aharonov-Bohm effect was unambiguously demonstrated. So what is it about?

In essence, it proves the reality of the vector potential—and of the (related) magnetic field. What do we mean with a real field? To put it simply, a real field cannot act on some particle from a distance through some kind of spooky ‘action-at-a-distance’: real fields must be specified at the position of the particle itself and describe what happens there. Now you’ll immediately wonder: so what’s a non-real field? Well… Some field that does act through some kind of spooky ‘action-at-a-distance.’ As for an example… Well… I can’t give you one because we’ve only been discussing real fields so far. 🙂

So it’s about what a magnetic (or an electric) field does in terms of influencing motion and/or quantum-mechanical amplitudes. In fact, we discussed this matter quite a while ago (check my 2015 post on it). Now, I don’t want to re-write that post, but let me just remind you of the essentials. The two equations for the magnetic field (B) in Maxwell’s set of four equations (the two others specify the electric field E) are: (1) ∇·B = 0 and (2) c²·∇×B = j/ε₀ + ∂E/∂t. Now, you can temporarily forget about the second equation, but you should note that the ∇·B = 0 equation is always true (unlike the ∇×E = 0 expression, which is true for electrostatics only, when there are no moving charges). So it says that the divergence of B is zero, always.

Now, from our posts on vector calculus, you may or may not remember that the divergence of the curl of a vector field is always zero. We wrote: div(curl A) = ∇·(∇×A) = 0, always. Now, there is another theorem that we can now apply, which says the following: if the divergence of a vector field, say D, is zero – so if ∇·D = 0 – then D will be the curl of some other vector field C, so we can write: D = ∇×C. When we now apply this to our ∇·B = 0 equation, we can confidently state the following:

If ∇·B = 0, then there is an A such that B = ∇×A

We can also write this as follows: ∇·B = ∇·(∇×A) = 0 and, hence, B = ∇×A. Now, it’s this vector field A that is referred to as the (magnetic) vector potential, and so that’s what we want to talk about here. As a start, it may be good to write out all of the components of our B = ∇×A vector:

formula for B

In that 2015 post, I answered the question as to why we’d need this new vector field in a way that wasn’t very truthful: I just said that, in many situations, it would be more convenient – from a mathematical point of view, that is – to first find A, and then calculate the derivatives above to get B.

Now, Feynman says the following about this argument in his Lecture on the topic: “It is true that in many complex problems it is easier to work with A, but it would be hard to argue that this ease of technique would justify making you learn about one more vector field. […] We have introduced A because it does have an important physical significance: it is a real physical field.” Let us follow his argument here.

Quantum-mechanical interference effects

Let us first remind ourselves of the quintessential electron interference experiment illustrated below. [For a much more modern rendering of this experiment, check out the  Tout Est Quantique video on it. It’s much more amusing than my rather dry exposé here, but it doesn’t give you the math.]

interference

We have electrons, all of (nearly) the same energy, which leave the source – one by one – and travel towards a wall with two narrow slits. Beyond the wall is a backstop with a movable detector which measures the rate, which we call I, at which electrons arrive at a small region of the backstop at the distance x from the axis of symmetry. The rate (or intensity) I is proportional to the probability that an individual electron that leaves the source will reach that region of the backstop. This probability has the complicated-looking distribution shown in the illustration, which we understand is due to the interference of two amplitudes, one from each slit. So we associate the two trajectories with two amplitudes, which Feynman writes as A1·e^(iΦ1) and A2·e^(iΦ2) respectively.

As usual, Feynman abstracts away from the time variable here because it is, effectively, not relevant: the interference pattern depends on distances and angles only. Having said that, for a good understanding, we should – perhaps – write our two wavefunctions as A1·e^(i(ωt + Φ1)) and A2·e^(i(ωt + Φ2)) respectively. The point is: we’ve got two wavefunctions – one for each trajectory – even if it’s only one electron going through the slit: that’s the mystery of quantum mechanics. 🙂 We need to add these waves so as to get the interference effect:

R = A1·e^(i(ωt + Φ1)) + A2·e^(i(ωt + Φ2)) = [A1·e^(iΦ1) + A2·e^(iΦ2)]·e^(iωt)

Now, we know we need to take the absolute square of this thing to get the intensity – or probability (before normalization). The absolute square of a product is the product of the absolute squares of the factors, and we also know that the absolute square of any complex number is just the product of that number with its complex conjugate. Hence, the absolute square of the e^(iωt) factor is equal to |e^(iωt)|² = e^(iωt)·e^(−iωt) = e⁰ = 1. So the time-dependent factor doesn’t matter: that’s why we can always abstract away from it. Let us now take the absolute square of the [A1·e^(iΦ1) + A2·e^(iΦ2)] factor, which we can write as:

|R|² = |A1·e^(iΦ1) + A2·e^(iΦ2)|² = (A1·e^(iΦ1) + A2·e^(iΦ2))·(A1·e^(−iΦ1) + A2·e^(−iΦ2))

= A1² + A2² + 2·A1·A2·cos(Φ1−Φ2) = A1² + A2² + 2·A1·A2·cosδ, with δ = Φ1−Φ2

OK. This is probably going a bit quick, but you should be able to figure it out, especially when remembering that e^(iΦ) + e^(−iΦ) = 2·cosΦ and cosΦ = cos(−Φ). The point to note is that the intensity is equal to the sum of the intensities of both waves plus a correction factor, which is equal to 2·A1·A2·cos(Φ1−Φ2) and, hence, ranges from −2·A1·A2 to +2·A1·A2. Now, it takes a bit of geometrical wizardry to be able to write the phase difference δ = Φ1−Φ2 as

δ = 2π·a/λ = 2π·(x/L)·d/λ

—but it can be done. 🙂 Well… […] OK. 🙂 Let me quickly help you here by copying another diagram from Feynman – one he uses to derive the formula for the phase difference on arrival between the signals from two oscillators. A1 and A2 are equal here (A1 = A2 = A), so that makes the situation below somewhat simpler to analyze. However, instead, we have the added complication of a phase difference (α) at the origin – which Feynman refers to as an intrinsic relative phase.

triangle

When we apply the geometry shown above to our electron passing through the slits, we should, of course, equate α to zero. For the rest, the picture is pretty similar to the two-slit picture. The distance a in the two-slit set-up – i.e. the difference in the path lengths for the two trajectories of our electron(s) – is, obviously, equal to the d·sinθ factor in the oscillator picture. Also, because L is huge as compared to x, we may assume that trajectories 1 and 2 are more or less parallel and, importantly, that the triangles in the picture – small and large – are rectangular. Now, trigonometry tells us that sinθ is equal to the ratio of the opposite side of the triangle and the hypotenuse (i.e. the longest side of the rectangular triangle). The opposite side of the triangle is x and, because x is very, very small as compared to L, we may approximate the length of the hypotenuse with L. [I know—a lot of approximations here, but… Well… Just go along with it for now…] Hence, we can equate sinθ to x/L and, therefore, a = d·sinθ = d·x/L. Now we need to calculate the phase difference. How many wavelengths do we have in a? That’s simple: a/λ, i.e. the total distance divided by the wavelength. Now these a/λ wavelengths correspond to 2π·a/λ radians (one cycle corresponds to one wavelength which, in turn, corresponds to 2π radians). So we’re done. We’ve got the formula: δ = Φ1−Φ2 = 2π·a/λ = 2π·(x/L)·d/λ.

Huh? Yes. Just think about it. I need to move on. The point is: when δ is equal to zero, the two waves are in phase, and the probability will have a maximum. When δ = π, the waves are out of phase and interfere destructively (cosπ = −1), so the intensity (and, hence, the probability) reaches a minimum.
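To make this more tangible, here’s a small numerical sketch – with made-up values for d, L and λ, so don’t read anything into the numbers – that reproduces the I = A1² + A2² + 2·A1·A2·cosδ pattern along the backstop:

```python
import math

# Made-up values, in arbitrary but consistent length units.
d, L, lam = 2.0, 1000.0, 0.05
A1 = A2 = 1.0

for i in range(-4, 5):
    x = i * 6.25                               # position on the backstop
    delta = 2 * math.pi * (x / L) * d / lam    # the phase difference delta
    I = A1**2 + A2**2 + 2 * A1 * A2 * math.cos(delta)
    print(f"x = {x:7.2f}   I = {I:.3f}")
```

You should see the maxima (I = 4) at x = 0 and x = ±25, with the minima (I = 0) halfway in between.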

So that’s pretty obvious – or should be pretty obvious if you’ve understood some of the basics we presented in this blog. We now move to the non-standard stuff, i.e. the Aharonov-Bohm effect(s).

Interference in the presence of an electromagnetic field

In essence, the Aharonov-Bohm effect is nothing special: it is just a law – two laws, to be precise – that tells us how the phase of our wavefunction changes because of the presence of a magnetic and/or electric field. As such, it is not very different from previous analyses and presentations, such as those showing how amplitudes are affected by a potential − such as an electric potential, or a gravitational field, or a magnetic field − and how they relate to a classical analysis of the situation (see, for example, my November 2015 post on this topic). If anything, it’s just a more systematic approach to the topic and – importantly – an approach centered around the use of the vector potential A (and the electric potential Φ). Let me give you the formulas:

f1

f2

The first formula tells us that the phase of the amplitude for our electron (or whatever charged particle) to arrive at some location via some trajectory is changed by an amount that is equal to the integral of the vector potential along the trajectory times the charge of the particle over Planck’s constant. I know that’s quite a mouthful but just read it a couple of times.

The second formula tells us that, if there’s an electrostatic field, it will produce a phase change given by the negative of the time integral of the (scalar) potential Φ.

These two expressions – taken together – tell us what happens for any electromagnetic field, static or dynamic. In fact, they are really the (two) law(s) replacing the q(v×B) expression in classical mechanics.

So how does it work? Let me further follow Feynman’s treatment of the matter—which analyzes what happens when we’d have some magnetic field in the two-slit experiment (so we assume there’s no electric field: we only look at some magnetic field). We said Φ1 was the phase of the wave along trajectory 1, and Φ2 was the phase of the wave along trajectory 2. Without magnetic field, that is, so B = 0. Now, the (first) formula above tells us that, when the field is switched on, the new phases will be the following:

f3

f4

Hence, the phase difference δ = Φ1−Φ2 will now be equal to:

f5

Now, we can combine the two integrals into one that goes forward along trajectory 1 and comes back along trajectory 2. We’ll denote this path as 1-2 and write the new integral as follows:

f6

Note that we’re using a notation here which suggests that the 1-2 path is closed, which is… Well… Yet another approximation of the Master. In fact, his assumption that the new 1-2 path is closed proves to be essential in the argument that follows the one we presented above, in which he shows that the inherent arbitrariness in our choice of a vector potential function doesn’t matter, but… Well… I don’t want to get too technical here.

Let me conclude this post by noting we can re-write our grand formula above in terms of the flux of the magnetic field B:

f7
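Just to get a feel for the magnitudes involved – a back-of-the-envelope sketch of mine, with hypothetical numbers for the whisker – note that a flux of h/e ≈ 4.1×10⁻¹⁵ Wb already shifts the pattern by a full 2π:

```python
import math

hbar = 1.054571817e-34   # J·s
q = 1.602176634e-19      # C (magnitude of the electron charge)

# The extra phase difference is (q/hbar) times the enclosed flux (see above).
# Hypothetical whisker: radius 0.5 micron, uniform field of 0.01 T inside it.
radius = 0.5e-6                          # m
B = 0.01                                 # T
flux = B * math.pi * radius**2           # enclosed flux, in Wb
print(q * flux / hbar, "radians")        # ~ 11.9 rad: easily observable
print(2 * math.pi * hbar / q, "Wb")      # h/e: the flux for a full 2*pi shift
```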

So… Well… That’s it, really. I’ll refer you to Feynman’s Lecture on this matter for a detailed description of the 1960 experiment itself, which involves a magnetized iron whisker that acts like a tiny solenoid—small enough to match the tiny scale of the interference experiment itself. I must warn you though: there is a rather long discussion in that Lecture on the ‘reality’ of the magnetic and the vector potential field which – unlike Feynman’s usual approach to discussions like this – is rather philosophical and partially misinformed, as it assumes there is zero magnetic field outside of a solenoid. That’s true for infinitely long solenoids, but not true for real-life solenoids: if we have some A, then we must also have some B, and vice versa. Hence, if the magnetic field (B) is a real field (in the sense that it cannot act on some particle from a distance through some kind of spooky ‘action-at-a-distance’), then the vector potential A is an equally real field—and vice versa. Feynman admits as much when he concludes his rather lengthy philosophical excursion with the following conclusion (out of which I already quoted one line in my introduction to this post):

“This subject has an interesting history. The theory we have described was known from the beginning of quantum mechanics in 1926. The fact that the vector potential appears in the wave equation of quantum mechanics (called the Schrödinger equation) was obvious from the day it was written. That it cannot be replaced by the magnetic field in any easy way was observed by one man after the other who tried to do so. This is also clear from our example of electrons moving in a region where there is no field and being affected nevertheless. But because in classical mechanics A did not appear to have any direct importance and, furthermore, because it could be changed by adding a gradient, people repeatedly said that the vector potential had no direct physical significance—that only the magnetic and electric fields are “real” even in quantum mechanics. It seems strange in retrospect that no one thought of discussing this experiment until 1956, when Bohm and Aharonov first suggested it and made the whole question crystal clear. The implication was there all the time, but no one paid attention to it. Thus many people were rather shocked when the matter was brought up. That’s why someone thought it would be worthwhile to do the experiment to see if it was really right, even though quantum mechanics, which had been believed for so many years, gave an unequivocal answer. It is interesting that something like this can be around for thirty years but, because of certain prejudices of what is and is not significant, continues to be ignored.”

Well… That’s it, folks! Enough for today! 🙂

An interpretation of the wavefunction

This is my umpteenth post on the same topic. 😦 It is obvious that this search for a sensible interpretation is consuming me. Why? I am not sure. Studying physics is frustrating. As a leading physicist puts it:

“The teaching of quantum mechanics these days usually follows the same dogma: firstly, the student is told about the failure of classical physics at the beginning of the last century; secondly, the heroic confusions of the founding fathers are described and the student is given to understand that no humble undergraduate student could hope to actually understand quantum mechanics for himself; thirdly, a deus ex machina arrives in the form of a set of postulates (the Schrödinger equation, the collapse of the wavefunction, etc); fourthly, a bombardment of experimental verifications is given, so that the student cannot doubt that QM is correct; fifthly, the student learns how to solve the problems that will appear on the exam paper, hopefully with as little thought as possible.”

That’s obviously not the way we want to understand quantum mechanics. [With we, I mean, me, of course, and you, if you’re reading this blog.] Of course, that doesn’t mean I don’t believe Richard Feynman, one of the greatest physicists ever, when he tells us no one, including himself, understands physics quite the way we’d like to understand it. Such statements should not prevent us from trying harder. So let’s look for better metaphors. The animation below shows the two components of the archetypal wavefunction – a simple sine and cosine. They’re the same function actually, but their phases differ by 90 degrees (π/2).

circle_cos_sin

It makes me think of a V-2 engine with the pistons at a 90-degree angle. Look at the illustration below, which I took from a rather simple article on cars and engines that has nothing to do with quantum mechanics. Think of the moving pistons as harmonic oscillators, like springs.

two-timer-576-px-photo-369911-s-original

We will also think of the center of each cylinder as the zero point: think of that point as a point where – if we’re looking at one cylinder alone – the internal and external pressure balance each other, so the piston would not move… Well… If it weren’t for the other piston, because the second piston is not at the center when the first is. In fact, it is easy to verify and compare the following positions of both pistons, as well as the associated dynamics of the situation:

| Piston 1 | Piston 2 | Motion of Piston 1 | Motion of Piston 2 |
| --- | --- | --- | --- |
| Top | Center | Compressed air will push piston down | Piston moves down against external pressure |
| Center | Bottom | Piston moves down against external pressure | External air pressure will push piston up |
| Bottom | Center | External air pressure will push piston up | Piston moves further up and compresses the air |
| Center | Top | Piston moves further up and compresses the air | Compressed air will push piston down |

When the pistons move, their linear motion will be described by a sinusoidal function: a sine or a cosine. In fact, the 90-degree V-2 configuration ensures that the linear motion of the two pistons will be exactly the same, except for a phase difference of 90 degrees. [Of course, because of the sideways motion of the connecting rods, our sine and cosine functions describe the linear motion only approximately, but you can easily imagine the idealized limit situation. If not, check Feynman’s description of the harmonic oscillator.]

The question is: if we had a set-up like this, with two springs – or two harmonic oscillators – attached to a shaft through a crank, would this really work as a perpetuum mobile? We are obviously talking about energy being transferred back and forth between the rotating shaft and the moving pistons… So… Well… Let’s model this: the total energy, potential and kinetic, in each harmonic oscillator is constant. Hence, the piston only delivers or receives kinetic energy from the rotating mass of the shaft.

Now, in physics, that’s a bit of an oxymoron: we don’t think of negative or positive kinetic (or potential) energy in the context of oscillators. We don’t think of the direction of energy. But… Well… If we’ve got two oscillators, our picture changes, and so we may have to adjust our thinking here.

Let me start by giving you an authoritative derivation of the various formulas involved here, taking the example of the physical spring as an oscillator—but the formulas are basically the same for any harmonic oscillator.

energy harmonic oscillator

The first formula is a general description of the motion of our oscillator. The coefficient in front of the cosine function (a) is the maximum amplitude. Of course, you will also recognize ω0 as the natural frequency of the oscillator, and Δ as the phase factor, which takes into account our t = 0 point. In our case, for example, we have two oscillators with a phase difference equal to π/2 and, hence, Δ would be 0 for one oscillator, and −π/2 for the other. [The formula to apply here is sinθ = cos(θ − π/2).] Also note that we can equate our θ argument to ω0·t. Now, if a = 1 (which is the case here), then these formulas simplify to:

  1. K.E. = T = m·v²/2 = (1/2)·m·ω0²·sin²(θ + Δ) = (1/2)·m·ω0²·sin²(ω0·t + Δ)
  2. P.E. = U = k·x²/2 = (1/2)·k·cos²(θ + Δ)

The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. The minus sign reminds us our oscillator wants to return to the center point, so the force pulls back. From the dynamics involved, it is obvious that k must be equal to m·ω0². That gives us the famous T + U = m·ω0²/2 formula or, bringing the amplitude a back in, T + U = m·a²·ω0²/2.

Now, if we normalize our functions by equating k to one (k = 1), then the motion of our first oscillator is given by the cosθ function, and its kinetic energy will be proportional to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be proportional to:

d(sin²θ)/dθ = 2·sinθ·d(sinθ)/dθ = 2·sinθ·cosθ

Let’s look at the second oscillator now. Just think of the second piston going up and down in our V-twin engine. Its motion is given by the sinθ function which, as mentioned above, is equal to cos(θ − π/2). Hence, its kinetic energy is proportional to sin²(θ − π/2), and how it changes – as a function of θ – will be equal to:

2·sin(θ − π/2)·cos(θ − π/2) = −2·cosθ·sinθ = −2·sinθ·cosθ

We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the rotating shaft moves at constant speed. Linear motion becomes circular motion, and vice versa, in a frictionless Universe. We have the metaphor we were looking for!
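A quick numerical check – in normalized units with m = ω0 = a = 1, so each oscillator carries a total energy of 1/2 – shows the energy book-keeping: whatever kinetic energy one piston loses, the other picks up, so the shaft never has to buffer anything:

```python
import math

# Kinetic energies of the two pistons: sin^2(theta)/2 and
# sin^2(theta - pi/2)/2 = cos^2(theta)/2 respectively.
for k in range(9):
    theta = k * math.pi / 8
    ke1 = math.sin(theta)**2 / 2
    ke2 = math.cos(theta)**2 / 2
    print(f"theta = {theta:4.2f}   KE1 = {ke1:.3f}   KE2 = {ke2:.3f}   sum = {ke1 + ke2:.3f}")
```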

Somehow, in this beautiful interplay between linear and circular motion, energy is being borrowed from one place to another, and then returned. From what place to what place? I am not sure. We may call it the real and imaginary energy space respectively, but what does that mean? One thing is for sure, however: the interplay between the real and imaginary part of the wavefunction describes how energy propagates through space!

How exactly? Again, I am not sure. Energy is, obviously, mass in motion – as evidenced by the E = m·c² equation – and it may not have any direction (when everything is said and done, it’s a scalar quantity without direction), but the energy in a linear motion is surely different from that in a circular motion, and our metaphor suggests we need to think somewhat more along those lines. Perhaps we will, one day, be able to square this circle. 🙂

Schrödinger’s equation

Let’s analyze the interplay between the real and imaginary part of the wavefunction through an analysis of Schrödinger’s equation, which we write as:

i·ħ·∂ψ/∂t = −(ħ²/2m)·∇²ψ + V·ψ

We can do a quick dimensional analysis of both sides:

  • [i·ħ·∂ψ/∂t] = N·m·s/s = N·m
  • [−(ħ²/2m)·∇²ψ] = N·m³/m² = N·m
  • [V·ψ] = N·m

Note the dimension of the ‘diffusion’ constant ħ²/2m: [ħ²/2m] = N²·m²·s²/kg = N²·m²·s²/(N·s²/m) = N·m³. Also note that, in order for the dimensions to come out alright, the dimension of V – the potential – must be that of energy. Hence, Feynman’s description of it as the potential energy – rather than the potential tout court – is somewhat confusing but correct: V must equal the potential energy of the electron. Hence, V is not the conventional (potential) energy per unit charge (1 coulomb). Instead, the natural unit of charge is used here, i.e. the charge of the electron itself.

Now, Schrödinger’s equation – without the V·ψ term – can be written as the following pair of equations:

  1. Re(∂ψ/∂t) = −(1/2)·(ħ/m)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ħ/m)·Re(∇²ψ)

This closely resembles the propagation mechanism of an electromagnetic wave as described by Maxwell’s equations for free space (i.e. a space with no charges) – but note that E and B are vectors there, while our ψ components are scalars. How do we get this result? Well… ψ is a complex function, which we can write as a + i·b. Likewise, ∂ψ/∂t is a complex function, which we can write as c + i·d, and ∇²ψ can then be written as e + i·f. If we temporarily forget about the coefficients (ħ, ħ²/2m and V), then Schrödinger’s equation – including the V·ψ term – amounts to writing something like this:

i·(c + i·d) = −(e + i·f) + (a + i·b) ⇔ a + i·b = i·c − d + e + i·f ⇔ a = −d + e and b = c + f

Hence, we can now write:

  1. V·Re(ψ) = −ħ·Im(∂ψ/∂t) + (1/2)·(ħ²/m)·Re(∇²ψ)
  2. V·Im(ψ) = ħ·Re(∂ψ/∂t) + (1/2)·(ħ²/m)·Im(∇²ψ)

This simplifies to the two equations above for V = 0, i.e. when there is no potential (electron in free space). Now we can bring the Re and Im operators into the brackets to get:

  1. V·Re(ψ) = −ħ·∂Im(ψ)/∂t + (1/2)·(ħ²/m)·∇²Re(ψ)
  2. V·Im(ψ) = ħ·∂Re(ψ)/∂t + (1/2)·(ħ²/m)·∇²Im(ψ)

This is very interesting, because we can re-write this using the quantum-mechanical energy operator H = −(ħ²/2m)·∇² + V· (note the multiplication sign after the V, which we do not have – for obvious reasons – for the −(ħ²/2m)·∇² expression):

  1. H[Re (ψ)] = −ħ∙∂Im(ψ)/∂t
  2. H[Im(ψ)] = ħ∙∂Re(ψ)/∂t

A dimensional analysis shows us both sides are, once again, expressed in N∙m. It’s a beautiful expression because – if we write the real and imaginary part of ψ as r∙cosθ and r∙sinθ, we get:

  1. H[cosθ] = −ħ∙∂sinθ/∂t = E∙cosθ
  2. H[sinθ] = ħ∙∂cosθ/∂t = E∙sinθ

Indeed, θ = (p∙x − E∙t)/ħ and, hence, −ħ·∂sinθ/∂t = −ħ·cosθ·(−E/ħ) = E·cosθ and ħ·∂cosθ/∂t = ħ·(−sinθ)·(−E/ħ) = E·sinθ. Now we can combine the two equations in one equation again and write:

H[r·(cosθ + i·sinθ)] = r·(E·cosθ + i·E·sinθ) = E·r·(cosθ + i·sinθ) ⇔ H[ψ] = E·ψ
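Before we comment on this, a quick symbolic check – a sketch using Python’s sympy, assuming a free particle, so V = 0 and E = p²/2m – confirms the result:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m, p = sp.symbols('hbar m p', positive=True)
E = p**2 / (2 * m)                                  # free particle: V = 0

psi = sp.exp(sp.I * (p * x - E * t) / hbar)         # the elementary wavefunction
lhs = sp.I * hbar * sp.diff(psi, t)                 # i*hbar * dpsi/dt
H_psi = -(hbar**2 / (2 * m)) * sp.diff(psi, x, 2)   # H[psi], with V = 0
print(sp.simplify(lhs - H_psi))                     # 0: Schrodinger's equation holds
print(sp.simplify(H_psi / psi))                     # p**2/(2*m), i.e. E: H[psi] = E*psi
```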

The operator H – applied to the wavefunction – gives us the (scalar) product of the energy E and the wavefunction itself. Isn’t this strange?

Hmm… I need to further verify and explain this result… I’ll probably do so in yet another post on the same topic… 🙂

Post scriptum: The symmetry of our V-2 engine – or perpetuum mobile – is interesting: its cross-section has only one axis of symmetry. Hence, we may associate some angle with it, so as to define its orientation in the two-dimensional cross-sectional plane. Of course, the cross-sectional plane itself is at right angles to the crankshaft axis, which we may also associate with some angle in three-dimensional space. Hence, its geometry defines two orthogonal directions which, in turn, define a spherical coordinate system, as shown below.

558px-3d_spherical

We may, therefore, say that three-dimensional space is actually being implied by the geometry of our V-2 engine. Now that is interesting, isn’t it? 🙂

Quantum-mechanical operators

I wrote a post on quantum-mechanical operators some while ago but, when re-reading it now, I am not very happy about it, because it tries to cover too much ground in one go. In essence, I regret my attempt to constantly switch between the matrix representation of quantum physics – with the | state 〉 symbols – and the wavefunction approach, so as to show how the operators work for both cases. But then that’s how Feynman approaches this.

However, let’s admit it: while Heisenberg’s matrix approach is equivalent to Schrödinger’s wavefunction approach – and while it’s the only approach that works well for n-state systems – the wavefunction approach is more intuitive, because:

  1. Most practical examples of quantum-mechanical systems (like the description of the electron orbitals of an atomic system) involve continuous coordinate spaces, so we have an infinite number of states and, hence, we need to describe it using the wavefunction approach.
  2. Most of us are much better-versed in using derivatives and integrals, as opposed to matrix operations.
  3. A more intuitive statement of the same argument above is the following: the idea of one state flowing into another, rather than being transformed through some matrix, is much more appealing. 🙂

So let’s stick to the wavefunction approach here. So, while you need to remember that there’s a ‘matrix equivalent’ for each of the equations we’re going to use in this post, we’re not going to talk about it.

The operator idea

In classical physics – high school physics, really – we would describe a pointlike particle traveling in space by a function relating its position (x) to time (t): x = x(t). Its (instantaneous) velocity is, obviously, v(t) = dx/dt. Simple. Obvious. Let’s complicate matters now by saying that the idea of a velocity operator would sort of generalize the v(t) = dx/dt velocity equation by making abstraction of the specifics of the x = x(t) function.

Huh? Yes. We could define a velocity ‘operator’ as:

velocity operator

Now, you may think that’s a rather ridiculous way to describe what an operator does, but – in essence – it’s correct. We have some function – describing an elementary particle, or a system, or an aspect of the system – and then we have some operator, which we apply to our function, to extract the information from it that we want: its velocity, its momentum, its energy. Whatever. Hence, in quantum physics, we have an energy operator, a position operator, a momentum operator, an angular momentum operator and… Well… I guess I listed the most important ones. 🙂

It’s kinda logical. Our velocity operator looks at one particular aspect of whatever it is that’s going on: the time rate of change of position. We do refer to that as the velocity. Our quantum-mechanical operators do the same: they look at one aspect of what’s being described by the wavefunction. [At this point, you may wonder what the other properties of our classical ‘system’ might be – i.e. other properties than velocity – because we’re just looking at a pointlike particle here, but… Well… Think of electric charge and forces acting on it, so it accelerates and decelerates in all kinds of ways, and we have kinetic and potential energy and all that. Or momentum. So it’s just the same: the x = x(t) function may cover a lot of complexities, just like the wavefunction does!]

The Wikipedia article on the momentum operator is, for a change (I usually find Wikipedia quite abstruse on these matters), quite simple – and, therefore, quite enlightening here. It applies the following simple logic to the elementary wavefunction ψ = e^(i·(k∙x − ω·t)), with the de Broglie relations telling us that ω = E/ħ and k = p/ħ:

mom op 1

Note we forget about the normalization coefficient a here. It doesn’t matter: we can always stuff it in later. The point to note is that we can sort of forget about ψ (or abstract away from it—as mathematicians and physicists would say) by defining the momentum operator, which we’ll write as:

mom op 2

Its three-dimensional equivalent is calculated in very much the same way:

wiki

So this operator, when operating on a particular wavefunction, gives us the (expected) momentum when we would actually catch our particle there, provided the momentum doesn’t vary in time. [Note that it may – and actually is likely to – vary in space!]
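Here’s what that looks like in practice – a minimal symbolic sketch, using Python’s sympy, of the one-dimensional operator (ħ/i)·∂/∂x acting on the elementary wave:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
hbar, k = sp.symbols('hbar k', positive=True)

psi = sp.exp(sp.I * k * x)                        # elementary wave, with p = hbar*k
p_op = lambda f: (hbar / sp.I) * sp.diff(f, x)    # the momentum operator
print(sp.simplify(p_op(psi) / psi))               # hbar*k: we recover the momentum p
```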

So that’s the basic idea of an operator. However, the comparison goes further. Indeed, a superficial reading of what operators are all about gives you the impression we get all these observables (or properties of the system) just by applying the operator to the (wave)function. That’s not the case. There is the randomness. The uncertainty. Actual wavefunctions are superpositions of several elementary waves, with various coefficients representing their amplitudes. So we need averages, or expected values: E[X]. Even our velocity operator ∂/∂t – in the classical world – gives us an instantaneous velocity only. To get the average velocity (in quantum mechanics, we’ll be interested in the average momentum, or the average position, or the average energy – rather than the average velocity), we would first have to calculate the total distance traveled. Now, that’s going to involve a line integral:

s = ∫ds.

The principle is illustrated below.

line integral

You’ll say: this is kids’ stuff, and it is. Just note how we write the same integral in terms of the x and t coordinates, using our new velocity operator:

integral

Kids’ stuff. Yes. But it’s good to think about what it really represents. For example, the simplest quantum-mechanical operator is the position operator. It’s just x for the x-coordinate, y for the y-coordinate, and z for the z-coordinate. To get the average position of a stationary particle – represented by the wavefunction ψ(r, t) – in three-dimensional space, we need to calculate the following volume integral:

position operator 3D V2

Simple? Yes and no. The r·|ψ(r)|² integrand is obvious: we multiply each possible position (r) by its probability (or likelihood), which is equal to P(r) = |ψ(r)|². However, look at the assumptions: we already omitted the time variable. Hence, the particle we’re describing here must be stationary, indeed! So we’ll need to re-visit the whole subject allowing for averages to change with time. We’ll do that later. I just wanted to show you that those integrals – even with very simple operators, like the position operator – can become very complicated. So you just need to make sure you know what you’re looking at.
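For a one-dimensional example – a toy calculation of mine, with a Gaussian wave packet centered at x0 = 1.5 – the ⟨x⟩ = ∫x·|ψ(x)|²·dx recipe looks like this:

```python
import numpy as np

# A normalized Gaussian wave packet centered at x0 (a made-up example).
x0, sigma = 1.5, 0.4
x = np.linspace(-5.0, 10.0, 20001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0)**2 / (4 * sigma**2))

prob = np.abs(psi)**2            # |psi|^2 is the probability density
print(np.sum(prob) * dx)         # ~ 1.0: the probabilities add up to one
print(np.sum(x * prob) * dx)     # ~ 1.5: the average position <x>
```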

One wavefunction—or two? Or more?

There is another reason why, with the immeasurable benefit of hindsight, I now feel that my earlier post is confusing: I kept switching between the position and the momentum wavefunction, which gives the impression we have different wavefunctions describing different aspects of the same thing. That’s just not true. The position and momentum wavefunction describe essentially the same thing: we can go from one to the other, and back again, by a simple mathematical manipulation. So I should have stuck to descriptions in terms of ψ(x, t), instead of switching back and forth between the ψ(x, t) and φ(p, t) representations.

In any case, the damage is done, so let’s move forward. The key idea is that, when we know the wavefunction, we know everything. I tried to convey that by noting that the real and imaginary part of the wavefunction must, somehow, represent the total energy of the particle. The structural similarity between the mass-energy equivalence relation (i.e. Einstein’s formula: E = m·c2) and the energy formulas for oscillators and spinning masses is too obvious:

  1. The energy of any oscillator is given by E = m·ω0²/2 (for unit amplitude). We may want to liken the real and imaginary component of our wavefunction to two oscillators and, hence, add them up. The E = m·ω0² formula we get is then identical to the E = m·c² formula.
  2. The energy of a spinning mass is given by an equivalent formula: E = I·ω²/2 (I is the moment of inertia in this formula). The same 1/2 factor tells us our particle is, somehow, spinning in two dimensions at the same time (i.e. a ‘real’ as well as an ‘imaginary’ space—but both are equally real, because amplitudes interfere), so we get the E = I·ω² formula.

Hence, the formulas tell us we should imagine an electron – or an electron orbital – as a very complicated two-dimensional standing wave. Now, when I write two-dimensional, I refer to the real and imaginary component of our wavefunction, as illustrated below. What I am asking you, however, is to not only imagine these two components oscillating up and down, but also spinning about. Hence, if we think about energy as some oscillating mass – which is what the E = m·c² formula tells us to do – we should remind ourselves we’re talking very complicated motions here: mass oscillates, swirls and spins, and it does so both in real as well as in imaginary space.

rising_circular

What I like about the illustration above is that it shows us – in a very obvious way – why the wavefunction depends on our reference frame. These oscillations do represent something in absolute space, but how we measure it depends on our orientation in that absolute space. But so I am writing this post to talk about operators, not about my grand theory about the essence of mass and energy. So let’s talk about operators now. 🙂

In that post of mine, I showed how the position, momentum and energy operator would give us the average position, momentum and energy of whatever it was that we were looking at, but I didn’t introduce the angular momentum operator. So let me do that now. However, I’ll first recapitulate what we’ve learnt so far in regard to operators.

The energy, position and momentum operators

The equation below defines the energy operator, and also shows how we would apply it to the wavefunction:

energy operator

To the purists: sorry for not (always) using the hat symbol. [I explained why in that post of mine: it’s just too cumbersome.] The others 🙂 should note the following:

  • Eaverage is also an expected value: Eav = E[E]
  • The * symbol tells us to take the complex conjugate of the wavefunction.
  • As for the integral, it’s an integral over some volume, so that’s what the d³r shows. Many authors use double or triple integral signs (∫∫ or ∫∫∫) to show it’s a surface or a volume integral, but that makes things look very complicated, and so I don’t do that. I could also have written the integral as ∫ψ(r)*·H·ψ(r) dV, but then I’d need to explain that the dV stands for dVolume, not for any (differential) potential energy (V).
  • We must normalize our wavefunction for these formulas to work, so all probabilities over the volume add up to 1.

OK. That’s the energy operator. As you can see, it’s a pretty formidable beast, but then it just reflects Schrödinger’s equation which, as I explained a couple of times already, we can interpret as an energy propagation mechanism, or an energy diffusion equation, so it is actually not that difficult to memorize the formula: if you’re able to remember Schrödinger’s equation, then you’ll also have the operator. If not… Well… Then you won’t pass your undergrad physics exam. 🙂

I already mentioned that the position operator is a much simpler beast. That’s because it’s so intimately related to our interpretation of the wavefunction. It’s the one thing you know about quantum mechanics: the absolute square of the wavefunction gives us the probability density function. So, for one-dimensional space, the position operator is just:

position operator

The equivalent operator for three-dimensional space is equally simple:

position operator 3D V2

Note how the operator, for the one- as well as for the three-dimensional case, gets rid of time as a variable. In fact, the idea itself of an average makes abstraction of the temporal aspect. Well… Here, at least—because we’re looking at some box in space, rather than some box in spacetime. We’ll re-visit that rather particular idea of an average, and allow for averages that change with time, in a short while.

Next, we introduced the momentum operator in that post of mine. For one dimension, Feynman shows this operator is given by the following formula:

momentum operator

Now that does not look very simple. You might think that the ∂/∂x operator reflects our velocity operator, but… Well… No: ∂/∂t gives us a time rate of change, while ∂/∂x gives us the spatial variation. So it’s not the same. Also, that ħ/i factor is quite intriguing, isn’t it? We’ll come back to it in the next section of this post. Let me just give you the three-dimensional equivalent which, remembering that 1/i = −i, you’ll understand to be equal to the following vector operator:

momentum vector operator

Now it’s time to define the operator we wanted to talk about, i.e. the angular momentum operator.

The angular momentum operator

The formula for the angular momentum operator is remarkably simple:

angular momentum operator

Why do I call this a simple formula? Because it looks like the familiar formula of classical mechanics for the z-component of the classical angular momentum L = r × p. I must assume you know how to calculate a vector cross product. If not, check one of my many posts on vector analysis. I must also assume you remember the L = r × p formula. If not, the following animation might bring it all back. If that doesn’t help, check my post on gyroscopes. 🙂

torque_animation-1.gif

Now, spin is a complicated phenomenon, and so, to simplify the analysis, we should think of orbital angular momentum only. This is a simplification, because electron spin is some complicated mix of intrinsic and orbital angular momentum. Hence, the angular momentum operator we’re introducing here is only the orbital angular momentum operator. However, let us not get bogged down in all of the nitty-gritty and, hence, let’s just go along with it for the time being.

I am somewhat hesitant to show you how we get that formula for our operator, but I’ll try to show you using an intuitive approach, which uses only bits and pieces of Feynman’s more detailed derivation. It will, hopefully, give you a bit of an idea of how these differential operators work. Think about a rotation of our reference frame over an infinitesimally small angle – which we’ll denote as ε – as illustrated below.

rotation

Now, the whole idea is that, because of that rotation of our reference frame, our wavefunction will look different. It’s nothing fundamental, but… Well… It’s just because we’re using a different coordinate system. Indeed, that’s where all these complicated transformation rules for amplitudes come in.  I’ve spoken about these at length when we were still discussing n-state systems. In contrast, the transformation rules for the coordinates themselves are very simple:

rotation

Now, because ε is an infinitesimally small angle, we may equate cos(θ) = cos(ε) to 1, and sin(θ) = sin(ε) to ε. Hence, x′ and y′ are then written as x′ = x + ε·y and y′ = y − ε·x, while z′ remains z. Vice versa, we can also write the old coordinates in terms of the new ones: x = x′ − ε·y, y = y′ + ε·x, and z = z′. That’s obvious. Now comes the difficult thing: you need to think about the two-dimensional equivalent of the simple illustration below.

izvod

If we have some function y = f(x), then we know that, for small Δx, we have the following approximation formula for f(x + Δx): f(x + Δx) ≈ f(x) + (dy/dx)·Δx. It’s the formula you saw in high school: you would then take a limit (Δx → 0), and define dy/dx as the Δy/Δx ratio for Δx → 0. You would do this after re-writing the f(x + Δx) ≈ f(x) + (dy/dx)·Δx formula as:

Δy = Δf = f(x + Δx) − f(x) ≈ (dy/dx)·Δx

Now you need to substitute ψ for f, and express Δx and Δy in terms of ε. There is only one complication here: ψ is a function of two variables: x and y. In fact, it’s a function of three variables – x, y and z – but we keep z constant. So think of moving from x and y to x + ε·y = x + Δx and y − ε·x = y + Δy. Hence, Δx = ε·y and Δy = −ε·x. It then makes sense to write Δψ as:

angular momentum operator v2

If you agree with that, you’ll also agree we can write something like this:

formula 2

Now that implies the following formula for Δψ:

repair

This looks great! You can see we get some sort of differential operator here, which is what we want. So the next step should be simple: we just let ε go to zero and then we’re done, right? Well… No. In quantum mechanics, it’s always a bit more complicated. But it’s logical stuff. Think of the following:

1. We will want to re-write the infinitesimally small ε angle as a fraction of i, i.e. the imaginary unit.

Huh? Yes. This little i represents many things. In this particular case, we want to look at it as a right angle. In fact, you know multiplication with i amounts to a rotation by 90 degrees. So we should replace ε by ε·i. It’s like measuring ε in natural units. However, we’re not done.

2. We should also note that Nature measures angles clockwise, rather than counter-clockwise, as evidenced by the fact that the argument of our wavefunction rotates clockwise as time goes by. So our ε is, in fact, a −ε. We will just bring the minus sign inside of the brackets to solve this issue.

Huh? Yes. Sorry. I told you this is a rather intuitive approach to getting what we want to get. 🙂

3. The third modification we’d want to make is to express ε·i as a multiple of Planck’s constant.

Huh? Yes. This is a very weird thing, but it should make sense—intuitively: we’re talking angular momentum here, and its dimension is the same as that of physical action: N·m·s. Therefore, Planck’s quantum of action (ħ = h/2π ≈ 1×10⁻³⁴ J·s ≈ 6.6×10⁻¹⁶ eV·s) naturally appears as… Well… A natural unit, or a scaling factor, I should say.

To make a long story short, we’ll want to re-write ε as −(i/ħ)·ε. However, there is a thing called mathematical consistency, and so, if we want to do such substitutions and prepare for that limit situation (ε → 0), we should re-write that Δψ equation as follows:

final

So now – finally! – we do have the formula we wanted to find for our angular momentum operator:

final 2

The final substitution, which yields the formula we just gave you when commencing this section, just uses the formula for the linear momentum operator in the x– and y-direction respectively. We’re done! 🙂 Finally! 

Well… No. 🙂 The question, of course, is the same as always: what does it all mean, really? That’s always a great question. 🙂 Unfortunately, the answer is rather boring: we can calculate the average angular momentum in the z-direction, using a similar integral as the one we used to get the average energy, or the average linear momentum in some direction. That’s basically it.
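Still, we can at least verify the operator does what it promises. Here’s a minimal symbolic check – a sketch of mine, using sympy – applying Lz = x·py − y·px to a state that carries one unit of angular momentum about the z-axis (the x + i·y combination which, in polar coordinates, is just r·e^(iφ)):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
hbar = sp.Symbol('hbar', positive=True)

def Lz(f):
    """The orbital angular momentum operator x*py - y*px, with p = (hbar/i)*d/dx etc."""
    return (hbar / sp.I) * (x * sp.diff(f, y) - y * sp.diff(f, x))

f = x + sp.I * y                 # ~ r*exp(i*phi): one unit of angular momentum
print(sp.simplify(Lz(f) / f))    # hbar: the eigenvalue m*hbar, with m = 1
```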

To compensate for that very boring answer, however, I will show you something that is far less boring. 🙂

Quantum-mechanical weirdness

I’ll shamelessly copy from Feynman here. He notes that many classical equations get carried over into a quantum-mechanical form (I’ll copy some of his illustrations later). But then there are some that don’t. As Feynman puts it—rather humorously: “There had better be some that don’t come out right, because if everything did, then there would be nothing different about quantum mechanics. There would be no new physics.” He then looks at the following super-obvious equation in classical mechanics:

x·px − px·x = 0

In fact, this equation is so super-obvious that it’s almost meaningless. Almost. It’s super-obvious because multiplication is commutative (for real as well as for complex numbers). However, when we replace x and px by the position and momentum operator, we get an entirely different result. You can verify the following yourself (there is also a quick machine check of both commutators a bit further down):

x·px·ψ − px·x·ψ = x·(ħ/i)·(∂ψ/∂x) − (ħ/i)·∂(x·ψ)/∂x = −(ħ/i)·ψ = i·ħ·ψ

This is plain weird! What does it mean? I am not sure. Feynman’s take on it is nice but still leaves us somewhat in the dark:

[Feynman’s quote on this result]

He adds: “If Planck’s constant were zero, the classical and quantum results would be the same, and there would be no quantum mechanics to learn!” Hmm… What does it mean, really? Not sure. Let me make two remarks here:

1. We should not put any dot (·) between our operators, because they do not amount to multiplying one with another. We just apply operators successively. Hence, commutativity is not what we should expect.

2. Note that Feynman forgot to put the subscript in that quote. When doing the same calculations for the equivalent x·py − py·x expression, we do get zero, as shown below:

x·py·ψ − py·x·ψ = x·(ħ/i)·(∂ψ/∂y) − (ħ/i)·∂(x·ψ)/∂y = x·(ħ/i)·(∂ψ/∂y) − x·(ħ/i)·(∂ψ/∂y) = 0
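If you don’t feel like doing the pen-and-paper verification, here is a quick sympy check of both commutation results at once, using the operator definitions above and a generic ψ(x, y):

```python
# A sympy check of both commutation rules: with px = (ħ/i)·∂/∂x and
# py = (ħ/i)·∂/∂y applied to a generic ψ(x, y), the first combination
# returns i·ħ·ψ, while the second returns zero.
import sympy as sp

x, y = sp.symbols('x y', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x, y)

p_x = lambda f: (hbar/sp.I) * sp.diff(f, x)
p_y = lambda f: (hbar/sp.I) * sp.diff(f, y)

print(sp.simplify(x*p_x(psi) - p_x(x*psi)))     # I*hbar*psi(x, y): the weird one
print(sp.simplify(x*p_y(psi) - p_y(x*psi)))     # 0: x does not depend on y
```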

These equations – zero or not – are referred to as ‘commutation rules’. [Again, I should not have used any dot between x and py, because there is no multiplication here. It’s just a separation mark.] Let me quote Feynman on it, so the matter is dealt with:

[Feynman’s quote on the commutation rules]

OK. So what do we conclude? What are we talking about?

Conclusions

Some of the stuff above was really intriguing. For example, we found that the linear and angular momentum operators are differential operators in the true sense of the word. The angular momentum operator shows us what happens to the wavefunction if we rotate our reference frame over an infinitesimally small angle ε. That’s what’s captured by the formulas we’ve developed, as summarized below:

Lz = x·py − y·px = (ħ/i)·[x·(∂/∂y) − y·(∂/∂x)], with Δψ = −(i/ħ)·ε·Lz·ψ for a rotation of the reference frame over the infinitesimally small angle ε
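As a sanity check on this summary, the sympy sketch below applies Lz to the same test state we used for the average, and confirms that it returns that state multiplied by ħ:

```python
# A sympy confirmation of the summary formula: applying
# Lz = (ħ/i)·[x·(∂/∂y) − y·(∂/∂x)] to the test state (x + iy)·exp(−(x² + y²)/2)
# returns ħ times that state, i.e. it is an eigenstate with Lz = ħ.
import sympy as sp

x, y = sp.symbols('x y', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = (x + sp.I*y) * sp.exp(-(x**2 + y**2)/2)   # assumed test state

Lz_psi = (hbar/sp.I) * (x*sp.diff(psi, y) - y*sp.diff(psi, x))
print(sp.simplify(Lz_psi - hbar*psi))           # 0
```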

Likewise, the linear momentum operator captures what happens to the wavefunction for an infinitesimally small displacement of the reference frame, as shown by the equivalent formulas below:

px = (ħ/i)·(∂/∂x), with Δψ = −(i/ħ)·Δx·px·ψ for an infinitesimally small displacement Δx of the reference frame
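And here is the same kind of first-order check for the displacement – again with an arbitrary test function, and keeping the sign conventions we used for the rotation:

```python
# A first-order check for a small displacement δ of the reference frame:
# ψ(x − δ) − ψ(x) should equal −(i/ħ)·δ·px·ψ, with px = (ħ/i)·∂/∂x. The test
# function is, again, an arbitrary choice.
import sympy as sp

x, delta = sp.symbols('x delta', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.exp(-x**2) * sp.sin(3*x)                # arbitrary test function

first_order = sp.series(psi.subs(x, x - delta) - psi, delta, 0, 2).removeO()
operator_form = -(sp.I/hbar) * delta * (hbar/sp.I) * sp.diff(psi, x)

print(sp.simplify(first_order - operator_form))  # 0
```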

What’s the interpretation for the position operator, and the energy operator? Here we are not so sure. The integrals above make sense, but these integrals are used to calculate average values, as opposed to instantaneous values. So… Well… There is not all that much I can say about the position and energy operator right now, except… Well… We now need to explore the question of how averages could possibly change over time. Let’s do that now.

Averages that change with time

I know: you are totally quantum-mechanicked out by now. So am I. But we’re almost there. In fact, this is Feynman’s last Lecture on quantum mechanics and, hence, I think I should let the Master speak here. So just click on the link and read for yourself. It’s a really interesting chapter, as he shows us the equivalent of Newton’s Law in quantum mechanics, as well as the quantum-mechanical equivalent of other standard equations in classical mechanics. However, I need to warn you: Feynman keeps testing the limits of our intellectual absorption capacity by switching back and forth between matrix and wave mechanics. Interesting, but not easy. For example, you’ll need to remind yourself of the fact that the Hamiltonian matrix is equal to its own conjugate transpose (the matrix equivalent of a complex conjugate), i.e. it is Hermitian.

Having said that, it’s all wonderful. The time rate of change of all those average values is denoted by using the over-dot notation. For example, the time rate of change of the average position is denoted by:

⟨ẋ⟩ = d⟨x⟩/dt

Once you ‘get’ that new notation, you will quickly understand the derivations. They are not easy (what derivations are in quantum mechanics?), but we get very interesting results. Nice things to play with, or think about—like this identity:

⟨ẋ⟩ = (i/ħ)·⟨H·x − x·H⟩ = ⟨px⟩/m

It takes a while, but you suddenly realize this is the equivalent of the classical dx/dt = v = p/m formula. 🙂

Another sweet result is the following one:

⟨ṗx⟩ = (i/ħ)·⟨H·px − px·H⟩ = −⟨∂V/∂x⟩

This is the quantum-mechanical equivalent of Newton’s force law: F = m·a. Huh? Yes. Think of it: the spatial derivative of the (potential) energy is the force, with a minus sign. Now just think of the classical dp/dt = d(m·v)/dt = m·dv/dt = m·a formula. […] Can you see it now? Isn’t this just Great Fun?
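To see Newton at work inside the wave equation, here is a numerical sketch of both identities – assuming ħ = m = 1, a harmonic potential V = x²/2 (my choice, for illustration only), and a displaced Gaussian wavepacket evolved with a standard split-step Fourier scheme:

```python
# A numerical sketch of both identities, with ħ = m = 1, an assumed harmonic
# potential V = x²/2, and a displaced Gaussian wavepacket. We check that
# d⟨x⟩/dt ≈ ⟨p⟩/m and that d⟨p⟩/dt ≈ −⟨dV/dx⟩ = −⟨x⟩ for this potential.
import numpy as np

N, dt, steps = 1024, 0.001, 200
x = np.linspace(-20, 20, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, dx)                # wavenumber grid
V = x**2/2
psi = np.exp(-(x - 2.0)**2/2).astype(complex)    # Gaussian displaced to x = 2
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

def averages(psi):
    phi = np.fft.fft(psi)
    avg_x = np.sum(np.abs(psi)**2 * x) * dx
    avg_p = np.sum(np.abs(phi)**2 * k) / np.sum(np.abs(phi)**2)
    return avg_x, avg_p

xs, ps = [], []
for _ in range(steps):
    ax, ap = averages(psi)
    xs.append(ax); ps.append(ap)
    psi = np.exp(-0.5j*dt*V) * psi                              # half potential kick
    psi = np.fft.ifft(np.exp(-0.5j*dt*k**2) * np.fft.fft(psi))  # kinetic drift
    psi = np.exp(-0.5j*dt*V) * psi                              # half potential kick

xs, ps = np.array(xs), np.array(ps)
print(np.abs(np.gradient(xs, dt) - ps).max())    # ≈ 0: d⟨x⟩/dt = ⟨p⟩/m
print(np.abs(np.gradient(ps, dt) + xs).max())    # ≈ 0: d⟨p⟩/dt = −⟨x⟩
```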

Note, however, that these formulas also show the limits of our analysis so far, because they treat m as some constant. Hence, we’ll need to relativistically correct them. But that’s complicated, and so we’ll postpone that to another day.

[…]

Well… That’s it, folks! We’re really through! This was the last of the last of Feynman’s Lectures on Physics. So we’re totally done now. Isn’t this great? What an adventure! I hope that, despite the enormous mental energy that’s required to digest all this stuff, you enjoyed it as much as I did. 🙂

Post scriptum 1: I just love Feynman but, frankly, I think he’s sometimes somewhat sloppy with terminology. In regard to what these operators really mean, we should use better terminology: an average is something other than an expected value. Our momentum operator, for example, as such returns an expected value – not an average momentum. We need to deepen the analysis here somewhat, but I’ll also leave that for later.

Post scriptum 2: There is something really interesting about that i·ħ or −(i/ħ) scaling factor – or whatever you want to call it – appearing in our formulas. Remember the Schrödinger equation can also be written as:

i·ħ·∂ψ/∂t = −(1/2)·(ħ²/m)·∇²ψ + V·ψ = H·ψ

This is interesting in light of our interpretation of the Schrödinger equation as an energy propagation mechanism. If we write Schrödinger’s equation like this, then we have the energy on the right-hand side – which is time-independent. How do we interpret the left-hand side now? Well… It’s kind of simple: we just have the time rate of change of the real and imaginary part of the wavefunction here, and the i·ħ factor then becomes a sort of unit in which we measure that time rate of change. Alternatively, you may think of ‘splitting’ Planck’s constant in two – a Planck energy unit and a Planck time unit – and then you bring the energy unit to the other side, so we’d express the energy in natural units. Likewise, the time rate of change of the components of our wavefunction would then also be measured in natural time units.

I know this is all very abstract but, frankly, it’s crystal clear to me. This formula tells us that the energy of the particle that’s being described by the wavefunction is being carried by the oscillations of the wavefunction. In fact, the oscillations are the energy. You can play with the mass factor, by moving it to the left-hand side too, or by using Einstein’s mass-energy equivalence relation. The interpretation remains consistent.

In fact, there is something really interesting here. You know that we usually separate out the spatial and temporal part of the wavefunction, so we write: ψ(r, t) = ψ(r)·e−i·(E/ħ)·t. In fact, it is quite common to refer to ψ(r) – rather than to ψ(r, t) – as the wavefunction, even if, personally, I find that quite confusing and misleading (see my page on Schrödinger’s equation). Now, we may want to think of what happens when we’d apply the energy operator to ψ(r) rather than to ψ(r, t). We may think that we’d get a time-independent value for the energy at that point in space, so energy is some function of position only, not of time. That’s an interesting thought, and we should explore it. For example, we may then think of energy as an average that changes with position—as opposed to the (average) position and momentum, which we like to think of as averages that change with time, as mentioned above. I will come back to this later – but perhaps in another post or so. Not now. The only point I want to mention here is the following: you cannot use ψ(r) in Schrödinger’s equation. Why? Well… Schrödinger’s equation is no longer valid when we substitute ψ(r) for ψ(r, t), because the left-hand side is then always zero, as ∂ψ(r)/∂t is zero – for any r.
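The sympy sketch below illustrates that last point – in one dimension, for simplicity: applying i·ħ·∂/∂t to the full ψ(x, t) returns E·ψ(x, t), but applying it to ψ(x) alone returns zero:

```python
# A sympy illustration: i·ħ·∂/∂t applied to the full ψ(x, t) = ψ(x)·exp(−i·(E/ħ)·t)
# returns E·ψ(x, t), but applied to the spatial part ψ(x) alone it returns zero.
import sympy as sp

x, t = sp.symbols('x t', real=True)
E, hbar = sp.symbols('E hbar', positive=True)
psi_space = sp.Function('psi')(x)                   # ψ(x): spatial part only
psi_full = psi_space * sp.exp(-sp.I*(E/hbar)*t)     # ψ(x, t)

print(sp.simplify(sp.I*hbar*sp.diff(psi_full, t)))  # E·ψ(x)·exp(−i·(E/ħ)·t)
print(sp.I*hbar*sp.diff(psi_space, t))              # 0: the left-hand side dies
```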

There is another, related, point to this observation. If you think that Schrödinger’s equation implies that the operators on both sides of it must be equivalent (i.e. the same), you’re wrong:

i·ħ·∂/∂t ≠ H = −(1/2)·(ħ²/m)·∇² + V

It’s a basic thing, really: Schrödinger’s equation is not valid for just any function. Hence, it does not work for ψ(r). Only ψ(r, t) makes it work, because… Well… Schrödinger’s equation gave us ψ(r, t)!