The speed of light as an angular velocity (2)

My previous post on the speed of light as an angular velocity was rather cryptic. This post will be a bit more elaborate. Not all that much, however: this stuff is and remains quite dense, unfortunately. 😦 But I’ll do my best to try to explain what I am thinking of. Remember the formula (or definition) of the elementary wavefunction:

ψ = a·e^(−i·[E·t − p∙x]/ħ) = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

How should we interpret this? We know an actual particle will be represented by a wave packet: a sum of wavefunctions, each with its own amplitude aₖ and its own argument θₖ = (Eₖ∙t − pₖ∙x)/ħ. But… Well… Let’s see how far we get when analyzing the elementary wavefunction itself only.

According to mathematical convention, the imaginary unit (i) is a 90° angle in the counterclockwise direction. However, Nature surely cannot be bothered about our convention of measuring phase angles – or time itself – clockwise or counterclockwise. Therefore, both right- as well as left-handed polarization may be possible, as illustrated below.

The left-handed elementary wavefunction would be written as:

ψ = a·e^(+i·[E·t − p∙x]/ħ) = a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ)

In my previous posts, I hypothesized that the two physical possibilities correspond to the angular momentum of our particle – say, an electron – being either positive or negative: J = +ħ/2 or, else, J = −ħ/2. I will come back to this in a moment. Let us first further examine the functional form of the wavefunction.

We should note that both the direction as well as the magnitude of the (linear) momentum (p) are relative: they depend on the orientation and relative velocity of our reference frame – which are, in effect, relative to the reference frame of our object. As such, the wavefunction itself is relative: another observer will obtain a different value for both the momentum (p) as well as for the energy (E). Of course, this makes us think of the relativity of the electric and magnetic field vectors (E and B) but… Well… It’s not quite the same because – as I will explain in a moment – the argument of the wavefunction, considered as a whole, is actually invariant under a Lorentz transformation.

Let me elaborate this point. If we consider the reference frame of the particle itself, then the idea of direction and momentum sort of vanishes, as the momentum vector shrinks to the origin itself: p = 0. Let us now look at how the argument of the wavefunction transforms. The E and p in the argument of the wavefunction (θ = ω∙t − k∙x = (E/ħ)∙t − (p/ħ)∙x = (E∙t − p∙x)/ħ) are, of course, the energy and momentum as measured in our frame of reference. Hence, we will want to write these quantities as E = Eᵥ and p = pᵥ = mᵥ·v. If we then use natural time and distance units (hence, the numerical value of c is equal to 1 and, hence, the (relative) velocity is measured as a fraction of c, with a value between 0 and 1), we can relate the energy and momentum of a moving object to its energy and momentum when at rest using the following relativistic formulas:

E = γ·E₀ and p = γ·m₀·v = γ·E₀·v/c²

The argument of the wavefunction can then be re-written as:

θ = [γ·E₀/ħ]∙t − [(γ·E₀·v/c²)/ħ]∙x = (E₀/ħ)·γ·(t − v∙x/c²) = (E₀/ħ)∙t’

The γ in these formulas is, of course, the Lorentz factor, and t’ is the proper time: t’ = (t − v∙x/c²)/√(1 − v²/c²). Two essential points should be noted here:

1. The argument of the wavefunction is invariant. There is a primed time (t’) but there is no primed θ (θ’): θ = (Eᵥ/ħ)·t − (pᵥ/ħ)·x = (E₀/ħ)∙t’.

2. The E₀/ħ coefficient pops up as an angular frequency: E₀/ħ = ω₀. We may refer to it as the frequency of the elementary wavefunction.
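The invariance claimed in point 1 is easy to verify numerically. The sketch below is a hedged illustration, not part of the original derivation: the electron rest energy, the 0.6·c velocity and the event picked on the particle’s worldline are all arbitrary choices for the demo. It checks that the phase computed from E and p in our frame equals the phase computed from E₀ and the proper time t’:

```python
import math

c = 299792458.0          # speed of light, m/s
hbar = 1.054571817e-34   # reduced Planck constant, J·s
E0 = 8.187105e-14        # electron rest energy, J (illustrative choice)

v = 0.6 * c              # arbitrary relative velocity for the demo
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

E = gamma * E0                 # energy as measured in our frame
p = gamma * (E0 / c**2) * v    # momentum as measured in our frame

# pick an event on the particle's worldline: x = v·t
t = 1.0e-20
x = v * t

theta_lab = (E * t - p * x) / hbar    # phase from E and p
t_prime = gamma * (t - v * x / c**2)  # proper time t' = γ·(t − v·x/c²)
theta_rest = (E0 / hbar) * t_prime    # phase from E0 and t'

assert abs(theta_lab - theta_rest) < 1e-9 * abs(theta_lab)
```

The two phases agree to machine precision, whatever velocity or event one picks.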

Now, if you don’t like the concept of angular frequency, we can also write: f₀ = ω₀/2π = (E₀/ħ)/2π = E₀/h. Alternatively, and perhaps more elucidating, we get the following formula for the period of the oscillation:

T₀ = 1/f₀ = h/E₀

This is interesting, because we can look at the period as a natural unit of time for our particle. This period is inversely proportional to the (rest) energy of the particle, and the constant of proportionality is h. Substituting E₀ for m₀·c², we may also say it is inversely proportional to the (rest) mass of the particle, with the constant of proportionality equal to h/c². The period of an electron, for example, would be equal to about 8×10⁻²¹ s. That’s very small, and it only gets smaller for larger objects! But what does all of this really tell us? What does it actually mean?
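As a quick sanity check on that number, nothing but the T₀ = h/E₀ formula and standard constant values go in here:

```python
h = 6.62607015e-34    # Planck's constant, J·s
E0 = 8.187105e-14     # electron rest energy, J (~0.511 MeV)

T0 = h / E0           # natural period T0 = 1/f0 = h/E0
assert 8.0e-21 < T0 < 8.2e-21   # indeed about 8×10⁻²¹ s
```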

We can look at the sine and cosine components of the wavefunction as an oscillation in two dimensions, as illustrated below.

[Illustration: a point moving around a circle, with its cosine and sine projections]

Look at the little green dot going around. Imagine it is some mass going around and around. Its circular motion is equivalent to the two-dimensional oscillation. Indeed, instead of saying it moves along a circle, we may also say it moves simultaneously (1) left and right and back again (the cosine) while also moving (2) up and down and back again (the sine).

Now, a mass that rotates about a fixed axis has angular momentum, which we can write as the vector cross-product L = r×p or, alternatively, as the product of an angular velocity (ω) and rotational inertia (I), aka the moment of inertia or the angular mass: L = I·ω. [Note that we write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface).]

We can now do some calculations. We already know the angular velocity (ω) is equal to E₀/ħ. Now, the magnitude of r in the L = r×p vector cross-product should equal the magnitude of ψ = a·e^(−i∙E·t/ħ), so we write: r = a. What’s next? Well… The momentum (p) is the product of a linear velocity (v) – in this case, the tangential velocity – and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω.

So now we only need to think about what formula we should use for the angular mass. If we’re thinking, as we are doing here, of some point mass going around some center, then the formula to use is I = m·r2. However, we may also want to think that the two-dimensional oscillation of our point mass actually describes the surface of a disk, in which case the formula for I becomes I = m·r2/2. Of course, the addition of this 1/2 factor may seem arbitrary but, as you will see, it will give us a more intuitive result. This is what we get:

L = I·ω = (m·r²/2)·(E/ħ) = (1/2)·a²·(E/c²)·(E/ħ) = a²·E²/(2·ħ·c²)

Note that our frame of reference is that of the particle itself, so we should actually write ω₀, m₀ and E₀ instead of ω, m and E. The value of the rest energy of an electron is about 0.510 MeV, or 8.1871×10⁻¹⁴ N∙m. Now, this angular momentum should equal J = ±ħ/2. We can, therefore, derive the (Compton scattering) radius of an electron:

a²·E²/(2·ħ·c²) = ħ/2 ⇔ a = ħ·c/E = ħ/(m·c)

Substituting the various constants with their numerical values, we find that a is equal to 3.8616×10⁻¹³ m, which is the (reduced) Compton scattering radius of an electron. The (tangential) velocity (v) can now be calculated as being equal to v = r·ω = a·ω = [ħ/(m·c)]·(E/ħ) = c. This is an amazing result. Let us think about it.
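The little calculation above is easy to reproduce numerically. The sketch below just re-traces the algebra with standard constant values, and recovers both the 3.8616×10⁻¹³ m radius and the v = a·ω = c result:

```python
c = 299792458.0          # m/s
hbar = 1.054571817e-34   # J·s
E0 = 8.187105e-14        # electron rest energy, J
m0 = E0 / c**2           # rest mass, kg

a = hbar / (m0 * c)      # radius implied by L = a²·E²/(2·ħ·c²) = ħ/2
omega = E0 / hbar        # angular frequency ω = E0/ħ
v = a * omega            # tangential velocity

assert abs(a - 3.8616e-13) / 3.8616e-13 < 1e-4   # reduced Compton radius
assert abs(v - c) < 1e-3                         # v comes out as the speed of light
```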

In our previous posts, we introduced the metaphor of two springs or oscillators, whose energy was equal to E = m·ω². Is this compatible with Einstein’s E = m·c² mass-energy equivalence relation? It is. The E = m·c² equation implies E/m = c². We, therefore, can write the following:

ω = E/ħ = m·c²/ħ = m·(E/m)/ħ ⇔ ω = E/ħ

Hence, we should actually have titled this and the previous post somewhat differently: the speed of light appears as a tangential velocity. Think of the following: the ratio of c and ω is equal to c/ω = a·ω/ω = a. Hence, the tangential and angular velocity would be the same if we’d measure distance in units of a. In other words, the radius of an electron appears as a natural distance unit here: if we’d measure ω in units of a per second, rather than in radians (which are expressed in the SI unit of distance, i.e. the meter) per second, the two concepts would coincide.

More fundamentally, we may want to look at the radius of an electron as a natural unit of velocity. Huh? Yes. Just re-write c/ω = a as ω = c/a. What does it say? Exactly what I said, right? As such, the radius of an electron is not only a norm for measuring distance but also for time. 🙂

If you don’t quite get this, think of the following. For an electron, we get an angular frequency that is equal to ω = E/ħ = (8.19×10⁻¹⁴ N·m)/(1.05×10⁻³⁴ N·m·s) ≈ 7.76×10²⁰ radians per second. That’s an incredible velocity, because radians are expressed in distance units – so that’s in meter. However, our mass is not moving along the unit circle, but along a much tinier orbit. The ratio of the radius of the unit circle and a is equal to 1/a ≈ (1 m)/(3.86×10⁻¹³ m) ≈ 2.59×10¹². Now, if we divide the above-mentioned velocity of 7.76×10²⁰ radians per second by this factor, we get… Right! The speed of light: 2.998×10⁸ m/s. 🙂

Post scriptum: I have no clear answer to the question as to why we should use the I = m·r2/2 formula, as opposed to the I = m·r2 formula. It ensures we get the result we want, but this 1/2 factor is actually rather enigmatic. It makes me think of the 1/2 factor in Schrödinger’s equation, which is also quite enigmatic. In my view, the 1/2 factor should not be there in Schrödinger’s equation. Electron orbitals tend to be occupied by two electrons with opposite spin. That’s why their energy levels should be twice as much. And so I’d get rid of the 1/2 factor, solve for the energy levels, and then divide them by two again. Or something like that. 🙂 But then that’s just my personal opinion or… Well… I’ve always been intrigued by the difference between the original printed edition of the Feynman Lectures and the online version, which has been edited on this point. My printed edition is the third printing, which is dated July 1966, and – on this point – it says the following:

“Don’t forget that meff has nothing to do with the real mass of an electron. It may be quite different—although in commonly used metals and semiconductors it often happens to turn out to be the same general order of magnitude, about 2 to 20 times the free-space mass of the electron.”

Two to twenty times. Not 1 or 0.5 to 20 times. No. Two times. As I’ve explained a couple of times, if we’d define a new effective mass which would be twice the old concept – so meffNEW = 2∙meffOLD – then such a re-definition would not only solve a number of paradoxes and inconsistencies, but it would also justify my interpretation of energy as a two-dimensional oscillation of mass.

However, the online edition has been edited here to reflect the current knowledge about the behavior of an electron in a medium. Hence, if you click on the link above, you will read that the effective mass can be “about 0.1 to 30 times” the free-space mass of the electron. Well… This is another topic altogether, and so I’ll sign off here and let you think about it all. 🙂

The speed of light as an angular velocity

Over the weekend, I worked on a revised version of my paper on a physical interpretation of the wavefunction. However, I forgot to add the final remarks on the speed of light as an angular velocity. I know… This post is for my faithful followers only. It is dense, but let me add the missing bits here:

[two formula images, snipped from the paper]

Post scriptum (29 October): Einstein’s view on aether theories probably still holds true: “We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity, space without aether is unthinkable – for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.”

The above quote is taken from the Wikipedia article on aether theories. The same article also quotes Robert Laughlin, the 1998 Nobel Laureate in Physics, who said this about aether in 2005: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed. […] The word ‘aether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. […] The modern concept of the vacuum of space, confirmed every day by experiment, is a relativistic aether. But we do not call it this because it is taboo.”

I really love this: a relativistic aether. My interpretation of the wavefunction is very consistent with that.

Wavefunctions as gravitational waves

This is the paper I always wanted to write. It is there now, and I think it is good – and that’s an understatement. 🙂 It is probably best to download it as a pdf-file from the viXra.org site because this was a rather fast ‘copy and paste’ job from the Word version of the paper, so there may be issues with boldface notation (vector notation), italics and, most importantly, with formulas – which I, sadly, have to ‘snip’ into this WordPress blog, as it doesn’t have an easy copy function for mathematical formulas.

It’s great stuff. If you have been following my blog – and many of you have – you will want to digest this. 🙂

Abstract: This paper explores the implications of associating the components of the wavefunction with a physical dimension: force per unit mass – which is, of course, the dimension of acceleration (m/s²) and of gravitational fields. The classical electromagnetic field equations for energy densities, the Poynting vector and spin angular momentum are then re-derived by substituting the electromagnetic N/C unit of field strength (force per unit charge) with the new N/kg = m/s² dimension.

The results are elegant and insightful. For example, the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities, which establishes a physical normalization condition. Also, Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy, and the wavefunction itself can be interpreted as a propagating gravitational wave. Finally, as an added bonus, concepts such as the Compton scattering radius for a particle, spin angular momentum, and the boson-fermion dichotomy, can also be explained more intuitively.

While the approach offers a physical interpretation of the wavefunction, the author argues that the core of the Copenhagen interpretation revolves around the complementarity principle, which remains unchallenged because the interpretation of amplitude waves as traveling fields does not explain the particle nature of matter.

Introduction

This is not another introduction to quantum mechanics. We assume the reader is already familiar with the key principles and, importantly, with the basic math. We offer an interpretation of wave mechanics. As such, we do not challenge the complementarity principle: the physical interpretation of the wavefunction that is offered here explains the wave nature of matter only. It explains diffraction and interference of amplitudes but it does not explain why a particle will hit the detector not as a wave but as a particle. Hence, the Copenhagen interpretation of the wavefunction remains relevant: we just push its boundaries.

The basic ideas in this paper stem from a simple observation: the quantum-mechanical wavefunction and an electromagnetic wave are geometrically similar. The components of both waves are orthogonal to the direction of propagation and to each other. Only the relative phase differs: the electric and magnetic field vectors (E and B) have the same phase. In contrast, the phases of the real and imaginary part of the (elementary) wavefunction (ψ = a·e^(i∙θ) = a∙cosθ + i·a∙sinθ) differ by 90 degrees (π/2).[1] Pursuing the analogy, we explore the following question: if the oscillating electric and magnetic field vectors of an electromagnetic wave carry the energy that one associates with the wave, can we analyze the real and imaginary part of the wavefunction in a similar way?

We show the answer is positive and remarkably straightforward.  If the physical dimension of the electromagnetic field is expressed in newton per coulomb (force per unit charge), then the physical dimension of the components of the wavefunction may be associated with force per unit mass (newton per kg).[2] Of course, force over some distance is energy. The question then becomes: what is the energy concept here? Kinetic? Potential? Both?

The similarity between the energy of a (one-dimensional) linear oscillator (E = m·a²·ω²/2) and Einstein’s relativistic energy equation E = m∙c² inspires us to interpret the energy as a two-dimensional oscillation of mass. To assist the reader, we construct a two-piston engine metaphor.[3] We then adapt the formula for the electromagnetic energy density to calculate the energy densities for the wavefunction. The results are elegant and intuitive: the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities. Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy itself.

As an added bonus, concepts such as the Compton scattering radius for a particle and spin angular momentum, as well as the boson-fermion dichotomy, can be explained in a fully intuitive way.[4]

Of course, such interpretation is also an interpretation of the wavefunction itself, and the immediate reaction of the reader is predictable: the electric and magnetic field vectors are, somehow, to be looked at as real vectors. In contrast, the real and imaginary components of the wavefunction are not. However, this objection needs to be phrased more carefully. First, it may be noted that, in a classical analysis, the magnetic force is a pseudovector itself.[5] Second, a suitable choice of coordinates may make quantum-mechanical rotation matrices irrelevant.[6]

Therefore, the author is of the opinion that this little paper may provide some fresh perspective on the question, thereby further exploring Einstein’s basic sentiment in regard to quantum mechanics, which may be summarized as follows: there must be some physical explanation for the calculated probabilities.[7]

We will, therefore, start with Einstein’s relativistic energy equation (E = mc2) and wonder what it could possibly tell us. 

I. Energy as a two-dimensional oscillation of mass

The structural similarity between the relativistic energy formula, the formula for the total energy of an oscillator, and the kinetic energy of a moving body, is striking:

  1. E = m·c²
  2. E = m·ω²/2
  3. E = m·v²/2

In these formulas, ω, v and c all describe some velocity.[8] Of course, there is the 1/2 factor in the E = m·ω²/2 formula[9], but that is exactly the point we are going to explore here: can we think of an oscillation in two dimensions, so it stores an amount of energy that is equal to E = 2·m·ω²/2 = m·ω²?

That is easy enough. Think, for example, of a V-2 engine with the pistons at a 90-degree angle, as illustrated below. The 90° angle makes it possible to perfectly balance the counterweight and the pistons, thereby ensuring smooth travel at all times. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down and provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring. Hence, we can describe it by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs.

Figure 1: Oscillations in two dimensions (V-2 engine)

If we assume there is no friction, we have a perpetuum mobile here. The compressed air and the rotating counterweight (which, combined with the crankshaft, acts as a flywheel[10]) store the potential energy. The moving masses of the pistons store the kinetic energy of the system.[11]

At this point, it is probably good to quickly review the relevant math. If the magnitude of the oscillation is equal to a, then the motion of the piston (or the mass on a spring) will be described by x = a·cos(ω·t + Δ).[12] Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t–π/2) = a·sin(ω·t).

The kinetic and potential energy of one oscillator (think of one piston or one spring only) can then be calculated as:

  1. K.E. = T = m·v²/2 = (1/2)·m·ω²·a²·sin²(ω·t + Δ)
  2. P.E. = U = k·x²/2 = (1/2)·k·a²·cos²(ω·t + Δ)

The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω². Hence, the total energy is equal to:

E = T + U = (1/2)·m·ω²·a²·[sin²(ω·t + Δ) + cos²(ω·t + Δ)] = m·a²·ω²/2

To facilitate the calculations, we will briefly assume k = m·ω² and a are equal to 1. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be equal to:

d(sin²θ)/dθ = 2∙sinθ∙d(sinθ)/dθ = 2∙sinθ∙cosθ

Let us look at the second oscillator now. Just think of the second piston going up and down in the V-2 engine. Its motion is given by the sinθ function, which is equal to cos(θ − π/2). Hence, its kinetic energy is equal to sin²(θ − π/2), and how it changes – as a function of θ – will be equal to:

2∙sin(θ − π/2)∙cos(θ − π/2) = −2∙cosθ∙sinθ = −2∙sinθ∙cosθ

We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the crankshaft will rotate with a constant angular velocity: linear motion becomes circular motion, and vice versa, and the total energy that is stored in the system is T + U = m·a²·ω².
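The bookkeeping above is easy to verify numerically. In the sketch below, m, a and ω are all set to 1 – an arbitrary choice, made only for the check – and the total energy of the two 90°-phase-shifted oscillators indeed stays constant at m·a²·ω²:

```python
import math

# Arbitrary illustrative values: unit mass, unit amplitude, unit frequency.
m, a, omega = 1.0, 1.0, 1.0
k = m * omega**2          # stiffness implied by the dynamics (F = -k·x)

def total_energy(t):
    # piston 1: x1 = a·cos(ωt); piston 2: x2 = a·cos(ωt − π/2) = a·sin(ωt)
    x1, v1 = a * math.cos(omega * t), -a * omega * math.sin(omega * t)
    x2, v2 = a * math.sin(omega * t),  a * omega * math.cos(omega * t)
    T = 0.5 * m * (v1**2 + v2**2)   # kinetic energy of both pistons
    U = 0.5 * k * (x1**2 + x2**2)   # potential energy of both "air springs"
    return T + U

# constant at m·a²·ω², i.e. twice the energy of a single oscillator
assert all(abs(total_energy(t / 10) - m * a**2 * omega**2) < 1e-12
           for t in range(100))
```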

We have a great metaphor here. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. We know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? Should we think of the c in our E = mc² formula as an angular velocity?

These are sensible questions. Let us explore them. 

II. The wavefunction as a two-dimensional oscillation

The elementary wavefunction is written as:

ψ = a·e^(i·θ) = a·e^(−i·[E·t − p∙x]/ħ) = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

When considering a particle at rest (p = 0) this reduces to:

ψ = a·e^(−i∙E·t/ħ) = a·cos(−E∙t/ħ) + i·a·sin(−E∙t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ)

Let us remind ourselves of the geometry involved, which is illustrated below. Note that the argument of the wavefunction rotates clockwise with time, while the mathematical convention for measuring the phase angle (ϕ) is counter-clockwise.

Figure 2: Euler’s formula

If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and the p∙x/ħ dot product reduces to p∙x/ħ, i.e. a product of magnitudes. Most illustrations – such as the one below – will either freeze x or, else, t. Alternatively, one can google web animations varying both. The point is: we also have a two-dimensional oscillation here. These two dimensions are perpendicular to the direction of propagation of the wavefunction. For example, if the wavefunction propagates in the x-direction, then the oscillations are along the y- and z-axis, which we may refer to as the real and imaginary axis. Note how the phase difference between the cosine and the sine – the real and imaginary part of our wavefunction – appears to give some spin to the whole. I will come back to this.

Figure 3: Geometric representation of the wavefunction

Hence, if we would say these oscillations carry half of the total energy of the particle, then we may refer to the real and imaginary energy of the particle respectively, and the interplay between the real and the imaginary part of the wavefunction may then describe how energy propagates through space over time.

Let us consider, once again, a particle at rest. Hence, p = 0 and the (elementary) wavefunction reduces to ψ = a·e^(−i∙E·t/ħ). Hence, the angular velocity of both oscillations, at some point x, is given by ω = −E/ħ. Now, the energy of our particle includes all of the energy – kinetic, potential and rest energy – and is, therefore, equal to E = mc².

Can we, somehow, relate this to the m·a²·ω² energy formula for our V-2 perpetuum mobile? Our wavefunction has an amplitude too. Now, if the oscillations of the real and imaginary part of the wavefunction store the energy of our particle, then their amplitude will surely matter. In fact, the energy of an oscillation is, in general, proportional to the square of the amplitude: E ∝ a². We may, therefore, think that the a² factor in the E = m·a²·ω² energy formula will surely be relevant as well.

However, here is a complication: an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction. We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude aᵢ and its own angular frequency ωᵢ = −Eᵢ/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. To calculate the contribution of each wave to the total, both aᵢ as well as Eᵢ will matter.

What is Eᵢ? Eᵢ varies around some average E, which we can associate with some average mass m: m = E/c². The Uncertainty Principle kicks in here. The analysis becomes more complicated, but a formula such as the one below might make sense:

E = Σᵢ aᵢ²·Eᵢ³/(c²·ħ²)

We can re-write this as:

c²·ħ²·E = Σᵢ aᵢ²·Eᵢ³

What is the meaning of this equation? We may look at it as some sort of physical normalization condition when building up the Fourier sum. Of course, we should relate this to the mathematical normalization condition for the wavefunction. Our intuition tells us that the probabilities must be related to the energy densities, but how exactly? We will come back to this question in a moment. Let us first think some more about the enigma: what is mass?

Before we do so, let us quickly calculate the value of c²·ħ²: it is about 1×10⁻⁵¹ N²∙m⁴. Let us also do a dimensional analysis: the physical dimensions of the E = m·a²·ω² equation make sense if we express m in kg, a in m, and ω in rad/s. We then get: [E] = kg∙m²/s² = (N∙s²/m)∙m²/s² = N∙m = J. The dimensions of the left- and right-hand side of the physical normalization condition are N³∙m⁵.
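A one-line numerical check of the c²·ħ² value, using standard SI constant values only:

```python
c = 299792458.0          # speed of light, m/s
hbar = 1.054571817e-34   # reduced Planck constant, J·s (= N·m·s)

val = (c * hbar) ** 2    # c²·ħ², in N²·m⁴
assert 0.9e-51 < val < 1.1e-51   # about 1×10⁻⁵¹ N²·m⁴
```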

III. What is mass?

We came up, playfully, with a meaningful interpretation for energy: it is a two-dimensional oscillation of mass. But what is mass? A new aether theory is, of course, not an option, but then what is it that is oscillating? To understand the physics behind equations, it is always good to do an analysis of the physical dimensions in the equation. Let us start with Einstein’s energy equation once again. If we want to look at mass, we should re-write it as m = E/c2:

[m] = [E/c²] = J/(m/s)² = N·m∙s²/m² = N·s²/m = kg

This is not very helpful. It only reminds us of Newton’s definition of mass: mass is that which gets accelerated by a force. At this point, we may want to think of the physical significance of the absolute nature of the speed of light. Einstein’s E = mc² equation implies that the ratio between the energy and the mass of any particle is always the same:

c² = E/m

This reminds us of the ω² = 1/(L·C) or ω² = k/m formulas for harmonic oscillators once again.[13] The key difference is that the ω² = 1/(L·C) and ω² = k/m formulas introduce two or more degrees of freedom.[14] In contrast, c² = E/m is valid for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light c emerges here as the defining property of spacetime – the resonant frequency, so to speak. We have no further degrees of freedom here.

 

The Planck-Einstein relation (for photons) and the de Broglie equation (for matter-particles) have an interesting feature: both imply that the energy of the oscillation is proportional to the frequency, with Planck’s constant as the constant of proportionality. Now, for one-dimensional oscillations – think of a guitar string, for example – we know the energy will be proportional to the square of the frequency. It is a remarkable observation: the two-dimensional matter-wave, or the electromagnetic wave, gives us two waves for the price of one, so to speak, each carrying half of the total energy of the oscillation but, as a result, we get a proportionality between E and f instead of between E and f2.

However, such reflections do not answer the fundamental question we started out with: what is mass? At this point, it is hard to go beyond the circular definition that is implied by Einstein’s formula: energy is a two-dimensional oscillation of mass, and mass packs energy, and c emerges as the property of spacetime that defines how exactly.

When everything is said and done, this does not go beyond stating that mass is some scalar field. Now, a scalar field is, quite simply, some real number that we associate with a position in spacetime. The Higgs field is a scalar field but, of course, the theory behind it goes much beyond stating that we should think of mass as some scalar field. The fundamental question is: why and how does energy, or matter, condense into elementary particles? That is what the Higgs mechanism is about but, as this paper is exploratory only, we cannot even start explaining the basics of it.

What we can do, however, is look at the wave equation again (Schrödinger’s equation), as we can now analyze it as an energy diffusion equation. 

IV. Schrödinger’s equation as an energy diffusion equation

The interpretation of Schrödinger’s equation as a diffusion equation is straightforward. Feynman (Lectures, III-16-1) briefly summarizes it as follows:

“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”[17]

Let us review the basic math. For a particle moving in free space – with no external force fields acting on it – there is no potential (U = 0), so the U·ψ term disappears and Schrödinger’s equation reduces to:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)

The ubiquitous diffusion equation in physics is:

∂φ(x, t)/∂t = D·∇²φ(x, t)

The structural similarity is obvious. The key difference between both equations is that the wave equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations[18]:

  1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇2ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇2ψ)
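For those who want to check the math, here is a quick numerical sketch of these two coupled equations for an elementary wavefunction ψ = a·exp(i(k·x − ω·t)). It uses natural units (ħ = meff = 1) and arbitrary values for a, k, x and t; the dispersion relation ω = ħ·k2/(2·meff) is the one we derive in section VI.

```python
import cmath

# Natural units (hbar = m_eff = 1); a, k, x, t are arbitrary illustrative values.
hbar, m_eff, a, k = 1.0, 1.0, 2.0, 3.0
omega = 0.5 * hbar * k ** 2 / m_eff    # dispersion relation: omega = hbar*k^2/(2*m_eff)
x, t = 0.4, 0.7

psi = a * cmath.exp(1j * (k * x - omega * t))
dpsi_dt = -1j * omega * psi            # analytic time derivative of psi
lap_psi = -(k ** 2) * psi              # analytic second derivative of psi in x

# 1. Re(dpsi/dt) = -(1/2)*(hbar/m_eff)*Im(lap psi)
# 2. Im(dpsi/dt) = +(1/2)*(hbar/m_eff)*Re(lap psi)
assert abs(dpsi_dt.real + 0.5 * (hbar / m_eff) * lap_psi.imag) < 1e-12
assert abs(dpsi_dt.imag - 0.5 * (hbar / m_eff) * lap_psi.real) < 1e-12
print("both equations hold")
```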

These equations make us think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = c2∇×B

The above equations effectively describe a propagation mechanism in spacetime, as illustrated below.

Figure 4: Propagation mechanisms

The Laplacian operator (∇2), when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m2). In this case, it is operating on ψ(x, t), so what is the dimension of our wavefunction ψ(x, t)? To answer that question, we should analyze the diffusion constant in Schrödinger’s equation, i.e. the (1/2)·(ħ/meff) factor:

  1. As a mathematical constant of proportionality, it will quantify the relationship between both derivatives (i.e. the time derivative and the Laplacian);
  2. As a physical constant, it will ensure the physical dimensions on both sides of the equation are compatible.

Now, the ħ/meff factor is expressed in (N·m·s)/(N·s2/m) = m2/s. Hence, it does ensure the dimensions on both sides of the equation are, effectively, the same: the time derivative ∂ψ/∂t adds a 1/s dimension to the dimension of ψ while, as mentioned above, the ∇2 operator adds a 1/m2 dimension. However, this does not solve our basic question: what is the dimension of the real and imaginary part of our wavefunction?
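To get a feel for the magnitude of this diffusion constant, we may plug in the mass of a free electron for meff – an illustrative choice only, as meff is discussed more carefully in Appendix 2:

```python
# Sketch: the "diffusion constant" (1/2)*(hbar/m_eff) in SI units, using the
# free-electron mass for m_eff (an illustrative choice, not a derived value).
hbar = 1.054571817e-34    # J*s = N*m*s
m_e = 9.1093837015e-31    # kg  = N*s^2/m
D = 0.5 * hbar / m_e      # (N*m*s)/(N*s^2/m) = m^2/s
print(D)  # ≈ 5.79e-5 m^2/s
```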

At this point, mainstream physicists will say: it does not have a physical dimension, and there is no geometric interpretation of Schrödinger’s equation. One may argue, effectively, that its argument, (px – E∙t)/ħ, is just a number and, therefore, that the real and imaginary part of ψ is also just some number.

To this, we may object that ħ may be looked at as a mathematical scaling constant only. If we do that, then the argument of ψ will, effectively, be expressed in action units, i.e. in N·m·s. It then does make sense to also associate a physical dimension with the real and imaginary part of ψ. What could it be?

We may have a closer look at Maxwell’s equations for inspiration here. The electric field vector is expressed in newton (the unit of force) per unit of charge (coulomb). Now, there is something interesting here. The physical dimension of the magnetic field is N/C divided by m/s.[19] We may write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, we may boldly write: B = (1/c)∙ex×E = (1/c)∙i∙E. This allows us to also geometrically interpret Schrödinger’s equation in the way we interpreted it above (see Figure 3).[20]

Still, we have not answered the question as to what the physical dimension of the real and imaginary part of our wavefunction should be. At this point, we may be inspired by the structural similarity between Newton’s and Coulomb’s force laws:

F = G·m1·m2/r2 and F = (1/4πε0)·q1·q2/r2

Hence, if the electric field vector E is expressed in force per unit charge (N/C), then we may want to think of associating the real part of our wavefunction with a force per unit mass (N/kg). We can, of course, do a substitution here, because the mass unit (1 kg) is equivalent to 1 N·s2/m. Hence, our N/kg dimension becomes:

N/kg = N/(N·s2/m) = m/s2

What is this: m/s2? Is that the dimension of the a·cosθ term in the a·e−iθ = a·cosθ − i·a·sinθ wavefunction?

My answer is: why not? Think of it: m/s2 is the physical dimension of acceleration: the increase or decrease in velocity (m/s) per second. It ensures the wavefunction for any particle – matter-particles or particles with zero rest mass (photons) – and the associated wave equation (which has to be the same for all, as the spacetime we live in is one) are mutually consistent.

In this regard, we should think of how we would model a gravitational wave. The physical dimension would surely be the same: force per mass unit. It all makes sense: wavefunctions may, perhaps, be interpreted as traveling distortions of spacetime, i.e. as tiny gravitational waves.

V. Energy densities and flows

Pursuing the geometric equivalence between the equations for an electromagnetic wave and Schrödinger’s equation, we can now, perhaps, see if there is an equivalent for the energy density. For an electromagnetic wave, we know that the energy density is given by the following formula:

u = (ε0/2)·(E∙E + c2·B∙B)

E and B are the electric and magnetic field vector respectively. The Poynting vector will give us the directional energy flux, i.e. the energy flow per unit area per unit time. We write:

S = ε0·c2·E×B, with −∂u/∂t = ∇∙S

Needless to say, the ∇∙ operator is the divergence and, therefore, gives us the magnitude of a (vector) field’s source or sink at a given point. To be precise, the divergence gives us the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. In this case, it gives us the volume density of the flux of S.

We can analyze the dimensions of the equation for the energy density as follows:

  1. E is measured in newton per coulomb, so [E∙E] = [E2] = N2/C2.
  2. B is measured in (N/C)/(m/s), so we get [B∙B] = [B2] = (N2/C2)·(s2/m2). However, the dimension of our c2 factor is (m2/s2) and so we’re also left with N2/C2.
  3. ε0 is the electric constant, aka the vacuum permittivity. As a physical constant, it should ensure the dimensions on both sides of the equation work out, and they do: [ε0] = C2/(N·m2) and, therefore, if we multiply that with N2/C2, we find that u is expressed in N/m2, which is the same as J/m3.[21]

Replacing the newton per coulomb unit (N/C) by the newton per kg unit (N/kg) in the formulas above should give us the equivalent of the energy density for the wavefunction. We just need to substitute an equivalent constant for ε0. We may want to give it a try. If the energy densities can be calculated – and they are also mass densities, obviously – then the probabilities should be proportional to them.

Let us first see what we get for a photon, assuming the electromagnetic wave represents its wavefunction. Substituting B for (1/c)∙i∙E or for −(1/c)∙i∙E gives us the following result:

u = (ε0/2)·(E∙E + c2·B∙B) = (ε0/2)·(E∙E + c2·(i2/c2)·E∙E) = (ε0/2)·(E∙E − E∙E) = 0

Zero!? An unexpected result! Or not? We have no stationary charges and no currents: only an electromagnetic wave in free space. Hence, the local energy conservation principle needs to be respected at all points in space and in time. The geometry makes sense of the result: for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously, as shown below.[22] This is because their phase is the same.
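The cancellation is easy to reproduce with complex arithmetic – a sketch of the bold B = (1/c)∙i∙E substitution, treating the field amplitude as a complex number (so that multiplication by i is a rotation by 90 degrees); the field value itself is arbitrary:

```python
# Sketch: substitute B = (1/c)*i*E into u = (eps0/2)*(E^2 + c^2*B^2).
# Note the squares (not absolute squares): the terms cancel because i^2 = -1.
c = 299792458.0
eps0 = 8.8541878128e-12
E = 5.0 + 2.0j                          # arbitrary complex field value
B = (1j / c) * E                        # B = (1/c)*i*E
u = (eps0 / 2) * (E ** 2 + c ** 2 * B ** 2)
print(abs(u) < 1e-20)  # → True: u vanishes (up to floating-point rounding)
```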

Figure 5: Electromagnetic wave: E and B

Should we expect a similar result for the energy densities that we would associate with the real and imaginary part of the matter-wave? For the matter-wave, we have a phase difference between a·cosθ and a·sinθ, which gives a different picture of the propagation of the wave (see Figure 3).[23] In fact, the geometry suggests some inherent spin, which is interesting. I will come back to this. Let us first guess those densities. Making abstraction of any scaling constants, we may write:

u = a2·cos2θ + a2·sin2θ = a2

We get what we hoped to get: the absolute square of our amplitude is, effectively, an energy density!

|ψ|2 = |a·e−i∙E·t/ħ|2 = a2 = u
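A two-line check (natural units with ħ = 1, and arbitrary values for a, E0 and t) confirms that this absolute square is, indeed, constant in time:

```python
import cmath

# Sketch: for the elementary wavefunction at rest, the absolute square is the
# constant a^2 at any time t. Natural units (hbar = 1); a, E0, t are arbitrary.
a, E0 = 3.0, 1.5
for t in (0.0, 0.25, 1.0, 7.3):
    psi = a * cmath.exp(-1j * E0 * t)   # psi = a*exp(-i*E0*t/hbar), hbar = 1
    assert abs(abs(psi) ** 2 - a ** 2) < 1e-12
print("abs(psi)^2 = a^2 =", a ** 2)
```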

This is very deep. A photon has no rest mass, so it borrows and returns energy from empty space as it travels through it. In contrast, a matter-wave carries energy and, therefore, has some (rest) mass. It is therefore associated with an energy density, and this energy density gives us the probabilities. Of course, we need to fine-tune the analysis to account for the fact that we have a wave packet rather than a single wave, but that should be feasible.

As mentioned, the phase difference between the real and imaginary part of our wavefunction (a cosine and a sine function) appears to give some spin to our particle. We do not have this particularity for a photon. Of course, photons are bosons – particles with integer spin – while elementary matter-particles are fermions with spin-1/2. Hence, our geometric interpretation of the wavefunction suggests that, after all, there may be some more intuitive explanation of the fundamental dichotomy between bosons and fermions, which puzzled even Feynman:

“Why is it that particles with half-integral spin are Fermi particles, whereas particles with integral spin are Bose particles? We apologize for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved.” (Feynman, Lectures, III-4-1)

The physical interpretation of the wavefunction, as presented here, may provide some better understanding of ‘the fundamental principle involved’: the physical dimension of the oscillation is just very different. That is all: it is force per unit charge for photons, and force per unit mass for matter-particles. We will examine the question of spin somewhat more carefully in section VII. Let us first examine the matter-wave some more. 

VI. Group and phase velocity of the matter-wave

The geometric representation of the matter-wave (see Figure 3) suggests a traveling wave and, yes, of course: the matter-wave effectively travels through space and time. But what is traveling, exactly? It is the pulse – or the signal – only: the phase velocity of the wave is just a mathematical concept and, even in our physical interpretation of the wavefunction, the same is true for the group velocity of our wave packet. The oscillation is two-dimensional, but perpendicular to the direction of travel of the wave. Hence, nothing actually moves with our particle.

Here, we should also reiterate that we did not answer the question as to what is oscillating up and down and/or sideways: we only associated a physical dimension with the components of the wavefunction – newton per kg (force per unit mass), to be precise. We were inspired to do so because of the physical dimension of the electric and magnetic field vectors (newton per coulomb, i.e. force per unit charge) we associate with electromagnetic waves which, for all practical purposes, we currently treat as the wavefunction for a photon. This made it possible to calculate the associated energy densities and a Poynting vector for energy dissipation. In addition, we showed that Schrödinger’s equation itself then becomes a diffusion equation for energy. However, let us now focus some more on the asymmetry which is introduced by the phase difference between the real and the imaginary part of the wavefunction. Look at the mathematical shape of the elementary wavefunction once again:

ψ = a·e−i[E·t − px]/ħ = a·ei[px − E·t]/ħ = a·cos(px/ħ − E∙t/ħ) + i·a·sin(px/ħ − E∙t/ħ)

The minus sign in the argument of our sine and cosine function defines the direction of travel: an F(x−v∙t) wavefunction will always describe some wave that is traveling in the positive x-direction (with the wave velocity), while an F(x+v∙t) wavefunction will travel in the negative x-direction. For a geometric interpretation of the wavefunction in three dimensions, we need to agree on how to define i or, what amounts to the same, a convention on how to define clockwise and counterclockwise directions: if we look at a clock from the back, then its hand will be moving counterclockwise. So we need to establish the equivalent of the right-hand rule. However, let us not worry about that now. Let us focus on the interpretation. To ease the analysis, we’ll assume we’re looking at a particle at rest. Hence, p = 0, and the wavefunction reduces to:

ψ = a·e−i∙E0·t/ħ = a·cos(−E0∙t/ħ) + i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)

E0 is, of course, the rest energy of our particle and, now that we are here, we should probably wonder whose time we are talking about: is it our time, or is it the proper time of our particle? Well… In this situation, we are both at rest, so it does not matter: t is, effectively, the proper time, so perhaps we should write it as t0. It does not matter. You can see what we expect to see: E0/ħ pops up as the natural frequency of our matter-particle: (E0/ħ)∙t = ω∙t. Remembering the ω = 2π·f = 2π/T and T = 1/f formulas, we can associate a period and a frequency with this wave. Noting that ħ = h/2π, we find the following:

T = 2π·(ħ/E0) = h/E0 ⇔ f = E0/h = m0c2/h
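For an electron, these formulas give concrete numbers – a quick calculation, using the electron rest energy (≈ 0.511 MeV) that we will also use in section VII:

```python
# Sketch: the "natural" period and frequency T = h/E0 and f = E0/h for an
# electron at rest (E0 ≈ 8.187e-14 J is the electron rest energy).
h = 6.62607015e-34          # J*s
E0 = 8.1871e-14             # J (≈ 0.511 MeV)
f = E0 / h                  # natural frequency, in Hz
T = h / E0                  # natural period, in s
print(f)  # ≈ 1.24e20 Hz
print(T)  # ≈ 8.09e-21 s
```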

This is interesting, because we can look at the period as a natural unit of time for our particle. What about the wavelength? That is tricky because we need to distinguish between group and phase velocity here. The group velocity (vg) should be zero here, because we assume our particle does not move. In contrast, the phase velocity is given by vp = λ·f = (2π/k)·(ω/2π) = ω/k. In fact, we’ve got something funny here: the wavenumber k = p/ħ is zero, because we assume the particle is at rest, so p = 0. So we have a division by zero here, which is rather strange. What do we get assuming the particle is not at rest? We write:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = E/(m·vg) = (m·c2)/(m·vg) = c2/vg

This is interesting: it establishes a reciprocal relation between the phase and the group velocity, with c2 as a simple scaling constant. Indeed, the graph below shows that the shape of the function does not change with the value of c, and we may also re-write the relation above as:

vp/c = βp = 1/βg = c/vg

Figure 6: Reciprocal relation between phase and group velocity

We can also write the mentioned relationship as vp·vg = c2, which reminds us of the relationship between the electric and magnetic constant (1/ε0)·(1/μ0) = c2. This is interesting in light of the fact we can re-write this as (c·ε0)·(c·μ0) = 1, which shows electricity and magnetism are just two sides of the same coin, so to speak.[24]
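A small sketch of the vp = c2/vg relation may be useful here; the sample group velocities are arbitrary:

```python
# Sketch of the reciprocal relation v_p = c^2/v_g between the phase and the
# group velocity; the sample group velocities are arbitrary illustrative values.
c = 299792458.0

def phase_velocity(v_group: float) -> float:
    """Phase velocity from v_p = c^2/v_g (v_g must be non-zero)."""
    return c ** 2 / v_group

for v_g in (0.1 * c, 0.5 * c, 0.99 * c, c):
    v_p = phase_velocity(v_g)
    assert abs(v_p * v_g / c ** 2 - 1) < 1e-12   # v_p * v_g = c^2
print("v_p * v_g = c^2 holds; for v_g = c, we get v_p = c")
```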

Interesting, but how do we interpret the math? What about the implications of the zero value for the wavenumber k = p/ħ? We would probably like to think it implies the elementary wavefunction should always be associated with some momentum, because the concept of zero momentum clearly leads to weird math: something times zero cannot be equal to c2! Such an interpretation is also consistent with the Uncertainty Principle: if Δx·Δp ≥ ħ, then neither Δx nor Δp can be zero. In other words, the Uncertainty Principle tells us that the idea of a pointlike particle actually being at some specific point in time and in space does not make sense: it has to move. It tells us that our concepts of dimensionless points in time and space are mathematical notions only. Actual particles – including photons – are always a bit spread out, so to speak, and – importantly – they have to move.

For a photon, this is self-evident. It has no rest mass, no rest energy, and, therefore, it is going to move at the speed of light itself. We write: p = m·c = m·c2/c = E/c. Using the relationship above, we get:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = c ⇒ vg = c2/vp = c2/c = c

This is good: we started out with some reflections on the matter-wave, but here we get an interpretation of the electromagnetic wave as a wavefunction for the photon. But let us get back to our matter-wave. In regard to our interpretation of a particle having to move, we should remind ourselves, once again, of the fact that an actual particle is always localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e−i[E·t − px]/ħ or, for a particle at rest, the ψ = a·e−i∙E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Indeed, in section II, we showed that each of these wavefunctions will contribute some energy to the total energy of the wave packet and that, to calculate the contribution of each wave to the total, both ai as well as Ei matter. This may or may not resolve the apparent paradox. Let us look at the group velocity.

To calculate a meaningful group velocity, we must assume the derivative vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂Ei/∂pi exists. So we must have some dispersion relation. How do we calculate it? We need to calculate ωi as a function of ki here, or Ei as a function of pi. How do we do that? Well… There are a few ways to go about it but one interesting way of doing it is to re-write Schrödinger’s equation as we did, i.e. by distinguishing the real and imaginary parts of the ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ wave equation and, hence, re-write it as the following pair of equations:

  1. Re(∂ψ/∂t) = −[ħ/(2meff)]·Im(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2meff)]·sin(kx − ωt)
  2. Im(∂ψ/∂t) = [ħ/(2meff)]·Re(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2meff)]·cos(kx − ωt)

Both equations imply the following dispersion relation:

ω = ħ·k2/(2meff)
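A quick check of this dispersion relation (a sketch in natural units, with an arbitrary value for k): the group velocity ∂ω/∂k it implies is ħ·k/meff = p/meff, i.e. the classical velocity of the particle.

```python
# Sketch: from omega = hbar*k^2/(2*m_eff), the group velocity d(omega)/dk
# equals hbar*k/m_eff = p/m_eff. Natural units (hbar = m_eff = 1); k arbitrary.
hbar, m_eff = 1.0, 1.0

def omega(k: float) -> float:
    return hbar * k ** 2 / (2 * m_eff)

k, h = 2.0, 1e-6
v_group = (omega(k + h) - omega(k - h)) / (2 * h)   # central finite difference
assert abs(v_group - hbar * k / m_eff) < 1e-6
print(v_group)  # ≈ 2.0 = hbar*k/m_eff
```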

Of course, we need to think about the subscripts now: we have ωi, ki, but… What about meff or, dropping the subscript, m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c2. It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein’s mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too. Here, I should refer back to Section II: Ei varies around some average energy E and, therefore, the Uncertainty Principle kicks in. 

VII. Explaining spin

The elementary wavefunction vector – i.e. the vector sum of the real and imaginary component – rotates around the x-axis, which gives us the direction of propagation of the wave (see Figure 3). Its magnitude remains constant. In contrast, the magnitude of the electromagnetic vector – defined as the vector sum of the electric and magnetic field vectors – oscillates between zero and some maximum (see Figure 5).

We already mentioned that the rotation of the wavefunction vector appears to give some spin to the particle. Of course, a circularly polarized electromagnetic wave would also appear to have spin (think of the E and B vectors rotating around the direction of propagation – as opposed to oscillating up and down or sideways only). In fact, circularly polarized light does carry angular momentum, as the equivalent mass of its energy may be thought of as rotating as well. But here we are looking at a matter-wave.

The basic idea is the following: if we look at ψ = a·e−i∙E·t/ħ as some real vector – as a two-dimensional oscillation of mass, to be precise – then we may associate its rotation around the direction of propagation with some torque. The illustration below reminds us of the math here.

Figure 7: Torque and angular momentum vectors

A torque on some mass about a fixed axis gives it angular momentum, which we can write as the vector cross-product L = r×p or, perhaps easier for our purposes here, as the product of an angular velocity (ω) and the rotational inertia (I), aka the moment of inertia or the angular mass. We write:

L = I·ω

Note we can write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface). We can now do some calculations. Let us start with the angular velocity. In our previous posts, we showed that the period of the matter-wave is equal to T = 2π·(ħ/E0). Hence, the angular velocity must be equal to:

ω = 2π/[2π·(ħ/E0)] = E0/ħ

We also know the distance r, so that is the magnitude of r in the L = r×p vector cross-product: it is just a, i.e. the magnitude of ψ = a·e−i∙E·t/ħ. Now, the momentum (p) is the product of a linear velocity (v) – in this case, the tangential velocity – and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω. So now we only need to think about what we should use for m or, if we want to work with the angular velocity (ω), the angular mass (I). Here we need to make some assumption about the mass (or energy) distribution. Now, it may or may not make sense to assume the energy in the oscillation – and, therefore, the mass – is distributed uniformly. In that case, we may use the formula for the angular mass of a solid cylinder: I = m·r2/2. If we keep the analysis non-relativistic, then m = m0. Of course, the energy-mass equivalence tells us that m0 = E0/c2. Hence, this is what we get:

L = I·ω = (m0·r2/2)·(E0/ħ) = (1/2)·a2·(E0/c2)·(E0/ħ) = a2·E02/(2·ħ·c2)

Does it make sense? Maybe. Maybe not. Let us do a dimensional analysis: that won’t check our logic, but it makes sure we made no mistakes when mapping mathematical and physical spaces. We have m2·J2 = m2·N2·m2 = N2·m4 in the numerator and N·m·s·m2/s2 = N·m3/s in the denominator. Hence, the dimensions work out: we get N·m·s as the dimension for L, which is, effectively, the physical dimension of angular momentum. It is also the action dimension, of course, and that cannot be a coincidence. Also note that the E = mc2 equation allows us to re-write it as:

L = a2·E02/(2·ħ·c2) = a2·(m0·c2)2/(2·ħ·c2) = a2·m02·c2/(2·ħ)

Of course, in quantum mechanics, we associate spin with the magnetic moment of a charged particle, not with its mass as such. Is there a way to link the formula above to the one we have for the quantum-mechanical angular momentum, which is also measured in N·m·s units, and which can only take on one of two possible values: J = +ħ/2 and J = −ħ/2? It looks like a long shot, right? How do we go from (1/2)·a2·m02·c2/ħ to ±(1/2)∙ħ? Let us do a numerical example. The rest energy of an electron is about 0.511 MeV ≈ 8.1871×10−14 N∙m. What value should we take for a?

We have an obvious trio of candidates here: the Bohr radius, the classical electron radius (aka the Thomson scattering length), and the Compton scattering radius.

Let us start with the Bohr radius, so that is about 0.529×10−10 m. We get L = a2·E02/(2·ħ·c2) = 9.9×10−31 N∙m∙s. Now that is about 1.88×104 times ħ/2. That is a huge factor. The Bohr radius cannot be right: we are not looking at an electron in an orbital here. To show it does not make sense, we may want to double-check the analysis by doing the calculation in another way. We said each oscillation will always pack 6.626070040(81)×10−34 joule in energy. So our electron should pack about 1.24×1020 oscillations. The angular momentum (L) we get when using the Bohr radius for a and the value of 6.626×10−34 joule for E0 is equal to about 6.47×10−71 N∙m∙s. So that is the angular momentum per oscillation. When we multiply this with the number of oscillations (1.24×1020), we get about 8.01×10−51 N∙m∙s, so that is a totally different number.

The classical electron radius is about 2.818×10−15 m. We get an L that is equal to about 2.81×10−39 N∙m∙s, so now it is a tiny fraction of ħ/2! Hence, this leads us nowhere. Let us go for our last chance to get a meaningful result! Let us use the Compton scattering length, so that is about 2.42631×10−12 m.

This gives us an L of 2.08×10−33 N∙m∙s, which is only about 20 times ħ. This is not so bad, but is it good enough? Let us calculate it the other way around: what value should we take for a so as to ensure L = a2·E02/(2·ħ·c2) = ħ/2? Let us write it out:

a2·E02/(2·ħ·c2) = ħ/2 ⇔ a2 = ħ2·c2/E02 ⇔ a = ħ·c/E0 = ħ/(m0·c)

In fact, this is the formula for the so-called reduced Compton wavelength. This is perfect. We found what we wanted to find. Substituting this value for a (you can calculate it: it is about 3.8616×10−13 m), we get what we should find:

L = a2·E02/(2·ħ·c2) = [ħ2·c2/E02]·[E02/(2·ħ·c2)] = ħ/2
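The whole numerical exercise of this section can be checked in a few lines of Python, using CODATA-style values for ħ, c and the electron mass:

```python
# Sketch of the numerical comparison above: L = a^2*E0^2/(2*hbar*c^2) for the
# three candidate radii; only the reduced Compton wavelength a = hbar/(m0*c)
# yields L = hbar/2 (exactly, by construction).
hbar = 1.054571817e-34      # N*m*s
c = 299792458.0             # m/s
m0 = 9.1093837015e-31       # kg (electron mass)
E0 = m0 * c ** 2            # ≈ 8.187e-14 J (electron rest energy)

def L(a: float) -> float:
    return a ** 2 * E0 ** 2 / (2 * hbar * c ** 2)

bohr = 0.529e-10                      # m
classical = 2.818e-15                 # m
reduced_compton = hbar / (m0 * c)     # ≈ 3.8616e-13 m

print(L(bohr) / (hbar / 2))             # ≈ 1.88e4
print(L(classical) / (hbar / 2))        # ≈ 5.3e-5
print(L(reduced_compton) / (hbar / 2))  # ≈ 1 (by construction)
```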

This is a rather spectacular result, and one that would – a priori – support the interpretation of the wavefunction that is being suggested in this paper. 

VIII. The boson-fermion dichotomy

Let us do some more thinking on the boson-fermion dichotomy. Again, we should remind ourselves that an actual particle is localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e−i[E·t − px]/ħ or, for a particle at rest, the ψ = a·e−i∙E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. Now, we can have another wild but logical theory about this.

Think of the apparent right-handedness of the elementary wavefunction: surely, Nature can’t be bothered about our convention of measuring phase angles clockwise or counterclockwise. Also, the angular momentum can be positive or negative: J = +ħ/2 or −ħ/2. Hence, we would probably like to think that an actual particle – think of an electron, or whatever other particle you’d think of – may consist of right-handed as well as left-handed elementary waves. To be precise, we may think they either consist of (elementary) right-handed waves or, else, of (elementary) left-handed waves. An elementary right-handed wave would be written as:

ψ(θi) = ai·(cosθi + i·sinθi)

In contrast, an elementary left-handed wave would be written as:

ψ(θi) = ai·(cosθi − i·sinθi)

How does that work out with the E0·t argument of our wavefunction? Position is position, and direction is direction, but time? Time has only one direction, but Nature surely does not care how we count time: counting like 1, 2, 3, etcetera or like −1, −2, −3, etcetera is just the same. If we count like 1, 2, 3, etcetera, then we write our wavefunction like:

ψ = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)

If we count time like −1, −2, −3, etcetera then we write it as:

ψ = a·cos(−E0∙t/ħ) − i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) + i·a·sin(E0∙t/ħ)

Hence, it is just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! This, then, should explain why we can have either positive or negative quantum-mechanical spin (+ħ/2 or −ħ/2). It is the usual thing: we have two mathematical possibilities here, and so we must have two physical situations that correspond to it.
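In complex arithmetic, this amounts to saying that reversing the time count conjugates the wavefunction, i.e. flips its handedness – a small sketch, in natural units (ħ = 1) and with arbitrary values for a, E0 and t:

```python
import cmath

# Sketch: counting time "the other way" (t -> -t) turns the right-handed
# elementary wavefunction into its complex conjugate, i.e. the left-handed one.
# Natural units (hbar = 1); a, E0, t are arbitrary illustrative values.
a, E0, t = 2.0, 1.5, 0.8
psi_right = a * cmath.exp(-1j * E0 * t)      # a*cos(E0*t) - i*a*sin(E0*t)
psi_left = a * cmath.exp(-1j * E0 * (-t))    # the same function of -t
assert abs(psi_left - psi_right.conjugate()) < 1e-12
print("psi(-t) = conj(psi(t)): opposite handedness")
```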

It is only natural. If we have left- and right-handed photons – or, generalizing, left- and right-handed bosons – then we should also have left- and right-handed fermions (electrons, protons, etcetera). Back to the dichotomy. The textbook analysis of the dichotomy between bosons and fermions may be epitomized by Richard Feynman’s Lecture on it (Feynman, III-4), which is confusing and – I would dare to say – even inconsistent: how are photons or electrons supposed to know that they need to interfere with a positive or a negative sign? They are not supposed to know anything: knowledge is part of our interpretation of whatever it is that is going on there.

Hence, it is probably best to keep it simple, and think of the dichotomy in terms of the different physical dimensions of the oscillation: newton per kg versus newton per coulomb. And then, of course, we should also note that matter-particles have a rest mass and, therefore, actually carry charge. Photons do not. But both are two-dimensional oscillations, and the point is: the so-called vacuum – and the rest mass of our particle (which is zero for the photon and non-zero for everything else) – give us the natural frequency for both oscillations, which is beautifully summed up in that remarkable equation for the group and phase velocity of the wavefunction, which applies to photons as well as matter-particles:

(vp/c)·(vg/c) = 1 ⇔ vp·vg = c2

The final question, then, is: why do photons appear to have zero spin? Well… We should first remind ourselves of the fact that they do have spin when circularly polarized.[25] Here we may think of the rotation of the equivalent mass of their energy. However, if they are linearly polarized, then there is no spin. Even for circularly polarized waves, the spin angular momentum of photons is a weird concept. If photons have no (rest) mass, then they cannot carry any charge. They should, therefore, not have any magnetic moment. Indeed, what I wrote above shows that an explanation of quantum-mechanical spin requires both mass as well as charge.[26]

IX. Concluding remarks

There are, of course, other ways to look at the matter – literally. For example, we can imagine two-dimensional oscillations as circular rather than linear oscillations. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation – around any axis – will be some combination of a rotation around the two other axes. Hence, we may want to think of a two-dimensional oscillation as an oscillation of a polar and azimuthal angle.

Figure 8: Two-dimensional circular movement

The point of this paper is not to make any definite statements. That would be foolish. Its objective is just to challenge the simplistic mainstream viewpoint on the reality of the wavefunction. Stating that it is a mathematical construct only without physical significance amounts to saying it has no meaning at all. That is, clearly, a non-sustainable proposition.

The interpretation that is offered here looks at amplitude waves as traveling fields. Their physical dimension may be expressed in force per mass unit, as opposed to electromagnetic waves, whose amplitudes are expressed in force per (electric) charge unit. Also, the amplitudes of matter-waves incorporate a phase factor, but this may actually explain the rather enigmatic dichotomy between fermions and bosons and is, therefore, an added bonus.

The interpretation that is offered here has some advantages over other explanations, as it explains the how of diffraction and interference. However, while it offers a great explanation of the wave nature of matter, it does not explain its particle nature: while we think of the energy as being spread out, we will still observe electrons and photons as pointlike particles once they hit the detector. Why is it that a detector can sort of ‘hook’ the whole blob of energy, so to speak?

The interpretation of the wavefunction that is offered here does not explain this. Hence, the complementarity principle of the Copenhagen interpretation of the wavefunction surely remains relevant.

Appendix 1: The de Broglie relations and energy

The 1/2 factor in Schrödinger’s equation is related to the concept of the effective mass (meff). It is easy to make the wrong calculations. For example, when playing with the famous de Broglie relations – aka the matter-wave equations – one may be tempted to derive the following energy concept:

  1. E = h·f and p = h/λ. Therefore, f = E/h and λ = h/p.
  2. v = f·λ = (E/h)∙(h/p) = E/p
  3. p = m·v. Therefore, E = v·p = m·v2

E = m·v2? This resembles the E = mc2 equation and, therefore, one may be enthused by the discovery, especially because the m·v2 also pops up when working with the Least Action Principle in classical mechanics, which states that the path that is followed by a particle will minimize the following integral:

S = ∫ (KE − PE)·dt

Now, we can choose any reference point for the potential energy but, to reflect the energy conservation law, we can select a reference point that ensures the sum of the kinetic and the potential energy is zero throughout the time interval. If the force field is uniform, then the integrand will, effectively, be equal to KE − PE = KE − (−KE) = 2·KE = m·v2.[27]

However, that is classical mechanics and, therefore, not so relevant in the context of the de Broglie equations. The apparent paradox should be solved by distinguishing between the group and the phase velocity of the matter wave.
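To see how that distinction resolves the paradox, here is a small numerical sketch (arbitrary non-relativistic values for m and p): the v = E/p in the derivation above is a phase velocity, which equals v/2 for the non-relativistic E = p2/(2m), while the actual particle velocity is the group velocity dE/dp.

```python
# Sketch: with the non-relativistic dispersion E = p^2/(2m), the phase
# velocity E/p is v/2, while the group velocity dE/dp is the particle
# velocity v itself. The values of m and p are arbitrary.
m, p = 2.0, 3.0
v = p / m                   # classical particle velocity

E = p ** 2 / (2 * m)
v_phase = E / p             # = v/2, not v: the source of the "paradox"
h = 1e-6
v_group = ((p + h) ** 2 - (p - h) ** 2) / (2 * m) / (2 * h)   # dE/dp
assert abs(v_phase - v / 2) < 1e-12
assert abs(v_group - v) < 1e-6
print(v_phase, v_group)  # phase ≈ 0.75 (= v/2), group ≈ 1.5 (= v)
```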

Appendix 2: The concept of the effective mass

The effective mass – as used in Schrödinger’s equation – is a rather enigmatic concept. To make sure we are making the right analysis here, I should start by noting you will usually see Schrödinger’s equation written as:

i·ħ·∂ψ/∂t = −(1/2)·(ħ2/meff)·∇2ψ + U·ψ

This formulation includes a term with the potential energy (U). In free space (no potential), this term disappears, and the equation can be re-written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)

We just moved the i·ħ coefficient to the other side, noting that 1/i = −i. Now, in one-dimensional space, and assuming ψ is just the elementary wavefunction (so we substitute a·e^(i·[E·t − p·x]/ħ) for ψ), this implies the following:

a·i·(E/ħ)·e^(i·[E·t − p·x]/ħ) = −i·(ħ/(2·meff))·a·(p²/ħ²)·e^(i·[E·t − p·x]/ħ)

⇔ E = p²/(2·meff) ⇔ meff = m·(v/c)²/2 = m·β²/2

It is an ugly formula: it resembles the kinetic energy formula (K.E. = m·v²/2) but it is, in fact, something completely different. The β²/2 factor ensures the effective mass is always a fraction of the mass itself. To get rid of the ugly 1/2 factor, we may re-define meff as two times the old meff (hence, meffNEW = 2·meffOLD), as a result of which the formula will look somewhat better:

meff = m·(v/c)² = m·β²

We know β varies between 0 and 1 and, therefore, meff will vary between 0 and m. Feynman drops the subscript, and just writes meff as m in his textbook (see Feynman, III-19). On the other hand, that same electron mass is also the mass that is used to calculate the size of an atom (see Feynman, III-2-4). As such, the two mass concepts are, effectively, mutually compatible. It is confusing, however, because that mass is usually defined as the mass of a stationary electron (see, for example, the article on it in the online Wikipedia encyclopedia[28]).
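For what it is worth, the derivation above is easy to check numerically. The sketch below (rounded electron mass, arbitrary velocity) recovers meff = m·β²/2 from E = p²/(2·meff), using the same E = m·c² and p = m·v substitutions as the derivation:

```python
import math

# Check of the derivation above (rounded electron mass, arbitrary velocity):
# with E = m*c^2 and p = m*v, the condition E = p^2/(2*m_eff) gives
# m_eff = m*(v/c)^2/2 = m*beta^2/2.
c = 299_792_458.0    # speed of light (m/s)
m = 9.109e-31        # electron mass (kg), rounded
v = 0.6 * c          # arbitrary velocity
beta = v / c

E = m * c ** 2       # energy, as substituted in the derivation
p = m * v            # momentum, as substituted in the derivation

m_eff = p ** 2 / (2 * E)
print(m_eff / m)     # ~0.18, i.e. beta^2/2 with beta = 0.6
```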

In the context of the derivation of the electron orbitals, we do have the potential energy term – which is the equivalent of a source term in a diffusion equation – and that may explain why the above-mentioned meff = m·(v/c)² = m·β² formula does not apply.

References

This paper discusses general principles in physics only. Hence, references can be limited to references to physics textbooks only. For ease of reading, any reference to additional material has been limited to a more popular undergrad textbook that can be consulted online: Feynman’s Lectures on Physics (http://www.feynmanlectures.caltech.edu). References are per volume, per chapter and per section. For example, Feynman III-19-3 refers to Volume III, Chapter 19, Section 3.

Notes

[1] Of course, an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction ψ = a·e^(i·θ) = a·e^(i·[E·t − p·x]/ħ) = a·(cosθ + i·sinθ). We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude ak and its own argument θk = (Ek·t − pk·x)/ħ. This is dealt with in this paper as part of the discussion on the mathematical and physical interpretation of the normalization condition.

[2] The N/kg dimension immediately, and naturally, reduces to the dimension of acceleration (m/s²), thereby facilitating a direct interpretation in terms of Newton’s force law.

[3] In physics, a two-spring metaphor is more common. Hence, the pistons in the author’s perpetuum mobile may be replaced by springs.

[4] The author re-derives the equation for the Compton scattering radius in section VII of the paper.

[5] The magnetic force can be analyzed as a relativistic effect (see Feynman II-13-6). The dichotomy between the electric force as a polar vector and the magnetic force as an axial vector disappears in the relativistic four-vector representation of electromagnetism.

[6] For example, when using Schrödinger’s equation in a central field (think of the electron around a proton), the use of polar coordinates is recommended, as it ensures the symmetry of the Hamiltonian under all rotations (see Feynman III-19-3).

[7] This sentiment is usually summed up in the apocryphal quote: “God does not play dice.” The actual quote comes out of one of Einstein’s private letters to Cornelius Lanczos, another scientist who had also emigrated to the US. The full quote is as follows: “You are the only person I know who has the same attitude towards physics as I have: belief in the comprehension of reality through something basically simple and unified… It seems hard to sneak a look at God’s cards. But that He plays dice and uses ‘telepathic’ methods… is something that I cannot believe for a single moment.” (Helen Dukas and Banesh Hoffman, Albert Einstein, the Human Side: New Glimpses from His Archives, 1979)

[8] Of course, both are different velocities: ω is an angular velocity, while v is a linear velocity: ω is measured in radians per second, while v is measured in meter per second. However, the definition of a radian implies radians are measured in distance units. Hence, the physical dimensions are, effectively, the same. As for the formula for the total energy of an oscillator, we should actually write: E = m·a²·ω²/2. The additional factor (a) is the (maximum) amplitude of the oscillator.

[9] We also have a 1/2 factor in the E = m·v²/2 formula. Two remarks may be made here. First, it may be noted this is a non-relativistic formula and, more importantly, incorporates kinetic energy only. Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as K.E. = E − E0 = mv·c² − m0·c² = m0·γ·c² − m0·c² = m0·c²·(γ − 1). As for the exclusion of the potential energy, we may note that we may choose our reference point for the potential energy such that the kinetic and potential energy mirror each other. The energy concept that then emerges is the one that is used in the context of the Principle of Least Action: it equals E = m·v². Appendix 1 provides some notes on that.

[10] Instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft.

[11] It is interesting to note that we may look at the energy in the rotating flywheel as potential energy because it is energy that is associated with motion, albeit circular motion. In physics, one may associate a rotating object with kinetic energy using the rotational equivalent of mass and linear velocity, i.e. rotational inertia (I) and angular velocity (ω). The kinetic energy of a rotating object is then given by K.E. = (1/2)·I·ω².

[12] Because of the sideways motion of the connecting rods, the sinusoidal function will describe the linear motion only approximately, but you can easily imagine the idealized limit situation.

[13] The ω² = 1/(L·C) formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor (R), an inductor (L), and a capacitor (C). Writing the formula as ω² = (1/C)/L introduces the concept of elastance (1/C), which is the equivalent of the mechanical stiffness (k) of a spring.

[14] The resistance in an electric circuit introduces a damping factor. When analyzing a mechanical spring, one may also want to introduce a drag coefficient. Both are usually defined as a fraction of the inertia, which is the mass for a spring and the inductance for an electric circuit. Hence, we would write the drag coefficient for a spring as γ·m and the resistance of a circuit as R = γ·L respectively.

[15] Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. Feynman (Lectures, I-33-3) shows us how to calculate the Q of these atomic oscillators: it is of the order of 10⁸, which means the wave train will last about 10⁻⁸ seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). For example, for sodium light, the radiation will last about 3.2×10⁻⁸ seconds (this is the so-called decay time τ). Now, because the frequency of sodium light is some 500 THz (500×10¹² oscillations per second), this makes for some 16 million oscillations. There is an interesting paradox here: the speed of light tells us that such a wave train will have a length of about 9.6 m! How is that to be reconciled with the pointlike nature of a photon? The paradox can only be explained by relativistic length contraction: in an analysis like this, one needs to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom.
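The arithmetic in this note is easy to reproduce (the values for τ and f are the rounded figures quoted above):

```python
# Reproducing the figures quoted in this note (rounded values).
tau = 3.2e-8         # decay time of sodium radiation (s)
f = 500e12           # frequency of sodium light (Hz)
c = 299_792_458.0    # speed of light (m/s)

oscillations = f * tau    # number of oscillations in the wave train
train_length = c * tau    # spatial length of the wave train

print(oscillations / 1e6)        # ~16 million oscillations
print(round(train_length, 1))    # ~9.6 m
```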

[16] This is a general result and is reflected in the K.E. = T = (1/2)·m·ω²·a²·sin²(ω·t + Δ) and the P.E. = U = k·x²/2 = (1/2)·m·ω²·a²·cos²(ω·t + Δ) formulas for the linear oscillator.

[17] Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. The analysis is centered on the local conservation of energy, which confirms the interpretation of Schrödinger’s equation as an energy diffusion equation.

[18] The meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m. Appendix 2 provides some additional notes on the concept. As for the equations, they are easily derived from noting that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i·(ħ/meff)·∇²ψ equation amounts to writing something like this: a + i·b = i·(c + i·d). Now, remembering that i² = −1, you can easily figure out that i·(c + i·d) = i·c + i²·d = −d + i·c.

[19] The dimension of B is usually written as N/(m∙A), using the SI unit for current, i.e. the ampere (A). However, 1 C = 1 A∙s and, hence, 1 N/(m∙A) = 1 (N/C)/(m/s).     

[20] Of course, multiplication with i amounts to a counterclockwise rotation. Hence, multiplication by −i also amounts to a rotation by 90 degrees, but clockwise. Now, to uniquely identify the clockwise and counterclockwise directions, we need to establish the equivalent of the right-hand rule for a proper geometric interpretation of Schrödinger’s equation in three-dimensional space: if we look at a clock from the back, then its hand will be moving counterclockwise. When writing B = (1/c)·i·E, we assume we are looking in the negative x-direction. If we are looking in the positive x-direction, we should write: B = −(1/c)·i·E. Of course, Nature does not care about our conventions. Hence, both should give the same results in calculations. We will show in a moment they do.

[21] In fact, when multiplying C²/(N·m²) with N²/C², we get N/m², but we can multiply this with 1 = m/m to get the desired result. It is significant that an energy density (joule per unit volume) can also be measured in newton per square meter (force per unit area).

[22] The illustration shows a linearly polarized wave, but the obtained result is general.

[23] The sine and cosine are essentially the same functions, except for the difference in the phase: sinθ = cos(θ − π/2).

[24] I must thank a physics blogger for re-writing the 1/(ε0·μ0) = c² equation like this. See: http://reciprocal.systems/phpBB3/viewtopic.php?t=236 (retrieved on 29 September 2017).

[25] A circularly polarized electromagnetic wave may be analyzed as consisting of two perpendicular electromagnetic plane waves of equal amplitude and 90° difference in phase.

[26] Of course, the reader will now wonder: what about neutrons? How to explain neutron spin? Neutrons are neutral. That is correct, but neutrons are not elementary: they consist of (charged) quarks. Hence, neutron spin can (or should) be explained by the spin of the underlying quarks.

[27] We detailed the mathematical framework and detailed calculations in the following online article: https://readingfeynman.org/2017/09/15/the-principle-of-least-action-re-visited.

[28] https://en.wikipedia.org/wiki/Electron_rest_mass (retrieved on 29 September 2017).

Math, physics and reality

This blog has been nice. It doesn’t get an awful lot of traffic (about a thousand visitors a week) but, from time to time, I do get a response or a question that fires me up, if only because it tells me someone is actually reading what I write.

Looking at the site now, I feel like I need to reorganize it completely. It’s just chaos, right? But then that’s what gets me the positive feedback: my readers are in the same boat. We’re trying to make sense of what physicists tell us is reality. The interference model I presented in my previous post is really nice. It has all the ingredients of quantum mechanics, which I would group under two broad categories: uncertainty and duality. Both are related, obviously. I will not talk about the reality of the wavefunction here, because I am biased: I firmly believe the wavefunction represents something real. Why? Because Einstein’s E = m·c² formula tells us so: energy is a two-dimensional oscillation of mass. Two-dimensional, because it’s got twice the energy of the classroom oscillator (think of a mass on a spring). More importantly, the real and imaginary dimension of the oscillation are both real: they’re perpendicular to the direction of motion of the wave-particle. Photon or electron. It doesn’t matter. Of course, we have all of the transformation formulas, but… Well… These are not real: they are only there to accommodate our perspective: the state of the observer.

The distinction between the group and phase velocity of a wave packet is probably the best example of the failure of ordinary words to describe reality: particles are not waves, and waves are not particles. They are both… Well… Both at the same time. To calculate the action along some path, we assume there is some path, and we assume there is some particle following some path. The path and the particle are just figments of our mind. Useful figments of the mind, but… Well… There is no such thing as an infinitesimally small particle, and the concept of some one-dimensional line in spacetime does not make sense either. Or… Well… They do. Because they help us to make sense of the world. Of what is, whatever it is. 🙂

The mainstream views on the physical significance of the wavefunction are probably best summed up in the Encyclopædia Britannica, which says the wavefunction has no physical significance. Let me quote the relevant extract here:

“The wave function, in quantum mechanics, is a variable quantity that mathematically describes the wave characteristics of a particle. The value of the wave function of a particle at a given point of space and time is related to the likelihood of the particle’s being there at the time. By analogy with waves such as those of sound, a wave function, designated by the Greek letter psi, Ψ, may be thought of as an expression for the amplitude of the particle wave (or de Broglie wave), although for such waves amplitude has no physical significance. The square of the wave function, Ψ², however, does have physical significance: the probability of finding the particle described by a specific wave function Ψ at a given point and time is proportional to the value of Ψ².”

Really? First, this is factually wrong: the probability is given by the square of the absolute value of the wave function. These are two very different things:

  1. The square of a complex number is just another complex number: (a + ib)² = a² + (ib)² + 2iab = a² + i²b² + 2iab = a² − b² + 2iab.
  2. In contrast, the square of the absolute value always gives us a real number, to which we assign the mentioned physical interpretation: |a + ib|² = [√(a² + b²)]² = a² + b².
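The distinction is easy to see with a concrete number (z = 3 + 4i, chosen arbitrarily):

```python
# The two operations, for an arbitrary complex amplitude z = a + ib:
z = 3 + 4j

square = z ** 2                # (a+ib)^2 = a^2 - b^2 + 2iab: another complex number
squared_modulus = abs(z) ** 2  # |a+ib|^2 = a^2 + b^2: a real number

print(square)            # (-7+24j)
print(squared_modulus)   # 25.0
```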

But it’s not only position: using the right operators, we can also get probabilities on momentum, energy and other physical variables. Hence, the wavefunction is so much more than what the Encyclopædia Britannica suggests.

More fundamentally, what is written there is philosophically inconsistent. Squaring something – the number itself or its norm – is a mathematical operation. How can a mathematical operation suddenly yield something that has physical significance, if none of the elements it operates on has any? One cannot just go from the mathematical to the physical space. The mathematical space describes the physical space. Always. In physics, at least. 🙂

So… Well… There is too much nonsense around. Disgusting. And the Encyclopædia Britannica should not just present the mainstream view. The truth is: the jury is still out, and there are many guys like me. We think the majority view is plain wrong. In this case, at least. 🙂

Thinking again…

One of the comments on my other blog made me think I should, perhaps, write something on waves again. The animation below shows the elementary wavefunction ψ = a·e^(i·θ) = a·e^(i·(ω·t−k·x)) = a·e^((i/ħ)·(E·t−p·x)).

[Animation: rotating arrow tracing the elementary wavefunction]

We know this elementary wavefunction cannot represent a real-life particle. Indeed, the a·e^(i·θ) function implies the probability of finding the particle – an electron, a photon, or whatever – would be equal to P(x, t) = |ψ(x, t)|² = |a·e^((i/ħ)·(E·t−p·x))|² = |a|²·|e^((i/ħ)·(E·t−p·x))|² = |a|²·1² = a² everywhere. Hence, the particle would be everywhere – and, therefore, nowhere really. We need to localize the wave – or build a wave packet. We can do so by introducing uncertainty: we then add a potentially infinite number of these elementary wavefunctions with slightly different values for E and p, and various amplitudes a. Each of these amplitudes will then reflect the contribution to the composite wave, which – in three-dimensional space – we can write as:

ψ(r, t) = e^(i·(E/ħ)·t)·f(r)

As I explained in previous posts (see, for example, my recent post on reality and perception), the f(r) function basically provides some envelope for the two-dimensional e^(i·θ) = e^(i·(E/ħ)·t) = cosθ + i·sinθ oscillation, with r = (x, y, z), θ = (E/ħ)·t = ω·t and ω = E/ħ.
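The localization trick can be illustrated numerically. Below is a minimal sketch in units where ħ = 1, with made-up values for the momenta and their Gaussian weights: a single elementary wavefunction has the same squared modulus everywhere, while a weighted sum over nearby momenta – a wave packet – peaks at the origin.

```python
import cmath

# Sketch in units where hbar = 1 (made-up momenta and weights): a single
# elementary wavefunction has the same squared modulus everywhere, while a
# Gaussian-weighted sum over nearby momenta -- a wave packet -- is localized.
xs = [0.1 * n for n in range(-500, 501)]   # positions to sample
p0 = 5.0                                   # central momentum (arbitrary)

# one component at t = 0: |psi|^2 = a^2 everywhere (here a = 1)
single = [abs(cmath.exp(1j * p0 * x)) ** 2 for x in xs]
print(max(single) - min(single))           # ~0: perfectly flat

# superpose components with slightly different momenta
momenta = [p0 - 1 + 0.01 * k for k in range(201)]
def packet_prob(x):
    amp = sum(cmath.exp(-(p - p0) ** 2 / 0.1) * cmath.exp(1j * p * x)
              for p in momenta)
    return abs(amp) ** 2

probs = [packet_prob(x) for x in xs]
print(xs[probs.index(max(probs))])         # 0.0 -> the packet peaks at the origin
```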

Note that it looks like the wave propagates from left to right – in the positive direction of an axis which we may refer to as the x-axis. Also note this perception results from the fact that, naturally, we’d associate time with the rotation of that arrow at the center – i.e. with the motion in the illustration, while the spatial dimensions are just what they are: linear spatial dimensions. [This point is, perhaps, somewhat less self-evident than you may think at first.]

Now, the axis which points upwards is usually referred to as the z-axis, and the third and final axis – which points towards us – would then be the y-axis, obviously. Unfortunately, this definition would violate the so-called right-hand rule for defining a proper reference frame: the figure below shows the two possibilities – a left-handed and a right-handed reference frame – and it’s the right-handed reference frame (i.e. the illustration on the right) which we have to use in order to correctly define all directions, including the direction of rotation of the argument of the wavefunction.

[Figure: left-handed versus right-handed Cartesian coordinate systems]

Hence, if we don’t change the direction of the y– and z-axes – so we keep defining the z-axis as the axis pointing upwards, and the y-axis as the axis pointing towards us – then the positive direction of the x-axis would actually be the direction from right to left, and we should say that the elementary wavefunction in the animation above seems to propagate in the negative x-direction. [Note that this left- or right-hand rule is quite astonishing: simply swapping the direction of one axis of a left-handed frame makes it right-handed, and vice versa.]

Note my language when I talk about the direction of propagation of our wave. I wrote: it looks like, or it seems to go in this or that direction. And I mean that: there is no real traveling here. At this point, you may want to review a post I wrote for my son, which explains the basic math behind waves, and in which I also explained the animation below.

[Animation: wave packet with opposite group and phase velocities]

Note how the peaks and troughs of this pulse seem to move leftwards, but the wave packet (or the group or the envelope of the wave—whatever you want to call it) moves to the right. The point is: the pulse itself doesn’t travel left or right. Think of the horizontal axis in the illustration above as an oscillating guitar string: each point on the string just moves up and down. Likewise, if our repeated pulse represented a physical wave in water, for example, then the water just stays where it is: it just moves up and down. Likewise, if we shake up some rope, the rope is not going anywhere: we just started some motion that is traveling down the rope. In other words, the phase velocity is just a mathematical concept. The peaks and troughs that seem to be traveling are just mathematical points that are ‘traveling’ left or right. That’s why there’s no limit on the phase velocity: it can – and, according to quantum mechanics, actually will – exceed the speed of light. In contrast, the group velocity – which is the actual speed of the particle that is being represented by the wavefunction – may approach – or, in the case of a massless photon, will actually equal – the speed of light, but will never exceed it, and its direction will, obviously, have a physical significance as it is, effectively, the direction of travel of our particle – be it an electron, a photon (electromagnetic radiation), or whatever.
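The peak-versus-envelope distinction can be made concrete with two superposed elementary waves. The numbers below are arbitrary toy values (not tied to any physical dispersion relation): the envelope of |ψ|² moves at Δω/Δk – the group velocity – regardless of the phase velocity ω/k of the individual components.

```python
import cmath

# Toy values: two elementary waves with nearby wavenumbers and frequencies.
# The carrier moves at ~w/k ~ 2, but the envelope of |psi|^2 moves at
# (w2 - w1)/(k2 - k1) = 3 -- the group velocity.
k1, w1 = 10.0, 20.0
k2, w2 = 11.0, 23.0

def envelope_peak(t):
    """Scan x in [0, 12) and return where |psi|^2 peaks at time t."""
    xs = [0.01 * n for n in range(1200)]
    probs = [abs(cmath.exp(1j * (k1 * x - w1 * t)) +
                 cmath.exp(1j * (k2 * x - w2 * t))) ** 2 for x in xs]
    return xs[probs.index(max(probs))]

x0 = envelope_peak(0.0)
x1 = envelope_peak(1.0)
print(x0)        # 0.0
print(x1 - x0)   # ~3.0: the envelope moved at the group velocity
```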

Hence, you should not think the spin of a particle – integer or half-integer – is somehow related to the direction of rotation of the argument of the elementary wavefunction. It isn’t: Nature doesn’t give a damn about our mathematical conventions, and that’s what the direction of rotation of the argument of that wavefunction is: just some mathematical convention. That’s why we write a·e^(i·(ω·t−k·x)) rather than a·e^(i·(ω·t+k·x)) or a·e^(−i·(ω·t−k·x)): it’s just because of the right-hand rule for coordinate frames, and also because Euler defined the counter-clockwise direction as the positive direction of an angle. There’s nothing more to it.

OK. That’s obvious. Let me now return to my interpretation of Einstein’s E = m·c² formula (see my previous posts on this). I noted that, in the reference frame of the particle itself (see my basics page), the elementary wavefunction a·e^((i/ħ)·(E·t−p·x)) reduces to a·e^((i/ħ)·(E’·t’)): the origin of the reference frame then coincides with (the center of) our particle itself, and the wavefunction only varies with the time in the inertial reference frame (i.e. the proper time t’), with the rest energy of the object (E’) as the time scale factor. How should we interpret this?

Well… Energy is force times distance, and force is defined as that which causes some mass to accelerate. To be precise, the newton – as the unit of force – is defined as the magnitude of a force which would cause a mass of one kg to accelerate with one meter per second per second. Per second per second. This is not a typo: 1 N corresponds to 1 kg times 1 m/s per second, i.e. 1 kg·m/s². So… Because energy is force times distance, the unit of energy may be expressed in units of kg·m/s²·m, or kg·m²/s², i.e. the unit of mass times the unit of velocity squared. To sum it all up:

1 J = 1 N·m = 1 kg·(m/s)²

This reflects the physical dimensions on both sides of the E = m·c² formula again but… Well… How should we interpret this? Look at the animation below once more, and imagine the green dot is some tiny mass moving around the origin, in an equally tiny circle. We’ve got two oscillations here: each packing half of the total energy of… Well… Whatever it is that our elementary wavefunction might represent in reality – which we don’t know, of course.

[Animation: circular motion and its sine and cosine projections]

Now, the blue and the red dot – i.e. the horizontal and vertical projection of the green dot – accelerate up and down. If we look carefully, we see these dots accelerate towards the zero point and, once they’ve crossed it, they decelerate, so as to allow for a reversal of direction: the blue dot goes up, and then down. Likewise, the red dot does the same. The interplay between the two oscillations, because of the 90° phase difference, is interesting: if the blue dot is at maximum speed (near or at the origin), the red dot reverses direction (its speed is, therefore, (almost) nil), and vice versa. The metaphor of our frictionless V-2 engine, our perpetuum mobile, comes to mind once more.
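The interplay between the two projections is easy to verify. The sketch below (with arbitrary values a = 1 and ω = 2) checks that the two speeds always combine, in quadrature, to the constant a·ω, so one projection is fastest exactly where the other momentarily stands still:

```python
import math

# Arbitrary values a = 1, w = 2 for the radius and the angular velocity.
a, w = 1.0, 2.0

def speeds(t):
    vx = -a * w * math.sin(w * t)   # velocity of the cosine (blue) projection
    vy = a * w * math.cos(w * t)    # velocity of the sine (red) projection
    return vx, vy

# the two speeds always combine, in quadrature, to the constant a*w:
for t in (0.0, 0.25, 0.5, 1.0):
    vx, vy = speeds(t)
    print(round(math.hypot(vx, vy), 12))   # 2.0 every time
```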

The question is: what’s going on, really?

My answer is: I don’t know. I do think that, somehow, energy should be thought of as some two-dimensional oscillation of something – something which we refer to as mass, but we didn’t define mass very clearly either. It also, somehow, combines linear and rotational motion. Each of the two dimensions packs half of the energy of the particle that is being represented by our wavefunction. It is, therefore, only logical that the physical unit of both is to be expressed as a force over some distance – which is, effectively, the physical dimension of energy – or the rotational equivalent of them: torque over some angle. Indeed, the analogy between linear and angular movement is obvious: the kinetic energy of a rotating object is equal to K.E. = (1/2)·I·ω². In this formula, I is the rotational inertia – i.e. the rotational equivalent of mass – and ω is the angular velocity – i.e. the rotational equivalent of linear velocity. Noting that the (average) kinetic energy in any system must be equal to the (average) potential energy in the system, we can add both, so we get a formula which is structurally similar to the E = m·c² formula. But is it the same? Is the effective mass of some object the sum of an almost infinite number of quanta that incorporate some kind of rotational motion? And – if we use the right units – is the angular velocity of these infinitesimally small rotations effectively equal to the speed of light?

I am not sure. Not at all, really. But, so far, I can’t think of any explanation of the wavefunction that would make more sense than this one. I just need to keep trying to find better ways to articulate or imagine what might be going on. 🙂 In this regard, I’d like to add a point – which may or may not be relevant. When I talked about that guitar string, or the water wave, and wrote that each point on the string – or each water drop – just moves up and down, we should think of the physicality of the situation: when the string oscillates, its length increases. So it’s only because our string is flexible that it can vibrate between the fixed points at its ends. For a rope that’s not flexible, the end points would need to move in and out with the oscillation. Look at the illustration below, for example: the two kids who are holding the rope must come closer to each other, so as to provide the necessary space inside of the oscillation for the other kid. 🙂

[Illustration: kid jumping inside a rope held by two others]

The next illustration – of how water waves actually propagate – is, perhaps, more relevant. Just think of a two-dimensional equivalent – and of the two oscillations as being transverse waves, as opposed to longitudinal. See how string theory starts making sense? 🙂

[Illustration: particle motion in a Rayleigh wave]

The most fundamental question remains the same: what is it, exactly, that is oscillating here? What is the field? It’s always some force on some charge – but what charge, exactly? Mass? What is it? Well… I don’t have the answer to that. It’s the same as asking: what is electric charge, really? So the question is: what’s the reality of mass, of electric charge, or whatever other charge that causes a force to act on it?

If you know, please let me know. 🙂

Post scriptum: The fact that we’re talking some two-dimensional oscillation here – think of a surface now – explains the probability formula: we need to square the absolute value of the amplitude to get it. And normalize, of course. Also note that, when normalizing, we’d expect to get some factor involving π somewhere, because we’re talking some circular surface – as opposed to a rectangular one. But I’ll let you figure that out. 🙂

Reality and perception

It’s quite easy to get lost in all of the math when talking quantum mechanics. In this post, I’d like to freewheel a bit. I’ll basically try to relate the wavefunction we’ve derived for the electron orbitals to the more speculative posts I wrote on how to interpret the wavefunction. So… Well… Let’s go. 🙂

If there is one thing you should remember from all of the stuff I wrote in my previous posts, then it’s that the wavefunction for an electron orbital – ψ(x, t), so that’s a complex-valued function in two variables (position and time) – can be written as the product of two functions in one variable:

ψ(x, t) = e^(−i·(E/ħ)·t)·f(x)

In fact, we wrote f(x) as ψ(x), but I told you how confusing that is: the ψ(x) and ψ(x, t) functions are, obviously, very different. To be precise, the f(x) = ψ(x) function basically provides some envelope for the two-dimensional e^(i·θ) = e^(−i·(E/ħ)·t) = cosθ + i·sinθ oscillation – as depicted below (θ = −(E/ħ)·t = ω·t with ω = −E/ħ).

[Animation: circular motion and its sine and cosine projections]

When analyzing this animation – look at the movement of the green, red and blue dots respectively – one cannot miss the equivalence between this oscillation and the movement of a mass on a spring – as depicted below.

[Animation: mass on a spring]

The e^(−i·(E/ħ)·t) function just gives us two springs for the price of one. 🙂 Now, you may want to imagine some kind of elastic medium – Feynman’s famous drum-head, perhaps 🙂 – and you may also want to think of all of this in terms of superimposed waves but… Well… I’d need to review if that’s really relevant to what we’re discussing here, so I’d rather not make things too complicated and stick to basics.

First note that the amplitude of the two linear oscillations above is normalized: the maximum displacement of the object from equilibrium, in the positive or negative direction, which we may denote by x = ±A, is equal to one. Hence, the total energy – the sum of the potential and kinetic energy – is T + U = (1/2)·A²·m·ω² = (1/2)·m·ω². But so we have two springs and, therefore, the energy in this two-dimensional oscillation is equal to E = 2·(1/2)·m·ω² = m·ω².

This formula is structurally similar to Einstein’s E = m·c² formula. Hence, one may want to assume that the energy of some particle (an electron, in our case, because we’re discussing electron orbitals here) is just the two-dimensional motion of its mass. To put it differently, we might also want to think that the oscillating real and imaginary component of our wavefunction each store one half of the total energy of our particle.
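As a sanity check on the energy bookkeeping above, the sketch below (toy units, m = 1, arbitrary ω) verifies that the cosine and sine components each carry a constant energy (1/2)·m·ω², so the pair carries m·ω² at every instant:

```python
import math

# Toy units (m = 1, arbitrary w): the cosine and sine components each carry
# a constant energy (1/2)*m*w^2, so the pair carries m*w^2 at every instant.
m, w = 1.0, 3.0

def total_energy(t):
    x1, v1 = math.cos(w * t), -w * math.sin(w * t)   # real component
    x2, v2 = math.sin(w * t), w * math.cos(w * t)    # imaginary component
    kinetic = 0.5 * m * (v1 ** 2 + v2 ** 2)
    potential = 0.5 * m * w ** 2 * (x1 ** 2 + x2 ** 2)
    return kinetic + potential

for t in (0.0, 0.1, 0.7):
    print(round(total_energy(t), 9))   # 9.0 = m*w^2 every time
```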

However, the interpretation of this rather bold statement is not so straightforward. First, you should note that the ω in the E = m·ω² formula is an angular velocity, as opposed to the c in the E = m·c² formula, which is a linear velocity. Angular velocities are expressed in radians per second, while linear velocities are expressed in meter per second. However, while the radian measures an angle, we know it does so by measuring a length. Hence, if our distance unit is 1 m, an angle of 2π rad will correspond to a length of 2π meter, i.e. the circumference of the unit circle. So… Well… The two velocities may not be so different after all.

There are other questions here. In fact, the other questions are probably more relevant. First, we should note that the ω in the E = m·ω² formula can take on any value. For a mechanical spring, ω will be a function of (1) the stiffness of the spring (which we usually denote by k, and which is typically measured in newton (N) per meter) and (2) the mass (m) on the spring. To be precise, we write: ω² = k/m – or, what amounts to the same, ω = √(k/m). Both k and m are variables and, therefore, ω can really be anything. In contrast, we know that c is a constant: it equals 299,792,458 meter per second, to be precise. So we have this rather remarkable expression: c = √(E/m), and it is valid for any particle – our electron, or the proton at the center, or our hydrogen atom as a whole. It is also valid for more complicated atoms, of course. In fact, it is valid for any system.
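The c = √(E/m) expression is easy to check with published values for the electron (rounded CODATA-style figures, used here for illustration only):

```python
import math

# Rounded CODATA-style values for the electron, for illustration only:
E = 8.18710565e-14    # electron rest energy (J)
m = 9.10938356e-31    # electron rest mass (kg)

print(math.sqrt(E / m))   # ~2.99792458e8 m/s -> the speed of light
```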

Hence, we need to take another look at the energy concept that is used in our ψ(x, t) = e^(−i·(E/ħ)·t)·f(x) wavefunction. You’ll remember (if not, you should) that the E here is equal to En = −13.6 eV, −3.4 eV, −1.5 eV and so on, for n = 1, 2, 3, etc. Hence, this energy concept is rather particular. As Feynman puts it: “The energies are negative because we picked our zero point as the energy of an electron located far from the proton. When it is close to the proton, its energy is less, so somewhat below zero. The energy is lowest (most negative) for n = 1, and increases toward zero with increasing n.”
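The energy ladder Feynman refers to follows from En = −13.6 eV/n²; a two-line check reproduces the quoted values:

```python
# The quoted energy ladder: E_n = -13.6 eV / n^2 (13.6 eV = the Rydberg energy).
E1 = -13.6   # ground-state energy (eV)
for n in (1, 2, 3):
    print(n, round(E1 / n ** 2, 1))   # -13.6, -3.4, -1.5
```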

Now, this is the one and only issue I have with the standard physics story. I mentioned it in one of my previous posts and, just for clarity, let me copy what I wrote at the time:

Feynman gives us a rather casual explanation [on choosing a zero point for measuring energy] in one of his very first Lectures on quantum mechanics, where he writes the following: “If we have a “condition” which is a mixture of two different states with different energies, then the amplitude for each of the two states will vary with time according to an equation like a·eiωt, with ħ·ω = E = m·c2. Hence, we can write the amplitude for the two states, for example as:

ei(E1/ħ)·t and ei(E2/ħ)·t

And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn’t make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount—say, by the amount A—then the amplitudes in the two states would, from his point of view, be:

ei(E1+A)·t/ħ and ei(E2+A)·t/ħ

All of his amplitudes would be multiplied by the same factor ei(A/ħ)·t, and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want. For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren’t relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy Ms·c2, where Ms is the mass of all the separate pieces—the nucleus and the electrons—which is, of course, different from the mass of the atom. For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn’t make any difference, provided we shift all the energies in a particular calculation by the same constant.”

It’s a rather long quotation, but it’s important. The key phrase here is, obviously, the following: “For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom.” So that’s what he’s doing when solving Schrödinger’s equation. However, I should make the following point here: if we shift the origin of our energy scale, it does not make any difference in regard to the probabilities we calculate, but it obviously does make a difference in terms of our wavefunction itself. To be precise, its density in time will be very different. Hence, if we’d want to give the wavefunction some physical meaning – which is what I’ve been trying to do all along – it does make a huge difference. When we leave the rest mass of all of the pieces in our system out, we can no longer pretend we capture their energy.

So… Well… There you go. If we’d want to try to interpret our ψ(x, t) = ei·(En/ħ)·t·f(x) function as a two-dimensional oscillation of the mass of our electron, the energy concept in it – so that’s the E in it – should include all pieces. Most notably, it should also include the electron’s rest energy, i.e. its energy when it is not in a bound state. This rest energy is equal to 0.511 MeV. […] Read this again: 0.511 mega-electronvolt (106 eV), so that’s huge as compared to the tiny energy values we mentioned so far (−13.6 eV, −3.4 eV, −1.5 eV,…).

Of course, this gives us a rather phenomenal order of magnitude for the oscillation that we’re looking at. Let’s quickly calculate it. We need to convert to SI units, of course: 0.511 MeV is about 8.187×10−14 joule (J), and so the associated frequency is equal to ν = E/h = (8.187×10−14 J)/(6.626×10−34 J·s) ≈ 1.23559×1020 cycles per second. Now, I know such a number doesn’t say all that much: just note it’s the same order of magnitude as the frequency of gamma rays and… Well… No. I won’t say more. You should try to think about this for yourself. [If you do, think – for starters – about the difference between bosons and fermions: matter-particles are fermions, and photons are bosons. Their nature is very different.]

The corresponding angular frequency is just the same number but multiplied by 2π (one cycle corresponds to 2π radians) and, hence, ω = 2π·ν = 7.76344×1020 rad per second. Now, if our green dot were moving around the origin, along the circumference of our unit circle, then its horizontal and/or vertical velocity would, numerically, approach that same value. Think of it. We have this eiθ = ei·(E/ħ)·t = ei·ω·t = cos(ω·t) + i·sin(ω·t) function, with ω = E/ħ. So the cos(ω·t) function captures the motion along the horizontal axis, while the sin(ω·t) function captures the motion along the vertical axis. Now, the velocity along the horizontal axis as a function of time is given by the following formula:

v(t) = d[x(t)]/dt = d[cos(ω·t)]/dt = −ω·sin(ω·t)

Likewise, the velocity along the vertical axis is given by v(t) = d[sin(ω·t)]/dt = ω·cos(ω·t). These are interesting formulas: they show the velocity (v) along one of the two axes is never larger than the angular velocity (ω). To be precise, the velocity approaches – or, in the limit, is equal to – the angular velocity ω when ω·t is equal to 0, π/2, π or 3π/2. So… Well… 7.76344×1020 meter per second!? That’s like 2.6 trillion times the speed of light. So that’s not possible, of course!
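You can quickly reproduce those numbers yourself – a small Python sketch, using the same rounded values for E, h and c as above:

```python
import math

E = 8.187e-14      # electron rest energy in joule (0.511 MeV)
h = 6.626e-34      # Planck's constant, J·s
c = 299792458      # speed of light, m/s

nu = E / h                  # frequency: about 1.2356e20 cycles per second
omega = 2 * math.pi * nu    # angular frequency: about 7.763e20 rad per second
# On a circle of radius 1 m, the tangential speed would be omega*(1 m):
print(omega / c)            # about 2.6e12 - the '2.6 trillion times c' above
```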

That’s where the amplitude of our wavefunction comes in – our envelope function f(x): the green dot does not move along the unit circle. The circle is much tinier and, hence, the oscillation should not exceed the speed of light. In fact, I should probably try to prove it oscillates at the speed of light, thereby respecting Einstein’s universal formula:

c = √(E/m)

Written like this – rather than as you know it: E = m·c2 – this formula shows the speed of light is just a property of spacetime, just like the ω = √(k/m) formula (or the ω = √(1/LC) formula for a resonant AC circuit) shows that ω, the natural frequency of our oscillator, is a characteristic of the system.
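In fact, if we take that suggestion literally – if the tangential speed of our green dot really is c – then the radius of the circle follows immediately: r = c/ω = ħ/(m·c), i.e. the reduced Compton wavelength of the electron. Here’s that back-of-the-envelope calculation (my own check – I am not deriving this rigorously here):

```python
hbar = 1.0546e-34   # reduced Planck constant, J·s
m = 9.109e-31       # electron rest mass, kg
c = 299792458       # speed of light, m/s

omega = m * c**2 / hbar   # omega = E/hbar, with E = m*c^2 (about 7.76e20 rad/s)
r = c / omega             # radius at which the tangential speed equals c
print(r)                  # about 3.86e-13 m, i.e. hbar/(m*c): the reduced Compton wavelength
```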

Am I absolutely certain of what I am writing here? No. My level of understanding of physics is still that of an undergrad. But… Well… It all makes a lot of sense, doesn’t it? 🙂

Now, I said there were a few obvious questions, and so far I answered only one. The other obvious question is why energy would appear to us as mass in motion in two dimensions only. Why is it an oscillation in a plane? We might imagine a third spring, so to speak, moving in and out from us, right? Also, energy densities are measured per unit volume, right?

Now that’s a clever question, and I must admit I can’t answer it right now. However, I do suspect it’s got to do with the fact that the wavefunction depends on the orientation of our reference frame. If we rotate it, it changes. So it’s like we’ve lost one degree of freedom already, so only two are left. Or think of the third direction as the direction of propagation of the wave. 🙂 Also, we should re-read what we wrote about the Poynting vector for the matter wave, or what Feynman wrote about probability currents. Let me give you some appetite for that by noting that we can re-write joule per cubic meter (J/m3) as newton per square meter: J/m3 = N·m/m3 = N/m2. [Remember: the unit of energy is force times distance. In fact, looking at Einstein’s formula, I’d say it’s kg·m2/s2 (mass times a squared velocity), but that simplifies to the same: kg·m2/s2 = [N/(m/s2)]·m2/s2 = N·m.]
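The J/m3 = N/m2 identity is easy to verify mechanically, by book-keeping the (kg, m, s) exponents of each unit – a small Python sketch (the tuple notation is my own, just for checking):

```python
# Represent an SI unit as exponents of (kg, m, s): N = kg·m/s^2, J = N·m
N = (1, 1, -2)            # newton
J = (1, 2, -2)            # joule = N·m

def div(u, v):
    """Divide two units by subtracting their exponents."""
    return tuple(a - b for a, b in zip(u, v))

m3 = (0, 3, 0)            # cubic meter
m2 = (0, 2, 0)            # square meter
print(div(J, m3), div(N, m2))   # both (1, -1, -2): J/m^3 and N/m^2 are the same dimension
```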

I should probably also remind you that there is no three-dimensional equivalent of Euler’s formula, and the way the kinetic and potential energy of those two oscillations works together is rather unique. Remember I illustrated it with the image of a V-2 engine in previous posts. There is no such thing as a V-3 engine. [Well… There actually is – but not with the third cylinder being positioned sideways.]

two-timer-576-px-photo-369911-s-original

But… Then… Well… Perhaps we should think of some weird combination of two V-2 engines. The illustration below shows the superposition of two one-dimensional waves – I think – one traveling east-west and back, and the other one traveling north-south and back. So, yes, we may want to think of Feynman’s drum-head again – but combining two-dimensional waves – two waves that both have an imaginary as well as a real dimension.

dippArticle-14

Hmm… Not sure. If we go down this path, we’d need to add a third dimension – so we’d have a super-weird V-6 engine! As mentioned above, the wavefunction does depend on our reference frame: we’re looking at stuff from a certain direction and, therefore, we can only see what goes up and down, and what goes left or right. We can’t see what comes near and what goes away from us. Also think of the particularities involved in measuring angular momentum – or the magnetic moment of some particle. We’re measuring that along one direction only! Hence, it’s probably no use to imagine we’re looking at three waves simultaneously!

In any case… I’ll let you think about all of this. I do feel I am on to something. I am convinced that my interpretation of the wavefunction as an energy propagation mechanism, or as energy itself – as a two-dimensional oscillation of mass – makes sense. 🙂

Of course, I haven’t answered one key question here: what is mass? What is that green dot – in reality, that is? At this point, we can only waffle – probably best to just give its standard definition: mass is a measure of inertia. A resistance to acceleration or deceleration, or to changing direction. But that doesn’t say much. I hate to say that – in many ways – all that I’ve learned so far has deepened the mystery, rather than solved it. The more we understand, the less we understand? But… Well… That’s all for today, folks! Have fun working through it for yourself. 🙂

Post scriptum: I’ve simplified the wavefunction a bit. As I noted in my post on it, the complex exponential is actually equal to ei·[(E/ħ)·t − m·φ], so we’ve got a phase shift because of m, the quantum number which denotes the z-component of the angular momentum. But that’s a minor detail that shouldn’t trouble or worry you here.

Re-visiting electron orbitals (III)

In my previous post, I mentioned that it was not so obvious (both from a physical as well as from a mathematical point of view) to write the wavefunction for electron orbitals – which we denoted as ψ(x, t), i.e. a function of two variables (or four: one time coordinate and three space coordinates) – as the product of two other functions in one variable only.

[…] OK. The above sentence is difficult to read. Let me write in math. 🙂 It is not so obvious to write ψ(x, t) as:

ψ(x, t) = ei·(E/ħ)·t·ψ(x)

As I mentioned before, the physicists’ use of the same symbol (ψ, psi) for both the ψ(x, t) and ψ(x) function is quite confusing – because the two functions are very different:

  • ψ(x, t) is a complex-valued function of two (real) variables: x and t. Or four, I should say, because x = (x, y, z) – but it’s probably easier to think of x as one vector variable – a vector-valued argument, so to speak. And then t is, of course, just a scalar variable. So… Well… A function of two variables: the position in space (x), and time (t).
  • In contrast, ψ(x) is a real-valued function of one (vector) variable only: x, so that’s the position in space only.

Now you should cry foul, of course: ψ(x) is not necessarily real-valued. It may be complex-valued. You’re right. You know the formula:

wavefunction

Note that the derivation of this formula involved a switch from Cartesian to polar coordinates, so from x = (x, y, z) to r = (r, θ, φ), and that the function is also a function of the two quantum numbers l and m now, i.e. the orbital angular momentum (l) and its z-component (m) respectively. In my previous post(s), I gave you the formulas for Yl,m(θ, φ) and Fl,m(r) respectively. Fl,m(r) was a real-valued function alright, but the Yl,m(θ, φ) function had that ei·m·φ factor in it. So… Yes. You’re right: the Yl,m(θ, φ) function is real-valued if – and only if – m = 0, in which case ei·m·φ = 1. Let me copy the table from Feynman’s treatment of the topic once again:

spherical harmonics 2

The Plm(cosθ) functions are the so-called (associated) Legendre polynomials, and the formula for these functions is rather horrible:

Legendre polynomial

Don’t worry about it too much: just note that Plm(cosθ) is a real-valued function. The point is the following: the ψ(x, t) function is complex-valued because – and only because – we multiply a real-valued envelope function – which depends on position only – with ei·(E/ħ)·t·e−i·m·φ = ei·[(E/ħ)·t − m·φ].

[…]

Please read the above once again and – more importantly – think about it for a while. 🙂 You’ll have to agree with the following:

  • As mentioned in my previous post, the ei·m·φ factor just gives us a phase shift: just a re-set of our zero point for measuring time, so to speak, and the whole ei·[(E/ħ)·t − m·φ] factor just disappears when we’re calculating probabilities.
  • The envelope function gives us the basic amplitude – in the classical sense of the word: the maximum displacement from the zero value. And so it’s that ei·[(E/ħ)·t − m·φ] factor that ensures the whole expression somehow captures the energy of the oscillation.

Let’s first look at the envelope function again. Let me copy the illustration for n = 5 and l = 2 from the Wikimedia Commons article. Note the symmetry planes:

  • Any plane containing the z-axis is a symmetry plane – like a mirror in which we can reflect one half of the shape to get the other half. [Note that I am talking about the shape only here. Forget about the colors for a while – as these reflect the complex phase of the wavefunction.]
  • Likewise, the plane containing both the x– and the y-axis is a symmetry plane as well.

n = 5

The first symmetry plane – or symmetry line, really (i.e. the z-axis) – should not surprise us, because the azimuthal angle φ is conspicuously absent in the formula for our envelope function if, as we are doing in this article here, we merge the ei·m·φ factor with the ei·(E/ħ)·t factor, so it’s just part and parcel of what the author of the illustrations above refers to as the ‘complex phase’ of our wavefunction. OK. Clear enough – I hope. 🙂 But why is the xy-plane a symmetry plane too? We need to look at that monstrous formula for the Plm(cosθ) function here: just note that the cosθ argument in it is being squared before it’s used in all of the other manipulations. Now, we know that cosθ = sin(π/2 − θ). So we can define some new angle – let’s just call it α – which is measured in the way we’re used to measuring angles, i.e. not from the z-axis but from the xy-plane. So we write: cosθ = sin(π/2 − θ) = sinα. The illustration below may or may not help you to see what we’re doing here.

angle 2

So… To make a long story short, we can substitute the cosθ argument in the Plm(cosθ) function for sinα = sin(π/2 − θ). Now, if the xy-plane is a symmetry plane, then we must find the same value for Plm(sinα) and Plm[sin(−α)]. Now, that’s not obvious, because sin(−α) = −sinα ≠ sinα. However, because the argument in that Plm(x) function is being squared before any other operation (like subtracting 1 and exponentiating the result), it is OK: [−sinα]2 = [sinα]2 = sin2α. […] OK, I am sure the geeks amongst my readers will be able to explain this more rigorously. In fact, I hope they’ll have a look at it, because there’s also that dl+m/dxl+m operator, and so you should check what happens with the minus sign there. 🙂
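For the geeks, here is a minimal numerical check of that symmetry claim, using the explicit formula P21(x) = −3·x·√(1 − x2) for the l = 2, m = 1 case (so a single example, not a general proof):

```python
import math

def P21(x):
    # Associated Legendre polynomial P_2^1(x) = -3*x*sqrt(1 - x^2)
    return -3.0 * x * math.sqrt(1.0 - x * x)

# The xy-plane is a symmetry plane if |P_l^m| is unchanged under theta -> pi - theta,
# i.e. under cos(theta) -> -cos(theta). Check |P21(-x)| = |P21(x)| on a grid:
for x in [0.1 * k for k in range(10)]:
    assert abs(abs(P21(-x)) - abs(P21(x))) < 1e-15
print("|P21| is symmetric under x -> -x")
```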

[…] Well… By now, you’re probably totally lost, but the fact of the matter is that we’ve got a beautiful result here. Let me highlight the most significant results:

  • A definite energy state of a hydrogen atom (or of an electron orbiting around some nucleus, I should say) appears to us as some beautifully shaped orbital – an envelope function in three dimensions, really – which has the z-axis – i.e. the vertical axis – as a symmetry line and the xy-plane as a symmetry plane.
  • The ei·[(E/ħ)·t − m·φ] factor gives us the oscillation within the envelope function. As such, it’s this factor that, somehow, captures the energy of the oscillation.

It’s worth thinking about this. Look at the geometry of the situation again – as depicted below. We’re looking at the situation along the x-axis, in the direction of the origin, which is the nucleus of our atom.

spherical

The ei·m·φ factor just gives us a phase shift: just a re-set of our zero point for measuring time, so to speak. Interesting, weird – but probably less relevant than the ei·(E/ħ)·t factor, which gives us the two-dimensional oscillation that captures the energy of the state.

Circle_cos_sin

Now, the obvious question is: the oscillation of what, exactly? I am not quite sure but – as I explained in my Deep Blue page – the real and imaginary part of our wavefunction are really like the electric and magnetic field vector of an oscillating electromagnetic field (think of electromagnetic radiation – if that makes it easier). Hence, just like the electric and magnetic field vectors represent some rapidly changing force on a unit charge, the real and imaginary part of our wavefunction must also represent some rapidly changing force on… Well… I am not quite sure on what though. The unit charge is usually defined as the charge of a proton – rather than an electron – but then forces act on some mass, right? And the mass of a proton is hugely different from the mass of an electron. The same electric (or magnetic) force will, therefore, give a hugely different acceleration to both.

So… Well… My gut instinct tells me the real and imaginary part of our wavefunction just represent, somehow, a rapidly changing force on some unit of mass, but then I am not sure how to define that unit right now (it’s probably not the kilogram!).

Now, there is another thing we should note here: we’re actually sort of de-constructing a rotation (look at the illustration above once again) into two linearly oscillating vectors – one along the z-axis and the other along the y-axis. Hence, in essence, we’re actually talking about something that’s spinning. In other words, we’re actually talking about some torque around the x-axis. In what direction? I think that shouldn’t matter – that we can write E or −E, in other words, but… Well… I need to explore this further – as should you! 🙂

Let me just add one more note on the ei·m·φ factor. It sort of defines the geometry of the complex phase itself. Look at the illustration below. Click on it to enlarge it if necessary – or, better still, visit the magnificent Wikimedia Commons article from which I get these illustrations. These are the orbitals for n = 4 and l = 3. Look at the red hues in particular – or the blue – whatever: focus on one color only, and see how – for m = ±1 – we’ve got one appearance of that color only. For m = ±2, the same color appears at two ends of the ‘tubes’ – or tori (plural of torus), I should say – just to sound more professional. 🙂 For m = ±3, the torus consists of three parts – or, in mathematical terms, we’d say the order of its rotational symmetry is equal to 3. Check that Wikimedia Commons article for higher values of n and l: the shapes become very convoluted, but the observation holds. 🙂
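The counting logic here is just the |m|-fold rotational symmetry of the ei·m·φ factor: rotating the azimuthal angle by 2π/|m| leaves it unchanged. A quick numerical check in Python:

```python
import cmath, math

# The e^(i*m*phi) factor has |m|-fold rotational symmetry: rotating the
# azimuthal angle by 2*pi/|m| leaves it unchanged, so one given phase
# (i.e. one given color) shows up |m| times around the torus.
for m in (1, 2, 3):
    phi = 0.7  # an arbitrary azimuthal angle
    a = cmath.exp(1j * m * phi)
    b = cmath.exp(1j * m * (phi + 2 * math.pi / m))
    assert abs(a - b) < 1e-9
print("m-fold symmetry checks out for m = 1, 2, 3")
```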

l = 3

Have fun thinking all of this through for yourself – and please do look at those symmetries in particular. 🙂

Post scriptum: You should do some thinking on whether or not these m = ±1, ±2,…, ±l orbitals are really different. As I mentioned above, a phase difference is just what it is: a re-set of the t = 0 point. Nothing more, nothing less. So… Well… As far as I am concerned, that’s no real difference, is it? 🙂 As with other stuff, I’ll let you think about this for yourself.

An interpretation of the wavefunction

This is my umpteenth post on the same topic. 😦 It is obvious that this search for a sensible interpretation is consuming me. Why? I am not sure. Studying physics is frustrating. As a leading physicist puts it:

“The teaching of quantum mechanics these days usually follows the same dogma: firstly, the student is told about the failure of classical physics at the beginning of the last century; secondly, the heroic confusions of the founding fathers are described and the student is given to understand that no humble undergraduate student could hope to actually understand quantum mechanics for himself; thirdly, a deus ex machina arrives in the form of a set of postulates (the Schrödinger equation, the collapse of the wavefunction, etc); fourthly, a bombardment of experimental verifications is given, so that the student cannot doubt that QM is correct; fifthly, the student learns how to solve the problems that will appear on the exam paper, hopefully with as little thought as possible.”

That’s obviously not the way we want to understand quantum mechanics. [With we, I mean, me, of course, and you, if you’re reading this blog.] Of course, that doesn’t mean I don’t believe Richard Feynman, one of the greatest physicists ever, when he tells us no one, including himself, understands physics quite the way we’d like to understand it. Such statements should not prevent us from trying harder. So let’s look for better metaphors. The animation below shows the two components of the archetypal wavefunction – a simple sine and cosine. They’re the same function actually, but their phases differ by 90 degrees (π/2).

circle_cos_sin

It makes me think of a V-2 engine with the pistons at a 90-degree angle. Look at the illustration below, which I took from a rather simple article on cars and engines that has nothing to do with quantum mechanics. Think of the moving pistons as harmonic oscillators, like springs.

two-timer-576-px-photo-369911-s-original

We will also think of the center of each cylinder as the zero point: think of that point as a point where – if we’re looking at one cylinder alone – the internal and external pressure balance each other, so the piston would not move… Well… If it weren’t for the other piston, because the second piston is not at the center when the first is. In fact, it is easy to verify and compare the following positions of both pistons, as well as the associated dynamics of the situation:

| Piston 1 | Piston 2 | Motion of Piston 1 | Motion of Piston 2 |
|---|---|---|---|
| Top | Center | Compressed air will push piston down | Piston moves down against external pressure |
| Center | Bottom | Piston moves down against external pressure | External air pressure will push piston up |
| Bottom | Center | External air pressure will push piston up | Piston moves further up and compresses the air |
| Center | Top | Piston moves further up and compresses the air | Compressed air will push piston down |
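The cycle above can be checked with the idealized piston positions – a small Python sketch, where I (arbitrarily) take piston 1 as cosθ and piston 2 as −sinθ so that the signs match the cycle described above:

```python
import math

# Idealized piston positions (1 = top, 0 = center, -1 = bottom), with the
# 90-degree phase difference of the V-twin: piston 1 ~ cos(theta), piston 2 ~ -sin(theta).
# (The sign convention is mine, chosen to match the four positions above.)
def positions(theta):
    return round(math.cos(theta)), round(-math.sin(theta))

cycle = [positions(t) for t in (0, math.pi/2, math.pi, 3*math.pi/2)]
print(cycle)  # [(1, 0), (0, -1), (-1, 0), (0, 1)] - Top/Center, Center/Bottom, ...
```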

When the pistons move, their linear motion will be described by a sinusoidal function: a sine or a cosine. In fact, the 90-degree V-2 configuration ensures that the linear motion of the two pistons will be exactly the same, except for a phase difference of 90 degrees. [Of course, because of the sideways motion of the connecting rods, our sine and cosine functions describe the linear motion only approximately, but you can easily imagine the idealized limit situation. If not, check Feynman’s description of the harmonic oscillator.]

The question is: if we’d have a set-up like this, two springs – or two harmonic oscillators – attached to a shaft through a crank, would this really work as a perpetuum mobile? We are obviously talking about energy being transferred back and forth between the rotating shaft and the moving pistons… So… Well… Let’s model this: the total energy, potential and kinetic, in each harmonic oscillator is constant. Hence, the piston only delivers kinetic energy to, or receives kinetic energy from, the rotating mass of the shaft.

Now, in physics, that’s a bit of an oxymoron: we don’t think of negative or positive kinetic (or potential) energy in the context of oscillators. We don’t think of the direction of energy. But… Well… If we’ve got two oscillators, our picture changes, and so we may have to adjust our thinking here.

Let me start by giving you an authoritative derivation of the various formulas involved here, taking the example of the physical spring as an oscillator—but the formulas are basically the same for any harmonic oscillator.

energy harmonic oscillator

The first formula is a general description of the motion of our oscillator. The coefficient in front of the cosine function (a) is the maximum amplitude. Of course, you will also recognize ω0 as the natural frequency of the oscillator, and Δ as the phase factor, which takes into account our t = 0 point. In our case, for example, we have two oscillators with a phase difference equal to π/2 and, hence, Δ would be 0 for one oscillator, and –π/2 for the other. [The formula to apply here is sinθ = cos(θ – π/2).] Also note that we can equate our θ argument to ω0·t. Now, if a = 1 (which is the case here), then these formulas simplify to:

  1. K.E. = T = m·v2/2 = (m/2)·ω02·sin2(θ + Δ) = (m/2)·ω02·sin2(ω0·t + Δ)
  2. P.E. = U = k·x2/2 = (k/2)·cos2(θ + Δ)

The coefficient k in the potential energy formula characterizes the force: F = −k·x. The minus sign reminds us our oscillator wants to return to the center point, so the force pulls back. From the dynamics involved, it is obvious that k must be equal to m·ω02, so that gives us the famous T + U = m·ω02/2 formula or, including a once again, T + U = m·a2·ω02/2.

Now, if we normalize our functions by equating k to one (k = 1), then the motion of our first oscillator is given by the cosθ function, and its kinetic energy will be proportional to sin2θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be proportional to:

d(sin2θ)/dθ = 2∙sinθ∙d(sinθ)/dθ = 2∙sinθ∙cosθ

Let’s look at the second oscillator now. Just think of the second piston going up and down in our V-twin engine. Its motion is given by the sinθ function which, as mentioned above, is equal to cos(θ − π/2). Hence, its kinetic energy is proportional to sin2(θ − π/2), and how it changes – as a function of θ – will be proportional to:

2∙sin(θ − π/2)∙cos(θ − π/2) = −2∙cosθ∙sinθ = −2∙sinθ∙cosθ

We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the rotating shaft moves at constant speed. Linear motion becomes circular motion, and vice versa, in a frictionless Universe. We have the metaphor we were looking for!
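In fact, the ‘perpetuum mobile’ claim boils down to the identity sin2θ + cos2θ = 1: whatever kinetic energy one piston loses, the other gains. A quick numerical check (with the energies normalized to 1):

```python
import math

# Normalized kinetic energies of the two pistons: T1 = sin^2(theta), T2 = cos^2(theta).
# Their sum is constant, so the shaft neither gains nor loses energy overall.
for k in range(100):
    theta = 2 * math.pi * k / 100
    T1 = math.sin(theta) ** 2   # kinetic energy of piston 1
    T2 = math.cos(theta) ** 2   # kinetic energy of piston 2
    assert abs(T1 + T2 - 1.0) < 1e-12
print("T1 + T2 = 1 throughout the cycle")
```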

Somehow, in this beautiful interplay between linear and circular motion, energy is being borrowed from one place to another, and then returned. From what place to what place? I am not sure. We may call it the real and imaginary energy space respectively, but what does that mean? One thing is for sure, however: the interplay between the real and imaginary part of the wavefunction describes how energy propagates through space!

How exactly? Again, I am not sure. Energy is, obviously, mass in motion – as evidenced by the E = m·c2 equation, and it may not have any direction (when everything is said and done, it’s a scalar quantity without direction), but the energy in a linear motion is surely different from that in a circular motion, and our metaphor suggests we need to think somewhat more along those lines. Perhaps we will, one day, be able to square this circle. 🙂

Schrödinger’s equation

Let’s analyze the interplay between the real and imaginary part of the wavefunction through an analysis of Schrödinger’s equation, which we write as:

i·ħ∙∂ψ/∂t = –(ħ2/2m)∙∇2ψ + V·ψ

We can do a quick dimensional analysis of both sides:

  • [i·ħ∙∂ψ/∂t] = N∙m∙s/s = N∙m
  • [–(ħ2/2m)∙∇2ψ] = N∙m3/m2 = N∙m
  • [V·ψ] = N∙m

Note the dimension of the ‘diffusion’ constant ħ2/2m: [ħ2/2m] = N2∙m2∙s2/kg = N2∙m2∙s2/(N·s2/m) = N∙m3. Also note that, in order for the dimensions to come out alright, the dimension of V – the potential – must be that of energy. Hence, Feynman’s description of it as the potential energy – rather than the potential tout court – is somewhat confusing but correct: V must equal the potential energy of the electron. Hence, V is not the conventional (potential) energy of the unit charge (1 coulomb). Instead, the natural unit of charge is used here, i.e. the charge of the electron itself.

Now, Schrödinger’s equation – without the V·ψ term – can be written as the following pair of equations:

  1. Re(∂ψ/∂t) = −(1/2)∙(ħ/m)∙Im(∇2ψ)
  2. Im(∂ψ/∂t) = (1/2)∙(ħ/m)∙Re(∇2ψ)
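We can verify this pair of equations with the elementary wavefunction itself. The sketch below uses ħ = m = 1 units and the free-particle (non-relativistic) relation E = p2/2m, plugging in the analytic derivatives of ψ = ei·(p·x − E·t)/ħ – so this is a single numerical spot-check, not a proof:

```python
import cmath

# Check the V = 0 pair of equations with psi = exp(i*(p*x - E*t)/hbar),
# for which d(psi)/dt = -i*(E/hbar)*psi and laplacian(psi) = -(p/hbar)^2 * psi.
# (Units with hbar = m = 1; free particle, so E = p^2/2m.)
hbar, m, p = 1.0, 1.0, 0.7
E = p**2 / (2 * m)

x, t = 1.3, 0.4                       # an arbitrary point in space-time
psi = cmath.exp(1j * (p * x - E * t) / hbar)
dpsi_dt = -1j * (E / hbar) * psi      # analytic time derivative
lap_psi = -(p / hbar) ** 2 * psi      # analytic second space derivative

# 1. Re(dpsi/dt) = -(1/2)*(hbar/m)*Im(laplacian(psi))
assert abs(dpsi_dt.real + 0.5 * (hbar / m) * lap_psi.imag) < 1e-12
# 2. Im(dpsi/dt) = (1/2)*(hbar/m)*Re(laplacian(psi))
assert abs(dpsi_dt.imag - 0.5 * (hbar / m) * lap_psi.real) < 1e-12
print("coupled real/imaginary equations hold")
```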

This closely resembles the propagation mechanism of an electromagnetic wave as described by Maxwell’s equations for free space (i.e. a space with no charges), but E and B are vectors, not scalars. How do we get this result? Well… ψ is a complex function, which we can write as a + i∙b. Likewise, ∂ψ/∂t is a complex function, which we can write as c + i∙d, and ∇2ψ can then be written as e + i∙f. If we temporarily forget about the coefficients (ħ, ħ2/2m and V), then Schrödinger’s equation – including the V·ψ term – amounts to writing something like this:

i∙(c + i∙d) = –(e + i∙f) + (a + i∙b) ⇔ a + i∙b = i∙c − d + e + i∙f ⇔ a = −d + e and b = c + f

Hence, we can now write:

  1. V∙Re(ψ) = −ħ∙Im(∂ψ/∂t) + (1/2)∙(ħ2/m)∙Re(∇2ψ)
  2. V∙Im(ψ) = ħ∙Re(∂ψ/∂t) + (1/2)∙(ħ2/m)∙Im(∇2ψ)

This simplifies to the two equations above for V = 0, i.e. when there is no potential (electron in free space). Now we can bring the Re and Im operators into the brackets to get:

  1. V∙Re(ψ) = −ħ∙∂Im(ψ)/∂t + (1/2)∙(ħ2/m)∙∇2Re(ψ)
  2. V∙Im(ψ) = ħ∙∂Re(ψ)/∂t + (1/2)∙(ħ2/m)∙∇2Im(ψ)

This is very interesting, because we can re-write this using the quantum-mechanical energy operator H = –(ħ2/2m)∙∇2 + V· (note the multiplication sign after the V, which we do not have – for obvious reasons – for the –(ħ2/2m)∙∇2 expression):

  1. H[Re(ψ)] = −ħ∙∂Im(ψ)/∂t
  2. H[Im(ψ)] = ħ∙∂Re(ψ)/∂t

A dimensional analysis shows us both sides are, once again, expressed in N∙m. It’s a beautiful expression because – if we write the real and imaginary part of ψ as r∙cosθ and r∙sinθ, we get:

  1. H[cosθ] = −ħ∙∂sinθ/∂t = E∙cosθ
  2. H[sinθ] = ħ∙∂cosθ/∂t = E∙sinθ

Indeed, θ = (px − E∙t)/ħ and, hence, −ħ∙∂sinθ/∂t = −ħ∙cosθ∙(−E/ħ) = E∙cosθ and ħ∙∂cosθ/∂t = −ħ∙sinθ∙(−E/ħ) = E∙sinθ. Now we can combine the two equations in one equation again and write:

H[r∙(cosθ + i∙sinθ)] = r∙E∙(cosθ + i∙sinθ) ⇔ H[ψ] = E∙ψ

The operator H – applied to the wavefunction – gives us the (scalar) product of the energy E and the wavefunction itself. Isn’t this strange?

Hmm… I need to further verify and explain this result… I’ll probably do so in yet another post on the same topic… 🙂

Post scriptum: The symmetry of our V-2 engine – or perpetuum mobile – is interesting: its cross-section has only one axis of symmetry. Hence, we may associate some angle with it, so as to define its orientation in the two-dimensional cross-sectional plane. Of course, the cross-sectional plane itself is at right angles to the crankshaft axis, which we may also associate with some angle in three-dimensional space. Hence, its geometry defines two orthogonal directions which, in turn, define a spherical coordinate system, as shown below.

558px-3d_spherical

We may, therefore, say that three-dimensional space is actually being implied by the geometry of our V-2 engine. Now that is interesting, isn’t it? 🙂

Re-visiting uncertainty…

I re-visited the Uncertainty Principle a couple of times already, but here I really want to get to the bottom of the thing. What’s uncertain? The energy? The time? The wavefunction itself? These questions are not easily answered, and I need to warn you: you won’t get too much wiser when you’re finished reading this. I just felt like freewheeling a bit. [Note that the first part of this post repeats what you’ll find on the Occam page, or my post on Occam’s Razor. But these posts do not analyze uncertainty, which is what I will be trying to do here.]

Let’s first think about the wavefunction itself. It’s tempting to think it actually is the particle, somehow. But it isn’t. So what is it then? Well… Nobody knows. In my previous post, I said I like to think it travels with the particle, but that doesn’t make much sense either. It’s like a fundamental property of the particle. Like the color of an apple. But where is that color? In the apple, in the light it reflects, in the retina of our eye, or is it in our brain? If you know a thing or two about how perception actually works, you’ll tend to agree the quality of color is not in the apple. When everything is said and done, the wavefunction is a mental construct: when learning physics, we start to think of a particle as a wavefunction, but they are two separate things: the particle is reality, the wavefunction is imaginary.

But that’s not what I want to talk about here. It’s about that uncertainty. Where is the uncertainty? You’ll say: you just said it was in our brain. No. I didn’t say that. It’s not that simple. Let’s look at the basic assumptions of quantum physics:

  1. Quantum physics assumes there’s always some randomness in Nature and, hence, we can measure probabilities only. We’ve got randomness in classical mechanics too, but this is different. This is an assumption about how Nature works: we don’t really know what’s happening. We don’t know the internal wheels and gears, so to speak, or the ‘hidden variables’, as one interpretation of quantum mechanics would say. In fact, the most commonly accepted interpretation of quantum mechanics says there are no ‘hidden variables’.
  2. However, as Shakespeare has one of his characters say: there is a method in the madness, and the pioneers – I mean Werner Heisenberg, Louis de Broglie, Niels Bohr, Paul Dirac, etcetera – discovered that method: all probabilities can be found by taking the square of the absolute value of a complex-valued wavefunction (often denoted by Ψ), whose argument, or phase (θ), is given by the de Broglie relations ω = E/ħ and k = p/ħ. The generic functional form of that wavefunction is:

Ψ = Ψ(x, t) = a·e^(−iθ) = a·e^(−i(ω·t − k∙x)) = a·e^(−i·[(E/ħ)·t − (p/ħ)∙x])

That should be obvious by now, as I’ve written more than a dozen posts on this. 🙂 I still have trouble interpreting this, however – and I am not ashamed, because the Great Ones I just mentioned had trouble with that too. It’s not that complex exponential. That e^(−iφ) is a very simple periodic function, consisting of two sine waves rather than just one, as illustrated below. [It’s a sine and a cosine, but they’re the same function: there’s just a phase difference of 90 degrees.]

[Figure: a sine and a cosine function]

No. To understand the wavefunction, we need to understand those de Broglie relations, ω = E/ħ and k = p/ħ, and then, as mentioned, we need to understand the Uncertainty Principle. We need to understand where it comes from. Let’s try to go as far as we can by making a few remarks:

  • Adding or subtracting two terms in math, (E/ħ)·t − (p/ħ)∙x, implies the two terms should have the same dimension: we can only add apples to apples, and oranges to oranges. We shouldn’t mix them. Now, the (E/ħ)·t and (p/ħ)·x terms are actually dimensionless: they are pure numbers. So that’s even better. Just check it: energy is expressed in newton·meter (energy, or work, is force times distance, remember?) or electronvolts (1 eV = 1.6×10⁻¹⁹ J = 1.6×10⁻¹⁹ N·m); Planck’s constant, as the quantum of action, is expressed in J·s or eV·s; and the unit of (linear) momentum is 1 N·s = 1 kg·m/s. E/ħ gives a number expressed per second, and p/ħ a number expressed per meter. Therefore, multiplying E/ħ and p/ħ by t and x respectively gives us a dimensionless number indeed.
  • It’s also an invariant number, which means we’ll always get the same value for it, regardless of our frame of reference. As mentioned above, that’s because the four-vector product pμxμ = E·t − px is invariant: it doesn’t change when analyzing a phenomenon in one reference frame (e.g. our inertial reference frame) or another (i.e. in a moving frame).
  • Now, Planck’s quantum of action h, or ħ – h and ħ only differ by a factor of 2π: h is the amount of action per cycle, while ħ is the amount of action per radian of phase; both assume we can at least measure one cycle – is the quantum of energy, really. Indeed, if “energy is the currency of the Universe”, and it’s real and/or virtual photons that are exchanging it, then it’s good to know the currency unit is h, i.e. the energy that’s associated with one cycle of a photon. [In case you want to see the logic of this, see my post on the physical constants c, h and α.]
  • It’s not only time and space that are related, as evidenced by the fact that the c·t − x combination itself is invariant: E and p are related too, of course! They are related through the classical velocity of the particle that we’re looking at: E/p = c²/v and, therefore, we can write: E·β = p·c, with β = v/c, i.e. the relative velocity of our particle, as measured as a ratio of the speed of light. Now, I should add that the t − x combination is invariant only if we measure time and space in equivalent units. Otherwise, we have to write c·t − x. If we do that – so our unit of distance becomes the distance light travels in one second, or our unit of time becomes the time that is needed for light to travel one meter – then c = 1, and the E·β = p·c relation becomes E·β = p, which we also write as β = p/E: the ratio of the energy and the momentum of our particle is its (relative) velocity.
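The E·β = p·c relation in that last bullet is easy to verify numerically. A minimal sketch, using the electron’s rest mass and an arbitrarily chosen velocity of 0.6c:

```python
import math

# Checking E*beta = p*c for a relativistic particle. The mass is the
# electron's rest mass; the velocity (0.6c) is an arbitrary choice.
c = 299792458.0               # speed of light, m/s
m = 9.1093837e-31             # electron rest mass, kg
v = 0.6 * c
beta = v / c
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)

E = gamma * m * c ** 2        # total (relativistic) energy
p = gamma * m * v             # relativistic momentum

print(E * beta / (p * c))     # 1.0 (up to rounding): E/p = c**2/v indeed
```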

Combining all of the above, we may want to assume that we are measuring energy and momentum in terms of the Planck constant, i.e. the ‘natural’ unit for both. In addition, we may also want to assume that we’re measuring time and distance in equivalent units. Then the equation for the phase of our wavefunctions reduces to:

θ = (ω·t − k ∙x) = E·t − p·x

Now, θ is the argument of a wavefunction, and we can always re-scale such an argument by multiplying or dividing it by some constant. It’s just like writing the argument of a wavefunction as v·t − x or (v·t − x)/v = t − x/v, with v the velocity of the waveform that we happen to be looking at. [In case you have trouble following this argument, please check the post I did for my kids on waves and wavefunctions.] Now, the energy conservation principle tells us the energy of a free particle won’t change. [Just to remind you, a ‘free particle’ means it’s in a ‘field-free’ space, so our particle is in a region of uniform potential.] So we can, in this case, treat E as a constant, and divide E·t − p·x by E, so we get a re-scaled phase for our wavefunction, which I’ll write as:

φ = (E·t − p·x)/E = t − (p/E)·x = t − β·x

Alternatively, we could also look at p as some constant, as there is no variation in potential energy that will cause a change in momentum, and in the related kinetic energy. We’d then divide by p and we’d get (E·t − p·x)/p = (E/p)·t − x = t/β − x, which amounts to the same, as we can always re-scale by multiplying with β, which would again yield the same t − β·x argument.

The point is, if we measure energy and momentum in terms of the Planck unit (I mean: in terms of the Planck constant, i.e. the quantum of energy), and if we measure time and distance in ‘natural’ units too, i.e. we take the speed of light to be unity, then our Platonic wavefunction becomes as simple as:

Φ(φ) = a·e^(−iφ) = a·e^(−i(t − β·x))

This is a wonderful formula, but let me first answer your most likely question: why would we use a relative velocity? Well… Just think of it: when everything is said and done, the whole theory of relativity and, hence, the whole of physics, is based on one fundamental and experimentally verified fact: the speed of light is absolute. In whatever reference frame, we will always measure it as 299,792,458 m/s. That’s obvious, you’ll say, but it’s actually the weirdest thing ever if you start thinking about it, and it explains why those Lorentz transformations look so damn complicated. In any case, this fact legitimately establishes c as some kind of absolute measure against which all speeds can be measured. Therefore, it is only natural indeed to express a velocity as some number between 0 and 1. Now that amounts to expressing it as the β = v/c ratio.

Let’s now go back to that Φ(φ) = a·e^(−iφ) = a·e^(−i(t − β·x)) wavefunction. Its temporal frequency ω is equal to one, and its spatial frequency k is equal to β = v/c. It couldn’t be simpler but, of course, we’ve got this remarkably simple result because we re-scaled the argument of our wavefunction using the energy and momentum itself as the scale factor. So, yes, we can re-write the wavefunction of our particle in a particularly elegant and simple form using the only information that we have when looking at quantum-mechanical stuff: energy and momentum, because that’s what everything reduces to at that level.
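A quick numerical illustration of those two frequencies. The minus sign below follows the e^(−iθ) convention of the elementary wavefunction, and the values of a and β are arbitrary choices:

```python
import cmath, math

# The 'reduced' wavefunction Phi = a*exp(-i*(t - beta*x)):
# temporal period 2*pi (omega = 1), spatial period 2*pi/beta (k = beta).
a, beta = 1.0, 0.5

def Phi(t, x):
    return a * cmath.exp(-1j * (t - beta * x))

t, x = 0.4, 1.7
print(abs(Phi(t + 2 * math.pi, x) - Phi(t, x)))         # ~0: period 2*pi in t
print(abs(Phi(t, x + 2 * math.pi / beta) - Phi(t, x)))  # ~0: wavelength 2*pi/beta
print(abs(Phi(t, x)) ** 2)                              # a**2, everywhere and always
```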

So… Well… We’ve pretty much explained what quantum physics is all about here. You just need to get used to that complex exponential: e^(−iφ) = cos(−φ) + i·sin(−φ) = cos(φ) − i·sin(φ). It would have been nice if Nature had given us a simple sine or cosine function. [Remember the sine and cosine function are actually the same, except for a phase difference of 90 degrees: sin(φ) = cos(π/2 − φ) = cos(φ − π/2). So we can always go from one to the other by shifting the origin of our axis.] But… Well… As we’ve shown so many times already, a real-valued wavefunction doesn’t explain the interference we observe, be it interference of electrons or whatever other particles or, for that matter, the interference of electromagnetic waves itself, which, as you know, we also need to look at as a stream of photons, i.e. light quanta, rather than as some kind of infinitely flexible aether that’s undulating, like water or air.

However, the analysis above does not include uncertainty. That’s as fundamental to quantum physics as de Broglie‘s equations, so let’s think about that now.

Introducing uncertainty

Our information on the energy and the momentum of our particle will be incomplete: we’ll write E = E0 ± σE, and p = p0 ± σp. Huh? No ΔE or Δp? Well… It’s the same, really, but I am a bit tired of using the Δ symbol, so I am using the σ symbol here, which denotes a standard deviation of some density function. It underlines the probabilistic, or statistical, nature of our approach.

The simplest model is that of a two-state system, because it involves two energy levels only: E = E0 ± A, with A some constant. Large or small, it doesn’t matter. All is relative anyway. 🙂 We explained the basics of the two-state system using the example of an ammonia molecule, i.e. an NH₃ molecule, so it consists of one nitrogen and three hydrogen atoms. We had two base states in this system: ‘up’ or ‘down’, which we denoted as base state | 1 〉 and base state | 2 〉 respectively. This ‘up’ and ‘down’ had nothing to do with the classical or quantum-mechanical notion of spin, which is related to the magnetic moment. No. It’s much simpler than that: the nitrogen atom could be either beneath or, else, above the plane of the hydrogens, as shown below, with ‘beneath’ and ‘above’ being defined in regard to the molecule’s direction of rotation around its axis of symmetry.

[Figure: the nitrogen atom above or beneath the plane of the hydrogens]

In any case, for the details, I’ll refer you to the post(s) on it. Here I just want to mention the result. We wrote the amplitude to find the molecule in either one of these two states as:

  • C1 = 〈 1 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) + (1/2)·e^(−(i/ħ)·(E0 + A)·t)
  • C2 = 〈 2 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) − (1/2)·e^(−(i/ħ)·(E0 + A)·t)

That gave us the following probabilities:

[Graph: the two probabilities as a function of time]

If our molecule can be in two states only, and it starts off in one, then the probability that it will remain in that state will gradually decline, while the probability that it flips into the other state will gradually increase.
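We can sketch those probabilities numerically. Assuming ħ = 1 and arbitrary values for E0 and A, |C1|² and |C2|² work out to cos²(A·t/ħ) and sin²(A·t/ħ) respectively, which is exactly the oscillation in the graph:

```python
import cmath, math

# The two-state amplitudes, with hbar = 1 and arbitrary E0 and A.
hbar, E0, A = 1.0, 10.0, 0.8

def C1(t):
    return 0.5 * (cmath.exp(-1j * (E0 - A) * t / hbar)
                  + cmath.exp(-1j * (E0 + A) * t / hbar))

def C2(t):
    return 0.5 * (cmath.exp(-1j * (E0 - A) * t / hbar)
                  - cmath.exp(-1j * (E0 + A) * t / hbar))

t = 0.9
P1, P2 = abs(C1(t)) ** 2, abs(C2(t)) ** 2
print(P1, math.cos(A * t / hbar) ** 2)  # the same number twice
print(P2, math.sin(A * t / hbar) ** 2)  # the same number twice
print(P1 + P2)                          # the probabilities add up to 1
```

Note that the common e^(−(i/ħ)·E0·t) factor drops out when taking the absolute square, so only the energy difference 2A shows up in the probabilities.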

Now, the point you should note is that we get these time-dependent probabilities only because we’re introducing two different energy levels: E0 + A and E0 − A. [Note they are separated by an amount equal to 2·A, as I’ll use that information later.] If we’d have one energy level only – which amounts to saying that we know it, and that it’s something definite – then we’d just have one wavefunction, which we’d write as:

a·e^(−iθ) = a·e^(−(i/ħ)·(E0·t − p·x)) = a·e^(−(i/ħ)·E0·t)·e^((i/ħ)·p·x)

Note that we can always split our wavefunction in a ‘time’ and a ‘space’ part, which is quite convenient. In fact, because our ammonia molecule stays where it is, it has no momentum: p = 0. Therefore, its wavefunction reduces to:

a·e^(−iθ) = a·e^(−(i/ħ)·E0·t)

As simple as it can be. 🙂 The point is that a wavefunction like this, i.e. a wavefunction that’s defined by a definite energy, will always yield a constant and equal probability, both in time as well as in space. That’s just the math of it: |a·e^(−iθ)|² = a². Always! If you want to know why, you should think of Euler’s formula and Pythagoras’ Theorem: cos²θ + sin²θ = 1. Always! 🙂
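That’s easy to confirm: for any phase θ whatsoever, the squared modulus of a·e^(−iθ) is just a² (a = 3 is an arbitrary choice here):

```python
import cmath

# |a*exp(-i*theta)|**2 = a**2 for any theta whatsoever.
a = 3.0
probs = [abs(a * cmath.exp(-1j * theta)) ** 2
         for theta in (0.0, 0.7, 2.5, -4.1, 100.0)]
print(probs)  # all ~9.0 (= a**2): a constant probability, always and everywhere
```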

That constant probability is annoying, because our nitrogen atom never ‘flips’, and we know it actually does, thereby overcoming an energy barrier: it’s a phenomenon that’s referred to as ‘tunneling’, and it’s real! The probabilities in that graph above are real! Also, if our wavefunction would represent some moving particle, it would imply that the probability to find it somewhere in space is the same all over space, which implies our particle is everywhere and nowhere at the same time, really.

So, in quantum physics, this problem is solved by introducing uncertainty. Introducing some uncertainty about the energy, or about the momentum, is mathematically equivalent to saying that we’re actually looking at a composite wave, i.e. the sum of a finite or potentially infinite set of component waves. So we have the same ω = E/ħ and k = p/ħ relations, but we apply them to energy levels, or to some continuous range of energy levels ΔE. It amounts to saying that our wave function doesn’t have a specific frequency: it now has n frequencies, or a range of frequencies Δω = ΔE/ħ. In our two-state system, n = 2, obviously! So we’ve two energy levels only and so our composite wave consists of two component waves only.

We know what that does: it ensures our wavefunction is being ‘contained’ in some ‘envelope’. It becomes a wavetrain, or a kind of beat note, as illustrated below:

[Animation: a wave packet, showing group and phase velocity]

[The animation comes from Wikipedia, and shows the difference between the group and phase velocity: the green dot shows the group velocity, while the red dot travels at the phase velocity.]
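The beat pattern itself is easy to reproduce: add two cosines with slightly different frequencies (arbitrarily chosen here), and you get a fast carrier inside a slowly varying envelope. The product-form identity below is the math behind the animation:

```python
import math

# Two superposed cosines (frequencies chosen arbitrarily) produce a beat:
# cos(w1*t) + cos(w2*t) = 2*cos(((w1-w2)/2)*t) * cos(((w1+w2)/2)*t),
# i.e. a carrier at the average frequency inside a slow envelope.
w1, w2 = 10.0, 9.0

def composite(t):
    return math.cos(w1 * t) + math.cos(w2 * t)

def product_form(t):
    return 2 * math.cos((w1 - w2) / 2 * t) * math.cos((w1 + w2) / 2 * t)

for t in (0.1, 0.5, 1.3, 2.7):
    print(round(abs(composite(t) - product_form(t)), 12))  # 0.0: same wave, two views
```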

So… OK. That should be clear enough. Let’s now apply these thoughts to our ‘reduced’ wavefunction:

Φ(φ) = a·e^(−iφ) = a·e^(−i(t − β·x))

Thinking about uncertainty

Frankly, I tried to fool you above. If the functional form of the wavefunction is a·e^(−(i/ħ)·(E·t − p·x)), then we can measure E and p in whatever unit we want, including h or ħ, but we cannot re-scale the argument of the function, i.e. the phase θ, without changing the functional form itself. I explained that in that post for my kids on wavefunctions, in which I noted we may represent the same electromagnetic wave by two different functional forms:

 F(ct−x) = G(t−x/c)

So F and G represent the same wave, but they are different wavefunctions. In this regard, you should note that the argument of F is expressed in distance units, as we multiply t with the speed of light (so one second of time corresponds to 299,792,458 m now), while the argument of G is expressed in time units, as we divide x by the distance light travels in one second. But F and G are different functional forms. Just do an example and take a simple sine function: you’ll agree that sin(θ) ≠ sin(θ/c) for all values of θ, except 0. Re-scaling changes the frequency, or the wavelength, and it does so quite drastically in this case. 🙂 Likewise, you can see that e^(i·φ/E) = [e^(iφ)]^(1/E), so that’s a very different function. In short, we were a bit too adventurous above. Now, while we can drop the 1/ħ in the a·e^(−(i/ħ)·(E·t − p·x)) function when measuring energy and momentum in units that are numerically equal to ħ, we’ll just revert to our original wavefunction for the time being, which equals:

Ψ(θ) = a·e^(−iθ) = a·e^(−i·[(E/ħ)·t − (p/ħ)·x])

Let’s now introduce uncertainty once again. The simplest situation is that we have two closely spaced energy levels. In theory, the difference between the two can be as small as ħ, so we’d write: E = E0 ± ħ/2. [Remember what I said about the ± A: it means the difference is 2A.] However, we can generalize this and write: E = E0 ± n·ħ/2, with n = 1, 2, 3,… This does not imply any greater uncertainty – we still have two states only – but just a larger difference between the two energy levels.

Let’s also simplify by looking at the ‘time part’ of our equation only, i.e. a·e^(−i·(E/ħ)·t). It doesn’t mean we don’t care about the ‘space part’: it just means that we’re only looking at how our function varies in time and so we just ‘fix’ or ‘freeze’ x. Now, the uncertainty is in the energy but, from a mathematical point of view, it translates into an uncertainty in the argument of our wavefunction. This uncertainty in the argument is, obviously, equal to:

(E/ħ)·t = [(E0 ± n·ħ/2)/ħ]·t = (E0/ħ ± n/2)·t = (E0/ħ)·t ± (n/2)·t

So we can write:

a·e^(−i·(E/ħ)·t) = a·e^(−i·[(E0/ħ)·t ± (n/2)·t]) = a·e^(−i·(E0/ħ)·t)·e^(∓i·(n/2)·t)

This is valid for any value of t. What the expression says is that, from a mathematical point of view, introducing uncertainty about the energy is equivalent to introducing uncertainty about the wavefunction itself. It may be equal to a·e^(−i·(E0/ħ)·t)·e^(−i·(n/2)·t), but it may also be equal to a·e^(−i·(E0/ħ)·t)·e^(+i·(n/2)·t). The phases of the e^(−i·t/2) and e^(+i·t/2) factors (taking n = 1) are separated by an angle equal to t.

So… Well…

[…]

Hmm… I am stuck. How is this going to lead me to the ΔE·Δt ≥ ħ/2 principle? To anyone out there: can you help? 🙂

[…]

The thing is: you won’t get the Uncertainty Principle by staring at that formula above. It’s a bit more complicated. The idea is that we have some distribution of the observables, like energy and momentum, and that implies some distribution of the associated frequencies, i.e. ω for E, and k for p. The Wikipedia article on the Uncertainty Principle gives you a formal derivation of the Uncertainty Principle, using the so-called Kennard formulation of it. You can have a look, but it involves a lot of formalism—which is what I wanted to avoid here!
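For what it’s worth, the Kennard-style bound can at least be illustrated numerically without much formalism. The sketch below (the width σ and all grid sizes are arbitrary choices) builds a Gaussian pulse, computes the spread of |ψ|² in t and the spread of its brute-force Fourier spectrum in ω, and finds a product of about 1/2 – the Gaussian being the minimum-uncertainty case:

```python
import cmath, math

# Gaussian pulse psi(t) = exp(-t**2/(4*sigma**2)): |psi|^2 has spread
# sigma in t, and the spectrum |FT(psi)|^2 has spread 1/(2*sigma) in
# omega, so the product is 1/2. Sigma and the grids are arbitrary.
sigma = 1.0
N, T = 256, 20.0
ts = [-T / 2 + T * n / N for n in range(N)]
psi = [math.exp(-t ** 2 / (4 * sigma ** 2)) for t in ts]

def spread(xs, weights):
    # Standard deviation of xs under the given (unnormalized) weights.
    total = sum(weights)
    mean = sum(x * w for x, w in zip(xs, weights)) / total
    return math.sqrt(sum((x - mean) ** 2 * w for x, w in zip(xs, weights)) / total)

sigma_t = spread(ts, [p ** 2 for p in psi])

# Brute-force Fourier transform on a grid of angular frequencies.
omegas = [-5.0 + 10.0 * k / N for k in range(N)]
spectrum = [abs(sum(p * cmath.exp(-1j * w * t) for p, t in zip(psi, ts))) ** 2
            for w in omegas]
sigma_w = spread(omegas, spectrum)

print(sigma_t * sigma_w)  # ~0.5, i.e. the hbar/2 bound in natural units
```

Any non-Gaussian pulse would give a larger product, which is the content of the inequality.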

I hope you get the idea though. It’s like statistics. First, we assume we know the population, and then we describe that population using all kinds of summary statistics. But then we reverse the situation: we don’t know the population but we do have sample information, which we also describe using all kinds of summary statistics. Then, based on what we find for the sample, we calculate the estimated statistics for the population itself, like the mean value and the standard deviation, to name the most important ones. So it’s a bit the same here, except that, in quantum mechanics, there may not be any real value underneath: the mean and the standard deviation represent something fuzzy, rather than something precise.

Hmm… I’ll leave you with these thoughts. We’ll develop them further as we dig into all of this much deeper over the coming weeks. 🙂

Post scriptum: I know you expect something more from me, so… Well… Think about the following. If we have some uncertainty about the energy E, we’ll have some uncertainty about the momentum p according to that β = p/E relation. [By the way, please think about this relationship: it says, all other things being equal (such as the inertia, i.e. the mass, of our particle), that more energy will also mean more momentum. More specifically, note that ∂p/∂E = β according to this equation. In fact, if we include the mass of our particle, i.e. its inertia, as potential energy, then we might say that (1−β)·E is the potential energy of our particle, as opposed to its kinetic energy.] So let’s try to think about that.

Let’s denote the uncertainty about the energy as ΔE. As should be obvious from the discussion above, it can be anything: it can mean two separate energy levels E = E0 ± A, or a potentially infinite set of values. However, even if the set is infinite, we know the various energy levels need to be separated by ħ, at least. So if the set is infinite, it’s going to be a countably infinite set, like the set of natural numbers, or the set of integers. But let’s stick to our example of two values E = E0 ± A only, with A = ħ, so E = E0 ± ħ and, therefore, ΔE = ±ħ. That implies Δp = Δ(β·E) = β·ΔE = ±β·ħ.

Hmm… This is a bit fishy, isn’t it? We said we’d measure the momentum in units of ħ, but so here we say the uncertainty in the momentum can actually be a fraction of ħ. […] Well… Yes. Now, the momentum is the product of the mass, as measured by the inertia of our particle to accelerations or decelerations, and its velocity. If we assume the inertia of our particle, or its mass, to be constant – so we say it’s a property of the object that is not subject to uncertainty, which, I admit, is a rather dicey assumption (if all other measurable properties of the particle are subject to uncertainty, then why not its mass?) – then we can also write: Δp = Δ(m·v) = Δ(m·β) = m·Δβ. [Note that we’re not only assuming that the mass is not subject to uncertainty, but also that the velocity is non-relativistic. If not, we couldn’t treat the particle’s mass as a constant.] But let’s be specific here: what we’re saying is that, if ΔE = ± ħ, then Δv = Δβ will be equal to Δβ = Δp/m = ± (β/m)·ħ. The point to note is that we’re no longer sure about the velocity of our particle. Its (relative) velocity is now:

β ± Δβ = β ± (β/m)·ħ

But, because velocity is the ratio of distance over time, this introduces an uncertainty about time and distance. Indeed, if its velocity is β ± (β/m)·ħ, then, over some time T, it will travel some distance X = [β ± (β/m)·ħ]·T. Likewise, if we have some distance X, then our particle will need a time equal to T = X/[β ± (β/m)·ħ].

You’ll wonder what I am trying to say because… Well… If we’d just measure X and T precisely, then all the uncertainty is gone and we know if the energy is E0 + ħ or E0 − ħ. Well… Yes and no. The uncertainty is fundamental – at least that’s what quantum physicists believe – so our uncertainty about the time and the distance we’re measuring is equally fundamental: we can have either of the two values X = [β ± (β/m)·ħ]·T or T = X/[β ± (β/m)·ħ], whenever or wherever we measure. So we have a ΔX and ΔT that are equal to ±[(β/m)·ħ]·T and X/[±(β/m)·ħ] respectively. We can relate this to ΔE and Δp:

  • ΔX = (1/m)·T·Δp
  • ΔT = X/[(β/m)·ΔE]

You’ll grumble: this still doesn’t give us the Uncertainty Principle in its canonical form. Not at all, really. I know… I need to do some more thinking here. But I feel I am getting somewhere. 🙂 Let me know if you see where, and if you think you can get any further. 🙂

The thing is: you’ll have to read a bit more about Fourier transforms and why and how variables like time and energy, or position and momentum, are so-called conjugate variables. As you can see, energy and time, and position and momentum, are obviously linked through the E·t and p·x products in the E·t − p·x sum. That says a lot, and it helps us to understand, in a more intuitive way, why the ΔE·Δt and Δp·Δx products should obey the relation they are obeying, i.e. the Uncertainty Principle, which we write as ΔE·Δt ≥ ħ/2 and Δp·Δx ≥ ħ/2. But proving that involves more than just staring at that Ψ(θ) = a·e^(−iθ) = a·e^(−i·[(E/ħ)·t − (p/ħ)·x]) relation.

Having said that, it helps to think about how that E·t − p·x sum works. For example, think about two particles, a and b, with different velocity and mass, but with the same momentum, so pa = pb ⇔ ma·va = mb·vb ⇔ ma/vb = mb/va. The spatial frequency of the wavefunction would be the same for both, but the temporal frequency would be different, because their energy incorporates the rest mass and, hence, because ma ≠ mb, we also know that Ea ≠ Eb. So… It all works out but, yes, I admit it’s all very strange, and it takes a long time and a lot of reflection to advance our understanding.
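Here is a small numerical sketch of that last point, in natural units (c = ħ = 1) and with arbitrary masses: both particles get the same spatial frequency k = p, but different temporal frequencies ω = E, and hence different velocities β = p/E.

```python
import math

# Two particles with the same momentum but different (arbitrary) masses.
# In natural units: k = p is the same for both, but omega = E =
# sqrt(p**2 + m**2) differs, and so does the velocity beta = p/E.
p = 1.0
for m in (0.5, 2.0):
    E = math.sqrt(p ** 2 + m ** 2)
    beta = p / E
    print(f"m = {m}: k = {p}, omega = {E:.4f}, beta = {beta:.4f}")
```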