A physical explanation for relativistic length contraction?

My last posts were all about a possible physical interpretation of the quantum-mechanical wavefunction. To be precise, we have been interpreting the wavefunction as a gravitational wave. In this interpretation, the real and imaginary components of the wavefunction get a physical dimension: force per unit mass (newton per kg). The inspiration here was the structural similarity between Coulomb’s and Newton’s force laws. They look alike: it’s just that one gives us a force per unit charge (newton per coulomb), while the other gives us a force per unit mass.

So… Well… Many nice things came out of this – and I wrote about that at length – but last night I was thinking this interpretation may also offer an explanation of relativistic length contraction. Before we get there, let us re-visit our hypothesis.

The geometry of the wavefunction

The elementary wavefunction is written as:

ψ = a·e^(−i·(E·t − p∙x)/ħ) = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

Nature should not care about our conventions for measuring the phase angle clockwise or counterclockwise and, therefore, the ψ = a·e^(+i·[E·t − p∙x]/ħ) function may also be permitted. We know that cos(θ) = cos(−θ) and sin(−θ) = −sin(θ), so we can write:

ψ = a·e^(+i·(E·t − p∙x)/ħ) = a·cos(E∙t/ħ − p∙x/ħ) + i·a·sin(E∙t/ħ − p∙x/ħ)

= a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ)

The vectors p and x are the momentum and position vectors respectively: p = (px, py, pz) and x = (x, y, z). However, if we assume there is no uncertainty about p – not about the direction, nor about the magnitude – then we may choose an x-axis which reflects the direction of p. As such, x = (x, y, z) reduces to (x, 0, 0), and px/ħ reduces to p∙x/ħ. This amounts to saying our particle is traveling along the x-axis or, if p = 0, that our particle is located somewhere on the x-axis. Hence, the analysis is one-dimensional only.

The geometry of the elementary wavefunction is illustrated below. The x-axis is the direction of propagation, and the y- and z-axes represent the real and imaginary part of the wavefunction respectively.

Note that, when applying the right-hand rule for the axes, the vertical axis is the y-axis, not the z-axis. Hence, we may associate the vertical axis with the cosine component, and the horizontal axis with the sine component. You can check this as follows: if the origin is the (x, t) = (0, 0) point, then cos(θ) = cos(0) = 1 and sin(θ) = sin(0) = 0. This is reflected in both illustrations, which show a left- and a right-handed wave respectively. We speculated this should correspond to the two possible values for the quantum-mechanical spin of the wave: +ħ/2 or −ħ/2. The cosine and sine components for the left-handed wave are shown below. Needless to say, the cosine and sine function are the same, except for a phase difference of π/2: sin(θ) = cos(θ − π/2).

[Illustration: circular polarization with components]

As for the wave velocity, and its direction of propagation, we know that the (phase) velocity of any wave F(kx − ωt) is given by vp = ω/k = (E/ħ)/(p/ħ) = E/p. Of course, the momentum might also be in the negative x-direction, in which case k would be equal to −p/ħ and, therefore, we would get a negative phase velocity: vp = ω/k = −E/p.

The de Broglie relations

E/ħ = ω gives the frequency in time (expressed in radians per second), while p/ħ = k gives us the wavenumber, or the frequency in space (expressed in radians per meter). Of course, we may write: f = ω/2π  and λ = 2π/k, which gives us the two de Broglie relations:

  1. E = ħ∙ω = h∙f
  2. p = ħ∙k = h/λ
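Before interpreting these relations, it may help to put some numbers on them. The following minimal sketch – my own illustration, with an electron moving at 1% of the speed of light as an arbitrary choice – computes the frequency in time and the frequency in space, and the corresponding wavelength:

```python
# A quick numerical look at the de Broglie relations E = h*f and p = h/lambda
# for an electron moving at 1% of c. The particle and velocity are illustrative
# choices only; E is the total energy (rest energy included), as in the text.
import math

h    = 6.62607015e-34      # Planck's constant (J*s)
hbar = h / (2 * math.pi)   # reduced Planck constant (J*s)
c    = 2.99792458e8        # speed of light (m/s)
m_e  = 9.1093837015e-31    # electron rest mass (kg)

v     = 0.01 * c                        # classical (group) velocity
gamma = 1 / math.sqrt(1 - (v / c)**2)   # Lorentz factor
E     = gamma * m_e * c**2              # total energy
p     = gamma * m_e * v                 # momentum

omega = E / hbar                 # frequency in time (rad/s)
k     = p / hbar                 # frequency in space, i.e. wavenumber (rad/m)
f     = omega / (2 * math.pi)
lam   = 2 * math.pi / k          # de Broglie wavelength, equal to h/p

print(f"omega = {omega:.3e} rad/s, f = {f:.3e} Hz")
print(f"k = {k:.3e} rad/m, lambda = {lam:.3e} m (h/p = {h/p:.3e} m)")
```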

The frequency in time is easy to interpret. The wavefunction of a particle with more energy, or more mass, will have a higher frequency in time – a higher density of oscillations in time, so to speak – than that of a particle with less energy.

In contrast, the second de Broglie relation is somewhat harder to interpret. According to the p = h/λ relation, the wavelength is inversely proportional to the momentum: λ = h/p. The velocity of a photon, or a (theoretical) particle with zero rest mass (m0 = 0), is c and, therefore, we find that p = mv·v = mc·c = m∙c (all of the energy is kinetic). Hence, we can write: p∙c = m∙c2 = E, which we may also write as: E/p = c. Therefore, for a particle with zero rest mass, the wavelength can be written as:

λ = h/p = hc/E = h/mc

However, this is a limiting situation – applicable to photons only. Real-life matter-particles should have some mass[1] and, therefore, their velocity will never be c.[2]

Hence, if p goes to zero, then the wavelength becomes infinitely long: if p → 0 then λ → ∞. How should we interpret this inverse proportionality between λ and p? To answer this question, let us first see what this wavelength λ actually represents.

If we look at the ψ = a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ) formula once more, and if we write p∙x/ħ as Δ, then we can look at Δ as a phase factor, and so we will be interested to know for what x this phase factor Δ = p∙x/ħ will be equal to 2π. So we write:

Δ = p∙x/ħ = 2π ⇔ x = 2π∙ħ/p = h/p = λ

So now we get a meaningful interpretation for that wavelength. It is the distance between the crests (or the troughs) of the wave, so to speak, as illustrated below. Of course, this two-dimensional wave has no real crests or troughs: we measure crests and troughs against the y-axis here. Hence, our definition depends on the frame of reference.

[Illustration: wavelength]

Now we know what λ actually represents for our one-dimensional elementary wavefunction. Now, the time that is needed for one cycle is equal to T = 1/f = 2π·(ħ/E). Hence, we can now calculate the wave velocity:

v = λ/T = (h/p)/[2π·(ħ/E)] = E/p

Unsurprisingly, we just get the phase velocity that we had calculated already: v = vp = E/p. The question remains: what if p is zero? What if we are looking at some particle at rest? It is an intriguing question: we get an infinitely long wavelength, and an infinite wave velocity.
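Just to put some numbers on this – the electron and the 0.1·c velocity are my own illustrative choices, not values from the post – the following quick check confirms that λ/T equals E/p, and that this phase velocity is indeed superluminal for a matter-particle, because E includes the rest energy:

```python
# Sanity check of v_p = lambda/T = E/p = c^2/v for a matter-particle
# (illustrative values: an electron at 0.1c).
import math

h, c, m_e = 6.62607015e-34, 2.99792458e8, 9.1093837015e-31

v     = 0.1 * c                        # assumed classical velocity (illustration only)
gamma = 1 / math.sqrt(1 - (v / c)**2)
E, p  = gamma * m_e * c**2, gamma * m_e * v

lam = h / p        # wavelength
T   = h / E        # period of one cycle (T = 1/f = h/E)
v_p = lam / T      # phase velocity

print(f"v_p = {v_p:.4e} m/s, E/p = {E/p:.4e} m/s, c^2/v = {c**2/v:.4e} m/s")
print(f"v_p / c = {v_p/c:.2f}  (superluminal, as expected)")
```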

Now, re-writing the v = E/p relation as v = m∙c2/(m∙vg) = c/βg, in which βg = vg/c is the relative classical velocity[3] of our particle, tells us that the phase velocities will effectively be superluminal (βg < 1, so 1/βg > 1 and, hence, c/βg > c), but what if βg approaches zero? The conclusion seems unavoidable: for a particle at rest, we only have a frequency in time, as the wavefunction reduces to:

ψ = a·e−i·E·t/ħ = a·cos(E∙t/ħ) – i·a·sin(E∙t/ħ)

How should we interpret this?

A physical interpretation of relativistic length contraction?

In my previous posts, we argued that the oscillations of the wavefunction pack energy. Because the energy of our particle is finite, the wave train cannot be infinitely long. If we assume it packs some definite number of oscillations, then the length of that string of oscillations will be proportional to λ and, hence, it will become shorter as λ decreases, i.e. as the momentum of our particle increases. Hence, the physical interpretation of the wavefunction that is offered here may explain relativistic length contraction.
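To see what this would mean quantitatively, here is a toy sketch of the argument – my own illustration, and the fixed number of oscillations N = 100 is an arbitrary assumption: the length of the wave train is then N·λ = N·h/p, which shrinks as the momentum, and hence the velocity, goes up.

```python
# Toy illustration of the argument above: if a wave train always packs the same
# number N of oscillations, its length N*lambda = N*h/p contracts as v increases.
# N = 100 is an arbitrary choice, made only to show the trend.
import math

h, c, m_e = 6.62607015e-34, 2.99792458e8, 9.1093837015e-31
N = 100  # assumed (fixed) number of oscillations in the wave train

for beta in (0.01, 0.1, 0.5, 0.9, 0.99):
    gamma = 1 / math.sqrt(1 - beta**2)
    p = gamma * m_e * beta * c            # relativistic momentum
    length = N * h / p                    # length of the wave train
    print(f"beta = {beta:4.2f}  ->  wave-train length = {length:.3e} m")
```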

🙂

Yep. Think about it. 🙂

[1] Even neutrinos have some (rest) mass. This was first confirmed by the US-Japan Super-Kamiokande collaboration in 1998. Neutrinos oscillate between three so-called flavors: electron neutrinos, muon neutrinos and tau neutrinos. Recent data suggests that the sum of their masses is less than a millionth of the rest mass of an electron. Hence, they propagate at speeds that are very near to the speed of light.

[2] Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as KE = E − E0 = mv·c2 − m0·c2 = m0·γ·c2 − m0·c2 = m0·c2·(γ − 1). As v approaches c, γ approaches infinity and, therefore, the kinetic energy would become infinite as well.

[3] Because our particle will be represented by a wave packet, i.e. a superposition of elementary waves with different E and p, the classical velocity of the particle becomes the group velocity of the wave, which is why we denote it by vg.


The geometry of the wavefunction

Pre-script (dated 26 June 2020): Our ideas have evolved into a full-blown realistic (or classical) interpretation of all things quantum-mechanical. In addition, I note the dark force has amused himself by removing some material. So no use to read this. Read my recent papers instead. 🙂

Original post:

My posts and article on the wavefunction as a gravitational wave are rather short on the exact geometry of the wavefunction, so let us explore that a bit here. By now, you know the formula for the elementary wavefunction by heart:

ψ = a·e^(−i·[E·t − p∙x]/ħ) = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and px/ħ reduces to p∙x/ħ. This amounts to saying our particle is traveling along the x-axis. The geometry of the wavefunction is illustrated below. The x-axis is the direction of propagation, and the y- and z-axes represent the real and imaginary part of the wavefunction respectively.

Note that, when applying the right-hand rule for the axes, the vertical axis is the y-axis, not the z-axis. Hence, we may associate the vertical axis with the cosine component, and the horizontal axis with the sine component. If the origin is the (x, t) = (0, 0) point, then cos(θ) = cos(0) = 1 and sin(θ) = sin(0) = 0. This is reflected in both illustrations, which show a left- and a right-handed wave respectively. I am convinced these correspond to the two possible values for the quantum-mechanical spin of the wave: +ħ/2 or −ħ/2. But… Well… Who am I? The cosine and sine components are shown below. Needless to say, the cosine and sine function are the same, except for a phase difference of π/2: sin(θ) = cos(θ − π/2).

[Illustration: circular polarization with components]

Surely, Nature doesn’t care a hoot about our conventions for measuring the phase angle clockwise or counterclockwise and, therefore, the ψ = a·e^(+i·[E·t − p∙x]/ħ) function should, effectively, also be permitted. We know that cos(θ) = cos(−θ) and sin(−θ) = −sin(θ), so we can write:

ψ = a·e^(+i·[E·t − p∙x]/ħ) = a·cos(E∙t/ħ − p∙x/ħ) + i·a·sin(E∙t/ħ − p∙x/ħ)

= a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ)

E/ħ = ω gives the frequency in time (expressed in radians per second), while p/ħ = k gives us the wavenumber, or the frequency in space (expressed in radians per meter). Of course, we may write: f = ω/2π  and λ = 2π/k, which gives us the two de Broglie relations:

  1. E = ħ∙ω = h∙f
  2. p = ħ∙k = h/λ

The frequency in time is easy to interpret (a particle will always have some mass and, therefore, some energy), but the wavelength is inversely proportional to the momentum: λ = h/p. Hence, if p goes to zero, then the wavelength becomes infinitely long: if p → 0, then λ → ∞. In the limiting situation of a particle with zero rest mass (m0 = 0), the velocity is c and, therefore, we find that p = mv·v = m∙c and, therefore, p∙c = m∙c2 = E, which we may also write as: E/p = c. Hence, for a particle with zero rest mass, the wavelength can be written as:

λ = h/p = hc/E = h/mc

However, we argued that the physical dimension of the components of the wavefunction may be usefully expressed in N/kg units (force per unit mass), while the physical dimension of the components of the electromagnetic wave is expressed in N/C (force per unit charge). This, in fact, explains the dichotomy between bosons (photons) and fermions (spin-1/2 particles). Hence, all matter-particles should have some mass.[1] But how should we interpret the inverse proportionality between λ and p?

We should probably first ask ourselves what wavelength we are talking about. The wave only has a phase velocity here, which is equal to vp = ω/k = (E/ħ)/(p/ħ) = E/p. Of course, we know that, classically, the momentum will be equal to the group velocity times the mass: p = m·vg. However, when p is zero, we have a division by zero once more: if p → 0, then vp = E/p → ∞. Infinite wavelengths and infinite phase velocities probably tell us that our particle has to move: our notion of a particle at rest is mathematically inconsistent. If we associate this elementary wavefunction with some particle, and if we then imagine it to move, somehow, then we get an interesting relation between the group and the phase velocity:

vp = ω/k = E/p = E/(m·vg) = (m·c2)/(m·vg) = c2/vg

We can re-write this as vp·vg = c2, which reminds us of the relationship between the electric and magnetic constant (1/ε0)·(1/μ0) = c2. But what is the group velocity of the elementary wavefunction? Is it a meaningful concept?

The phase velocity is just the ratio of ω/k. In contrast, the group velocity is the derivative of ω with respect to k. So we need to write ω as a function of k. Can we do that even if we have only one wave? We do not have a wave packet here, right? Just some hypothetical building block of a real-life wavefunction, right? Right. So we should introduce uncertainty about E and p and build up the wave packet, right? Well… Yes. But let’s wait with that, and see how far we can go in our interpretation of this elementary wavefunction. Let’s first get that ω = ω(k) relation. You’ll remember we can write Schrödinger’s equation – the equation that describes the propagation mechanism for matter-waves – as the following pair of equations:

  1. Re(∂ψ/∂t) = −[ħ/(2m)]·Im(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2m)]·cos(kx − ωt)
  2. Im(∂ψ/∂t) = [ħ/(2m)]·Re(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2m)]·sin(kx − ωt)

This tells us that ω = ħ·k2/(2m). Therefore, we can calculate ∂ω/∂k as:

∂ω/∂k = ħ·k/m = p/m = vg
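As a quick numerical cross-check of that dispersion relation – the wavenumber used below is an arbitrary illustrative value – a finite-difference derivative of ω(k) = ħ·k²/(2m) indeed reproduces vg = ħ·k/m = p/m:

```python
# Quick check of the dispersion relation omega = hbar*k^2/(2m): a numerical
# derivative d(omega)/dk should reproduce v_g = hbar*k/m = p/m.
import math

hbar, m_e = 1.054571817e-34, 9.1093837015e-31

def omega(k):
    return hbar * k**2 / (2 * m_e)

k  = 1.0e10                     # some wavenumber (rad/m), chosen for illustration
dk = k * 1e-6
v_g_numeric  = (omega(k + dk) - omega(k - dk)) / (2 * dk)   # central difference
v_g_analytic = hbar * k / m_e

print(f"numeric domega/dk = {v_g_numeric:.6e} m/s, hbar*k/m = {v_g_analytic:.6e} m/s")
```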

We learn nothing new. We are going round and round in circles here, and we always end up with a tautology: as soon as we have a non-zero momentum, we have a mathematical formula for the group velocity – but we don’t know what it represents – and a finite wavelength. In fact, using the p = ħ∙k = h/λ relation, we can write one as a function of the other:

λ = h/p = h/(m·vg) ⇔ vg = h/(m·λ)

What does this mean? It resembles the c = h/(m·λ) relation we had for a particle with zero rest mass. Of course, it does: the λ = h/(m·c) relation is, once again, a limit for vg going to c. By the way, it is interesting to note that the vp·vg = c2 relation implies that the phase velocity is always superluminal. That’s easy to see when you re-write the equation in terms of relative velocities: (vp/c)·(vg/c) = βphase·βgroup = 1. Hence, if βgroup < 1, then βphase > 1.
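A quick numerical check of that vp·vg = c2 relation – taking, as the text does, the classical velocity as the group velocity, and picking an arbitrary value for it:

```python
# Numerical check of v_p * v_g = c^2, with v_g taken to be the classical velocity
# (0.3c is an illustrative choice only).
import math

c, m_e = 2.99792458e8, 9.1093837015e-31

v_g   = 0.3 * c
gamma = 1 / math.sqrt(1 - (v_g / c)**2)
E, p  = gamma * m_e * c**2, gamma * m_e * v_g

v_p = E / p                                # phase velocity = omega/k
print(f"v_p * v_g = {v_p * v_g:.6e}   c^2 = {c**2:.6e}")
```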

So what is the geometry, really? Let’s look at the ψ = a·cos(p∙x/ħ − E∙t/ħ) − i·a·sin(p∙x/ħ − E∙t/ħ) formula once more. If we write p∙x/ħ as Δ, then we will be interested to know for what x this phase factor will be equal to 2π. So we write:

Δ = p∙x/ħ = 2π ⇔ x = 2π∙ħ/p = h/p = λ

So now we get a meaningful interpretation for that wavelength: it’s that distance between the crests of the wave, so to speak, as illustrated below.

[Illustration: wavelength]

Can we now find a meaningful (i.e. geometric) interpretation for the group and phase velocity? If you look at the illustration above, you see we can sort of distinguish (1) a linear velocity (the speed with which those wave crests move) and (2) some kind of circular or tangential velocity (the velocity along the red contour line above). We’ll probably need the formula for the tangential velocity: v = a∙ω. If p = 0 (so we have that weird, infinitely long wavelength), then we have two velocities:

  1. The tangential velocity around the a·e^(−i·E∙t/ħ) circle, so to speak, and that will just be equal to v = a∙ω = a∙E/ħ.
  2. The red contour line sort of gets stretched out, like infinitely long, and the velocity becomes… What does it do? Does it go to ∞ , or to c?

Let’s think about this. For a particle at rest, we had this weird calculation. We had an angular momentum formula (for an electron) which we equated with the real-life +ħ/2 or −ħ/2 values of its spin. And so we got a numerical value for a. It was the Compton radius: the scattering radius for an electron. Let me copy it once again:

a = ħ/(m∙c) ≈ 3.8616×10−13 m

Just to bring this story a bit back to Earth, you should note the calculated value: a = 3.8616×10−13 m. We then did another weird calculation. We said all of the energy of the electron had to be packed in this cylinder that might or might not be there. The point was: the energy is finite, so that elementary wavefunction cannot have an infinite length in space. Indeed, assuming that the energy was distributed uniformly, we jotted down this formula, which reflects the formula for the volume of a cylinder:

E = π·a²·l ⇔ l = E/(π·a²)

Using the value we got for the Compton scattering radius (a = 3.8616×10−13 m), we got an astronomical value for l. Let me write it out:

l = (8.19×10−14)/(π·14.9×10−26) ≈ 0.175×1012 m

It is, literally, an astronomical value: 0.175×1012 m is 175 million kilometer, so that’s like the distance between the Sun and the Earth. We wrote, jokingly, that such a space is too large to look for an electron and, hence, that we should really build a proper wave packet by making use of the Uncertainty Principle: allowing for uncertainty in the energy should, effectively, reduce the uncertainty in position.

But… Well… What if we use that value as the value for λ? We’d get that linear velocity, right? Let’s try it. The period is equal to T = 1/f = 2π·(ħ/E) = h/E and λ = l = E/(π·a²), so we write:

v = λ/T = [E/(π·a²)]·(E/h) = E²/(π·a²·h)

We can write this as a function of m and the c and ħ constants only (using E = m·c² and a = ħ/(m·c)):

v = m⁴·c⁶/(2π²·ħ³)

A weird formula, but not necessarily nonsensical: we get a finite speed. Now, if the wavelength becomes somewhat less astronomical, we’ll get different values of course. I have a strange feeling that, with these formulas, we should, somehow, be able to explain relativistic length contraction. But I will let you think about that for now. Here I just wanted to show the geometry of the wavefunction in a bit more detail.
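For what it is worth, here is a small script that reproduces the arithmetic above – the Compton radius, the astronomical length l, and the linear velocity v = λ/T – taking the formulas at face value (my reading is that the E = π·a²·l formula implicitly treats the energy density as unity, so the numbers simply carry the units used in the post):

```python
# Reproducing the arithmetic above. The E = pi*a^2*l formula is taken at face
# value (implicit unit energy density), as in the post.
import math

h    = 6.62607015e-34
hbar = h / (2 * math.pi)
c    = 2.99792458e8
m_e  = 9.1093837015e-31

E = m_e * c**2               # electron rest energy ~ 8.19e-14 J
a = hbar / (m_e * c)         # Compton scattering radius ~ 3.8616e-13 m

l   = E / (math.pi * a**2)   # the 'astronomical' length ~ 0.175e12 m
T   = h / E                  # period of one cycle
lam = l                      # using l as the wavelength, as the post suggests
v   = lam / T                # the 'linear' velocity

v_closed_form = m_e**4 * c**6 / (2 * math.pi**2 * hbar**3)
print(f"a = {a:.4e} m, l = {l:.4e} m, T = {T:.4e} s")
print(f"v = {v:.4e} m/s   (closed form: {v_closed_form:.4e} m/s)")
```

The last line just confirms that the closed-form expression derived above gives the same (very large, but finite) number as the direct λ/T calculation.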

[1] The discussions on the mass of neutrinos are interesting in this regard. Scientists all felt the neutrino had to have some (rest) mass, so my instinct on this matches theirs. In fact, experimental confirmation came in only recently, and the mass of the known neutrino flavors was estimated to be something like 0.12 eV/c2. This mass combines the three known neutrino flavors. To understand this number, you should note it is of the same order of magnitude as the equivalent mass of low-energy photons, like infrared or microwave radiation.

Re-visiting uncertainty…

I re-visited the Uncertainty Principle a couple of times already, but here I really want to get to the bottom of the thing: what’s uncertain? The energy? The time? The wavefunction itself? These questions are not easily answered, and I need to warn you: you won’t get too much wiser when you’re finished reading this. I just felt like freewheeling a bit. [Note that the first part of this post repeats what you’ll find on the Occam page, or my post on Occam’s Razor. But those posts do not analyze uncertainty, which is what I will be trying to do here.]

Let’s first think about the wavefunction itself. It’s tempting to think it actually is the particle, somehow. But it isn’t. So what is it then? Well… Nobody knows. In my previous post, I said I like to think it travels with the particle, but then that doesn’t make much sense either. It’s like a fundamental property of the particle. Like the color of an apple. But where is that color? In the apple, in the light it reflects, in the retina of our eye, or is it in our brain? If you know a thing or two about how perception actually works, you’ll tend to agree the quality of color is not in the apple. When everything is said and done, the wavefunction is a mental construct: when learning physics, we start to think of a particle as a wavefunction, but they are two separate things: the particle is reality, the wavefunction is imaginary.

But that’s not what I want to talk about here. It’s about that uncertainty. Where is the uncertainty? You’ll say: you just said it was in our brain. No. I didn’t say that. It’s not that simple. Let’s look at the basic assumptions of quantum physics:

  1. Quantum physics assumes there’s always some randomness in Nature and, hence, we can measure probabilities only. We’ve got randomness in classical mechanics too, but this is different. This is an assumption about how Nature works: we don’t really know what’s happening. We don’t know the internal wheels and gears, so to speak, or the ‘hidden variables’, as one interpretation of quantum mechanics would say. In fact, the most commonly accepted interpretation of quantum mechanics says there are no ‘hidden variables’.
  2. However, as Shakespeare has one of his characters say: there is a method in the madness, and the pioneers– I mean Werner Heisenberg, Louis de Broglie, Niels Bohr, Paul Dirac, etcetera – discovered that method: all probabilities can be found by taking the square of the absolute value of a complex-valued wavefunction (often denoted by Ψ), whose argument, or phase (θ), is given by the de Broglie relations ω = E/ħ and k = p/ħ. The generic functional form of that wavefunction is:

Ψ = Ψ(x, t) = a·e^(−iθ) = a·e^(−i·(ω·t − k∙x)) = a·e^(−i·[(E/ħ)·t − (p/ħ)∙x])

That should be obvious by now, as I’ve written more than a dozen posts on this. 🙂 I still have trouble interpreting this, however—and I am not ashamed, because the Great Ones I just mentioned have trouble with that too. It’s not that complex exponential. That e^(−iφ) is a very simple periodic function, consisting of two sine waves rather than just one, as illustrated below. [It’s a sine and a cosine, but they’re the same function: there’s just a phase difference of 90 degrees.]

[Illustration: sine and cosine components]

No. To understand the wavefunction, we need to understand those de Broglie relations, ω = E/ħ and k = p/ħ, and then, as mentioned, we need to understand the Uncertainty Principle. We need to understand where it comes from. Let’s try to go as far as we can by making a few remarks:

  • Adding or subtracting two terms in math, (E/ħ)·t − (p/ħ)∙x, implies the two terms should have the same dimension: we can only add apples to apples, and oranges to oranges. We shouldn’t mix them. Now, the (E/ħ)·t and (p/ħ)·x terms are actually dimensionless: they are pure numbers. So that’s even better. Just check it: energy is expressed in newton·meter (energy, or work, is force times distance, remember?) or electronvolts (1 eV = 1.6×10−19 J = 1.6×10−19 N·m); Planck’s constant, as the quantum of action, is expressed in J·s or eV·s; and the unit of (linear) momentum is 1 N·s = 1 kg·m/s. E/ħ gives a number expressed per second, and p/ħ a number expressed per meter. Therefore, multiplying E/ħ and p/ħ by t and x respectively gives us a dimensionless number indeed.
  • It’s also an invariant number, which means we’ll always get the same value for it, regardless of our frame of reference. As mentioned above, that’s because the four-vector product pμxμ = E·t − px is invariant: it doesn’t change when analyzing a phenomenon in one reference frame (e.g. our inertial reference frame) or another (i.e. in a moving frame).
  • Now, Planck’s quantum of action h, or ħ – h and ħ only differ in the angular unit they refer to: h is the action associated with one full cycle, while ħ is the action per radian; both assume we can at least measure one cycle – is the quantum of energy really. Indeed, if “energy is the currency of the Universe”, and it’s real and/or virtual photons that are exchanging it, then it’s good to know the currency unit is h, i.e. the energy that’s associated with one cycle of a photon. [In case you want to see the logic of this, see my post on the physical constants c, h and α.]
  • It’s not only time and space that are related – c·t and x transform together as a four-vector – E and p are related too, of course! They are related through the classical velocity of the particle that we’re looking at: E/p = c2/v and, therefore, we can write: E·β = p·c, with β = v/c, i.e. the relative velocity of our particle, as measured as a ratio of the speed of light. Now, I should add that we can write the four-vector as t − x only if we measure time and space in equivalent units. Otherwise, we have to write c·t − x. If we do that, so our unit of distance becomes the distance that light travels in one second, rather than one meter – or, alternatively, our unit of time becomes the time that is needed for light to travel one meter – then c = 1, and the E·β = p·c relation becomes E·β = p, which we also write as β = p/E: the ratio of the energy and the momentum of our particle is its (relative) velocity. (The small numerical check after this list illustrates both the invariance and this β = p/E relation.)
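Here is the small numerical check I just referred to – my own illustration, in natural units with c = 1 and arbitrary values for the mass, the velocity and the event coordinates. It verifies that E·t − p·x comes out the same before and after a Lorentz boost, and that p/E is just the particle’s (relative) velocity:

```python
# Minimal numerical check (illustration only):
# (1) E*t - p*x is the same in two inertial frames related by a Lorentz boost, and
# (2) with c = 1, beta = p/E is the particle's (relative) velocity.
import math

m = 1.0            # arbitrary mass (illustration only)
v = 0.6            # particle velocity in the original frame (fraction of c)
gamma = 1 / math.sqrt(1 - v**2)

E, p = gamma * m, gamma * m * v     # energy and momentum (c = 1)
t, x = 2.0, 1.3                     # an arbitrary event (illustration only)

u  = 0.4                            # boost velocity of the new frame
gu = 1 / math.sqrt(1 - u**2)

# Lorentz transformation of the event and of the energy-momentum pair
t2, x2 = gu * (t - u * x), gu * (x - u * t)
E2, p2 = gu * (E - u * p), gu * (p - u * E)

print(f"E*t - p*x    = {E * t - p * x:.6f}   (original frame)")
print(f"E2*t2 - p2*x2 = {E2 * t2 - p2 * x2:.6f}   (boosted frame)")
print(f"p/E = {p / E:.3f}   v = {v:.3f}")
```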

Combining all of the above, we may want to assume that we are measuring energy and momentum in terms of the Planck constant, i.e. the ‘natural’ unit for both. In addition, we may also want to assume that we’re measuring time and distance in equivalent units. Then the equation for the phase of our wavefunctions reduces to:

θ = (ω·t − k∙x) = E·t − p·x

Now, θ is the argument of a wavefunction, and we can always re-scale such argument by multiplying or dividing it by some constant. It’s just like writing the argument of a wavefunction as v·t − x or (v·t − x)/v = t − x/v, with v the velocity of the waveform that we happen to be looking at. [In case you have trouble following this argument, please check the post I did for my kids on waves and wavefunctions.] Now, the energy conservation principle tells us the energy of a free particle won’t change. [Just to remind you, a ‘free particle’ means it’s in a ‘field-free’ space, so our particle is in a region of uniform potential.] So we can, in this case, treat E as a constant, and divide E·t − p·x by E, so we get a re-scaled phase for our wavefunction, which I’ll write as:

φ = (E·t − p·x)/E = t − (p/E)·x = t − β·x

Alternatively, we could also look at p as some constant, as there is no variation in potential energy that will cause a change in momentum, and the related kinetic energy. We’d then divide by p and we’d get (E·t − p·x)/p = (E/p)·t − x = t/β − x, which amounts to the same, as we can always re-scale by multiplying it with β, which would again yield the same t − β·x argument.

The point is, if we measure energy and momentum in terms of the Planck unit (I mean: in terms of the Planck constant, i.e. the quantum of energy), and if we measure time and distance in ‘natural’ units too, i.e. we take the speed of light to be unity, then our Platonic wavefunction becomes as simple as:

Φ(φ) = a·e^(−iφ) = a·e^(−i·(t − β·x))

This is a wonderful formula, but let me first answer your most likely question: why would we use a relative velocity? Well… Just think of it: when everything is said and done, the whole theory of relativity and, hence, the whole of physics, is based on one fundamental and experimentally verified fact: the speed of light is absolute. In whatever reference frame, we will always measure it as 299,792,458 m/s. That’s obvious, you’ll say, but it’s actually the weirdest thing ever if you start thinking about it, and it explains why those Lorentz transformations look so damn complicated. In any case, this fact legitimately establishes c as some kind of absolute measure against which all speeds can be measured. Therefore, it is only natural indeed to express a velocity as some number between 0 and 1. Now that amounts to expressing it as the β = v/c ratio.

Let’s now go back to that Φ(φ) = a·e^(−iφ) = a·e^(−i·(t − β·x)) wavefunction. Its temporal frequency ω is equal to one, and its spatial frequency k is equal to β = v/c. It couldn’t be simpler but, of course, we’ve got this remarkably simple result because we re-scaled the argument of our wavefunction using the energy and momentum itself as the scale factor. So, yes, we can re-write the wavefunction of our particle in a particularly elegant and simple form using the only information that we have when looking at quantum-mechanical stuff: energy and momentum, because that’s what everything reduces to at that level.

So… Well… We’ve pretty much explained what quantum physics is all about here. You just need to get used to that complex exponential: e^(−iφ) = cos(−φ) + i·sin(−φ) = cos(φ) − i·sin(φ). It would have been nice if Nature would have given us a simple sine or cosine function. [Remember the sine and cosine function are actually the same, except for a phase difference of 90 degrees: sin(φ) = cos(π/2 − φ) = cos(φ − π/2). So we can always go from one to the other by shifting the origin of our axis.] But… Well… As we’ve shown so many times already, a real-valued wavefunction doesn’t explain the interference we observe, be it interference of electrons or whatever other particles or, for that matter, the interference of electromagnetic waves itself, which, as you know, we also need to look at as a stream of photons, i.e. light quanta, rather than as some kind of infinitely flexible aether that’s undulating, like water or air.
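Just to illustrate that last point about interference – this is a generic two-path illustration of my own, not a calculation from this post – adding two complex amplitudes of equal magnitude gives a probability 2·a²·(1 + cos δ) that oscillates with the phase difference δ, which is the interference pattern the text refers to:

```python
# Two 'paths' with equal amplitude but different phase: the probability
# |psi1 + psi2|^2 = 2*a^2*(1 + cos(delta)) oscillates with the phase difference.
# (Generic illustration, with arbitrary phase values.)
import cmath, math

a = 1.0
for delta in (0.0, math.pi / 2, math.pi):        # phase difference between the paths
    psi1 = a * cmath.exp(-1j * 0.0)
    psi2 = a * cmath.exp(-1j * delta)
    prob = abs(psi1 + psi2) ** 2
    print(f"delta = {delta:.2f} rad -> |psi1 + psi2|^2 = {prob:.3f} "
          f"(formula: {2 * a**2 * (1 + math.cos(delta)):.3f})")
```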

However, the analysis above does not include uncertainty. That’s as fundamental to quantum physics as de Broglie‘s equations, so let’s think about that now.

Introducing uncertainty

Our information on the energy and the momentum of our particle will be incomplete: we’ll write E = E0 ± σE, and p = p0 ± σp. Huh? No ΔE or Δp? Well… It’s the same, really, but I am a bit tired of using the Δ symbol, so I am using the σ symbol here, which denotes a standard deviation of some density function. It underlines the probabilistic, or statistical, nature of our approach.

The simplest model is that of a two-state system, because it involves two energy levels only: E = E0 ± A, with A some constant. Large or small, it doesn’t matter. All is relative anyway. 🙂 We explained the basics of the two-state system using the example of an ammonia molecule, i.e. an NH3 molecule, so it consists of one nitrogen and three hydrogen atoms. We had two base states in this system: ‘up’ or ‘down’, which we denoted as base state | 1 〉 and base state | 2 〉 respectively. This ‘up’ and ‘down’ had nothing to do with the classical or quantum-mechanical notion of spin, which is related to the magnetic moment. No. It’s much simpler than that: the nitrogen atom could be either beneath or, else, above the plane of the hydrogens, as shown below, with ‘beneath’ and ‘above’ being defined in regard to the molecule’s direction of rotation around its axis of symmetry.

[Illustration: the nitrogen atom above or beneath the plane of the hydrogens]

In any case, for the details, I’ll refer you to the post(s) on it. Here I just want to mention the result. We wrote the amplitude to find the molecule in either one of these two states as:

  • C1 = 〈 1 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) + (1/2)·e^(−(i/ħ)·(E0 + A)·t)
  • C2 = 〈 2 | ψ 〉 = (1/2)·e^(−(i/ħ)·(E0 − A)·t) − (1/2)·e^(−(i/ħ)·(E0 + A)·t)

That gave us the following probabilities:

[Graph: the two probabilities as a function of time]

If our molecule can be in two states only, and it starts off in one, then the probability that it will remain in that state will gradually decline, while the probability that it flips into the other state will gradually increase.
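As a quick check of those amplitudes – the values of E0 and A below are illustrative, and I work in units where ħ = 1 – the probabilities come out as |C1|² = cos²(A·t/ħ) and |C2|² = sin²(A·t/ħ), and they always add up to one:

```python
# Checking the amplitudes C1 and C2 above: |C1|^2 = cos^2(A*t/hbar) and
# |C2|^2 = sin^2(A*t/hbar). E0 and A are arbitrary illustrative values.
import numpy as np

hbar = 1.0
E0, A = 5.0, 0.5

t  = np.linspace(0, 20, 5)
C1 = 0.5 * np.exp(-1j * (E0 - A) * t / hbar) + 0.5 * np.exp(-1j * (E0 + A) * t / hbar)
C2 = 0.5 * np.exp(-1j * (E0 - A) * t / hbar) - 0.5 * np.exp(-1j * (E0 + A) * t / hbar)

P1, P2 = np.abs(C1)**2, np.abs(C2)**2
print("P1      :", np.round(P1, 4))
print("cos^2   :", np.round(np.cos(A * t / hbar)**2, 4))
print("P2      :", np.round(P2, 4))
print("P1 + P2 :", np.round(P1 + P2, 4))
```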

Now, the point you should note is that we get these time-dependent probabilities only because we’re introducing two different energy levels: E0 + A and E0 − A. [Note that they are separated by an amount equal to 2·A, as I’ll use that information later.] If we’d have one energy level only – which amounts to saying that we know it, and that it’s something definite – then we’d just have one wavefunction, which we’d write as:

a·e^(−iθ) = a·e^(−(i/ħ)·(E0·t − p·x)) = a·e^(−(i/ħ)·E0·t)·e^((i/ħ)·p·x)

Note that we can always split our wavefunction in a ‘time’ and a ‘space’ part, which is quite convenient. In fact, because our ammonia molecule stays where it is, it has no momentum: p = 0. Therefore, its wavefunction reduces to:

a·e^(−iθ) = a·e^(−(i/ħ)·E0·t)

As simple as it can be. 🙂 The point is that a wavefunction like this, i.e. a wavefunction that’s defined by a definite energy, will always yield a constant and equal probability, both in time as well as in space. That’s just the math of it: |a·e^(−iθ)|² = a². Always! If you want to know why, you should think of Euler’s formula and Pythagoras’ Theorem: cos²θ + sin²θ = 1. Always! 🙂

That constant probability is annoying, because our nitrogen atom never ‘flips’, and we know it actually does, thereby overcoming an energy barrier: it’s a phenomenon that’s referred to as ‘tunneling’, and it’s real! The probabilities in that graph above are real! Also, if our wavefunction would represent some moving particle, it would imply that the probability to find it somewhere in space is the same all over space, which implies our particle is everywhere and nowhere at the same time, really.

So, in quantum physics, this problem is solved by introducing uncertainty. Introducing some uncertainty about the energy, or about the momentum, is mathematically equivalent to saying that we’re actually looking at a composite wave, i.e. the sum of a finite or potentially infinite set of component waves. So we have the same ω = E/ħ and k = p/ħ relations, but we apply them to energy levels, or to some continuous range of energy levels ΔE. It amounts to saying that our wave function doesn’t have a specific frequency: it now has n frequencies, or a range of frequencies Δω = ΔE/ħ. In our two-state system, n = 2, obviously! So we’ve two energy levels only and so our composite wave consists of two component waves only.

We know what that does: it ensures our wavefunction is being ‘contained’ in some ‘envelope’. It becomes a wavetrain, or a kind of beat note, as illustrated below:

[Animation: a wave group (wave packet)]

[The animation comes from Wikipedia, and shows the difference between the group and phase velocity: the green dot shows the group velocity, while the red dot travels at the phase velocity.]
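Here is a minimal numerical sketch of that beat note – the numbers are illustrative, not tied to any particular particle: summing two component waves with slightly different ω and k gives a carrier modulated by an envelope, and the envelope moves at the group velocity Δω/Δk.

```python
# Summing two component waves with close (omega, k) pairs gives a carrier times
# an envelope; the envelope crest travels at the group velocity d(omega)/dk.
import numpy as np

x = np.linspace(0, 150, 1501)
k1, k2 = 1.00, 1.05          # two close wavenumbers (illustrative values)
w1, w2 = 1.00, 1.10          # the corresponding frequencies (illustrative values)

v_phase = (w1 + w2) / (k1 + k2)   # speed of the carrier (ratio of the averages)
v_group = (w2 - w1) / (k2 - k1)   # speed of the envelope, i.e. d(omega)/dk

def envelope(t):
    # cos(k1*x - w1*t) + cos(k2*x - w2*t) = carrier * envelope, with
    # envelope = 2*cos(0.5*(k1 - k2)*x - 0.5*(w1 - w2)*t)
    return 2 * np.cos(0.5 * (k1 - k2) * x - 0.5 * (w1 - w2) * t)

print(f"phase velocity ~ {v_phase:.3f}, group velocity ~ {v_group:.3f}")
for t in (0.0, 50.0):
    crest = x[np.argmax(envelope(t))]
    print(f"t = {t:4.1f} -> envelope crest at x = {crest:6.1f} (v_group*t = {v_group * t:.1f})")
```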

So… OK. That should be clear enough. Let’s now apply these thoughts to our ‘reduced’ wavefunction

Φ(φ) = a·e^(−iφ) = a·e^(−i·(t − β·x))

Thinking about uncertainty

Frankly, I tried to fool you above. If the functional form of the wavefunction is a·e^(−(i/ħ)·(E·t − p·x)), then we can measure E and p in whatever unit we want, including h or ħ, but we cannot re-scale the argument of the function, i.e. the phase θ, without changing the functional form itself. I explained that in that post for my kids on wavefunctions, in which I explained we may represent the same electromagnetic wave by two different functional forms:

 F(ct−x) = G(t−x/c)

So F and G represent the same wave, but they are different wavefunctions. In this regard, you should note that the argument of F is expressed in distance units, as we multiply t with the speed of light (so it’s as if our time is now expressed as a multiple of 299,792,458 m), while the argument of G is expressed in time units, as we divide x by c, i.e. by the distance that light travels in one second. But F and G are different functional forms. Just do an example and take a simple sine function: you’ll agree that sin(θ) ≠ sin(θ/c) for all values of θ, except 0. Re-scaling changes the frequency, or the wavelength, and it does so quite drastically in this case. 🙂 Likewise, you can see that e^(i·φ/E) = [e^(i·φ)]^(1/E), so that’s a very different function. In short, we were a bit too adventurous above. Now, while we can drop the 1/ħ in the a·e^(−(i/ħ)·(E·t − p·x)) function when measuring energy and momentum in units that are numerically equal to ħ, we’ll just revert to our original wavefunction for the time being, which equals

Ψ(θ) = a·e^(−iθ) = a·e^(−i·[(E/ħ)·t − (p/ħ)·x])

Let’s now introduce uncertainty once again. The simplest situation is that we have two closely spaced energy levels. In theory, the difference between the two can be as small as ħ, so we’d write: E = E0 ± ħ/2. [Remember what I said about the ± A: it means the difference is 2A.] However, we can generalize this and write: E = E0 ± n·ħ/2, with n = 1, 2, 3,… This does not imply any greater uncertainty – we still have two states only – but just a larger difference between the two energy levels.

Let’s also simplify by looking at the ‘time part’ of our equation only, i.e. a·e^(−i·(E/ħ)·t). It doesn’t mean we don’t care about the ‘space part’: it just means that we’re only looking at how our function varies in time, so we just ‘fix’ or ‘freeze’ x. Now, the uncertainty is in the energy but, from a mathematical point of view, we’ve got an uncertainty in the argument of our wavefunction, really. This uncertainty in the argument is, obviously, equal to:

(E/ħ)·t = [(E0 ± n·ħ/2)/ħ]·t = (E0/ħ ± n/2)·t = (E0/ħ)·t ± (n/2)·t

So we can write:

a·e^(−i·(E/ħ)·t) = a·e^(−i·[(E0/ħ)·t ± (n/2)·t]) = a·e^(−i·(E0/ħ)·t)·e^(−i·[±(n/2)·t])

This is valid for any value of t. What the expression says is that, from a mathematical point of view, introducing uncertainty about the energy is equivalent to introducing uncertainty about the wavefunction itself. It may be equal to a·e^(−i·(E0/ħ)·t)·e^(−i·(n/2)·t), but it may also be equal to a·e^(−i·(E0/ħ)·t)·e^(+i·(n/2)·t). The phases of the e^(−i·(n/2)·t) and e^(+i·(n/2)·t) factors are separated by a distance equal to n·t.

So… Well…

[…]

Hmm… I am stuck. How is this going to lead me to the ΔE·Δt = ħ/2 principle? To anyone out there: can you help? 🙂

[…]

The thing is: you won’t get the Uncertainty Principle by staring at that formula above. It’s a bit more complicated. The idea is that we have some distribution of the observables, like energy and momentum, and that implies some distribution of the associated frequencies, i.e. ω for E, and k for p. The Wikipedia article on the Uncertainty Principle gives you a formal derivation of the Uncertainty Principle, using the so-called Kennard formulation of it. You can have a look, but it involves a lot of formalism—which is what I wanted to avoid here!
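For those who want to see the Kennard relation at work without going through that formal derivation, here is a minimal numerical sketch of my own: build a Gaussian wave packet, Fourier-transform it to momentum space, and compute the two standard deviations. A Gaussian saturates the σx·σp ≥ ħ/2 bound; other shapes give a larger product.

```python
# Numerical illustration of the Kennard relation sigma_x * sigma_p >= hbar/2
# for a Gaussian wave packet (natural units, hbar = 1; the width is arbitrary).
import numpy as np

hbar = 1.0
N, L = 4096, 80.0
x  = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma_x_target = 1.3                                  # arbitrary width
psi = np.exp(-x**2 / (4 * sigma_x_target**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)           # normalize

prob_x  = np.abs(psi)**2
sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)         # <x> = 0 by symmetry

# Momentum-space amplitude (magnitude is all we need, so phase factors don't matter)
phi = np.fft.fftshift(np.fft.fft(psi)) * dx
k   = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
p   = hbar * k
dp  = p[1] - p[0]
prob_p  = np.abs(phi)**2
prob_p /= np.sum(prob_p) * dp                         # normalize numerically
sigma_p = np.sqrt(np.sum(p**2 * prob_p) * dp)         # <p> = 0 by symmetry

print(f"sigma_x * sigma_p = {sigma_x * sigma_p:.4f}   hbar/2 = {hbar / 2:.4f}")
```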

I hope you get the idea though. It’s like statistics. First, we assume we know the population, and then we describe that population using all kinds of summary statistics. But then we reverse the situation: we don’t know the population but we do have sample information, which we also describe using all kinds of summary statistics. Then, based on what we find for the sample, we calculate the estimated statistics for the population itself, like the mean value and the standard deviation, to name the most important ones. So it’s a bit the same here, except that, in quantum mechanics, there may not be any real value underneath: the mean and the standard deviation represent something fuzzy, rather than something precise.

Hmm… I’ll leave you with these thoughts. We’ll develop them further as we will be digging into all of this much deeper over the coming weeks. 🙂

Post scriptum: I know you expect something more from me, so… Well… Think about the following. If we have some uncertainty about the energy E, we’ll have some uncertainty about the momentum p according to that β = p/E relation. [By the way, please think about this relationship: it says, all other things being equal (such as the inertia, i.e. the mass, of our particle), that more energy will all go into more momentum. More specifically, note that ∂p/∂E = β according to this equation. In fact, if we include the mass of our particle, i.e. its inertia, as potential energy, then we might say that (1−β)·E is the potential energy of our particle, as opposed to its kinetic energy.] So let’s try to think about that.

Let’s denote the uncertainty about the energy as ΔE. As should be obvious from the discussion above, it can be anything: it can mean two separate energy levels E = E0 ± A, or a potentially infinite set of values. However, even if the set is infinite, we know the various energy levels need to be separated by ħ, at least. So if the set is infinite, it’s going to be a countably infinite set, like the set of natural numbers, or the set of integers. But let’s stick to our example of two values E = E0 ± A only, with A = ħ, so E + ΔE = E0 ± ħ and, therefore, ΔE = ± ħ. That implies Δp = Δ(β·E) = β·ΔE = ± β·ħ.

Hmm… This is a bit fishy, isn’t it? We said we’d measure the momentum in units of ħ, but so here we say the uncertainty in the momentum can actually be a fraction of ħ. […] Well… Yes. Now, the momentum is the product of the mass, as measured by the inertia of our particle to accelerations or decelerations, and its velocity. If we assume the inertia of our particle, or its mass, to be constant – so we say it’s a property of the object that is not subject to uncertainty, which, I admit, is a rather dicey assumption (if all other measurable properties of the particle are subject to uncertainty, then why not its mass?) – then we can also write: Δp = Δ(m·v) = Δ(m·β) = m·Δβ. [Note that we’re not only assuming that the mass is not subject to uncertainty, but also that the velocity is non-relativistic. If not, we couldn’t treat the particle’s mass as a constant.] But let’s be specific here: what we’re saying is that, if ΔE = ± ħ, then Δv = Δβ will be equal to Δβ = Δp/m = ± (β/m)·ħ. The point to note is that we’re no longer sure about the velocity of our particle. Its (relative) velocity is now:

β ± Δβ = β ± (β/m)·ħ

But, because velocity is the ratio of distance over time, this introduces an uncertainty about time and distance. Indeed, if its velocity is β ± (β/m)·ħ, then, over some time T, it will travel some distance X = [β ± (β/m)·ħ]·T. Likewise, if we have some distance X, then our particle will need a time equal to T = X/[β ± (β/m)·ħ].

You’ll wonder what I am trying to say because… Well… If we’d just measure X and T precisely, then all the uncertainty is gone and we know if the energy is E0 + ħ or E0 − ħ. Well… Yes and no. The uncertainty is fundamental – at least that’s what quantum physicists believe – so our uncertainty about the time and the distance we’re measuring is equally fundamental: we can have either of the two values X = [β ± (β/m)·ħ]·T or T = X/[β ± (β/m)·ħ], whenever or wherever we measure. So we have a ΔX and ΔT that are equal to ± [(β/m)·ħ]·T and X/[± (β/m)·ħ] respectively. We can relate this to ΔE and Δp:

  • ΔX = (1/m)·T·Δp
  • ΔT = X/[(β/m)·ΔE]

You’ll grumble: this still doesn’t give us the Uncertainty Principle in its canonical form. Not at all, really. I know… I need to do some more thinking here. But I feel I am getting somewhere. 🙂 Let me know if you see where, and if you think you can get any further. 🙂

The thing is: you’ll have to read a bit more about Fourier transforms and why and how variables like time and energy, or position and momentum, are so-called conjugate variables. As you can see, energy and time, and position and momentum, are obviously linked through the E·t and p·x products in the E0·t − p·x sum. That says a lot, and it helps us to understand, in a more intuitive way, why the ΔE·Δt and Δp·Δx products should obey the relation they are obeying, i.e. the Uncertainty Principle, which we write as ΔE·Δt ≥ ħ/2 and Δp·Δx ≥ ħ/2. But proving that involves more than just staring at that Ψ(θ) = a·e^(−iθ) = a·e^(−i·[(E/ħ)·t − (p/ħ)·x]) relation.

Having said that, it helps to think about how that E·t − p·x sum works. For example, think about two particles, a and b, with different velocity and mass, but with the same momentum, so pa = pb ⇔ ma·va = mb·vb ⇔ ma/vb = mb/va. The spatial frequency of the wavefunction would be the same for both, but the temporal frequency would be different, because their energy incorporates the rest mass and, hence, because ma ≠ mb, we also know that Ea ≠ Eb. So… It all works out but, yes, I admit it’s all very strange, and it takes a long time and a lot of reflection to advance our understanding.