Should we reinvent wavefunction math?

Preliminary note: This post may cause brain damage. 🙂 If you haven’t worked yourself through a good introduction to physics – including the math – you will probably not understand what this is about. So… Well… Sorry. 😦 But if you have… Then this should be very interesting. Let’s go. 🙂

If you know one or two things about quantum math – Schrödinger's equation and all that – then you'll agree the math is anything but straightforward. Personally, the thing I find most annoying about wavefunction math is those transformation matrices: every time we look at the same thing from a different direction, we need to transform the wavefunction using one or more rotation matrices – and that gets quite complicated!

Now, if you have read any of my posts on this or my other blog, then you know I firmly believe the wavefunction represents something real or… Well… Perhaps it’s just the next best thing to reality: we cannot know das Ding an sich, but the wavefunction gives us everything we would want to know about it (linear or angular momentum, energy, and whatever else we have an operator for). So what am I thinking of? Let me first quote Feynman’s summary interpretation of Schrödinger’s equation (Lectures, III-16-1):

“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”

Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. His analysis there is centered on the local conservation of energy, which makes me think Schrödinger’s equation might be an energy diffusion equation. I’ve written about this ad nauseam in the past, and so I’ll just refer you to one of my papers here for the details, and limit this post to the basics, which are as follows.

The wave equation (so that's Schrödinger's equation in its non-relativistic form, which is an approximation that is good enough) is written as:

∂ψ/∂t = i·(1/2)·(ħ/meff)·∇²ψ − i·(1/ħ)·V·ψ

The resemblance with the standard diffusion equation (shown below) is, effectively, very obvious:

∂φ/∂t = D·∇²φ + S

As Feynman notes, it's just that imaginary coefficient that makes the behavior quite different. How exactly? Well… You know we get all of those complicated electron orbitals (i.e. the various wave functions that satisfy the equation) out of Schrödinger's differential equation. We can think of these solutions as (complex) standing waves. They basically represent some equilibrium situation, and the main characteristic of each is their energy level. I won't dwell on this because – as mentioned above – I assume you master the math. Now, you know that – if we want to interpret these wavefunctions as something real (which is surely what we want to do!) – the real and imaginary component of a wavefunction will be perpendicular to each other. Let me copy the animation for the elementary wavefunction ψ(θ) = a·e^(−i·θ) = a·e^(−i·(E/ħ)·t) = a·cos[(E/ħ)∙t] − i·a·sin[(E/ħ)∙t] once more:

Circle_cos_sin

So… Well… That 90° angle makes me think of the similarity with the mathematical description of an electromagnetic wave. Let me quickly show you why. For a particle moving in free space – with no external force fields acting on it – there is no potential (V = 0) and, therefore, the Vψ term – which is just the equivalent of the sink or source term S in the diffusion equation – disappears. Therefore, Schrödinger's equation reduces to:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)

Now, the key difference with the diffusion equation – let me write it for you once again: ∂φ(x, t)/∂t = D·∇²φ(x, t) – is that Schrödinger's equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations:

  1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)

Huh? Yes. These equations are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i·(1/2)·(ħ/meff)·∇²ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i² = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i²∙d = −d + i∙c. [Now that we're getting a bit technical, let me note that meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m.] 🙂 OK. Onwards! 🙂
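If you'd rather have a machine check the algebra than do it on paper, here is a small SymPy sketch. It assumes the full elementary wavefunction ψ = a·e^(i(k·x − ω·t)) with the free-particle dispersion relation ω = ħ·k²/(2·meff) – that spatial part and that dispersion relation are my additions for the check, not something stated above – and it confirms that the two real equations hold identically:

```python
# Symbolic sanity check (SymPy) of the split into real and imaginary parts.
# Assumed convention: psi = a·exp(i(k·x − ω·t)) with ω = ħ·k²/(2·m), in one dimension.
import sympy as sp

x, t = sp.symbols('x t', real=True)
a, k, hbar, m = sp.symbols('a k hbar m', positive=True)   # m plays the role of m_eff

omega = hbar * k**2 / (2 * m)                    # free-particle dispersion relation
psi = a * sp.exp(sp.I * (k * x - omega * t))     # elementary wavefunction

dpsi_dt = sp.expand_complex(sp.diff(psi, t))     # ∂ψ/∂t, written out in cos/sin form
laplacian = sp.expand_complex(sp.diff(psi, x, 2))

# The complex equation ∂ψ/∂t = i·(1/2)·(ħ/m)·∇²ψ ...
print(sp.simplify(dpsi_dt - sp.I * (hbar / (2 * m)) * laplacian))          # 0
# ... and the two real equations it packs together:
print(sp.simplify(sp.re(dpsi_dt) + (hbar / (2 * m)) * sp.im(laplacian)))   # 0
print(sp.simplify(sp.im(dpsi_dt) - (hbar / (2 * m)) * sp.re(laplacian)))   # 0
```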

The equations above make me think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = c²·∇×B

Now, these equations – and, I must therefore assume, the other equations above as well – effectively describe a propagation mechanism in spacetime, as illustrated below:

propagation

You know how it works for the electromagnetic field: it's the interplay between circulation and flux. Indeed, a changing flux of one field through some loop goes together with a circulation of the other field around that loop (i.e. around the axis perpendicular to it), and that circulation sustains the next change in flux, and so it all goes round and round and round. 🙂 Something like that. 🙂 I will let you look up how it goes, exactly. The principle is clear enough. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle.

Now, we know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? I firmly believe they do. The obvious question then is the following: why wouldn't we represent them as vectors, just like E and B? I mean… Representing them as vectors (I mean real vectors here – something with a magnitude and a direction in real space – as opposed to state vectors from the Hilbert space) would show they are real, and there would be no need for cumbersome transformations when going from one representational base to another. In fact, that's why vector notation was invented (sort of): we don't need to worry about the coordinate frame. It's much easier to write physical laws in vector notation because… Well… They're the real thing, aren't they? 🙂

What about dimensions? Well… I am not sure. However, because we are – arguably – talking about some pointlike charge moving around in those oscillating fields, I would suspect the dimension of the real and imaginary component of the wavefunction will be the same as that of the electric and magnetic field vectors E and B. We may want to recall these:

  1. E is measured in newton per coulomb (N/C).
  2. B is measured in newton per coulomb divided by m/s, so that’s (N/C)/(m/s).

The weird dimension of B is because of the weird force law for the magnetic force. It involves a vector cross product, as shown by Lorentz’ formula:

F = qE + q(v×B)

Of course, it is only one force (one and the same physical reality), as evidenced by the fact that we can write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). [Check it, because you may not have seen this expression before. Just take a piece of paper and think about the geometry of the situation.] Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, if we can agree on a suitable convention for the direction of rotation here, we may boldly write:

B = (1/c)∙ex×E = (1/c)∙iE
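If you want to see the geometry without the piece of paper, the little NumPy check below does it numerically for a plane wave traveling along x with E along y. The amplitude and phase values are arbitrary numbers I picked for the illustration:

```python
# Numerical check of B = (1/c)·e_x × E for a plane wave traveling along x, with E along y.
import numpy as np

c = 299_792_458.0                        # speed of light, m/s
E0, phase = 50.0, 0.7                    # arbitrary amplitude (N/C) and phase (k·x − ω·t)

e_x = np.array([1.0, 0.0, 0.0])          # unit vector along the direction of propagation
E = np.array([0.0, E0 * np.cos(phase), 0.0])   # E oscillates along y
B = np.cross(e_x, E) / c                 # B = (1/c)·e_x × E

print(B)                                 # B points along z, in phase with E
print(np.linalg.norm(E) / np.linalg.norm(B))   # ratio of the magnitudes: c
```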

That B = (1/c)∙i∙E relation is, in fact, what triggered my geometric interpretation of Schrödinger's equation about a year ago now. I have had little time to work on it, but I think I am on the right track. Of course, you should note that, for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously (as shown below). So their phase is the same.

E and B

In contrast, the phase of the real and imaginary component of the wavefunction is not the same, as shown below.

wavefunction

In fact, because of the Stern-Gerlach experiment, I am actually more thinking of a motion like this:

Wavefunction 2

But that shouldn't distract you. 🙂 The question here is the following: could we possibly think of a new formulation of Schrödinger's equation – using vectors (again, real vectors – not these weird state vectors) rather than complex algebra?

I think we can, but then I wonder why the inventors of the wavefunction – Heisenberg, Born, Dirac, and Schrödinger himself, of course – never thought of that. 🙂

Hmm… I need to do some research here. 🙂

Post scriptum: You will, of course, wonder how and why the matter-wave would be different from the electromagnetic wave if my suggestion – that the components of the wavefunction have the same dimension as E and B – is correct. The answer is: the difference lies in the phase difference and then, most probably, the different orientation of the angular momentum. Do we have any other possibilities? 🙂

P.S. 2: I also published this post on my new blog: https://readingeinstein.blog/. However, I thought the followers of this blog should get it first. 🙂


Photons as strings

In my previous post, I explored, somewhat jokingly, the grey area between classical physics and quantum mechanics: light as a wave versus light as a particle. I did so by trying to picture a photon as an electromagnetic transient traveling through space, as illustrated below. While actual physicists would probably deride the attempt, I feel the idea illustrates the wave-particle duality quite well.

Photon wave

Understanding light is the key to understanding physics. Light is a wave, as Thomas Young proved to the Royal Society of London in 1803, thereby demolishing Newton's corpuscular theory. But its constituents, photons, behave like particles. According to modern-day physics, both were right. Just to put things in perspective: the thickness of the note card which Young used to split the light – ordinary sunlight entering his room through a pinhole in a window shutter – was 1/30 of an inch, or approximately 0.85 mm. Hence, in essence, this is a double-slit experiment with the two slits being separated by a distance of almost 1 millimeter. That's enormous as compared to modern-day engineering tolerance standards: what was thin then is obviously not considered thin now. Scale matters. I'll come back to this.

Young’s experiment (from www.physicsclassroom.com)

Young experiment

The table below shows that the 'particle character' of electromagnetic radiation becomes apparent when its frequency is a few hundred terahertz, like the sodium light example I used in my previous post: sodium light, as emitted by sodium lamps, has a frequency of 500×10¹² oscillations per second and, therefore (the relation between frequency and wavelength is very straightforward: their product is the velocity of the wave, so for light we have the simple λf = c equation), a wavelength of 600 nanometer (600×10⁻⁹ meter).

Electromagnetic spectrum
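Just to check that λf = c arithmetic from the paragraph above (a one-liner, really):

```python
# λ·f = c for the sodium-light example: 500 THz should give ~600 nm.
c = 299_792_458.0        # speed of light, m/s
f = 500e12               # Hz
print(c / f)             # ≈ 6×10⁻⁷ m, i.e. ~600 nanometer
```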

However, whether something behaves like a particle or a wave also depends on our measurement scale: 0.85 mm was thin in Young's time, and so it was a delicate experiment then, but now it's a standard classroom experiment indeed. The theory of light as a wave would hold until more delicate equipment refuted it. Such equipment came with another sense of scale. It's good to remind oneself that Einstein's "discovery of the law of the photoelectric effect", which explained the photoelectric effect as the result of light energy being carried in discrete quantized packets of energy, now referred to as photons, goes back only to 1905, and that the experimental apparatus which could measure it was not much older. So waves behave like particles if we look at them close enough. Conversely, particles behave like waves if we look at them close enough. So there is this zone where they are neither, the zone for which we invoke the mathematical formalism of quantum mechanics or, to put it more precisely, the formalism of quantum electrodynamics: that "strange theory of light and matter", as Feynman calls it.

Let's have a look at how particles became waves. It should not surprise us that the experimental apparatuses needed to confirm that electrons – or matter in general – can actually behave like a wave are more recent than the 19th century apparatuses which led Einstein to develop his 'corpuscular' theory of light (i.e. the theory of light as photons). The engineering tolerances involved are daunting. Let me be precise here. To be sure, the phenomenon of electron diffraction (i.e. electrons going through one slit and producing a diffraction pattern on the other side) had been confirmed experimentally in the famous Davisson-Germer experiment, whose first – accidental – observations date back to 1925. I am saying so because it's rather famous indeed. First, because electron diffraction was a weird thing to contemplate at the time. Second, because it confirmed the de Broglie hypothesis only a few years after Louis de Broglie had advanced it. And, third, because Davisson and Germer had never intended to set it up to detect diffraction: it was pure coincidence. In fact, the observed diffraction pattern was the result of a laboratory accident, and Davisson and Germer weren't aware of other, conscious, attempts to prove the de Broglie hypothesis. 🙂 […] OK. I am digressing. Sorry. Back to the lesson.

The nanotechnology that was needed to confirm Feynman's 1965 thought experiment on electron interference – electrons going through two slits and interfering with each other (rather than producing some diffraction pattern as they go through one slit only) and, equally significant as an experimental result, with themselves as they go through the slit(s) one by one! – was only developed over the past decades. In fact, it was only in 2008 (and again in 2012) that the experiment was carried out exactly the way Feynman describes it in his Lectures.

It is useful to think of what such experiments entail from a technical point of view. Have a look at the illustration below, which shows the set-up. The insert in the upper-left corner shows the two slits which were used in the 2012 experiment: they are each 62 nanometer wide – that's 62×10⁻⁹ m! – and the distance between them is 272 nanometer, or 0.272 micrometer. [Just to be complete: they are 4 micrometer tall (4×10⁻⁶ m), and the thing in the middle of the slits is just a little support (150 nm) to make sure the slit width doesn't vary.]

The second inset (in the upper-right corner) shows the mask that can be moved to close one or both slits partially or completely. The mask is 4.5 µm wide × 20 µm tall. Please do take a few seconds to contemplate the technology behind this feat: a nanometer is a millionth of a millimeter, so that's a billionth of a meter, and a micrometer is a millionth of a meter. To imagine how small a nanometer is, you should imagine dividing one millimeter in ten, and then one of these tenths in ten again, and again, and once again, again, and again. In fact, you actually cannot imagine that, because we live in the world we live in and, hence, our mind is used only to addition (and subtraction) when it comes to comparing sizes and – to a much more limited extent – to multiplication (and division): our brain is, quite simply, not wired to deal with exponentials and, hence, it can't really 'imagine' these incredible (negative) powers. So don't think you can imagine it really, because one can't: in our mind, these scales exist only as mathematical constructs. They don't correspond to anything we can actually make a mental picture of.

Electron double-slit set-up

The electron beam consisted of electrons with an (average) energy of 600 eV. That's not an awful lot: 8.5 times more than the energy of an electron in orbit in an atom, whose energy would be some 70 eV, so the acceleration before they went through the slits was relatively modest. I've calculated the corresponding de Broglie wavelength of these electrons in another post (Re-Visiting the Matter-Wave, April 2014), using the de Broglie equations: f = E/h or λ = h/p. And, of course, you could just google the article on the experiment and read about it, but it's a good exercise, and actually quite simple: just note that you'll need to express the energy in joule (not in eV) to get it right. Also note that you need to include the rest mass of the electron in the energy. I'll let you try it (or else just go to that post of mine). You should find a de Broglie wavelength of 50 picometer for these electrons, so that's 50×10⁻¹² m. While that wavelength is less than a thousandth of the slit width (62 nm), and about 5,500 times smaller than the space between the two slits (272 nm), the interference effect was unambiguous in the experiment. I advise you to google the results yourself (or read that April 2014 post of mine if you want a summary): the experiment was done at the University of Nebraska-Lincoln in 2012.
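For what it's worth, here is a minimal sketch of that calculation, with the rounded constants I usually use (so the last digit may differ from the published value). It confirms the ~50 picometer figure, and shows the relativistic correction is negligible at 600 eV:

```python
# De Broglie wavelength of the 600 eV electrons in the 2012 double-slit experiment.
import math

h = 6.626e-34            # Planck constant, J·s
m0 = 9.109e-31           # electron rest mass, kg
c = 299_792_458.0        # m/s
eV = 1.602e-19           # joule per electronvolt

K = 600 * eV                                          # kinetic energy, in joule
p_nonrel = math.sqrt(2 * m0 * K)                      # p = √(2·m₀·K)
E_total = K + m0 * c**2                               # total energy, incl. rest mass
p_rel = math.sqrt(E_total**2 - (m0 * c**2)**2) / c    # p = √(E² − m₀²c⁴)/c

print(h / p_nonrel)      # ≈ 5.0×10⁻¹¹ m, i.e. ~50 picometer
print(h / p_rel)         # almost the same: the relativistic correction is tiny here
```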

Electrons and X-rays

To put everything in perspective: 50 picometer is like the wavelength of X-rays, and you can google similar double-slit experiments for X-rays: they also lose their 'particle behavior' when we look at them at this tiny scale. In short, scale matters, and the boundary between 'classical physics' (electromagnetics) and quantum physics (wave mechanics) is not clear-cut. If anything, it depends on our perspective, i.e. what we can measure, and we seem to be shifting that boundary constantly. In what direction?

Downwards obviously: we’re devising instruments that measure stuff at smaller and smaller scales, and what’s happening is that we can ‘see’ typical ‘particles’, including hard radiation such as gamma rays, as local wave trains. Indeed, the next step is clear-cut evidence for interference between gamma rays.

Energy levels of photons

We would not associate low-frequency electromagnetic waves, such as radio or radar waves, with photons. But light in the visible spectrum, yes. Obviously. […]

Isn't that an odd dichotomy? If we see that, on a smaller scale, particles start to look like waves, why would the reverse not be true? Why wouldn't we analyze radio or radar waves, on a much larger scale, as a stream of very (I must say extremely) low-energy photons? I know the idea sounds ridiculous, because the energies involved would be ridiculously low indeed. Think about it. The energy of a photon is given by the Planck relation: E = h·f = hc/λ. For visible light, with wavelengths ranging from 800 nm (red) to 400 nm (violet or indigo), the photon energies range between 1.5 and 3 eV. Now, the shortest wavelengths for radar waves are in the so-called millimeter band, i.e. they range from 1 mm to 1 cm. A wavelength of 1 mm corresponds to a photon energy of 0.00124 eV. That's close to nothing, of course, and surely not the kind of energy levels that we can currently detect.
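The arithmetic, for those who want to check the numbers quoted above:

```python
# Photon energies E = h·c/λ for the wavelengths mentioned above.
h = 6.626e-34            # J·s
c = 299_792_458.0        # m/s
eV = 1.602e-19           # J

for label, lam in [("red light, 800 nm", 800e-9),
                   ("violet light, 400 nm", 400e-9),
                   ("radar, 1 mm", 1e-3)]:
    print(label, h * c / lam / eV, "eV")
# ≈ 1.55 eV, 3.1 eV and 0.00124 eV respectively
```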

But you get the idea: there is a grey area between classical physics and quantum mechanics, and it's our equipment – notably the scale of our measurements – that determines where that grey area begins, and where it ends, and it seems to become larger and larger as the sensitivity of our equipment improves.

What do I want to get at? Nothing much. Just some awareness of scale, as an introduction to the actual topic of this post, and that’s some thoughts on a rather primitive string theory of photons. What !? 

Yes. Purely speculative, of course. 🙂

Photons as strings

I think my calculations in the previous post, as primitive as they were, actually provide quite some food for thought. If we'd treat a photon in the sodium light band (i.e. the light emitted by sodium, from a sodium lamp for instance) just like any other electromagnetic pulse, we would find it's a pulse of some 10 meters long. We also made sense of this incredibly long distance by noting that, if we'd look at it as a particle (which is what we do when analyzing it as a photon), it should have zero size, because it moves at the speed of light and, hence, the relativistic length contraction effect ensures we (or any observer in whatever reference frame really, because light always moves at the speed of light, regardless of the reference frame) will see it as a zero-size particle.

Having said that, and knowing damn well that we have to treat the photon as an elementary particle, I would think it's very tempting to think of it as a vibrating string.

Huh?

Yes. Let me copy that graph again. The assumption I started with is a standard one in physics, and not something that you'd want to argue with: photons are emitted when an electron jumps from a higher to a lower energy level and, for all practical purposes, this emission can be analyzed as the emission of an electromagnetic pulse by an atomic oscillator. I'll refer you to my previous post – as silly as it is – for details on these basics: the atomic oscillator has a Q, and so there's damping involved and, hence, the assumption that the electromagnetic pulse resembles a transient should not sound ridiculous. And because the electric field as a function in space is the 'reversed' image of the oscillation in time, there is nothing blasphemous about the suggested shape.

Photon wave

Just go along with it for a while. First, we need to remind ourselves that what’s vibrating here is nothing physical: it’s an oscillating electromagnetic field. That being said, in my previous post, I toyed with the idea that the oscillation could actually also represent the photon’s wave function, provided we use a unit for the electric field that ensures that the area under the squared curve adds up to one, so as to normalize the probability amplitudes. Hence, I suggested that the field strength over the length of this string could actually represent the probability amplitudes, provided we choose an appropriate unit to measure the electric field.

But then I was joking, right? Well… No. Why not consider it? An electromagnetic oscillation packs energy, and the energy is proportional to the square of the amplitude of the oscillation. Now, the probability of detecting a particle is related to its energy, and such probability is calculated from taking the (absolute) square of probability amplitudes. Hence, mathematically, this makes perfect sense.
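Just to make the normalization idea a bit more tangible, here is a rough numerical sketch: take some damped oscillation as a stand-in for the field profile over the length of the pulse and rescale it so the squared curve integrates to one. The pulse length, decay rate and frequency below are made-up numbers, not the sodium-light values from my previous post:

```python
# Rough sketch of the normalization idea: rescale a toy transient so ∫(field)²·dx = 1.
import numpy as np

x = np.linspace(0.0, 10.0, 100_000)                     # a pulse of ~10 m, in meter
field = np.exp(-0.3 * x) * np.cos(2 * np.pi * 5.0 * x)  # toy damped oscillation (made-up numbers)

norm = np.sqrt(np.trapz(field**2, x))                   # √∫E²·dx
amplitude = field / norm                                # candidate 'probability amplitude'

print(np.trapz(amplitude**2, x))                        # ≈ 1.0: the probabilities add up to one
```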

It's quite interesting to think through the consequences, and I hope I will (a) understand enough of physics and (b) find enough time for this—one day! One interesting thing is that the field strength (i.e. the magnitude of the electric field vector) is a real number. Hence, if we equate these magnitudes with probability amplitudes, we'd have real probability amplitudes, instead of complex-valued ones. That's not a very fundamental issue. It probably indicates we should also take into account the fact that the E vector also oscillates in the other direction that's normal to the direction of propagation, i.e. the y-coordinate (assuming that the z-axis is the direction of propagation). To put it differently, we should take the polarization of the light into account. The figure below – which I took from Wikipedia again (by far the most convenient place to shop for images and animations: what would I do without it?) – shows how the electric field vector moves in the xy-plane indeed, as the wave travels along the z-axis. So… Well… I still have to figure it all out, but the idea surely makes sense.

Animation: circularly polarized light (right-handed), with the E vector rotating in the xy-plane as the wave travels along the z-axis (Wikipedia)

Another interesting thing to think about is how the collapse of the wave function would come about. If we think of a photon as a string, it must have some ‘hooks’ which could cause it to ‘stick’ or ‘collapse’ into a ‘lump’ as it hits a detector. What kind of hook? What force would come into play?

Well… The interaction between the photon and the photodetector is electromagnetic, but we're looking for some other kind of 'hook' here. What could it be? I have no idea. Having said that, we know that the weakest of all fundamental forces—gravity—becomes much stronger—very much stronger—as the distance becomes smaller and smaller. In fact, it is said that, if we go to the Planck scale, the strength of the force of gravity becomes quite comparable with the other forces. So… Perhaps it's, quite simply, the equivalent mass of the energy involved that gets 'hooked', somehow, as it starts interacting with the photon detector. Hence, when thinking about a photon as an oscillating string of energy, we should also think of that string as having some inseparable (equivalent) mass that, once it's 'hooked', has no other option than to 'collapse into itself'. [You may note there's no quantum theory for gravity as yet. I am not sure how, but I've got a gut instinct that tells me that may help to explain why a photon consists of one single 'unbreakable' lump, although I need to elaborate this argument obviously.]

You must be laughing aloud now. A new string theory–really?

I know… I know… I haven't reached sophomore level and I am already wildly speculating… Well… Yes. What I am talking about here has probably nothing to do with current string theories, although my proposed string would also replace the point-like photon by a one-dimensional 'string'. However, 'my' string is, quite simply, an electromagnetic pulse (a transient actually, for reasons I explained in my previous post). Naive? Perhaps. However, I note that the earliest version of string theory is referred to as bosonic string theory, because it only incorporated bosons, which is what photons are.

So what? Well… Nothing… I am sure others have thought of this too, and I’ll look into it. It’s surely an idea which I’ll keep in the back of my head as I continue to explore physics. The idea is just too simple and beautiful to disregard, even if I am sure it must be pretty naive indeed. Photons as ten-meter long strings? Let’s just forget about it. 🙂 Onwards !!! 🙂

Post Scriptum: The key to ‘closing’ this discussion is, obviously, to be found in a full-blown analysis of the relativity of fields. So, yes, I have not done all of the required ‘homework’ on this and the previous post. I apologize for that. If anything, I hope it helped you to also try to think somewhat beyond the obvious. I realize I wasted a lot of time trying to understand the pre-cooked ready-made stuff that’s ‘on the market’, so to say. I still am, actually. Perhaps I should first thoroughly digest Feynman’s Lectures. In fact, I think that’s what I’ll try to do in the next year or so. Sorry for any inconvenience caused. 🙂

Re-visiting the matter wave (I)

In my previous posts, I introduced a lot of wave formulas. They are essential to understanding waves – both real ones (e.g. electromagnetic waves) and probability amplitude functions. Probability amplitude function is quite a mouthful, so let me call it a matter wave, or a de Broglie wave. The formulas are necessary to create true understanding – whatever that means to you – because otherwise we just keep on repeating very simplistic but nonsensical things such as 'matter behaves (sometimes) like light', 'light behaves (sometimes) like matter' or, combining both, 'light and matter behave like wavicles'. Indeed: what does 'like' mean? Like the same but different? 🙂 So that means it's different. Let's therefore re-visit the matter wave (i.e. the de Broglie wave) and point out the differences with light waves.

In fact, this post actually has its origin in a mistake in a post scriptum of a previous post (An Easy Piece: On Quantum Mechanics and the Wave Function), in which I wondered what formula to use for the energy E in the (first) de Broglie relation E = hf (with f the frequency of the matter wave and h the Planck constant). Should we use (a) the kinetic energy of the particle, (b) the rest mass (mass is energy, remember?), or (c) its total energy? So let us first re-visit these de Broglie relations which, you'll remember, relate energy and momentum to frequency (f) and wavelength (λ) respectively, with the Planck constant as the factor of proportionality:

E = hf and p = h/λ

The de Broglie wave

I first tried kinetic energy in that E = hf equation. However, if you use the kinetic energy formula (K.E. = mv²/2, with v the velocity of the particle), then the second de Broglie relation (p = h/λ) does not come out right. The second de Broglie relation has the wavelength λ on the right side, not the frequency f. But it's easy to go from one to the other: frequency and wavelength are related through the velocity of the wave (v). Indeed, the number of cycles per second (i.e. the frequency f) times the length of one cycle (i.e. the wavelength λ) gives the distance traveled by the wave per second, i.e. its velocity v. So fλ = v. Hence, using that kinetic energy formula and that very obvious fλ = v relation, we can write E = hf as mv²/2 = h·v/λ and, hence, after moving one of the two v's in v² (and the 1/2 factor) from the left side to the right side of this equation, we get mv = 2h/λ. So there we are:

p = mv = 2h/λ.

Well… No. The second de Broglie relation is just p = h/λ. There is no factor 2 in it. So what’s wrong?

A factor of 2 in an equation like this surely doesn't matter, does it? It does. We are talking tiny wavelengths here, but a wavelength of 1 nanometer (1×10⁻⁹ m) – this is just an example of the scale we're talking about here – is not the same as a wavelength of 0.5 nm. There's another problem too. Let's go back to our example of an electron with a mass of 9.1×10⁻³¹ kg (that's very tiny, and so you'll usually see it expressed in a unit that's more appropriate to the atomic scale), moving about with a velocity of 2.2×10⁶ m/s (that's the estimated speed of orbit of an electron around a hydrogen nucleus: it's fast (2,200 km per second), but still less than 1% of the speed of light), and let's do the math.

[Before I do the math, however, let me quickly insert a line on that 'other unit' to measure mass. You will usually see it written down as eV, so that's electronvolt. Electronvolt is a measure of energy, but that's OK because mass is energy according to Einstein's mass-energy equation: E = mc². The point to note is that the actual measure for mass at the atomic scale is eV/c², so we make the unit even smaller by dividing the eV (which already is a very tiny amount of energy) by c²: 1 eV/c² corresponds to 1.782662×10⁻³⁶ kg, so the mass of our electron (9.1×10⁻³¹ kg) is about 510,000 eV/c², or 0.510 MeV/c². I am spelling it out because you will often just see 0.510 MeV in older or more popular publications, but so don't forget that c² factor. As for the calculations below, I just stick to the kg and m measures because they make the dimensions come out right.]
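The quick unit check, if you want it:

```python
# The electron mass expressed in eV/c².
m_e = 9.109e-31          # kg
c = 299_792_458.0        # m/s
eV = 1.602e-19           # J
print(m_e * c**2 / eV)   # ≈ 5.11×10⁵ eV, so ~0.511 MeV/c² for the mass
```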

According to our kinetic energy formula (K.E. = mv²/2), these mass and velocity values correspond to an energy value of 22×10⁻¹⁹ joule (the joule is the so-called SI unit for energy – don't worry about it right now). So, from the first de Broglie equation (f = E/h) – and using the right value for Planck's constant (6.626×10⁻³⁴ J·s) – we get a frequency of 3.32×10¹⁵ hertz (hertz just means oscillations per second, as you know). Now, using v once again, and fλ = v, we see that corresponds to a wavelength of 0.66 nanometer (0.66×10⁻⁹ m). [Just take the numbers and do the math.]

However, if we use the second de Broglie relation, which relates wavelength to momentum instead of energy, then we get 0.33 nanometer (0.33×10⁻⁹ m), so that's half of the value we got from the first equation. So what is it: 0.33 or 0.66 nm? It's that factor 2 again. Something is wrong.
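Here is the clash in a few lines of Python, using the same numbers:

```python
# Reproducing the factor-of-2 clash for the electron example above.
h = 6.626e-34            # J·s
m = 9.1e-31              # kg
v = 2.2e6                # m/s

K = 0.5 * m * v**2       # kinetic energy ≈ 22×10⁻¹⁹ J
f = K / h                # 'frequency' from E = h·f with E = K.E., ≈ 3.32×10¹⁵ Hz
lam_from_f = v / f       # λ = v/f  → ≈ 0.66 nm
lam_from_p = h / (m * v) # λ = h/p  → ≈ 0.33 nm

print(lam_from_f, lam_from_p, lam_from_f / lam_from_p)   # the ratio is exactly 2
```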

It must be that kinetic energy formula. You’ll say we should include potential energy or something. No. That’s not the issue. First, we’re talking a free particle here: an electron moving in space (a vacuum) with no external forces acting on it, so it’s a field-free space (or a region of constant potential). Second, we could, of course, extend the analysis and include potential energy, and show how it’s converted to kinetic energy (like a stone falling from 100 m to 50 m: potential energy gets converted into kinetic energy) but making our analysis more complicated by including potential energy as well will not solve our problem here: it will only make you even more confused.

Then it must be some relativistic effect you’ll say. No. It’s true the formula for kinetic energy above only holds for relatively low speeds (as compared to light, so ‘relatively’ low can be thousands of km per second) but that’s not the problem here: we are talking electrons moving at non-relativistic speeds indeed, so their mass or energy is not (or hardly) affected by relativistic effects and, hence, we can indeed use the more simple non-relativistic formulas.

The real problem we're encountering here is not with the equations: it's the simplistic model of our wave. We are imagining one wave here indeed, with a single frequency, a single wavelength and, hence, one single velocity – which happens to coincide with the velocity of our particle. Such a wave cannot possibly represent an actual de Broglie wave: the wave is everywhere and, hence, the particle it represents is nowhere. Indeed, a wave defined by a specific wavelength λ (or a wave number k = 2π/λ if we're using complex number notation) and a specific frequency f or period T (or angular frequency ω = 2π/T = 2πf) will have a very regular shape – such as Ψ = A·e^(i(ωt−kx)) – and, hence, the probability of actually locating that particle at some specific point in space will be the same everywhere: |Ψ|² = |A·e^(i(ωt−kx))|² = A². [If you are confused about the math here, I am sorry but I cannot re-explain this once again: just remember that our de Broglie wave represents probability amplitudes – so that's some complex number Ψ = Ψ(x, t) depending on space and time – and that we need to take the modulus squared of that complex number to get the probability associated with some (real) value x (i.e. the space variable) and some value t (i.e. the time variable).]

So the actual matter wave of a real-life electron will be represented by a wave train, or a wave packet as it is usually referred to. Now, a wave packet is described by (at least) two types of wave velocity:

  1. The so-called group velocity: the group velocity of a wave is denoted by vg and is the velocity at which the wave packet as a whole is traveling. Wikipedia defines it as "the velocity with which the overall shape of the waves' amplitudes — known as the modulation or envelope of the wave — propagates through space."
  2. The so-called phase velocity: the phase velocity is denoted by vp and is what we usually associate with the velocity of a wave. It is just what it says it is: the rate at which the phase of the (composite) wave travels through space.

The term between brackets above – ‘composite’ – already indicates what it’s all about: a wave packet is to be analyzed as a composite wave: so it’s a wave composed of a finite or infinite number of component waves which all have their own wave number k and their own angular frequency ω. So the mistake we made above is that, naively, we just assumed that (i) there is only one simple wave (and, of course, there is only one wave, but it’s not a simple one: it’s a composite wave), and (ii) that the velocity v of our electron would be equal to the velocity of that wave. Now that we are a little bit more enlightened, we need to answer two questions in regard to point (ii):

  1. Why would that be the case?
  2. If it is the case, then what wave velocity are we talking about: the group velocity or the phase velocity?

To answer both questions, we need to look at wave packets once again, so let’s do that. Just to visualize things, I’ll insert – once more – that illustration you’ve seen in my other posts already:

Explanation of uncertainty principle

The de Broglie wave packet

The Wikipedia article on the group velocity of a wave has wonderful animations, which I would advise you to look at in order to make sure you are following me here. There are several possibilities:

  1. The phase velocity and the group velocity are the same: that’s a rather unexciting possibility but it’s the easiest thing to work with and, hence, most examples will assume that this is the case.
  2. The group and phase velocity are not the same, but our wave packet is 'stable', so to say. In other words, the individual peaks and troughs of the wave within the envelope travel at a different speed (the phase velocity vp), but the envelope as a whole (so the wave packet as such) does not get distorted as it travels through space.
  3. The wave packet dissipates: in this case, we have a constant group velocity, but the wave packet delocalizes. Its width increases over time and so the wave packet diffuses – as time goes by – over a wider and wider region of space, until it's actually no longer there. [In case you wonder why I did not group this third possibility under (2): it's a bit difficult to assign a fixed phase velocity to a wave like this.]

How the wave packet will behave depends on the characteristics of the component waves. To be precise, it will depend on their angular frequency and their wave number and, hence, their individual velocities. First, note the relationship between these three variables: ω = 2πf and k = 2π/λ so ω/k = fλ = v. So these variables are not independent: if you have two values (e.g. v and k), you also have the third one (ω). Secondly, note that the component waves of our wave packet will have different wavelengths and, hence, different wave numbers k.

Now, the de Broglie relation p = ħk (i.e. the same relation as p = h/λ, but we replace λ with 2π/k, and then ħ is the so-called reduced Planck constant ħ = h/2π) makes it obvious that different wave numbers k correspond to different values p for the momentum of our electron, so allowing for a spread in k (or a spread in λ, as illustrated above) amounts to allowing for some spread in p. That's where the uncertainty principle comes in – which I actually derived from a theoretical wave function in my post on Fourier transforms and conjugate variables. But so that's not something I want to dwell on here.

We're interested in the ω's. What about them? Well… ω can take any value really – from a theoretical point of view that is. Now you'll surely object to that from a practical point of view, because you know what it implies: different velocities of the component waves. But you can't object in a theoretical analysis like this. The only thing we could possibly impose as a constraint is that our wave packet should not dissipate – so we don't want it to delocalize and/or vanish after a while, because we're talking about some real-life electron here, and so that's a particle which just doesn't vanish like that.

To impose that condition, we need to look at the so-called dispersion relation. We know that we’ll have a whole range of wave numbers k, but so what values should ω take for a wave function to be ‘well-behaved’, i.e. not disperse in our case? Let’s first accept that k is some variable, the independent variable actually, and so then we associate some ω with each of these values k. So ω becomes the dependent variable (dependent on k that is) and that amounts to saying that we have some function ω = ω(k).

What kind of function? Well… It’s called the dispersion relation – for rather obvious reasons: because this function determines how the wave packet will behave: non-dispersive or – what we don’t want here – dispersive. Indeed, there are several possibilities:

  1. The speed of all component waves is the same: that means that the ratio ω/k = v is the same for all component waves. Now that's the case only if ω is directly proportional to k, with the factor of proportionality equal to v. That means that we have a very simple dispersion relation: ω = αk with α some constant equal to the velocity of the component waves as well as the group and phase velocity of the composite wave. So all velocities are just the same (v = vp = vg = α) and we're in the first of the three cases explained at the beginning of this section.
  2. There is a linear relation between ω and k but no direct proportionality, so we write ω = αk + β, in which β can be anything but not some function of k. So we allow different wave speeds for the component waves. The phase velocity will, once again, be equal to the ratio of the angular frequency and the wave number of the composite wave (whatever that is), but what about the group velocity, i.e. the velocity of our electron in this example? Well… One can show – but I will not do it here because it is quite a bit of work – that the group velocity of the wave packet will be equal to vg = dω/dk, i.e. the (first-order) derivative of ω with respect to k. So, if we want that wave packet to travel at the same speed as our electron (which is what we want of course because, otherwise, the wave packet would obviously not represent our electron), we'll have to impose that dω/dk (or ∂ω/∂k if you would want to introduce more independent variables) equals v. In short, we have the condition that dω/dk = d(αk + β)/dk = α = v. [See the little sketch right after this list.]
  3. If the relation between ω and k is non-linear, well… Then we have none of the above. Hence, we then have a wave packet that gets distorted and stretched out and actually vanishes after a while. That case surely does not represent an electron.
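Here is a small symbolic illustration of case (2) – ω = αk + β – showing that the phase velocity ω/k then differs from component to component, while the group velocity dω/dk is just the constant α:

```python
# Case (2) above: ω = α·k + β gives a k-dependent phase velocity but a constant group velocity.
import sympy as sp

k, alpha, beta = sp.symbols('k alpha beta', positive=True)
omega = alpha * k + beta          # linear, but not directly proportional

v_phase = sp.simplify(omega / k)  # α + β/k: different for every component wave
v_group = sp.diff(omega, k)       # α: the speed of the envelope, i.e. of our electron
print(v_phase, v_group)
```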

Back to the de Broglie wave relations

Indeed, it's now time to go back to our de Broglie relations – E = hf and p = h/λ – and the question that sparked the presentation above: what formula to use for E? Indeed, for p it's easy: we use p = mv and, if you want to include the case of relativistic speeds, you will write that formula in a more sophisticated way by making it explicit that the mass m is the relativistic mass m = γm₀: the rest mass multiplied with a factor referred to as the Lorentz factor which, I am sure, you have seen before: γ = (1 – v²/c²)^(–1/2). At relativistic speeds (i.e. speeds close to c), this factor makes a difference: it adds mass to the rest mass. So the mass of a particle can be written as m = γm₀, with m₀ denoting the rest mass. At low speeds (e.g. 1% of the speed of light – as in the case of our electron), m will hardly differ from m₀ and then we don't need this Lorentz factor. It only comes into play at higher speeds.

At this point, I just can't resist a small digression. It's just to show that it's not 'relativistic effects' that cause us trouble in finding the right energy equation for our E = hf relation. What's kinetic energy? Well… There are a few definitions – such as the energy gathered through converting potential energy – but one very useful definition in the current context is the following: kinetic energy is the excess energy of a particle over its rest mass energy. So when we're looking at high-speed or high-energy particles, we will write the kinetic energy as K.E. = mc² – m₀c² = (m – m₀)c² = γm₀c² – m₀c² = m₀c²(γ – 1). Before you think I am trying to cheat you: where is the v of our particle? [To make it specific: think about our electron once again, but not moving at leisure this time around: imagine it's speeding at a velocity very close to c in some particle accelerator. Now, v is close to c but not equal to c, and so it should not disappear.]

It's in the Lorentz factor γ = (1 – v²/c²)^(–1/2).

Now, we can expand γ into a binomial series (it's basically an application of the Taylor series – but just check it online if you're in doubt), so we can write γ as an infinite sum of the following terms: γ = 1 + (1/2)·v²/c² + (3/8)·v⁴/c⁴ + (5/16)·v⁶/c⁶ + … etcetera. [The binomial series is an infinite Taylor series, so it's not to be confused with the (finite) binomial expansion.] Now, when we plug this back into our (relativistic) kinetic energy equation, we can scrap a few things (just do it) to get where I want to get:

K.E. = (1/2)·m₀v² + (3/8)·m₀v⁴/c² + (5/16)·m₀v⁶/c⁴ + … etcetera
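If you don't feel like grinding through the binomial series by hand, SymPy will do it for you:

```python
# Series expansion of the relativistic kinetic energy K.E. = m₀·c²·(γ − 1) in powers of v.
import sympy as sp

v, c, m0 = sp.symbols('v c m0', positive=True)
gamma = (1 - v**2 / c**2) ** sp.Rational(-1, 2)     # the Lorentz factor

KE = m0 * c**2 * (gamma - 1)
print(sp.series(KE, v, 0, 8))
# m0·v²/2 + 3·m0·v⁴/(8·c²) + 5·m0·v⁶/(16·c⁴) + O(v⁸)
```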

So what? Well… That's it – for the digression at least: see how our non-relativistic formula for kinetic energy (K.E. = m₀v²/2) is only the first term of this series and, hence, just an approximation: at low speeds, the second, third etcetera terms represent close to nothing (and ever closer to nothing as you check out the fourth, fifth etcetera terms). OK, OK… You're getting tired of these games. So what? Should we use this relativistic kinetic energy formula in the de Broglie relation?

No. As mentioned above already, we don’t need any relativistic correction, but the relativistic formula above does come in handy to understand the next bit. What’s the next bit about?

Well… It turns out that we actually do have to use the total energy – including (the energy equivalent to) the rest mass of our electron – in the de Broglie relation E = hf.

WHAT!?

If you think a few seconds about the math of this – so we'd use γm₀c² instead of (1/2)m₀v² (so we use the speed of light instead of the speed of our particle) – you'll realize we'll be getting some astronomical frequency (we got that already, but here we are talking some kind of truly fantastic frequency) and, hence, combining that with the wavelength we'd derive from the other de Broglie equation (p = h/λ), we'd surely get some kind of totally unreal speed. Whatever it is, it will surely have nothing to do with our electron, will it?

Let’s go through the math.

The wavelength is just the same as that one given by p = h/λ, so we have λ = 0.33 nanometer. Don’t worry about this. That’s what it is indeed. Check it out online: it’s about a thousand times smaller than the wavelength of (visible) light but that’s OK. We’re talking something real here. That’s why electron microscopes can ‘see’ stuff that light microscopes can’t: their resolution is about a thousand times higher indeed.

But so when we take the first equation once again (E = hf) and calculate the frequency from f = γm₀c²/h, we get a frequency in the neighborhood of 12.34×10¹⁹ hertz. So that gives a velocity of v = fλ = 4.1×10¹⁰ meter per second (m/s). But… THAT'S MORE THAN A HUNDRED TIMES THE SPEED OF LIGHT. Surely, we must have got it wrong.

We don't. The velocity we are calculating here is the phase velocity vp of our matter wave – and IT'S REAL! More generally, it's easy to show that this phase velocity is equal to vp = fλ = E/p = (γm₀c²/h)·(h/γm₀v) = c²/v. Just fill in the values for c and v (3×10⁸ and 2.2×10⁶ respectively) and you will get the same answer.
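You can verify the arithmetic in a few lines (rounded constants again, so expect small differences in the last digit):

```python
# Phase velocity of the matter wave for the electron example (v = 2.2×10⁶ m/s).
import math

h = 6.626e-34            # J·s
m0 = 9.109e-31           # kg
c = 299_792_458.0        # m/s
v = 2.2e6                # m/s

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
E = gamma * m0 * c**2    # total energy, incl. rest mass
p = gamma * m0 * v       # (relativistic) momentum

f = E / h                # ≈ 1.24×10²⁰ Hz
lam = h / p              # ≈ 0.33 nm
print(f * lam, c**2 / v) # both ≈ 4.1×10¹⁰ m/s: superluminal indeed
```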

But that's not consistent with relativity, is it? It is: phase velocities can be (and, in fact, usually are – as evidenced by our real-life example) superluminal, as they say – i.e. much higher than the speed of light. However, because they carry no information – the wave packet shape is the 'information', i.e. the (approximate) location of our electron – such phase velocities do not conflict with relativity theory. It's like amplitude modulation (as in AM radio waves): the modulation of the amplitude carries the signal, not the carrier wave.

The group velocity, on the other hand, can obviously not be faster than light and, in fact, should be equal to the speed of our particle (i.e. the electron). So how do we calculate that? We don't have any formula ω(k) here, do we? No. But we don't need one. Indeed, we can write:

vg = ∂ω/∂k = ∂(E/ħ)/∂(p/ħ) = ∂E/∂p

[Do you see why we prefer the ∂ symbol instead of the d symbol now? ω is a function of k but it’s – first and foremost – a function of E, so a partial derivative sign is quite appropriate.]

So what? Well… Now you can use either the relativistic or non-relativistic relation between E and p to get a value for ∂E/∂p. Let's take the non-relativistic one first (E = p²/2m): ∂E/∂p = ∂(p²/2m)/∂p = p/m = v. So we get the velocity of our electron! Just like we wanted. 🙂 As for the relativistic formula (E = (p²c² + m₀²c⁴)^(1/2)), well… I'll let you figure that one out yourself. [You can also find it online in case you'd be desperate.]
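And if you'd rather not do the relativistic derivative by hand either, here's the SymPy version of both cases:

```python
# Group velocity v_g = ∂E/∂p for the non-relativistic and relativistic energy-momentum relations.
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)

E_nonrel = p**2 / (2 * m)
print(sp.diff(E_nonrel, p))              # p/m, i.e. v

E_rel = sp.sqrt(p**2 * c**2 + m**2 * c**4)
print(sp.simplify(sp.diff(E_rel, p)))    # c²·p/√(p²·c² + m²·c⁴) = p·c²/E, i.e. v again
```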

Wow! So there we are. That was quite something! I will let you digest this for now. It’s true I promised to ‘point out the differences between matter waves and light waves’ in my introduction but this post has been lengthy enough. I’ll save those ‘differences’ for the next post. In the meanwhile, I hope you enjoyed and – more importantly – understood this one. If you did, you’re a master! A real one! 🙂