The wavefunction of a zero-mass particle

Post scriptum note added on 11 July 2016: This is one of the more speculative posts which led to my e-publication analyzing the wavefunction as an energy propagation. With the benefit of hindsight, I would recommend you immediately read the more recent exposé on the matter presented here, which you can find by clicking on the provided link. In fact, I actually made some (small) mistakes when writing the post below.

Original post:

I hope you find the title intriguing. A zero-mass particle? So I am talking about a photon, right? Well… Yes and no. Just read this post and, more importantly, think about this story for yourself. 🙂

One of my acquaintances is a retired nuclear physicist. We mail every now and then—but he has little or no time for my questions: he usually just tells me to keep studying. I once asked him why there is never any mention of the wavefunction of a photon in physics textbooks. He bluntly told me photons don’t have a wavefunction—not in the sense I was talking about, at least. Photons are associated with a traveling electric and a magnetic field vector. That’s it. Full stop. Photons do not have a ψ or φ function. [I am using ψ and φ to refer to the position or momentum wavefunction. You know both are related: if we have one, we have the other.] But then I never give up, of course. I just can’t let go of the idea of a photon wavefunction. The structural similarity in the propagation mechanism of the electric and magnetic field vectors E and B just looks too much like the quantum-mechanical wavefunction. So I kept trying and, while I don’t think I fully solved the riddle, I feel I understand it much better now. Let me show you the why and how.

I. An electromagnetic wave in free space is fully described by the following two equations:

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = c²∇×B

We’re making abstraction here of stationary charges, and we also do not consider any currents here, so no moving charges either. So I am omitting the ∇·E = ρ/ε0 equation (i.e. the first of the set of four equations), and I am also omitting the j/ε0 term in the second equation. So, for all practical purposes (i.e. for the purpose of this discussion), you should think of a space with no charges: ρ = 0 and j = 0. It’s just a traveling electromagnetic wave. To make things even simpler, we’ll assume our time and distance units are chosen such that c = 1, so the equations above reduce to:

  1. ∂B/∂t = –∇×E
  2. ∂E/∂t = ∇×B

Perfectly symmetrical! But note the minus sign in the first equation. As for the interpretation, I should refer you to previous posts but, briefly, the ∇× operator is the curl operator. It’s a vector operator: it describes the (infinitesimal) rotation of a (three-dimensional) vector field. We discussed heat flow a couple of times, or the flow of a moving liquid. So… Well… If the vector field represents the flow velocity of a moving fluid, then the curl is the circulation density of the fluid. The direction of the curl vector is the axis of rotation, as determined by the ubiquitous right-hand rule, and its magnitude is the magnitude of the rotation. OK. Next step.
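To make the curl operator a bit more tangible, here is a quick symbolic check (a sketch of my own, using sympy) that a simple linearly polarized plane wave, with E = (0, cos(x − t), 0) and B = (0, 0, cos(x − t)), does indeed satisfy the two equations above with c = 1:

```python
# Check that a plane wave satisfies dB/dt = -curl(E) and dE/dt = curl(B) (with c = 1).
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)

def curl(F):
    """Curl of a 3D vector field F = [Fx, Fy, Fz] in Cartesian coordinates."""
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]

E = [0, sp.cos(x - t), 0]   # electric field of a linearly polarized wave
B = [0, 0, sp.cos(x - t)]   # magnetic field, orthogonal to E

dB_dt = [sp.diff(comp, t) for comp in B]
dE_dt = [sp.diff(comp, t) for comp in E]

print([sp.simplify(a + b) for a, b in zip(dB_dt, curl(E))])  # [0, 0, 0] -> dB/dt = -curl(E)
print([sp.simplify(a - b) for a, b in zip(dE_dt, curl(B))])  # [0, 0, 0] -> dE/dt = curl(B)
```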

II. For the wavefunction, we have Schrödinger’s equation, ∂ψ/∂t = i·(ħ/2m)·∇²ψ, which relates two complex-valued functions (∂ψ/∂t and ∇²ψ). Complex-valued functions consist of a real and an imaginary part, and you should be able to verify this equation is equivalent to the following set of two equations:

  1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ)
  2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ)

[Two complex numbers a + ib and c + id are equal if, and only if, their real and imaginary parts are the same. However, note the i factor on the right-hand side of the equation, so we get: a + ib = i·(c + id) = −d + i·c.] The Schrödinger equation above also assumes free space (i.e. zero potential energy: V = 0) but, in addition – see my previous post – it also assumes a zero rest mass of the elementary particle (E₀ = 0). So just assume E₀ = V = 0 in de Broglie’s elementary ψ(θ) = ψ(x, t) = a·e^(iθ) = a·e^(−i[(E₀ + p²/(2m) + V)·t − p∙x]/ħ) wavefunction. So, in essence, we’re looking at the wavefunction of a massless particle here. Sounds like nonsense, doesn’t it? But… Well… That should be the wavefunction of a photon in free space then, right? 🙂
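If you want to verify that equivalence (and the equation itself) without doing the algebra by hand, here is a small sympy sketch of my own: it just checks that the elementary wavefunction with E₀ = V = 0, so with E = p²/(2m), satisfies Schrödinger’s equation as written above.

```python
# Check that psi = a*exp(-i*(p**2/(2*m)*t - p*x)/hbar) satisfies
# dpsi/dt = i*(hbar/(2*m))*Laplacian(psi) in one dimension.
import sympy as sp

x, t, p, m, hbar, a = sp.symbols('x t p m hbar a', positive=True)
E = p**2 / (2*m)                                   # kinetic energy only (E0 = V = 0)
psi = a * sp.exp(-sp.I * (E*t - p*x) / hbar)

lhs = sp.diff(psi, t)
rhs = sp.I * (hbar / (2*m)) * sp.diff(psi, x, 2)   # 1D Laplacian = second x-derivative
print(sp.simplify(lhs - rhs))                      # prints 0
```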

Maybe. Maybe not. Let’s go as far as we can.

The energy of a zero-mass particle

What m would we use for a photon? Its rest mass is zero, but it’s got energy and, hence, an equivalent mass. That mass is given by the m = E/c² mass-energy equivalence. We also know a photon has momentum, and it’s equal to its energy divided by c: p = m·c = E/c. [I know the notation is somewhat confusing: E is, obviously, not the magnitude of E here: it’s energy!] Both yield the same result. We get: m·c = E/c ⇔ m = E/c² ⇔ E = m·c².
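Just to put a number on this (my own back-of-the-envelope example, assuming a visible-light photon of about 2.5 eV), the equivalent mass and momentum are tiny:

```python
# Equivalent mass and momentum of a visible-light photon of about 2.5 eV.
from scipy.constants import c, e   # speed of light [m/s], elementary charge [C]

E = 2.5 * e          # photon energy in joule (2.5 eV is an assumed example value)
m = E / c**2         # equivalent mass: m = E/c^2  (~4.5e-36 kg)
p = E / c            # momentum: p = E/c           (~1.3e-27 kg*m/s)
print(m, p)
```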

OK. Next step. Well… I’ve always been intrigued by the fact that the kinetic energy of a photon, using the E = m·v²/2 = m·c²/2 formula, is only half of its total energy E = m·c². Half: 1/2. That 1/2 factor is intriguing. Where’s the rest of the energy? It’s really a contradiction: our photon has no rest mass, and there’s no potential here, but its total energy is still twice its kinetic energy. Quid?

There’s only one conclusion: just because of its sheer existence, it must have some hidden energy, and that hidden energy is also equal to E = m·c²/2, and so the kinetic and hidden energy add up to E = m·c².

Huh? Hidden energy? I must be joking, right?

Well… No. No joke. I am tempted to call it the imaginary energy, because it’s linked to the imaginary part of the wavefunction—but then it’s everything but imaginary: the imaginary part of the wavefunction is just as real as its real part. [I know that sounds a bit nonsensical, but… Well… Think about it: it does make sense.]

Back to that factor 1/2. You may or may not remember it popped up when we were calculating the group and the phase velocity of the wavefunction respectively, again assuming zero rest mass, and zero potential. [Note that the rest mass term is mathematically equivalent to the potential term in both the wavefunction as well as in Schrödinger’s equation: E₀·t + V·t = (E₀ + V)·t, and V·ψ + E₀·ψ = (V + E₀)·ψ—obviously!]

In fact, let me quickly show you that calculation again: the de Broglie relations tell us that the k and the ω in the e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction (i.e. the spatial and temporal frequency respectively) are equal to k = p/ħ, and ω = E/ħ. If we would now use the kinetic energy formula E = m·v²/2 – which we can also write as E = m·v·v/2 = p·v/2 = p·p/2m = p²/2m, with v = p/m the classical velocity of the elementary particle that Louis de Broglie was thinking of – then we can calculate the group velocity of our e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) as:

vg = ∂ω/∂k = ∂[E/ħ]/∂[p/ħ] = ∂E/∂p = ∂[p²/2m]/∂p = 2p/2m = p/m = v

[Don’t tell me I can’t treat m as a constant when calculating ∂ω/∂k: I can. Think about it.] Now the phase velocity. The phase velocity of our e^(i(kx − ωt)) is only half of that. Again, we get that 1/2 factor:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = (p²/2m)/p = p/2m = v/2
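Here is the same calculation done symbolically, as a quick sympy sketch, using the E = p²/(2m) formula we started from:

```python
# Group and phase velocity of exp(i(kx - wt)) with E = p^2/(2m), k = p/hbar, w = E/hbar.
import sympy as sp

p, m, hbar = sp.symbols('p m hbar', positive=True)
E = p**2 / (2*m)
k, w = p / hbar, E / hbar                 # de Broglie relations

v_group = sp.diff(w, p) / sp.diff(k, p)   # dw/dk, computed via dw/dp divided by dk/dp
v_phase = w / k

print(sp.simplify(v_group))   # p/m      -> the classical velocity v
print(sp.simplify(v_phase))   # p/(2*m)  -> v/2
```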

Strange, isn’t it? Why would we get a different value for the phase velocity here? It’s not like we have two different frequencies here, do we? You may also note that the phase velocity turns out to be smaller than the group velocity, which is quite exceptional as well! So what’s the matter?

Well… The answer is: we do seem to have two frequencies here while, at the same time, it’s just one wave. There is only one k and ω here but, as I mentioned a couple of times already, that e^(i(kx − ωt)) wavefunction seems to give you two functions for the price of one—one real and one imaginary: e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt). So are we adding waves, or are we not? It’s a deep question. In my previous post, I said we were adding separate waves, but now I am thinking: no. We’re not. That sine and cosine are part of one and the same whole. Indeed, the apparent contradiction (i.e. the different group and phase velocity) gets solved if we’d use the E = m∙v² formula rather than the kinetic energy E = m∙v²/2. Indeed, assuming that E = m∙v² formula also applies to our zero-mass particle (I mean zero rest mass, of course), and measuring time and distance in natural units (so c = 1), we have:

E = m∙c² = m and p = m∙c = m, so we get: E = m = p

Waw! What a weird combination, isn’t it? But… Well… It’s OK. [You tell me why it wouldn’t be OK. It’s true we’re glossing over the dimensions here, but natural units are natural units, and so c = c² = 1. So… Well… No worries!] The point is: that E = m = p equality yields extremely simple but also very sensible results. For the group velocity of our e^(i(kx − ωt)) wavefunction, we get:

vg = ∂ω/∂k = ∂[E/ħ]/∂[p/ħ] = ∂E/∂p = ∂p/∂p = 1

So that’s the velocity of our zero-mass particle (c, i.e. the speed of light) expressed in natural units once more—just like what we found before. For the phase velocity, we get:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = p/p = 1

Same result! No factor 1/2 here! Isn’t that great? My ‘hidden energy theory’ makes a lot of sense. 🙂 In fact, I had mentioned a couple of times already that the E = m∙v² relation comes out of the de Broglie relations if we just multiply the two and use the v = f·λ relation:

  1. f·λ = (E/h)·(h/p) = E/p
  2. v = f·λ ⇒ f·λ = v = E/p ⇔ E = v·p = v·(m·v) ⇒ E = m·v²

But so I had no good explanation for this. I have one now: E = m·v² is the correct energy formula for our zero-mass particle. 🙂

The quantization of energy and the zero-mass particle

Let’s now think about the quantization of energy. What’s the smallest value for E that we could possibly think of? That’s h, isn’t it? That’s the energy of one cycle of an oscillation according to the Planck-Einstein relation (E = h·f). Well… Perhaps it’s ħ? Because… Well… We saw energy levels were separated by ħ·ω, rather than h·f, when studying the blackbody radiation problem. So is it ħ = h/2π? Is the natural unit a radian (i.e. a unit distance), rather than a cycle?

Neither is natural, I’d say. We also have the Uncertainty Principle, which suggests the smallest possible energy value is ħ/2, because Δx·Δp = Δt·ΔE = ħ/2 in the minimum-uncertainty case.

Huh? What’s the logic here?

Well… I am not quite sure but my intuition tells me the quantum of energy must be related to the quantum of time, and the quantum of distance.

Huh? The quantum of time? The quantum of distance? What’s that? The Planck scale?

No. Or… Well… Let me correct that: not necessarily. I am just thinking in terms of logical concepts here. Logically, as we think of the smallest of the smallest, our time and distance variables must become count variables, so they can only take on some integer value n = 0, 1, 2, etcetera. So then we’re literally counting in time and/or distance units. So Δx and Δt are then equal to 1. Hence, Δp and ΔE are then equal to Δp = ΔE = ħ/2. Just think of the radian (i.e. the unit in which we measure θ) as measuring both time as well as distance. Makes sense, no?

No? Well… Sorry. I need to move on. So the smallest possible value for m = E = p would be ħ/2. Let’s substitute that in Schrödinger’s equation, or in that set of equations Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ) and Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ). We get:

  1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ) = −(ħ/ħ)·Im(∇²ψ) = −Im(∇²ψ)
  2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ) = (ħ/ħ)·Re(∇²ψ) = Re(∇²ψ)

Bingo! The Re(∂ψ/∂t) = −Im(∇²ψ) and Im(∂ψ/∂t) = Re(∇²ψ) equations were what I was looking for. Indeed, I wanted to find something that was structurally similar to the ∂B/∂t = –∇×E and ∂E/∂t = ∇×B equations—and something that was exactly similar: no coefficients in front or anything. 🙂

What about our wavefunction? Using the de Broglie relations once more (k = p/ħ, and ω = E/ħ), our e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) now becomes:

e^(i(kx − ωt)) = e^(i[(ħ/2)·x − (ħ/2)·t]/ħ) = e^(i(x/2 − t/2)) = cos[(x−t)/2] + i∙sin[(x−t)/2]

Hmm… Interesting! So we’ve got that 1/2 factor now in the argument of our wavefunction! I really feel I am close to squaring the circle here. 🙂 Indeed, it must be possible to relate the ∂B/∂t = –∇×E and ∂E/∂t = c²∇×B equations to the Re(∂ψ/∂t) = −Im(∇²ψ) and Im(∂ψ/∂t) = Re(∇²ψ) equations. I am sure it’s a complicated exercise. It’s likely to involve the formula for the Lorentz force, which says that the force on a unit charge is equal to E + v×B, with v the velocity of the charge. Why? Note the vector cross-product. Also note that ∂B/∂t and ∂E/∂t are vector-valued functions, not scalar-valued functions. Hence, in that sense, ∂B/∂t and ∂E/∂t are not like the Re(∂ψ/∂t) and/or Im(∂ψ/∂t) functions. But… Well… For the rest, think of it: E and B are orthogonal vectors, and that’s how we usually interpret the real and imaginary part of a complex number as well: the real and imaginary axis are orthogonal too!

So I am almost there. Who can help me prove what I want to prove here? The two propagation mechanisms are the “same-same but different”, as they say in Asia. The difference between the two propagation mechanisms must also be related to that fundamental dichotomy in Nature: the distinction between bosons and fermions. Indeed, when combining two directional quantities (i.e. two vectors), we like to think there are four different ways of doing that, as shown below. However, when we’re only interested in the magnitude of the result (and not in its direction), then the first and third result below are really the same, as are the second and fourth combination. Now, we’ve got pretty much the same in quantum math: we can, in theory, combine complex-valued amplitudes in four different ways but, in practice, we only have two (rather than four) types of behavior: bosons versus fermions.

[Illustration: the four ways of combining two vectors]

Is our zero-mass particle just the electric field vector?

Let’s analyze that e^(i(x/2 − t/2)) = cos[(x−t)/2] + i∙sin[(x−t)/2] wavefunction some more. It’s easy to represent it graphically. The following animation does the trick:

[Animation: a circularly polarized electromagnetic wave, showing the E vector only]

I am sure you’ve seen this animation before: it represents a circularly polarized electromagnetic wave… Well… Let me be precise: it presents the electric field vector (E) of such wave only. The B vector is not shown here, but you know where and what it is: orthogonal to the E vector, as shown below—for a linearly polarized wave.

[Illustration: the E and B vectors of a linearly polarized electromagnetic wave]

Let’s think some more. What is that e^(i(x/2 − t/2)) function? It’s subject to conceiving time and distance as countable variables, right? I am tempted to say: as discrete variables, but I won’t go that far—not now—because the countability may be related to a particular interpretation of quantum physics. So I need to think about that. In any case… The point is that x can only take on values like 0, 1, 2, etcetera. And the same goes for t. To make things easy, we’ll not consider negative values for x right now (and, obviously, not for t either). So we’ve got an infinite set of points like:

  • e^(i(0/2 − 0/2)) = cos(0) + i∙sin(0)
  • e^(i(1/2 − 0/2)) = cos(1/2) + i∙sin(1/2)
  • e^(i(0/2 − 1/2)) = cos(−1/2) + i∙sin(−1/2)
  • e^(i(1/2 − 1/2)) = cos(0) + i∙sin(0)

Now, I quickly opened Excel and calculated those cosine and sine values for x and t going from 0 to 14. It’s really easy. Just five minutes of work. You should do it yourself as an exercise. The result is shown below. Both graphs connect 14×14 = 196 data points, but you can see what’s going on: this does, effectively, represent the elementary wavefunction of a particle traveling in spacetime. In fact, you can see its speed is equal to 1, i.e. it effectively travels at the speed of light, as it should: the wave velocity is v = f·λ = (ω/2π)·(2π/k) = ω/k = (1/2)/(1/2) = 1. The amplitude of our wave doesn’t change along the x = t diagonal. As the Last Samurai puts it, just before he moves to the Other World: “Perfect! They are all perfect!” 🙂
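If you do not have Excel at hand, the same exercise takes a few lines of Python (numpy). This is just a sketch that reproduces the grid of cosine and sine values and checks that the amplitude does not change along the x = t diagonal:

```python
# Evaluate cos((x - t)/2) and sin((x - t)/2) on a grid of integer x and t values.
import numpy as np

x = np.arange(0, 15)                 # x = 0, 1, ..., 14
t = np.arange(0, 15)                 # t = 0, 1, ..., 14
X, T = np.meshgrid(x, t)

theta = (X - T) / 2
real_part = np.cos(theta)            # Re of exp(i*(x - t)/2)
imag_part = np.sin(theta)            # Im of exp(i*(x - t)/2)

# Along the x = t diagonal the real part stays 1 and the imaginary part stays 0:
print(np.allclose(np.diag(real_part), 1.0), np.allclose(np.diag(imag_part), 0.0))
```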

[Graphs: the imaginary and real parts of e^(i(x−t)/2) over the grid of (x, t) points]

In fact, in case you wonder how the quantum vacuum could possibly look like, you should probably think of these discrete spacetime points, and some complex-valued wave that travels as it does in the illustration above.

Of course, that elementary wavefunction above does not localize our particle. For that, we’d have to add a potentially infinite number of such elementary wavefunctions, so we’d write the wavefunction as a sum of ∑ aⱼ·e^(iθⱼ) functions. [I use the j symbol here for the subscript, rather than the more conventional i symbol for a subscript, so as to avoid confusion with the symbol used for the imaginary unit.] The aⱼ coefficients are the contribution that each of these elementary wavefunctions would make to the composite wave. What could they possibly be? Not sure. Let’s first look at the argument of our elementary component wavefunctions. We’d inject uncertainty in it. So we’d say that m = E = p is equal to

m = E = p = ħ/2 + j·ħ/2 with j = 0, 1, 2,…

That amounts to writing: m = E = p = ħ/2, ħ, 3ħ/2, 2ħ, 5ħ/2, etcetera. Waw! That’s nice, isn’t it? My intuition tells me that our aⱼ coefficients will be smaller for higher j, so the aⱼ(j) function would be some decreasing function. What shape? Not sure. Let’s first sum up our thoughts so far:

  1. The elementary wavefunction of a zero-mass particle (again, I mean zero rest mass) in free space is associated with an energy that’s equal to ħ/2.
  2. The zero-mass particle travels at the speed of light, obviously (because it has zero rest mass), and its kinetic energy is equal to E = m·v²/2 = m·c²/2.
  3. However, its total energy is equal to E = m·v² = m·c²: it has some hidden energy. Why? Just because it exists.
  4. We may associate its kinetic energy with the real part of its wavefunction, and the hidden energy with its imaginary part. However, you should remember that the imaginary part of the wavefunction is as essential as its real part, so the hidden energy is equally real. 🙂

So… Well… Isn’t this just nice?

I think it is. Another obvious advantage of this way of looking at the elementary wavefunction is that – at first glance at least – it provides an intuitive understanding of why we need to take the (absolute) square of the wavefunction to find the probability of our particle being at some point in space and time. The energy of a wave is proportional to the square of its amplitude. Now, it is reasonable to assume the probability of finding our (point) particle would be proportional to the energy and, hence, to the square of the amplitude of the wavefunction, which is given by those aⱼ coefficients.

Huh?

OK. You’re right. I am a bit too fast here. It’s a bit more complicated than that, of course. The argument of probability being proportional to energy being proportional to the square of the amplitude of the wavefunction only works for a single wave a·e^(iθ). The argument does not hold water for a sum of functions ∑ aⱼ·e^(iθⱼ). Let’s write it all out. Taking our m = E = p = ħ/2 + j·ħ/2 = ħ/2, ħ, 3ħ/2, 2ħ, 5ħ/2,… formula into account, this sum would look like:

a₁·e^(i(x − t)(1/2)) + a₂·e^(i(x − t)(2/2)) + a₃·e^(i(x − t)(3/2)) + a₄·e^(i(x − t)(4/2)) + …

But—Hey! We can write this as some power series, can’t we? We just need to add a₀·e^(i(x − t)(0/2)) = a₀, and then… Well… It’s not so easy, actually. Who can help me? I am trying to find something like this:

[Image: a power series formula]

Or… Well… Perhaps something like this:

[Image: another power series formula]

Whatever power series it is, we should be able to relate it to this one—I’d hope:

[Image: a third power series formula]

Hmm… […] It looks like I’ll need to re-visit this, but I am sure it’s going to work out. Unfortunately, I’ve got no more time today, so I’ll let you have some fun now with all of this. 🙂 By the way, note that the result of the first power series is only valid for |x| < 1. 🙂
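In the meantime, here is a purely numerical way to play with such a sum. I am assuming some decreasing coefficients here, aⱼ = 1/j!, purely as an illustration of the kind of series I am after (not as a claim that these are the right coefficients). With that choice, the infinite sum happens to be the power series of the exponential function evaluated at e^(i(x−t)/2):

```python
# Partial sums of sum_j a_j * exp(i*(x - t)*j/2) with assumed coefficients a_j = 1/j!.
# With that choice the infinite sum equals exp(exp(i*(x - t)/2)).
import numpy as np
from math import factorial

def composite_wave(x, t, n_terms=20):
    z = np.exp(1j * (x - t) / 2)
    return sum(z**j / factorial(j) for j in range(n_terms))

x, t = 3.0, 1.0
print(composite_wave(x, t))               # partial sum (converges quickly)
print(np.exp(np.exp(1j * (x - t) / 2)))   # the closed form it converges to
```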

Note 1: What we should also do now is to re-insert mass in the equations. That should not be too difficult. It’s consistent with classical theory: the total energy of some moving mass is E = m·c², out of which m·v²/2 is the classical kinetic energy. All the rest – i.e. m·c² − m·v²/2 – is potential energy, and so that includes the energy that’s ‘hidden’ in the imaginary part of the wavefunction. 🙂

Note 2: I really didn’t pay much attention to dimensions when doing all of these manipulations above but… Well… I don’t think I did anything wrong. Just to give you some more feel for that wavefunction e^(i(kx − ωt)), please do a dimensional analysis of its argument. I mean, k = p/ħ, and ω = E/ħ, so check the dimensions (and see the numerical example right after this list):

  • Momentum is expressed in newton·second, and we divide it by the quantum of action, which is expressed in newton·meter·second. So we get something per meter. But then we multiply it with x, so we get a dimensionless number.
  • The same is true for the ωt term. Energy is expressed in joule, i.e. newton·meter, and so we divide it by ħ once more, so we get something per second. But then we multiply it with t, so… Well… We do get a dimensionless number: a number that’s expressed in radians, to be precise. And so the radian does, indeed, integrate both the time as well as the distance dimension. 🙂
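Here is that numerical example: my own numbers, for green light with an assumed wavelength of 500 nanometer. You can check that k and ω come out in radians per meter and radians per second respectively, so k·x and ω·t are indeed pure numbers:

```python
# k = p/hbar and w = E/hbar for a green-light photon (lambda = 500 nm, an assumed example).
from scipy.constants import h, hbar, c
from math import pi

lam = 500e-9                 # wavelength in meter
E = h * c / lam              # photon energy in joule (E = h*f with f = c/lambda)
p = E / c                    # momentum in newton*second (kg*m/s)

k = p / hbar                 # radians per meter  -> k*x is a pure number
w = E / hbar                 # radians per second -> w*t is a pure number

print(k, 2 * pi / lam)       # both ~1.26e7 rad/m
print(w, 2 * pi * c / lam)   # both ~3.77e15 rad/s
```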

Light

I started the two previous posts attempting to justify why we need all these mathematical formulas to understand stuff: because otherwise we just keep on repeating very simplistic but nonsensical things such as ‘matter behaves (sometimes) like light’, ‘light behaves (sometimes) like matter’ or, combining both, ‘light and matter behave like wavicles’. Indeed: what does ‘like‘ mean? Like the same but different? 🙂 However, I have not said much about light so far.

Light and matter are two very different things. For matter, we have quantum mechanics. For light, we have quantum electrodynamics (QED). However, QED is not only a quantum theory about light: as Feynman pointed out in his little but exquisite 1985 book on quantum electrodynamics (QED: The Strange Theory of Light and Matter), it is first and foremost a theory about how light interacts with matter. However, let’s limit ourselves here to light.

In classical physics, light is an electromagnetic wave: it just travels on and on and on because of that wonderful interaction between electric and magnetic fields. A changing electric field induces a magnetic field, the changing magnetic field then induces an electric field, and then the changing electric field induces a magnetic field, and… Well, you get the idea: it goes on and on and on. This wonderful machinery is summarized in Maxwell’s equations – and most beautifully so in the so-called Heaviside form of these equations, which assume a charge-free vacuum space (so there are no other charges lying around exerting a force on the electromagnetic wave or on the (charged) particle whose behavior we want to study) and they also make abstraction of other complications such as electric currents (so there are no moving charges going around either).

I reproduced Heaviside’s Maxwell equations below as well as an animated gif which is supposed to illustrate the dynamics explained above. [In case you wonder who Heaviside was… Well… Check it out: he was quite a character.] The animation is not all that great but OK enough. And don’t worry if you don’t understand the equations – just note the following:

  1. The electric and magnetic field E and B are represented by perpendicular oscillating vectors.
  2. The first and third equation (∇·E = 0 and ∇·B = 0) state that there are no static or moving charges around and, hence, they do not have any impact on (the flux of) E and B.
  3. The second and fourth equation are the ones that are essential. Note the time derivatives (∂/∂t): E and B oscillate and perpetuate each other by inducing new circulation of B and E.

[Image: the Heaviside form of Maxwell’s equations in free space: ∇·E = 0, ∇×E = −∂B/∂t, ∇·B = 0, ∇×B = μ₀ε₀·∂E/∂t]

The constants μ and ε in the fourth equation are the so-called permeability (μ) and permittivity (ε) of the medium, and μ₀ and ε₀ are the values for these constants in a vacuum. Now, it is interesting to note that μ₀ε₀ equals 1/c², so a changing electric field only produces a tiny change in the circulation of the magnetic field. That’s got something to do with magnetism being a ‘relativistic’ effect but I won’t explore that here – except for noting that the final Lorentz force on a (charged) particle F = q(E + v×B) will be the same regardless of the reference frame: the reference frame will determine the mixture of E and B fields, but there is only one combined force on a charged particle in the end, regardless of the reference frame (inertial or moving at whatever speed – relativistic (i.e. close to c) or not). [The forces F, E and B on a moving (charged) particle are shown below the animation of the electromagnetic wave.] In other words, Maxwell’s equations are compatible with both special as well as general relativity. In fact, Einstein observed that these equations ensure that electromagnetic waves always travel at speed c (to use his own words: “Light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body.”) and it’s this observation that led him to develop his special relativity theory.
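You can check that 1/c² relation numerically in a couple of lines, using the standard values of μ₀, ε₀ and c:

```python
# Confirm that mu_0 * epsilon_0 = 1/c^2 (to within measurement precision).
from scipy.constants import mu_0, epsilon_0, c

print(1 / (mu_0 * epsilon_0))   # ~8.988e16
print(c**2)                     # ~8.988e16
```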

[Animation: an electromagnetic wave, seen from the side]

[Illustration: the Lorentz force on a charged particle moving through electric and magnetic fields]

The other interesting thing to note is that there is energy in these oscillating fields and, hence, in the electromagnetic wave. Hence, if the wave hits an impenetrable barrier, such as a paper sheet, it exerts pressure on it – known as radiation pressure. [By the way, did you ever wonder why a light beam can travel through glass but not through paper? Check it out!] A very oft-quoted example is the following: if the effects of the sun’s radiation pressure on the Viking spacecraft had been ignored, the spacecraft would have missed its Mars orbit by about 15,000 kilometers. Another common example is more science fiction-oriented: the (theoretical) possibility of space ships using huge sails driven by sunlight (paper sails obviously – one should not use transparent plastic for that). 

I am mentioning radiation pressure because, although it is not that difficult to explain radiation pressure using classical electromagnetism (i.e. light as waves), the explanation provided by the ‘particle model’ of light is much more straightforward and, hence, a good starting point to discuss the particle nature of light:

  1. Electromagnetic radiation is quantized in particles called photons. We know that because of Max Planck’s work on black body radiation, which led to Planck’s relation: E = hν. Photons are bona fide particles in the so-called Standard Model of physics: they are defined as bosons with spin 1, but zero rest mass and no electric charge (as opposed to W bosons). They are denoted by the letter or symbol γ (gamma), so that’s the same symbol that’s used to denote gamma rays. [Gamma rays are high-energy electromagnetic radiation (i.e. ‘light’) with a very definite particle character. Indeed, because of their very short wavelength – less than 10 picometer (10×10⁻¹² m) – and high energy (hundreds of keV, as opposed to visible light, which has a wavelength between 380 and 750 nanometer (380–750×10⁻⁹ m) and a typical energy of 2 to 3 eV only, so a few hundred thousand times less), they are capable of penetrating through thick layers of concrete, and through the human body – where they might damage intracellular bodies and create cancer. Lead is a more efficient barrier, obviously: a shield of a few centimeters of lead will stop most of them. In case you are not sure about the relation between energy and penetration depth, see the Post Scriptum.]
  2. Although photons are considered to have zero rest mass, they have energy and, hence, an equivalent relativistic mass (m = E/c²) and, therefore, also momentum. Indeed, energy and momentum are related through the following (relativistic) formula: E = (p²c² + m₀²c⁴)^(1/2) (the non-relativistic version is simply E = p²/2m₀ but – quite obviously – an approximation that cannot be used in this case, if only because the denominator would be zero). This simplifies to E = pc or p = E/c in this case, as the short check after the diagram below also shows. This basically says that the energy (E) and the momentum (p) of a photon are proportional, with c – the velocity of the wave – as the factor of proportionality.
  3. The generation of radiation pressure can then be directly related to the momentum property of photons, as shown in the diagram below – which shows how radiation force could – perhaps – propel a space sailing ship. [Nice idea, but I’d rather bet on nuclear-thermal rocket technology.]

[Diagram: radiation pressure from sunlight propelling a space sail]
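As announced, here is that short symbolic check of point 2 (a sketch only): it confirms that the relativistic energy-momentum relation reduces to E = pc when the rest mass is zero, and to the rest energy plus p²/2m₀ in the non-relativistic limit:

```python
# E = sqrt(p^2*c^2 + m0^2*c^4): the zero-rest-mass and non-relativistic limits.
import sympy as sp

p, c, m0 = sp.symbols('p c m0', positive=True)
E = sp.sqrt(p**2 * c**2 + m0**2 * c**4)

print(sp.simplify(E.subs(m0, 0)))        # c*p, i.e. E = pc for a photon
print(sp.series(E, p, 0, 3).removeO())   # m0*c**2 + p**2/(2*m0): rest energy plus p^2/2m0
```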

I said in my introduction to this post that light and matter are two very different things. They are, and the logic connecting matter waves and electromagnetic radiation is not straightforward – if there is any. Let’s look at the two equations that are supposed to relate the two – the de Broglie relation and the Planck relation:

  1. The de Broglie relation E = h·f assigns a de Broglie frequency (i.e. the frequency of a complex-valued probability amplitude function) to a particle with mass m through the mass-energy equivalence relation E = mc². However, the concept of a matter wave is rather complicated (if you don’t think so: read the two previous posts): matter waves have little – if anything – in common with electromagnetic waves. Feynman calls electromagnetic waves ‘real’ waves (just like water waves, or sound waves, or whatever other wave) as opposed to… Well – he does stop short of calling matter waves unreal but it’s obvious they look ‘less real’ than ‘real waves’. Indeed, these complex-valued psi functions (Ψ) – for which we have to square the modulus to get the probability of something happening in space and time, or to measure the likely value of some observable property of the system – are obviously ‘something else’! [I tried to convey their ‘reality’ as well as I could in my previous post, but I am not sure I did a good job – not really, I’m afraid.]
  2. The Planck relation E = hν relates the energy of a photon – the so-called quantum of light (das Lichtquant as Einstein called it in 1905 – the term ‘photon’ was coined some 20 years later it is said) – to the frequency of the electromagnetic wave of which it is part. [That Greek symbol (ν) – it’s the letter nu (the ‘v’ in Greek is amalgamated with the ‘b’) – is quite confusing: it’s not the v for velocity.]

So, while the Planck relation (which goes back to 1900) obviously inspired Louis de Broglie (who introduced his theory on electron waves some 20 years later – in his PhD thesis of 1924 to be precise), their equations look the same but are different – and that’s probably the main reason why we keep two different symbols – f and ν – for the two frequencies.

Photons and electrons are obviously very different particles as well. Just to state the obvious:

  1. Photons have zero rest mass, travel at the speed of light, have no electric charge, are bosons, and so on and so on, and so they behave differently (see, for example, my post on Bose and Fermi, which explains why one cannot make proton beam lasers). [As for the boson qualification, bosons are force carriers: photons in particular mediate (or carry) the electromagnetic force.]
  2. Electrons do not weigh much and, hence, can attain speeds close to light (but it requires tremendous amounts of energy to accelerate them very near c) but so they do have some mass, they have electric charge (photons are electrically neutral), and they are fermions – which means they’re an entirely different ‘beast’ so to say when it comes to combining their probability amplitudes (so that’s why they’ll never get together in some kind of electron laser beam either – just like protons or neutrons – as I explain in my post on Bose and Fermi indeed).

That being said, there’s some connection of course (and that’s what’s being explored in QED):

  1. Accelerating electric charges cause electromagnetic radiation (so moving charges (the negatively charged electrons) cause the electromagnetic field oscillations, but it’s the (neutral) photons that carry it).
  2. Electrons absorb and emit photons as they gain/lose energy when going from one energy level to the other.
  3. Most important of all, individual photons – just like electrons – also have a probability amplitude function – so that’s a de Broglie or matter wave function if you prefer that term.

That means photons can also be described in terms of some kind of complex wave packet, just like that electron I kept analyzing in my previous posts – until I (and surely you) got tired of it. That means we’re presented with the same type of mathematics. For starters, we cannot be happy with assigning a unique frequency to our (complex-valued) de Broglie wave, because that would – once again – mean that we have no clue whatsoever where our photon actually is. So, while the shape of the wave function below might well describe the E and B of a bona fide electromagnetic wave, it cannot describe the (real or imaginary) part of the probability amplitude of the photons we would associate with that wave.

[Illustration: a wave with one constant frequency]

So that doesn’t work. We’re back at analyzing wave packets – and, by now, you know how complicated that can be: I am sure you don’t want me to mention Fourier transforms again! So let’s turn to Feynman once again – the greatest of all (physics) teachers – to get his take on it. Now, the surprising thing is that, in his 1985 Lectures on Quantum Electrodynamics (QED), he doesn’t really care about the amplitude of a photon to be at point x at time t. What he needs to know is:

  1. The amplitude of a photon to go from point A to B, and
  2. The amplitude of a photon to be absorbed/emitted by an electron (a photon-electron coupling as it’s called).

And then he needs only one more thing: the amplitude of an electron to go from point A to B. That’s all he needs to explain EVERYTHING – in quantum electrodynamics that is. So that’s partial reflection, diffraction, interference… Whatever! In Feynman’s own words: “Out of these three amplitudes, we can make the whole world, aside from what goes on in nuclei, and gravitation, as always!” So let’s have a look at it.

I’ve shown some of his illustrations already in the Bose and Fermi post I mentioned above. In Feynman’s analysis, photons get emitted by some source and, as soon as they do, they travel with some stopwatch, as illustrated below. The speed with which the hand of the stopwatch turns is the angular frequency of the phase of the probability amplitude, and its length is the modulus – which, you’ll remember, we need to square to get a probability of something, so for the illustration below we have a probability of 0.2×0.2 = 4%. Probability of what? Relax. Let’s go step by step.

[Illustration: Feynman’s stopwatch representation of a probability amplitude, with a hand of length 0.2]

Let’s first relate this probability amplitude stopwatch to a theoretical wave packet, such as the one below – which is a nice Gaussian wave packet:

[Illustration: an example of a (Gaussian) wave packet]

This thing really fits the bill: it’s associated with a nice Gaussian probability distribution (also known as a normal distribution because, despite its ideal shape from a math point of view, it actually does describe many real-life phenomena), and we can easily relate the stopwatch’s angular frequency to the angular frequency of the phase of the wave. The only thing you’ll need to remember is that its amplitude is not constant in space and time: indeed, this photon is somewhere sometime, and that means it’s no longer there when it’s gone, and also that it’s not there when it hasn’t arrived yet. 🙂 So, as long as you remember that, Feynman’s stopwatch is a great way to represent a photon (or any particle really). [Just think of a stopwatch in your hand with no hand, but then suddenly that hand grows from zero to 0.2 (or some other random value between 0 and 1) and then shrinks back from that random value to 0 as the photon whizzes by. […] Or find some other creative interpretation if you don’t like this one. :-)]

Now, of course we do not know at what time the photon leaves the source and so the hand of the stopwatch could be at 2 o’clock, 9 o’clock or whatever: so the phase could be shifted by any value really. However, the thing to note is that the stopwatch’s hand goes around and around at a steady (angular) speed.

That’s OK. We can’t know where the photon is because we’re obviously assuming a nice standardized light source emitting polarized light with a very specific color, i.e. all photons have the same frequency (so we don’t have to worry about spin and all that). Indeed, because we’re going to add and multiply amplitudes, we have to keep it simple (the complicated things should be left to clever people – or academics). More importantly, it’s OK because we don’t need to know the exact position of the hand of the stopwatch as the photon leaves the source in order to explain phenomena like the partial reflection of light on glass. What matters there is only how much the stopwatch hand turns in the short time it takes to go from the front surface of the glass to its back surface. That difference in phase is independent of the position of the stopwatch hand as it reaches the glass: it only depends on the angular frequency (i.e. the energy of the photon, or the frequency of the light beam) and the thickness of the glass sheet. The two cases below present two possibilities: a 5% chance of reflection and a 16% chance of reflection (16% is actually a maximum, as Feynman shows in that little book, but that doesn’t matter here).

[Illustration: partial reflection on a glass sheet, adding the front-surface and back-surface arrows for two thicknesses, giving 5% and 16%]
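Here is a toy version of that arrow-adding exercise, my own sketch only: both arrows get a length of 0.2, the front-surface arrow is reversed in phase (as Feynman does in his little book), and the reflection probability then swings between 0% and 16% as the relative phase, i.e. the thickness of the glass, changes:

```python
# Add the two 'stopwatch arrows' for front- and back-surface reflection and square the result.
import numpy as np

def reflection_probability(phase):
    front = 0.2 * np.exp(1j * np.pi)   # front-surface arrow, reversed in phase
    back = 0.2 * np.exp(1j * phase)    # back-surface arrow, extra turn made inside the glass
    return abs(front + back) ** 2

phases = np.linspace(0, 2 * np.pi, 7)
print([round(reflection_probability(ph), 3) for ph in phases])
# varies between 0.0 and 0.16 (i.e. 0% to 16%), depending on the thickness of the glass
```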

But – Hey! – I am suddenly talking amplitudes for reflection here, and the probabilities that I am calculating (by adding amplitudes, not probabilities) are also (partial) reflection probabilities. Damn! YOU ARE SMART! It’s true. But you get the idea, and I told you already that Feynman is not interested in the probability of a photon just being here or there or wherever. He’s interested in (1) the amplitude of it going from the source (i.e. some point A) to the glass surface (i.e. some other point B), and then (2) the amplitude of photon-electron couplings – which determine the above amplitudes for being reflected (i.e. being (back)scattered by an electron actually).

So what? Well… Nothing. That’s it. I just wanted to give you some sense of de Broglie waves for photons. The thing to note is that they’re like de Broglie waves for electrons. So they are as real or unreal as these electron waves, and they have close to nothing to do with the electromagnetic wave of which they are part. The only thing that relates them with that real wave, so to say, is their energy level, and so that determines their de Broglie wavelength. So, it’s strange to say, but we have two frequencies for a photon: E = hν and E = hf. The first one is the Planck relation (E = hν): it associates the energy of a photon with the frequency of the real-life electromagnetic wave. The second is the de Broglie relation (E = hf): once we’ve calculated the energy of a photon using E = hν, we associate a de Broglie wavelength with the photon. So we imagine it as a traveling stopwatch with angular frequency ω = 2πf.

So that’s it (for now). End of story.

[…]

Now, you may want to know something more about these other amplitudes (that’s what I would want), i.e. the amplitude of a photon to go from A to B and this coupling amplitude and whatever else that may or may not be relevant. Right you are: it’s fascinating stuff. For example, you may or may not be surprised that photons have an amplitude to travel faster or slower than light from A to B, and that they actually have many amplitudes to go from A to B: one for each possible path. [Does that mean that the path does not have to be straight? Yep. Light can take strange paths – and it’s the interplay (i.e. the interference) between all these amplitudes that determines the most probable path – which, fortunately (otherwise our amplitude theory would be worthless), turns out to be the straight line.] We can summarize this in a really short and nice formula for the P(A to B) amplitude [note that the ‘P’ stands for photon, not for probability – Feynman uses an E for the related amplitude for an electron, so he writes E(A to B)].

However, I won’t make this any more complicated right now and so I’ll just reveal that P(A to B) depends on the so-called spacetime interval. This spacetime interval (I) is equal to I = (z₂ − z₁)² + (y₂ − y₁)² + (x₂ − x₁)² − (t₂ − t₁)², with the time and spatial distance being measured in equivalent units (so we’d use light-seconds for the unit of distance or, for the unit of time, the time it takes for light to travel one meter). I am sure you’ve heard about this interval. It’s used to explain the famous light cone – which determines what’s past and future in respect to the here and now in spacetime (or the past and future of some event in spacetime) in terms of

  1. What could possibly have impacted the here and now (taking into account nothing can travel faster than light – even if we’ve mentioned some exceptions to this already, such as the phase velocity of a matter wave – but so that’s not a ‘signal’ and, hence, not in contradiction with relativity)?
  2. What could possibly be impacted by the here and now (again taking into account that nothing can travel faster than c)?

In short, the light cone defines the past, the here, and the future in spacetime in terms of (potential) causal relations. However, as this post has – once again – become too long already, I’ll need to write another post to discuss these other types of amplitudes – and how they are used in quantum electrodynamics. So my next post should probably say something about light-matter interaction, or on photons as the carriers of the electromagnetic force (both in light as well as in an atom – as it’s the electromagnetic force that keeps an electron in orbit around the (positively charged) nucleus). In case you wonder, yes, that’s Feynman diagrams – among other things.
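Here is a minimal sketch of that spacetime interval and the light cone classification it implies, with time and distance measured in equivalent units (c = 1) and with made-up example events:

```python
# Spacetime interval I = dz^2 + dy^2 + dx^2 - dt^2 (space-positive convention, c = 1).
def interval(x1, y1, z1, t1, x2, y2, z2, t2):
    return (x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2 - (t2 - t1)**2

def classify(I):
    if I < 0:
        return "time-like: the two events can be causally connected"
    if I > 0:
        return "space-like: no signal can link the two events"
    return "light-like: only light can connect the two events"

print(classify(interval(0, 0, 0, 0, 1, 0, 0, 2)))   # time-like
print(classify(interval(0, 0, 0, 0, 2, 0, 0, 1)))   # space-like
print(classify(interval(0, 0, 0, 0, 1, 0, 0, 1)))   # light-like
```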

Post scriptum: On frequency, wavelength and energy – and the particle- versus wave-like nature of electromagnetic waves

I wrote that gamma waves have a very definite particle character because of their very short wavelength. Indeed, most discussions of the electromagnetic spectrum will start by pointing out that higher frequencies or shorter wavelengths – higher frequency (f) implies shorter wavelength (λ) because the wavelength is the speed of the wave (c in this case) over the frequency: λ = c/f – will make the (electromagnetic) wave more particle-like. For example, I copied two illustrations from Feynman’s very first Lectures (Volume I, Lectures 2 and 5) in which he makes the point by showing

  1. The familiar table of the electromagnetic spectrum (we could easily add a column for the wavelength (just calculate λ = c/f) and the energy (E = hf) besides the frequency – see the quick calculation right after the illustrations below), and
  2. An illustration that shows what matter (a block of carbon of 1 cm thick in this case) looks like for an electromagnetic wave racing towards it. It does not look like Gruyère cheese, because Gruyère cheese is cheese with holes: matter is huge holes with just a tiny little bit of cheese! Indeed, at the micro-level, matter looks like a lot of nothing with only a few tiny specks of matter sprinkled about!

And so then he goes on to describe how ‘hard’ rays (i.e. rays with short wavelengths) just plow right through and so on and so on.

[Illustrations: the table of the electromagnetic spectrum, and a close-up view of the block of carbon]
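And here is the quick calculation I promised: the extra wavelength and energy columns, computed for a few representative frequencies (the example frequencies are just round values I picked):

```python
# Wavelength (lambda = c/f) and photon energy (E = h*f) for a few example frequencies.
from scipy.constants import c, h, e

for name, f in [("FM radio", 1e8), ("visible light", 5e14),
                ("X-rays", 1e18), ("gamma rays", 1e20)]:
    lam = c / f          # wavelength in meter
    E_eV = h * f / e     # photon energy in electronvolt
    print(f"{name:14s} f = {f:.0e} Hz  lambda = {lam:.2e} m  E = {E_eV:.2e} eV")
```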

Now it will probably sound very stupid to non-autodidacts but, for a very long time, I was vaguely intrigued that the amplitude of a wave doesn’t seem to matter when looking at the particle- versus wave-like character of electromagnetic waves. Electromagnetic waves are transverse waves so they oscillate up and down, perpendicular to the direction of travel (as opposed to longitudinal waves, such as sound waves or pressure waves for example: these oscillate back and forth – in the same direction of travel). And photon paths are represented by wiggly lines, so… Well, you may not believe it but that’s why I stupidly thought it’s the amplitude that should matter, not the wavelength.

Indeed, the illustration below – which could be an example of how E or B oscillates in space and time – would suggest that lower amplitudes (smaller A’s) are the key to ‘avoiding’ those specks of matter. And if one can’t do anything about amplitude, then one may be forgiven for thinking that longer wavelengths – not shorter ones – are the key to avoiding those little ‘obstacles’ presented by atoms or nuclei in some crystal or non-crystalline structure. [Just jot it down: more wiggly lines increase the chance of hitting something.] But… Both lower amplitudes as well as longer wavelengths imply less energy. Indeed, the energy of a wave is, in general, proportional to the square of its amplitude, and electromagnetic waves are no exception in this regard. As for wavelength, we have Planck’s relation. So what’s wrong in my very childish reasoning?

[Illustration: amplitude (A) and wavelength (λ) of a cosine wave]

As usual, the answer is easy for those who already know it: neither wavelength nor amplitude have anything to do with how much space this wave actually takes as it propagates. But of course! You didn’t know that? Well… Sorry. Now I do. The vertical y axis might measure E and B indeed, but the graph and the nice animation above should not make you think that these field vectors actually occupy some space. So you can think of electromagnetic waves as particle waves indeed: we’ve got ‘something’ that’s traveling in a straight line, and it’s traveling at the speed of light. That ‘something’ is a photon, and it can have high or low energy. If it’s low-energy, it’s like a speck of dust: even if it travels at the speed of light, it is easy to deflect (i.e. scatter), and the ’empty space’ in matter (which is, of course, not empty but full of all kinds of electromagnetic disturbances) may well feel like jelly to it: it will get stuck (read: it will be absorbed somewhere or not even get through the first layer of atoms at all). If it’s high-energy, then it’s a different story: then the photon is like a tiny but very powerful bullet – same size as the speck of dust, and same speed, but much, much heavier. Such a ‘bullet’ (e.g. a gamma ray photon) will indeed have a tendency to plow through matter like it’s air: it won’t care about all these low-energy fields in it.

It is, most probably, a very trivial point to make, but I thought it’s worth doing so.

[When thinking about the above, also remember the trivial relationship between energy and momentum for photons: p = E/c, so more energy means more momentum: a heavy truck crashing into your house will create more damage than a Mini at the same speed because the truck has much more momentum. So just use the mass-energy equivalence (E = mc²) and think about high-energy photons as armored vehicles and low-energy photons as mom-and-pop cars.]