**Post scriptum note added on 11 July 2016**: This is one of the more speculative posts which led to my e-publication analyzing the wavefunction as an energy propagation. With the benefit of hindsight, I would recommend that you first read the more recent *exposé* on the matter presented here, which you can find by clicking on the provided link.

**Original post**:

This post is, essentially, a continuation of my previous post, in which I juxtaposed the following images:

Both are the same, and then they’re not. The illustration on the right-hand side is a regular quantum-mechanical wavefunction, i.e. an *amplitude* wavefunction. You’ve seen that one before. In this case, the x-axis represents time, so we’re looking at the wavefunction at some particular point in space. [You know we can just switch the dimensions and it would all look the same.] The illustration on the left-hand side looks similar, but it’s *not* an amplitude wavefunction. The animation shows how the *electric* field vector (**E**) of an electromagnetic wave travels through space. Its shape is the same. So it’s the same *function*. Is it also the same reality?

Yes and no. And I would say: more no than yes—in this case, at least. Note that the animation does *not* show the accompanying magnetic field vector (**B**). That vector is equally essential in the electromagnetic propagation mechanism according to Maxwell’s equations, which—let me remind you—are:

- ∂**B**/∂t = –∇×**E**
- ∂**E**/∂t = ∇×**B**

In fact, I should write the second equation as ∂**E**/∂t = *c*^{2}∇×**B**, but then I assume we measure time and distance in equivalent units, so *c* = 1.
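To make this concrete, here is a small symbolic check—my own sketch, not part of the original argument—that the linearly polarized plane wave used below, with E = (0, cos θ, sin θ) and B = (0, −sin θ, cos θ), satisfies both curl equations (with *c* = 1) precisely when ω = k:

```python
# Symbolic check (sketch): the plane wave E = (0, cos θ, sin θ),
# B = (0, −sin θ, cos θ) with θ = k·x − ω·t satisfies
# ∂B/∂t = −∇×E and ∂E/∂t = ∇×B (c = 1) exactly when ω = k.
import sympy as sp

x, t, k, w = sp.symbols('x t k omega', real=True)
theta = k*x - w*t
E = sp.Matrix([0, sp.cos(theta), sp.sin(theta)])
B = sp.Matrix([0, -sp.sin(theta), sp.cos(theta)])

def curl(F):
    # Curl of a field that depends on x only: (0, −∂F_z/∂x, ∂F_y/∂x)
    return sp.Matrix([0, -sp.diff(F[2], x), sp.diff(F[1], x)])

eq1 = sp.simplify(sp.diff(B, t) + curl(E))  # ∂B/∂t + ∇×E → 0 iff ω = k
eq2 = sp.simplify(sp.diff(E, t) - curl(B))  # ∂E/∂t − ∇×B → 0 iff ω = k
print(eq1.subs(w, k).T, eq2.subs(w, k).T)
```

Both residuals reduce to (ω − k) times a cosine or sine, so they vanish identically once we substitute ω = k.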

You know that **E** and **B** are two aspects of one and the same thing: if we have one, then we have the other. To be precise, **B** is always orthogonal to **E** in the direction that’s given by the right-hand rule for the following vector cross-product: **B** = *e*_{x}×**E**, with *e*_{x} the unit vector pointing in the x-direction (i.e. the direction of propagation). The reality behind this is illustrated below for a *linearly* polarized electromagnetic wave.

The **B** = *e*_{x}×**E** equation is equivalent to writing **B** = *i*·**E**, which is equivalent to:

**B** = *i*·**E** = *e*^{i(π/2)}·*e*^{i(kx − ωt)} = cos(kx − ωt + π/2) + *i*·sin(kx − ωt + π/2) = −sin(kx − ωt) + *i*·cos(kx − ωt)
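A quick numerical spot-check of this identity—my own addition, just to verify the arithmetic—confirms that multiplying *e*^{iθ} by *i* = *e*^{i(π/2)} indeed yields −sin θ + *i*·cos θ:

```python
# Numerical spot-check (sketch): i·e^(iθ) = e^(iπ/2)·e^(iθ) = −sin θ + i·cos θ.
import cmath
import math

for theta in (0.0, 0.7, 2.1, -1.3):
    lhs = cmath.exp(1j*math.pi/2) * cmath.exp(1j*theta)  # e^(iπ/2)·e^(iθ)
    rhs = -math.sin(theta) + 1j*math.cos(theta)          # −sin θ + i·cos θ
    assert abs(lhs - rhs) < 1e-12
print("identity verified")
```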

Now, **E** and **B** have only two components each: E_{y} and E_{z}, and B_{y} and B_{z}. That’s only because we’re looking at some *ideal* or *elementary* electromagnetic wave here but… Well… Let’s just go along with it. 🙂 It is then easy to prove that the equation above amounts to writing:

- B_{y} = cos(kx − ωt + π/2) = −sin(kx − ωt) = −E_{z}
- B_{z} = sin(kx − ωt + π/2) = cos(kx − ωt) = E_{y}

We should now think of E_{y} and E_{z} as the real and imaginary part of some wavefunction, which we’ll denote as ψ_{E} = *e*^{i(kx − ωt)}. So we write:

**E** = (E_{y}, E_{z}) = E_{y} + *i*·E_{z} = cos(kx − ωt) + *i*·sin(kx − ωt) = *Re*(ψ_{E}) + *i*·*Im*(ψ_{E}) = ψ_{E} = *e*^{i(kx − ωt)}

What about **B**? We just do the same, so we write:

**B** = (B_{y}, B_{z}) = B_{y} + *i*·B_{z} = ψ_{B} = *i*·**E** = *i*·ψ_{E} = −sin(kx − ωt) + *i*·cos(kx − ωt) = −*Im*(ψ_{E}) + *i*·*Re*(ψ_{E})

Now we need to prove that ψ_{E} and ψ_{B} are regular wavefunctions, which amounts to proving that they satisfy Schrödinger’s equation, i.e. ∂ψ/∂t = *i*·(ħ/m)·∇^{2}ψ, for *both* ψ_{E} and ψ_{B}. [Note I use the Schrödinger equation for a zero-mass spin-zero particle here, which uses the ħ/m factor rather than the ħ/(2m) factor.] To prove that ψ_{E} and ψ_{B} are regular wavefunctions, we should prove that:

- *Re*(∂ψ_{E}/∂t) = −(ħ/m)·*Im*(∇^{2}ψ_{E}) and *Im*(∂ψ_{E}/∂t) = (ħ/m)·*Re*(∇^{2}ψ_{E}), and
- *Re*(∂ψ_{B}/∂t) = −(ħ/m)·*Im*(∇^{2}ψ_{B}) and *Im*(∂ψ_{B}/∂t) = (ħ/m)·*Re*(∇^{2}ψ_{B}).

Let’s do the calculations for the second pair of equations. The time derivative on the left-hand side is equal to:

∂ψ_{B}/∂t = −*i*ω·*i*·*e*^{i(kx − ωt)} = ω·[cos(kx − ωt) + *i*·sin(kx − ωt)] = ω·cos(kx − ωt) + *i*ω·sin(kx − ωt)

The second-order derivative on the right-hand side is equal to:

∇^{2}ψ_{B} = ∂^{2}ψ_{B}/∂x^{2} = (*i*k)^{2}·*i*·*e*^{i(kx − ωt)} = −*i*·k^{2}·*e*^{i(kx − ωt)} = k^{2}·sin(kx − ωt) − *i*·k^{2}·cos(kx − ωt)

So the two equations for ψ** _{B }**are equivalent to writing:

- *Re*(∂ψ_{B}/∂t) = −(ħ/m)·*Im*(∇^{2}ψ_{B}) ⇔ ω·cos(kx − ωt) = k^{2}·(ħ/m)·cos(kx − ωt)
- *Im*(∂ψ_{B}/∂t) = (ħ/m)·*Re*(∇^{2}ψ_{B}) ⇔ ω·sin(kx − ωt) = k^{2}·(ħ/m)·sin(kx − ωt)

So we see that both *conditions* are fulfilled if, and *only* if, ω = k^{2}·(ħ/m).

Now, we also demonstrated in that post of mine that Maxwell’s equations imply the following:

- ∂B_{y}/∂t = –(∇×**E**)_{y} = ∂E_{z}/∂x = ∂[sin(kx − ωt)]/∂x = *k*·cos(kx − ωt) = *k*·E_{y}
- ∂B_{z}/∂t = –(∇×**E**)_{z} = –∂E_{y}/∂x = –∂[cos(kx − ωt)]/∂x = *k*·sin(kx − ωt) = *k*·E_{z}

Hence, using those B_{y }= −E_{z }and B_{z }= E_{y }equations above, we can also calculate these derivatives as:

- ∂B_{y}/∂t = −∂E_{z}/∂t = −∂[sin(kx − ωt)]/∂t = ω·cos(kx − ωt) = ω·E_{y}
- ∂B_{z}/∂t = ∂E_{y}/∂t = ∂[cos(kx − ωt)]/∂t = ω·sin(kx − ωt) = ω·E_{z}

In other words, Maxwell’s equations imply that ω = k, which is consistent with us measuring time and distance in equivalent units, so the phase velocity is *c *= 1 = ω/k.

So far, so good. We basically established that the propagation mechanism for an electromagnetic wave, as described by Maxwell’s equations, is fully coherent with the propagation mechanism—if we can call it that—described by Schrödinger’s equation. We also established the following equalities:

- ω = k
- ω = k^{2}·(ħ/m)

The second of the two *de Broglie* equations tells us that k = p/ħ, so we can *combine* these two equations and re-write these two conditions as:

ω/k = 1 = k·(ħ/m) = (p/ħ)·(ħ/m) = p/m ⇔ p = m

What does this imply? The p here is the momentum: p = m·*v*, so this condition implies *v* must be equal to 1 too, so the wave velocity is equal to the speed of light. Makes sense, because we actually *are* talking light here. 🙂 In addition, because it’s light, we also know E/p = *c* = 1, so we have – once again – the general E = p = m equation, which we’ll need!

OK. Next. Let’s write the Schrödinger wave equation for both wavefunctions:

- ∂ψ_{E}/∂t = *i*·(ħ/m_{E})·∇^{2}ψ_{E}, and
- ∂ψ_{B}/∂t = *i*·(ħ/m_{B})·∇^{2}ψ_{B}.

*Huh?* What’s m_{E} and m_{B}? We should only associate one mass concept with our electromagnetic wave, shouldn’t we? Perhaps. I just want to be on the safe side now. Of course, if we distinguish m_{E} and m_{B}, we should probably also distinguish p_{E} and p_{B}, and E_{E} and E_{B} as well, right? Well… Yes. If we accept this line of reasoning, then the mass factor in Schrödinger’s equations is pretty much like the 1/*c*^{2} = μ_{0}ε_{0} factor in Maxwell’s (1/*c*^{2})·∂**E**/∂t = ∇×**B** equation: the mass factor appears as a property of the medium, i.e. the *vacuum* here! [Just check my post on physical constants in case you wonder what I am trying to say here, in which I explain why and how *c* *defines* the (properties of the) vacuum.]

To be consistent, we should also distinguish p_{E} and p_{B}, and E_{E} and E_{B}, and so we should write ψ_{E} and ψ_{B} as:

- ψ_{E} = *e*^{i(k_{E}·x − ω_{E}·t)}, and
- ψ_{B} = *e*^{i(k_{B}·x − ω_{B}·t)}.

*Huh?* Yes. I know what you think: we’re talking one photon—or one electromagnetic wave—so there can be only one energy, one momentum and, hence, only one k, and one ω. Well… Yes and no. Of course, the following identities should hold: k_{E} = k_{B} and, likewise, ω_{E} = ω_{B}. So… Yes. They’re the same: one k and one ω. But then… Well… *Conceptually*, the two k’s and ω’s are different. So we write:

- p_{E} = E_{E} = m_{E}, and
- p_{B} = E_{B} = m_{B}.

The obvious question is: can we just add them up to find the *total* energy and momentum of our photon? The answer is obviously positive: E = E_{E} + E_{B}, p = p_{E} + p_{B}, and m = m_{E} + m_{B}.

_{B}Let’s check a few things now. How does it work for the phase and group velocity of ψ** _{E }**and ψ

**? Simple:**

_{B}*v*_{g}= ∂ω/∂k_{E}= ∂[E_{E}/ħ]/∂[p_{E}/ħ] = ∂E_{E}/∂p_{E}= ∂p_{E}/∂p_{E}= 1_{E}*v*_{p}= ω/k_{E}= (E_{E}/ħ)/(p_{E}/ħ) = E_{E}/p_{E}= p_{E}/p_{E}= 1_{E}

So we’re fine, and you can check the result for ψ** _{B }**by substituting the subscript

**E**for

**B**. To sum it all up, what we’ve got here is the following:

- We can think of a photon having some energy that’s equal to E = p = m (assuming *c* = 1), but that energy would be split up in an electric and a magnetic wavefunction respectively: ψ_{E} and ψ_{B}.
- Schrödinger’s equation applies to *both* wavefunctions, but the E, p and m in those two wavefunctions are the same and not the same: their *numerical* value is the same (p_{E} = E_{E} = m_{E} = p_{B} = E_{B} = m_{B}), but they’re *conceptually* different. They must be: if not, we’d get a phase and group velocity for the wave that doesn’t make sense.

Of course, the phase and group velocity for the *sum* of the ψ_{E} and ψ_{B} waves must also be equal to *c*. This is obviously the case, because we’re adding waves with the same phase and group velocity *c*, so there’s no issue with the dispersion relation.

So let’s insert those p_{E} = E_{E} = m_{E} = p_{B} = E_{B} = m_{B} values in the two wavefunctions. For ψ_{E}, we get:

ψ_{E} = *e*^{i(k_{E}·x − ω_{E}·t)} = *e*^{i[(p_{E}/ħ)·x − (E_{E}/ħ)·t]}

You can do the calculation for ψ_{B} yourself. Let’s simplify our life a little bit and assume we’re using Planck units, so ħ = 1, and so the wavefunction simplifies to ψ_{E} = *e*^{i·(p_{E}·x − E_{E}·t)}. We can now add the components of **E** and **B** using the summation formulas for sines and cosines:

1. B_{y} + E_{y} = cos(p_{B}·x − E_{B}·t + π/2) + cos(p_{E}·x − E_{E}·t) = 2·cos[(p·x − E·t + π/2)/2]·cos(π/4) = √2·cos(p·x/2 − E·t/2 + π/4)
2. B_{z} + E_{z} = sin(p_{B}·x − E_{B}·t + π/2) + sin(p_{E}·x − E_{E}·t) = 2·sin[(p·x − E·t + π/2)/2]·cos(π/4) = √2·sin(p·x/2 − E·t/2 + π/4)

Interesting! We find a *composite* wavefunction for our photon which we can write as:

**E** + **B** = ψ_{E} + ψ_{B} = **E** + *i*·**E** = √2·*e*^{i(p·x/2 − E·t/2 + π/4)} = √2·*e*^{i(π/4)}·*e*^{i(p·x/2 − E·t/2)} = √2·*e*^{i(π/4)}·**E**

What a great result! It’s easy to double-check, because we can see the **E** + *i*·**E** = √2·*e*^{i(π/4)}·**E** formula implies that 1 + *i* should equal √2·*e*^{i(π/4)}. Now that’s easy to prove, both geometrically (just do a drawing) or formally: √2·*e*^{i(π/4)} = √2·cos(π/4) + *i*·√2·sin(π/4) = (√2/√2) + *i*·(√2/√2) = 1 + *i*. We’re *bang on!* 🙂

We can double-check once more, because we should get the same from adding **E** and **B** = *i*·**E**, right? Let’s try:

**E** + **B** = **E** + *i*·**E** = cos(p_{E}·x − E_{E}·t) + *i*·sin(p_{E}·x − E_{E}·t) + *i*·cos(p_{E}·x − E_{E}·t) − sin(p_{E}·x − E_{E}·t)

= [cos(p_{E}·x − E_{E}·t) − sin(p_{E}·x − E_{E}·t)] + *i*·[sin(p_{E}·x − E_{E}·t) + cos(p_{E}·x − E_{E}·t)]

Indeed, we can see we’re going to obtain the same result, because the −sinθ in the real part of our *composite* wavefunction is equal to cos(θ+π/2), and the cosθ in its imaginary part is equal to sin(θ+π/2). So the sum above is the same sum of cosines and sines that we did already.

So our electromagnetic wavefunction, i.e. the wavefunction for the *photon*, is equal to:

ψ = ψ_{E} + ψ_{B} = √2·*e*^{i(p·x/2 − E·t/2 + π/4)} = √2·*e*^{i(π/4)}·*e*^{i(p·x/2 − E·t/2)}

What about the √2 factor in front, and the π/4 term in the argument itself? Not sure. It must have something to do with the way the magnetic force works, which is *not* like the electric force. Indeed, remember the Lorentz formula: the force on some *unit* charge (q = 1) will be equal to **F** = **E** + **v**×**B**. So… Well… We’ve got another cross-product here, and so the geometry of the situation is quite complicated: it’s *not* like adding two forces **F**_{1} and **F**_{2} to get some combined force **F** = **F**_{1} + **F**_{2}.

In any case, we need the energy, and we know that it’s proportional to the square of the amplitude, so… Well… We’re spot on: the square of the √2 factor in the √2·cos product and √2·sin product is 2, so that’s *twice*… Well… What? *Hold on a minute!* We’re actually taking the *absolute* square of the **E** + **B** = ψ_{E} + ψ_{B} = **E** + *i*·**E** = √2·*e*^{i(p·x/2 − E·t/2 + π/4)} wavefunction here. Is that *legal*? I must assume it is—although… Well… Yes. You’re right. We should do some more explaining here.

We know that we usually measure the energy as some *definite* integral, from t = 0 to some other point in time, or over the *cycle* of the oscillation. So what’s the *cycle* here? Our combined wavefunction can be written as √2·*e*^{i(p·x/2 − E·t/2 + π/4)} = √2·*e*^{i(θ/2 + π/4)}, so a full cycle would correspond to θ going from 0 to 4π here, rather than from 0 to 2π. So that explains the √2 factor in front of our wavefunction.
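The 4π cycle is easy to confirm numerically—again, a sketch of my own: shifting θ by 2π flips the sign of *e*^{iθ/2}, and only a 4π shift brings it back to where it started:

```python
# Numerical check (sketch): e^(iθ/2) has period 4π in θ; a 2π shift flips its sign.
import cmath
import math

f = lambda th: cmath.exp(1j*th/2)
theta = 1.234
assert abs(f(theta + 4*math.pi) - f(theta)) < 1e-12  # 4π shift: back to start
assert abs(f(theta + 2*math.pi) + f(theta)) < 1e-12  # 2π shift: sign flip
print("full cycle is 4π")
```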

**Bingo!** If you were looking for an interpretation of the Planck energy and momentum, here it is. And, while everything that’s written above is not easy to understand, it’s close to the ‘intuitive’ understanding of quantum mechanics that we were looking for, isn’t it? The quantum-mechanical propagation model explains *everything* now. 🙂 I only need to show one more thing, and that’s the different *behavior* of bosons and fermions:

- The amplitudes of identical *bosonic* particles interfere with a positive sign, so we have Bose-Einstein statistics here. As Feynman writes it: **(amplitude direct) + (amplitude exchanged)**.
- The amplitudes of identical *fermionic* particles interfere with a negative sign, so we have Fermi-Dirac statistics here: **(amplitude direct) − (amplitude exchanged)**.

I’ll think about it. I am sure it’s got something to do with that **B** = *i*·**E** formula or, to put it simply, with the fact that, when bosons are involved, we get two wavefunctions (ψ_{E} and ψ_{B}) for the price of one. The reasoning should be something like this:

**I.** For a massless *particle* (i.e. a zero-mass *fermion*), our wavefunction is just ψ = *e*^{i(p·x − E·t)}. So we have no √2 or √2·*e*^{i(π/4)} factor in front here. So we can just add any number of them – ψ_{1} + ψ_{2} + ψ_{3} + … – and then take the absolute square of the amplitude to find a probability density, and we’re done.

_{ }**II.** For a photon (i.e. a zero-mass *boson*), our wavefunction is √2·*e ^{i}*

^{(π/4)}·

*e*

^{i}^{(p·x − E·t)/2}, which – let’s introduce a new symbol – we’ll denote by φ, so φ = √2·

*e*

^{i}^{(π/4)}·

*e*

^{i}^{(p·x − E·t)/2}. Now, if we add any number of these, we get a similar sum but with that √2·

*e*

^{i}^{(π/4) }factor in front, so we write: φ

_{1}

**+ φ**

_{ }_{2}

**+ φ**

_{ }_{3}

**+ … = √2·**

_{ }*e*

^{i}^{(π/4)}·(ψ

_{1}

**+ ψ**

_{ }_{2}

**+ ψ**

_{ }_{3}

**+ …). If we take the absolute square now, we’ll see the probability density will be equal to**

_{ }*twice*the density for the ψ

_{1}

**+ ψ**

_{ }_{2}

**+ ψ**

_{ }_{3}

**+ … sum, because**

_{ }|√2·*e ^{i}*

^{(π/4)}·(ψ

_{1}

**+ ψ**

_{ }_{2}

**+ ψ**

_{ }_{3}

**+ …)|**

_{ }^{2}= |√2·

*e*

^{i}^{(π/4)}|

^{2}·|ψ

_{1}

**+ ψ**

_{ }_{2}

**+ ψ**

_{ }_{3}

**+ …)|**

_{ }^{2}

**=**2·|ψ

_{1}

_{ }+ ψ

_{2}

_{ }+ ψ

_{3}

_{ }+ …)|

^{2}
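The factor 2 comes purely from the modulus of the constant prefactor—here is a quick numerical confirmation (my own sketch) that |√2·*e*^{i(π/4)}·z|^{2} = 2·|z|^{2} for any complex z:

```python
# Numerical check (sketch): |√2·e^(iπ/4)·z|² = 2·|z|² for arbitrary complex z.
import cmath
import math

prefactor = math.sqrt(2) * cmath.exp(1j*math.pi/4)
for z in (0.3 + 0.4j, -1.2 + 0.9j, 2.0 - 0.5j):
    assert abs(abs(prefactor*z)**2 - 2*abs(z)**2) < 1e-12
print("probability density doubles")
```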

So… Well… I still need to connect this to Feynman’s **(amplitude direct) ± (amplitude exchanged)** formula, but I am sure it can be done.

Now, we haven’t tested the *complete* √2·*e*^{i(π/4)}·*e*^{i(p·x − E·t)/2} wavefunction. Does it respect Schrödinger’s ∂ψ/∂t = *i*·(1/m)·∇^{2}ψ or, including the 1/2 factor, the ∂ψ/∂t = *i*·[1/(2m)]·∇^{2}ψ equation? [Note we assume, once again, that ħ = 1, so we use Planck units once more.] Let’s see. We can calculate the derivatives as:

- ∂ψ/∂t = √2·*e*^{i(π/4)}·*e*^{i(p·x − E·t)/2}·(−*i*·E/2)
- ∇^{2}ψ = ∂^{2}[√2·*e*^{i(π/4)}·*e*^{i(p·x − E·t)/2}]/∂x^{2} = √2·*e*^{i(π/4)}·∂[*e*^{i(p·x − E·t)/2}·(*i*·p/2)]/∂x = −√2·*e*^{i(π/4)}·*e*^{i(p·x − E·t)/2}·(p^{2}/4)

So Schrödinger’s equation becomes:

√2·*e*^{i(π/4)}·*e*^{i(p·x − E·t)/2}·(−*i*·E/2) = *i*·(1/m)·[−√2·*e*^{i(π/4)}·*e*^{i(p·x − E·t)/2}·(p^{2}/4)] ⇔ E/2 = p^{2}/(4m) ⇔ 1/2 = 1/4!?

That’s funny! It doesn’t work! The E and m and p^{2} are OK because we’ve got that E = m = p equation, but we’ve got problems with yet another factor 2. It only works when we use the 2/m coefficient in Schrödinger’s equation.

So… Well… There’s no choice. That’s what we’re going to do. The Schrödinger equation for the photon is ∂ψ/∂t = *i*·(2/m)·∇^{2}ψ!
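This coefficient can be confirmed symbolically—my own sketch, using sympy: plug ψ = √2·*e*^{i(π/4)}·*e*^{i(p·x − E·t)/2} into ∂ψ/∂t = *i*·(a/m)·∇^{2}ψ with E = p = m, and solve for the unknown coefficient a:

```python
# Symbolic check (sketch): for ψ = √2·e^(iπ/4)·e^(i(p·x − E·t)/2) with
# E = p = m (natural units, ħ = c = 1), ∂ψ/∂t = i·(a/m)·∂²ψ/∂x² forces a = 2.
import sympy as sp

x, t, m, a = sp.symbols('x t m a', positive=True)
E = p = m  # photon: E = p = m in natural units
psi = sp.sqrt(2)*sp.exp(sp.I*sp.pi/4)*sp.exp(sp.I*(p*x - E*t)/2)

residual = sp.diff(psi, t) - sp.I*(a/m)*sp.diff(psi, x, 2)
coeff = sp.solve(sp.Eq(residual, 0), a)
print(coeff)  # [2]
```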

It’s a very subtle point. This is all great, and *very* fundamental stuff! Let’s now move on to Schrödinger’s *actual* equation, i.e. the ∂ψ/∂t = *i*·(ħ/2m)·∇^{2}ψ equation.

**Post scriptum on the Planck units:**

If we measure time and distance in equivalent units, say seconds, we can re-write the quantum of action as:

1.0545718×10^{−34 }N·m·s = (1.21×10^{44 }N)·(1.6162×10^{−35 }m)·(5.391×10^{−44}* *s)

⇔ (1.0545718×10^{−34}/2.998×10^{8}) N·s^{2} = (1.21×10^{44 }N)·(1.6162×10^{−35}/2.998×10^{8} s)(5.391×10^{−44}* *s)

⇔ (1.21×10^{44 }N) = [(1.0545718×10^{−34}/2.998×10^{8})]/[(1.6162×10^{−35}/2.998×10^{8} s)(5.391×10^{−44}* *s)] N·s^{2}/s^{2}

You’ll say: what’s this? Well… Look at it. We’ve got a much easier formula for the Planck force—much easier than the standard formulas you’ll find on Wikipedia, for example. If we re-interpret the symbols ħ and *c *so they denote the *numerical *value of the quantum of action and the speed of light in standard SI units (i.e. newton, meter and second)—so ħ and *c *become dimensionless, or *mathematical *constants only, rather than *physical *constants—then the formula above can be written as:

F_{P} *newton* = (ħ/*c*)/[(*l*_{P}/*c*)·t_{P}]* newton *⇔ F_{P} = ħ/(*l*_{P}·t_{P})

Just double-check it: 1.0545718×10^{−34}/(1.6162×10^{−35}·5.391×10^{−44}) = 1.21×10^{44}. *Bingo!*
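For completeness, the same arithmetic in a couple of lines of Python, with the values quoted in the text (all in SI units):

```python
# Numerical check (sketch): F_P = ħ/(l_P·t_P) with the values quoted above.
hbar = 1.0545718e-34  # quantum of action, J·s (= N·m·s)
l_P = 1.6162e-35      # Planck length, m
t_P = 5.391e-44       # Planck time, s

F_P = hbar / (l_P * t_P)
print(F_P)  # ≈ 1.21e44 N, the Planck force
```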

You’ll say: what’s the point? The point is: our model is complete. We don’t need the other physical constants – i.e. the Coulomb, Boltzmann and gravitational constant – to calculate the Planck units we need, i.e. the Planck force, distance and time units. It all comes out of our elementary wavefunction! All we need to explain the Universe – or, let’s be more modest, quantum mechanics – is two numerical constants (*c* and ħ) and Euler’s formula (which uses π and *e*, of course). That’s it.

If you don’t think that’s a great result, then… Well… Then you’re not reading this. 🙂
