It’s quite easy to get lost in all of the math when talking quantum mechanics. In this post, I’d like to freewheel a bit. I’ll basically try to relate the wavefunction we’ve derived for the electron orbitals to the more speculative posts I wrote on how to *interpret* the wavefunction. So… Well… Let’s go. 🙂

If there is one thing you should remember from all of the stuff I wrote in my previous posts, then it’s that the wavefunction for an electron orbital – ψ(**x**, *t*), so that’s a complex-valued function in *two* variables (position and time) – can be written as the product of two functions in *one* variable:

ψ(**x**, *t*) = *e*^{−i·(E/ħ)·t}·*f*(**x**)

In fact, we wrote *f*(**x**) as ψ(**x**), but I told you how confusing that is: the ψ(**x**) and ψ(**x**, *t*) functions are, obviously, *very* different. To be precise, the *f*(**x**) = ψ(**x**) function basically provides some envelope for the two-dimensional *e*^{iθ} = *e*^{−i·(E/ħ)·t} = *cos*θ + *i*·*sin*θ oscillation – as depicted below (θ = −(E/ħ)·*t* = ω·*t* with ω = −E/ħ).

When analyzing this animation – look at the movement of the green, red and blue dots respectively – one cannot miss the equivalence between this oscillation and the movement of a mass on a spring – as depicted below. The *e*^{−i·(E/ħ)·t} function just gives us *two* springs for the price of one. 🙂 Now, you may want to imagine some kind of elastic medium – Feynman’s famous drum-head, perhaps 🙂 – and you may also want to think of all of this in terms of superimposed waves but… Well… I’d need to review if that’s really relevant to what we’re discussing here, so I’d rather *not* make things too complicated and stick to basics.

First note that the amplitude of the two linear oscillations above is normalized: the maximum displacement of the object from equilibrium, in the positive *or* negative direction, which we may denote by *x* = ±A, is equal to one. Hence, the energy formula is just the sum of the potential and kinetic energy: T + U = (1/2)·A^{2}·m·ω^{2} = (1/2)·m·ω^{2}. But so we have *two* springs and, therefore, the energy in this two-dimensional oscillation is equal to E = *2*·(1/2)·m·ω^{2} = m·ω^{2}.
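Just to make sure the arithmetic holds up, here’s a quick numerical check in Python – a sketch only, with arbitrary illustrative values for m and ω, and variable names that are mine:

```python
import math

# Illustrative spring parameters; A = 1, i.e. the normalized amplitude from the text
m, omega, A = 1.0, 2.0, 1.0

def energy(t):
    """Total energy T + U of one spring at time t, with x(t) = A*cos(omega*t)."""
    x = A * math.cos(omega * t)            # displacement
    v = -A * omega * math.sin(omega * t)   # velocity dx/dt
    U = 0.5 * m * omega**2 * x**2          # potential energy (stiffness k = m*omega**2)
    T = 0.5 * m * v**2                     # kinetic energy
    return T + U

# One spring carries (1/2)*m*omega**2 at every instant (for A = 1),
# so the two springs together carry E = m*omega**2.
one_spring = energy(0.0)
two_springs = 2 * one_spring
```

The point of the `energy(t)` function is that its value does not depend on *t*: what one spring loses in kinetic energy it gains in potential energy, and vice versa.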

This formula is structurally similar to Einstein’s E = m·*c*^{2} formula. Hence, one may want to assume that the *energy* of some particle (an electron, in our case, because we’re discussing electron orbitals here) is just the two-dimensional motion of its *mass*. To put it differently, we might also want to think that **the oscillating real and imaginary component of our wavefunction each store one half of the total energy of our particle**.

However, the interpretation of this rather bold statement is not so straightforward. First, you should note that the ω in the E = m·ω^{2} formula is an *angular* velocity, as opposed to the *c* in the E = m·*c*^{2} formula, which is a *linear* velocity. Angular velocities are expressed in *radians* per second, while linear velocities are expressed in *meter* per second. However, while the *radian* measures an angle, we know it does so by measuring a *length*. Hence, if our distance unit is 1 m, an angle of 2π *rad* will correspond to a length of 2π *meter*, i.e. the circumference of the unit circle. So… Well… The two velocities may *not* be so different after all.

There are other questions here. In fact, the other questions are probably more relevant. First, we should note that the ω in the E = m·ω^{2} formula can take on any value. For a mechanical spring, ω will be a function of (1) the *stiffness* of the spring (which we usually denote by k, and which is typically measured in *newton* (N) per *meter*) and (2) the mass (m) on the spring. To be precise, we write: ω^{2} = k/m – or, what amounts to the same, ω = √(k/m). Both k and m are *variables* and, therefore, ω can really be anything. In contrast, we know that *c* is a constant: *c* equals 299,792,458 meter per second, to be precise. So we have this rather remarkable expression: *c* = √(E/m), and it is valid for *any* particle – our electron, or the proton at the center, or our hydrogen atom as a whole. It is also valid for more complicated atoms, of course. In fact, it is valid for *any* system.
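You can verify the *c* = √(E/m) relation for the electron itself in a few lines of Python – a back-of-the-envelope sketch, nothing more, using the CODATA value for the electron mass:

```python
import math

c = 299_792_458.0        # speed of light in m/s (exact, by definition of the meter)
m_e = 9.1093837015e-31   # electron rest mass in kg (CODATA)
E_e = m_e * c**2         # electron rest energy in joule, about 8.187e-14 J

# c = sqrt(E/m) holds for any mass-energy pair, the electron included:
c_check = math.sqrt(E_e / m_e)
```

Of course, the check is circular here – we *computed* E from m·*c*^{2} – but that is exactly the point: the ratio E/m is fixed for any system, while k/m for a spring is not.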

Hence, we need to take another look at the energy *concept* that is used in our ψ(**x**, *t*) = *e*^{−i·(E/ħ)·t}·*f*(**x**) wavefunction. You’ll remember (if not, you *should*) that the E here is equal to E_{n} = −13.6 eV, −3.4 eV, −1.5 eV and so on, for *n* = 1, 2, 3, etc. Hence, this energy concept is rather particular. As Feynman puts it: “The energies are negative because we picked our zero point as the energy of an electron located far from the proton. When it is close to the proton, its energy is less, so somewhat below zero. The energy is lowest (most negative) for *n* = 1, and increases toward zero with increasing *n*.”
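For reference, these E_{n} values all follow from the Bohr formula E_{n} = −13.6 eV/*n*^{2}, which we can tabulate quickly – a small sketch, with the ground-state value rounded to −13.6 eV, as in the text:

```python
# Bohr-model energy levels of hydrogen: E_n = -13.6 eV / n**2
E_1 = -13.6  # ground-state energy in eV (rounded, as in the text)

def E_n(n: int) -> float:
    """Energy of the n-th hydrogen level, in eV."""
    return E_1 / n**2

# The first three levels, rounded to one decimal:
levels = [round(E_n(n), 1) for n in (1, 2, 3)]
# levels -> [-13.6, -3.4, -1.5]
```

Note how the levels increase toward zero with increasing *n*, exactly as the Feynman quote says.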

Now, this is the *one and only* issue I have with the standard physics story. I mentioned it in one of my previous posts and, just for clarity, let me copy what I wrote at the time:

Feynman gives us a rather casual explanation [on choosing a zero point for measuring energy] in one of his very first *Lectures* on quantum mechanics, where he writes the following: “If we have a “condition” which is a mixture of two different states with different energies, then the amplitude for each of the two states will vary with time according to an equation like *a*·*e*^{−iωt}, with ħ·ω = E = m·*c*^{2}. Hence, we can write the amplitude for the two states, for example as:

*e*^{−i(E1/ħ)·t} and *e*^{−i(E2/ħ)·t}

And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn’t make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount—say, by the amount A—then the amplitudes in the two states would, from his point of view, be:

*e*^{−i(E1+A)·t/ħ} and *e*^{−i(E2+A)·t/ħ}

All of his amplitudes would be multiplied by the same factor *e*^{−i(A/ħ)·t}, and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want. For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren’t relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy M_{s}·*c*^{2}, where M_{s} is the mass of all the *separate* pieces—the nucleus and the electrons—which is, of course, different from the mass of the atom. For other problems, it may be useful to subtract from all energies the amount M_{g}·*c*^{2}, where M_{g} is the mass of the whole atom *in the ground* state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn’t make any difference, provided we shift all the energies in a particular calculation by the same constant.”

It’s a rather long quotation, but it’s important. The key phrase here is, obviously, the following: “For other problems, it may be useful to subtract from all energies the amount M_{g}·*c*^{2}, where M_{g} is the mass of the whole atom *in the ground* state; then the energy that appears is just the excitation energy of the atom.” So that’s what he’s doing when solving Schrödinger’s equation. However, I should make the following point here: **if we shift the origin of our energy scale, it does not make any difference in regard to the *probabilities* we calculate, but it obviously does make a difference in terms of our wavefunction itself. To be precise, its density in time will be very different.** Hence, if we’d want to give the wavefunction some *physical* meaning – which is what I’ve been trying to do all along – it *does* make a huge difference. When we leave the rest mass of all of the pieces in our system out, we can no longer pretend we capture their energy.

So… Well… There you go. If we’d want to try to interpret our ψ(**x**, *t*) = *e*^{−i·(E_{n}/ħ)·t}·*f*(**x**) function as a two-dimensional oscillation of the *mass* of our electron, the energy concept in it – so that’s the E_{n} in it – should include *all* pieces. Most notably, it should also include the electron’s *rest* energy, i.e. its energy when it is *not* in a bound state. This rest energy is equal to 0.511 MeV. […] *Read this again*: 0.511 *mega*-electronvolt (10^{6} eV), so that’s huge as compared to the tiny energy values we mentioned so far (−13.6 eV, −3.4 eV, −1.5 eV,…).

Of course, this gives us a rather phenomenal order of magnitude for the oscillation that we’re looking at. Let’s quickly calculate it. We need to convert to SI units, of course: 0.511 MeV is about 8.2×10^{−14} *joule* (J), and so the associated *frequency* is equal to ν = E/h = (8.2×10^{−14} J)/(6.626×10^{−34} J·s) ≈ 1.23559×10^{20} cycles per second. Now, I know such a number doesn’t say all that much: just note it’s the same order of magnitude as the frequency of gamma rays and… Well… No. I won’t say more. You should try to think about this for yourself. [If you do, think – for starters – about the difference between *bosons* and *fermions*: matter-particles are fermions, and photons are bosons. Their *nature* is very different.]
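Here is the calculation spelled out – a quick sketch using the CODATA values for Planck’s constant and the electronvolt:

```python
import math

h = 6.62607015e-34      # Planck's constant in J·s (exact, by definition)
eV = 1.602176634e-19    # joule per electronvolt (exact, by definition)

E_rest = 0.511e6 * eV   # electron rest energy, about 8.187e-14 J
nu = E_rest / h         # frequency: about 1.2356e20 cycles per second
omega = 2 * math.pi * nu  # angular frequency: about 7.763e20 rad per second
```

The `omega` value is the one we’ll need in a moment, when we ask how fast the green dot would have to move.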

The corresponding *angular* frequency is just the same number but multiplied by 2π (one cycle corresponds to 2π *radians*) and, hence, ω = 2π·ν = 7.76344×10^{20} *rad* per second. Now, if our green dot would be moving around the origin, along the circumference of our unit circle, then its horizontal and/or vertical velocity would approach the same value. Think of it. We have this *e*^{iθ} = *e*^{−i·(E/ħ)·t} = *e*^{i·ω·t} = *cos*(ω·*t*) + *i*·*sin*(ω·*t*) function, with ω = −E/ħ, as before – although only the *magnitude* of ω matters for what follows. So the *cos*(ω·*t*) captures the motion along the horizontal axis, while the *sin*(ω·*t*) function captures the motion along the vertical axis. Now, the velocity along the *horizontal* axis as a function of time is given by the following formula:

*v*(*t*) = d[x(*t*)]/d*t* = d[*cos*(ω·*t*)]/d*t* = −ω·*sin*(ω·*t*)

Likewise, the velocity along the *vertical* axis is given by *v*(*t*) = d[*sin*(ω·*t*)]/d*t* = ω·*cos*(ω·*t*). These are interesting formulas: they show the velocity (*v*) along one of the two axes is never *greater* than the angular velocity (ω). To be precise, the velocity *v* *approaches* – or, in the limit, is equal to – the angular velocity ω when ω·*t* equals 0, π/2, π or 3π/2. So… Well… 7.76344×10^{20} *meter* per second!? That’s like 2.6 *trillion* times the speed of light. So that’s not possible, of course!
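In case you want to check that “trillion” claim – a throwaway sketch:

```python
omega = 7.76344e20    # angular frequency from the text, in rad/s
c = 299_792_458.0     # speed of light in m/s

# On the *unit* circle (radius 1 m!) the peak tangential speed equals omega * 1:
v_peak = omega * 1.0
ratio = v_peak / c    # about 2.6e12, i.e. 2.6 trillion times c
```

So the absurdity comes entirely from the radius: a circle of one *meter* is, obviously, not what an electron looks like.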

That’s where the *amplitude* of our wavefunction comes in – our envelope function *f*(**x**): the green dot does *not* move along the unit circle. The circle is much tinier and, hence, the oscillation should *not* exceed the speed of light. In fact, I should probably try to prove it oscillates *at* the speed of light, thereby respecting Einstein’s universal formula:

*c* = √(E/m)

Written like this – rather than as you know it: E = m·*c*^{2} – this formula shows **the speed of light is just a property of spacetime**, just like the ω = √(k/m) formula (or the ω = √(1/*LC*) formula for a resonant AC circuit) shows that ω, the *natural* frequency of our oscillator, is a characteristic of the *system*.
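In fact, we can calculate how small the circle would have to be for the tangential velocity to be exactly *c*. A quick sketch (using the usual values for ħ, *c* and the electron mass) shows the required radius is *a* = *c*/ω = ħ/(m·*c*) – which happens to be the reduced Compton wavelength of the electron, about 3.86×10^{−13} m:

```python
import math

c = 299_792_458.0        # speed of light in m/s
hbar = 1.054571817e-34   # reduced Planck constant in J·s
m_e = 9.1093837015e-31   # electron rest mass in kg

omega = m_e * c**2 / hbar   # omega = E/hbar for the rest energy
a = c / omega               # radius at which the peak speed omega*a equals c

# Algebraically, a = c*hbar/(m*c**2) = hbar/(m*c): the reduced Compton wavelength
compton_reduced = hbar / (m_e * c)
```

I find that rather remarkable: the radius that makes the oscillation run *at* the speed of light is a length scale that already has a name in physics.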

Am I absolutely certain of what I am writing here? No. My level of understanding of physics is still that of an undergrad. But… Well… It all makes a lot of sense, doesn’t it? 🙂

Now, I said there were a *few* obvious questions, and so far I answered only one. The other obvious question is why energy would appear to us as mass in motion *in two dimensions only*. Why is it an oscillation in a plane? We might imagine a third spring, so to speak, moving in and out from us, right? Also, energy *densities* are measured per unit *volume*, right?

Now *that*’s a clever question, and I must admit I can’t answer it right now. However, I do suspect it’s got to do with the fact that the wavefunction depends on the orientation of our reference frame. If we rotate it, it changes. So it’s like we’ve lost one degree of freedom already, so only two are left. Or think of the third direction as the direction of *propagation* of the wave. 🙂 Also, we should re-read what we wrote about the Poynting vector for the matter wave, or what Feynman wrote about probability *currents*. Let me give you some appetite for that by noting that we can re-write *joule* per *cubic* meter (J/m^{3}) as *newton* per *square* meter: J/m^{3} = N·m/m^{3} = N/m^{2}. [Remember: the unit of energy is force times distance. In fact, looking at Einstein’s formula, I’d say it’s kg·m^{2}/s^{2} (mass times a squared velocity), but that simplifies to the same: kg·m^{2}/s^{2} = [N/(m/s^{2})]·m^{2}/s^{2}.]

I should probably also remind you that there is no three-dimensional equivalent of Euler’s formula, and the way the kinetic and potential energy of those two oscillations works together is rather unique. Remember I illustrated it with the image of a V-2 engine in previous posts. There is no such thing as a V-3 engine. [Well… There actually is – but not with the third cylinder being positioned *sideways*.]

But… Then… Well… Perhaps we should think of some weird combination of *two* V-2 engines. The illustration below shows the superposition of two *one*-dimensional waves – I think – one traveling east-west and back, and the other one traveling north-south and back. So, yes, we may want to think of Feynman’s drum-head again – but combining *two*-dimensional waves – *two* waves that *both* have an imaginary as well as a real dimension.

Hmm… Not sure. If we go down this path, we’d need to add a third dimension – so we’d have a super-weird V-6 engine! As mentioned above, the wavefunction does depend on our reference frame: we’re looking at stuff from a certain *direction* and, therefore, we can only see what goes up and down, and what goes left or right. We can’t see what comes near and what goes away from us. Also think of the particularities involved in measuring angular momentum – or the magnetic moment of some particle. We’re measuring that along one direction only! Hence, it’s probably no use to imagine we’re looking at *three* waves simultaneously!

In any case… I’ll let you think about all of this. I do feel I am on to something. I am convinced that my interpretation of the wavefunction as an *energy propagation* mechanism, or as *energy itself* – as a two-dimensional oscillation of mass – makes sense. 🙂

Of course, I haven’t answered one *key* question here: what *is* mass? What is that green dot – **in reality**, that is? At this point, we can only waffle – probably best to just give its standard definition: mass is a measure of *inertia*. A resistance to acceleration or deceleration, or to changing direction. But that doesn’t say much. I hate to say that – in many ways – all that I’ve learned so far has *deepened* the mystery, rather than solved it. The more we understand, the less we understand? But… Well… That’s all for today, folks! Have fun working through it for yourself. 🙂

**Post scriptum**: I’ve simplified the wavefunction a bit. As I noted in my post on it, the complex exponential is actually equal to *e*^{−i·[(E/ħ)·t − m·φ]}, so we’ve got a phase shift because of *m*, the quantum number which denotes the *z*-component of the angular momentum. But that’s a minor detail that shouldn’t trouble or worry you here.
